arXiv:2309.11919 (v3) · published 2023-09-21 · http://arxiv.org/abs/2309.11919v3
# Periodic Center Manifolds for Nonhyperbolic Limit Cycles in ODEs

Bram Lentjes, Department of Mathematics, Hasselt University, Diepenbeek Campus, 3590 Diepenbeek, Belgium ([email protected]). Mattias Windmolders, Department of Mathematics, KU Leuven, 3000 Leuven, Belgium ([email protected]). Yuri A. Kuznetsov, Department of Mathematics, Utrecht University, 3508 TA Utrecht, The Netherlands and Department of Applied Mathematics, University of Twente, 7500 AE Enschede, The Netherlands ([email protected]).

**Abstract**: In this paper, we deal with a classical object, namely, a nonhyperbolic limit cycle in a system of smooth autonomous ordinary differential equations. While the existence of a center manifold near such a cycle was assumed in several studies on cycle bifurcations based on periodic normal forms, no proofs were available in the literature until recently. The main goal of this paper is to give an elementary proof of the existence of a periodic smooth locally invariant center manifold near a nonhyperbolic cycle in finite-dimensional ordinary differential equations by using the Lyapunov-Perron method. In addition, we provide several explicit examples of analytic vector fields admitting (non)-unique, (non)-\(C^{\infty}\)-smooth and (non)-analytic periodic center manifolds.

**Keywords**: Center manifold theorem, nonhyperbolic cycles, ordinary differential equations

**MSC**: 34C25, 34C45, 37G15

## 1 Introduction

Center manifold theory is without doubt one of the most well-known and powerful techniques to study local bifurcations of dynamical systems [27, 43]. In its simplest form, center manifold theory allows us to analyze the behavior of a complicated high-dimensional nonlinear dynamical system near a bifurcation by reducing the system to a low-dimensional invariant manifold, called the center manifold. The center manifold theorem for finite-dimensional ordinary differential equations (ODEs) near a nonhyperbolic equilibrium was first proved in [50, 40] and developed further in [32, 6]. Over the years, the existence of a center manifold near a nonhyperbolic equilibrium has been established for various other classes of dynamical systems by employing different techniques, such as, for example, the graph transform [52, 51], the parametrization method [5, 55] and the Lyapunov-Perron method [57, 30, 26]. This last method in particular has proven to be very successful in the setting of infinite-dimensional dynamical systems. For example, the center manifold theorem for equilibria has been obtained under various assumptions for ODEs in Banach spaces [59, 10, 57, 58, 30], partial differential equations [31, 41, 47, 48, 2], stochastic dynamical systems [4, 25, 8, 9, 46], classical delay (differential) equations [7, 23, 29, 24, 3], renewal equations [23, 24, 21], abstract delay (differential) equations [39], impulsive delay differential equations [12], and mixed functional differential and difference equations [35, 34]. Various interesting and important qualitative properties of center manifolds for equilibria can be found in [53], and an extensive literature overview on such manifolds in various classes of dynamical systems can be found in [49]. In all cases, the dimension of the center manifold equals the number of critical eigenvalues of the equilibrium, i.e. those with zero real parts.
The natural question arises whether the whole center manifold construction can be repeated for nonhyperbolic periodic orbits (cycles) in various classes of dynamical systems. While the literature on center manifolds for equilibria is extensive, the same cannot be said for center manifolds near cycles. A first proof of the existence and smoothness of a center manifold for periodic mixed functional differential equations was given in [36] and has later been adapted in [11, 12] to the setting of periodic impulsive delay differential equations. Recently, in [45], the existence of a smooth periodic finite-dimensional center manifold near a cycle for classical delay differential equations has been established using the general sun-star calculus framework [13, 14, 15, 16, 22, 24], which expands its applicability to various other classes of delay equations. Here, the dimension of the center manifold is also determined by the number of the critical multipliers of the cycle, including the trivial (equal to one) multiplier. However, as the state space in all mentioned references on this topic is infinite-dimensional, many proofs are rather involved, since one must rely on non-trivial functional analytic techniques. While the resulting center manifold theorems could be applied to finite-dimensional ODEs without delays, this would certainly be overkill. The main goal of this paper is to directly state and prove a center manifold theorem for cycles in finite-dimensional ODEs, using only elementary tools. Essentially, the proofs below are rather straightforward adaptations of those from [45] in a much simpler finite-dimensional context. We remark that our exposition is based on the classical Lyapunov-Perron method, as a variation-of-constants formula is readily available in this setting. To study stability and bifurcations of limit cycles in ODEs, one can alternatively work with a Poincaré map on a cross-section to the cycle [27, 43]. In most cases, this is sufficient, but then we miss one dimension, namely the phase coordinate along the cycle. It should also be noted immediately that the existence of a smooth (non-unique) center manifold for the fixed point of the Poincaré map on a cross-section to the cycle does not directly imply the existence of a smooth center manifold in a tubular neighborhood of the cycle.1 A motivation for keeping this phase dimension is to obtain directly all information about the dynamics near the cycle.

Footnote 1: Notice that one can deduce the existence of the unique stable and unstable manifolds near a hyperbolic cycle from the fixed point of the Poincaré map, see [29, Theorem 10.3.2].

We fill two important gaps in the literature. First, from a theoretical point of view, the results from [37, 38] on the existence of a special coordinate system on the center manifold, which allows us to describe the local dynamics near a bifurcating cycle in terms of so-called periodic normal forms, rely heavily on the local invariance and smoothness properties of the center manifold. However, no proof, nor a reference to the literature, was provided to ensure the existence of a periodic sufficiently smooth center manifold near a nonhyperbolic cycle. Second, from a more practical point of view, many researchers nowadays use the well-known software package MatCont [19, 20] to study codimension one and two bifurcations of limit cycles in finite-dimensional ODEs.
In particular, if one is interested in determining the nature (subcritical, supercritical or degenerate) of a bifurcation, one should compute the critical normal form coefficients of an associated periodic normal form. However, the computation of these coefficients in MatCont employs a combination of the periodic normalization method [44, 17, 18], again based on the smoothness and local invariance of the center manifold, with the special coordinate system and the periodic normal forms mentioned above.

### Statement of the main theorem

Let \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\) be a \(C^{k+1}\)-smooth vector field for some finite \(k\geq 1\) and consider the ordinary differential equation \[\dot{x}(t)=f(x(t)),\] (ODE) where \(x(t)\in\mathbb{R}^{n}\). Assume that (ODE) admits a \(T\)-periodic solution \(\gamma\) for some (minimal) \(T>0\) and let \(\Gamma:=\gamma(\mathbb{R})\) denote the associated (limit) _cycle_. Consider now the variational equation around \(\Gamma\) \[\dot{y}(t)=A(t)y(t), \tag{1}\] where \(A(t):=Df(\gamma(t))\) and \(y(t)\in\mathbb{R}^{n}\). The unique (global) solution \(y\) of (1) is generated by the _fundamental matrix_ \(U(t,s)\in\mathbb{R}^{n\times n}\) as \(y(t)=U(t,s)y_{0}\) for all \((t,s)\in\mathbb{R}^{2}\), where \(y_{0}\in\mathbb{R}^{n}\) is an initial condition specified at time \(s\). The eigenvalues of the matrix \(U(s+T,s)\) are called _Floquet multipliers_ (of \(\Gamma\)), and we say that \(\Gamma\) is _nonhyperbolic_ if there are at least \(n_{0}+1\geq 2\) Floquet multipliers on the unit circle, counted with algebraic multiplicity. Let \(E_{0}(s)\) denote the \((n_{0}+1)\)-dimensional _center subspace_ (at time \(s\)) defined as the direct sum of all generalized eigenspaces associated with a Floquet multiplier on the unit circle, and let \(E_{0}:=\{(s,y_{0})\in\mathbb{R}\times\mathbb{R}^{n}:y_{0}\in E_{0}(s)\}\) denote the _center bundle_. The main result on the existence of a periodic smooth locally invariant center manifold near the cycle \(\Gamma\) is summarized in Theorem 1, and two illustrative examples of two-dimensional _local center manifolds around \(\Gamma\)_, denoted by \(\mathcal{W}^{c}_{\mathrm{loc}}(\Gamma)\), can be found in Figure 1. Explicit minimal equations (model systems) of the form (ODE) admitting a \(2\pi\)-periodic two-dimensional center manifold around a nonhyperbolic cycle are given by Example 23 (cylinder) and Example 24 (Möbius band).

**Theorem 1** (Center Manifold Theorem for Cycles).: _Consider (ODE) with a \(C^{k+1}\)-smooth right-hand side \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\) for some finite \(k\geq 1\). Let \(\gamma\) be a \(T\)-periodic solution of (ODE) such that the associated cycle \(\Gamma:=\gamma(\mathbb{R})\) is nonhyperbolic with \((n_{0}+1)\)-dimensional center subspace \(E_{0}(s)\) at time \(s\in\mathbb{R}\). Then there exists a locally defined \(T\)-periodic \(C^{k}\)-smooth \((n_{0}+1)\)-dimensional invariant manifold \(\mathcal{W}^{c}_{\mathrm{loc}}(\Gamma)\) defined around \(\Gamma\) and tangent to the center bundle \(E_{0}\)._
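As a concrete illustration of these notions (not taken from the paper), the following minimal numerical sketch, assuming `numpy` and `scipy`, integrates the variational equation (1) over one period for the planar Hopf normal-form system \(\dot{x}=x(1-x^{2}-y^{2})-y\), \(\dot{y}=y(1-x^{2}-y^{2})+x\), whose cycle is \(\gamma(t)=(\cos t,\sin t)\) with \(T=2\pi\), and computes the monodromy matrix and its Floquet multipliers.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative system: xdot = x(1 - x^2 - y^2) - y, ydot = y(1 - x^2 - y^2) + x,
# with the T-periodic solution gamma(t) = (cos t, sin t), T = 2*pi.
T = 2 * np.pi

def A(t):
    # A(t) = Df(gamma(t)), the coefficient matrix of the variational equation (1)
    x, y = np.cos(t), np.sin(t)
    r2 = x**2 + y**2
    return np.array([[1 - r2 - 2 * x**2, -1 - 2 * x * y],
                     [1 - 2 * x * y, 1 - r2 - 2 * y**2]])

def variational(t, Y):
    # Columns of Y(t) solve ydot = A(t) y with Y(0) = I, so Y(t) = U(t, 0)
    return (A(t) @ Y.reshape(2, 2)).flatten()

sol = solve_ivp(variational, [0.0, T], np.eye(2).flatten(), rtol=1e-10, atol=1e-12)
M = sol.y[:, -1].reshape(2, 2)   # monodromy matrix U(T, 0)
print(np.linalg.eigvals(M))      # Floquet multipliers: approximately 1 and exp(-4*pi)

# The trivial multiplier 1 has eigenvector gamma'(0) = (0, 1)
print(np.allclose(M @ np.array([0.0, 1.0]), np.array([0.0, 1.0]), atol=1e-6))
```

This particular cycle is hyperbolic (the nontrivial multiplier \(e^{-4\pi}\) lies strictly inside the unit circle); a nonhyperbolic cycle in the sense of Theorem 1 carries at least one additional multiplier on the unit circle besides the trivial one.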
### Overview

The paper is organized as follows. In Section 2 we review some basic principles of Floquet theory for ODEs and elaborate a bit more on spectral decompositions. In Section 3 we use the theory from the previous section to prove the existence of a Lipschitz continuous center manifold for (nonlinear) periodic ODEs. In Section 4 we prove that this center manifold is periodic, sufficiently smooth and locally invariant, and that its tangent bundle is precisely the center bundle. The technical proofs regarding smoothness of the center manifold are relegated to Appendix A. Combining all these results proves Theorem 1. In Section 5 we provide explicit examples of analytic vector fields admitting (non)-unique, (non)-\(C^{\infty}\)-smooth and (non)-analytic periodic center manifolds.

Figure 1 (image not reproduced here): Illustration of two-dimensional local center manifolds around \(\Gamma\) for \(n=3\) [43]. The left figure represents the case when \(-1\) is not a Floquet multiplier; then \(\mathcal{W}^{c}_{\mathrm{loc}}(\Gamma)\) is locally diffeomorphic to a cylinder in a neighborhood of \(\Gamma\). The right figure represents the case when \(-1\) is a Floquet multiplier; then \(\mathcal{W}^{c}_{\mathrm{loc}}(\Gamma)\) is locally diffeomorphic to a Möbius band in a neighborhood of \(\Gamma\). The \((\tau,\xi)\)-coordinate system on the center manifold is the special coordinate system described in [37, 38].

## 2 Floquet theory and spectral decompositions

Consider (ODE) admitting a \(T\)-periodic solution \(\gamma\) with associated cycle \(\Gamma:=\gamma(\mathbb{R})\). The aim of this section is to determine the stability of \(\Gamma\) and to characterize nonhyperbolicity using Floquet theory. This (linear) theory will allow us to state and prove results regarding spectral properties of our operators and spaces of interest. Standard references for this entire section are the books [28, 38, 43] on ODEs and [1] on linear algebra; all unreferenced claims relating to basic properties of Floquet theory and spectral decompositions can be found there.

To study the stability of \(\Gamma\), set \(x=\gamma+y\) and notice that \(y\) satisfies the (nonlinear) periodic ODE \[\dot{y}(t)=A(t)y(t)+R(t,y(t)), \tag{2}\] where \(A(t):=Df(\gamma(t))\) and \(R(t,y(t)):=f(\gamma(t)+y(t))-f(\gamma(t))-A(t)y(t)\). Hence, \(A:\mathbb{R}\to\mathbb{R}^{n\times n}\) is a \(T\)-periodic \(C^{k}\)-smooth function and \(R:\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}^{n}\) is \(T\)-periodic in the first component, \(C^{k}\)-smooth, \(R(\cdot,0)=0\) and \(D_{2}R(\cdot,0)=0\), i.e. \(R\) consists solely of nonlinear terms. Note that the nonlinearity \(R\) has one degree of smoothness less than the original vector field \(f\). For a starting time \(s\in\mathbb{R}\) and initial condition \(y_{0}\in\mathbb{R}^{n}\) for (2), it follows from the \(C^{k}\)-smoothness of \(R\) and the Picard-Lindelöf theorem that (2) admits a unique (maximal) solution for all \(t\in\mathbb{R}\) sufficiently close to \(s\). For such \(t\) and \(s\), let the map \(S(t,s,\cdot):\mathbb{R}^{n}\to\mathbb{R}^{n}\) denote the (_time-dependent_) _flow_, also called _process_ [42], of (2). One can verify by uniqueness of solutions that \[S(t,r,S(r,s,\cdot))=S(t,s,\cdot),\quad S(s,s,\cdot)=I,\quad S(t+T,s+T,\cdot)=S(t,s,\cdot), \tag{3}\] for all \(t,r\in\mathbb{R}\) sufficiently close to \(s\). It is clear from this construction that studying solutions of (ODE) near \(\Gamma\) is equivalent to studying solutions of (2) near the origin. Therefore, we start by investigating the linearization of (2) around the origin: \[\dot{y}(t)=A(t)y(t).
\tag{4}\] Observe that its (global) solutions are generated by the _fundamental matrix_\(U(t,s)\in\mathbb{R}^{n\times n}\) as \(y(t)=U(t,s)y_{0}\) for all \((t,s)\in\mathbb{R}^{2}\), whenever an initial condition \(y_{0}\in\mathbb{R}^{n}\) at starting time \(s\in\mathbb{R}\) is specified. Moreover, we have that the map \((t,s)\mapsto U(t,s)\) is \(C^{k}\)-smooth. Using uniqueness of solutions for (4), \(T\)-periodicity of \(A\), and the fact that \(U(s,s)=I\) for all \(s\in\mathbb{R}\), one can easily verify that \[U(t,r)U(r,s)=U(t,s),\quad U(t,s)^{-1}=U(s,t),\quad U(t+T,s+T)=U(t,s), \tag{5}\] for all \(t,r,s\in\mathbb{R}\). **Lemma 2**.: _There holds for any \((t,s)\in\mathbb{R}^{2}\) that_ \[\frac{\partial}{\partial t}U(t,s)=A(t)U(t,s),\quad\frac{\partial}{\partial t }U(s,t)=-U(s,t)A(t).\] Proof.: The first equality follows immediately from (4). To prove the second equality, observe from (5) that \(U(s,t)U(t,s)=I\) and so differentiating both sides with respect to \(t\) yields after rearranging \[\bigg{(}\frac{\partial}{\partial t}U(s,t)\bigg{)}U(t,s)=-U(s,t)A(t)U(t,s),\] which proves the claim. Combining the first and third equality of (5) together with induction, one proves that \(U(s+kT,s)=U(s+T,s)^{k}\) for all \(s\in\mathbb{R}\) and \(k\in\mathbb{Z}\) and so \(y(s+kT)=U(s+T,s)^{k}y_{0}\). Hence, the long term behavior of the solution \(y\) is determined by the _monodromy matrix_ (at time \(s\)) \(U(s+T,s)\) and especially its eigenvalues, called _Floquet multipliers_. To develop a spectral theory for our problem of interest, notice that one has to _complexify_ the state space \(\mathbb{R}^{n}\) and all discussed operators defined on \(\mathbb{R}^{n}\), i.e. one has to extend the state space to \(\mathbb{C}^{n}\) and extend all discussed operators to \(\mathbb{C}^{n}\), see [1, Chapter 9] for more information. However, for the sake of simplicity, we will not introduce any additional notation for the complexification of the operators. Let us now study the spectrum \(\sigma(U(s+T,s))\) of \(U(s+T,s)\) in depth. It follows from, e.g., [43, Theorem 1.6] that the Floquet multipliers are independent of the starting time \(s\) and that \(1\) is always a Floquet multiplier. To see this last claim, differentiating \(\dot{\gamma}(t)=f(\gamma(t))\) yields \(\ddot{\gamma}(t)=A(t)\dot{\gamma}(t)\) and so \(\dot{\gamma}\) is a solution of (4), i.e. \(\dot{\gamma}(t)=U(t,s)\dot{\gamma}(s)\). Exploiting \(T\)-periodicity of \(\gamma\) yields \(\dot{\gamma}(s)=\dot{\gamma}(s+T)=U(s+T,s)\dot{\gamma}(s)\), which proves that \(1\) is an eigenvalue of \(U(s+T,s)\) with associated eigenvector \(\dot{\gamma}(s)\). Let \(\lambda\) be a Floquet multiplier of algebraic multiplicity \(m_{\lambda}\), i.e. the \(m_{\lambda}\)-dimensional \(U(s+T,s)\)-invariant subspace \(E_{\lambda}(s):=\ker((U(s+T,s)-\lambda I)^{m_{\lambda}})\) of \(\mathbb{C}^{n}\) is maximal, or equivalently, \(m_{\lambda}\) is the order of the root \(\mu=\lambda\) of the characteristic polynomial \(\det(U(s+T,s)-\mu I)\). This allows us to choose a basis of \(m_{\lambda}\) linearly independent (generalized) eigenvectors \(\zeta_{1}(s),\ldots,\zeta_{m_{\lambda}}(s)\) of \(E_{\lambda}(s)\). Moreover, let \(\pi_{\lambda}(s)\) be the projection from \(\mathbb{C}^{n}\) to \(E_{\lambda}(s)\) along the direct sum \(\oplus_{\mu\neq\lambda}E_{\mu}(s)\). Our next aim is to extend \(\zeta_{1}(s),\ldots,\zeta_{m_{\lambda}}(s)\) and \(\pi_{\lambda}(s)\) forward and backward in time. The results can be found in the following two lemmas. 
**Lemma 3**.: _Let \(\lambda\) be a Floquet multiplier, then the restriction \(U_{\lambda}(t,s):E_{\lambda}(s)\to E_{\lambda}(t)\) is well-defined and invertible for all \((t,s)\in\mathbb{R}^{2}\). Moreover, there exist \(C^{k}\)-smooth maps \(\zeta_{i}:\mathbb{R}\to\mathbb{C}^{n}\) such that \(\zeta_{1}(t),\ldots,\zeta_{m_{\lambda}}(t)\) is a basis of \(E_{\lambda}(t)\) for all \(t\in\mathbb{R}\)._ Proof.: One can verify easily from the equalities in (5) that \[(U(t+T,t)-\lambda I)U(t,s)=U(t,s)(U(s+T,s)-\lambda I).\] Therefore, for each \(v\in E_{\lambda}(s)\), we get \[(U(t+T,t)-\lambda I)^{m_{\lambda}}U(t,s)v=U(t,s)(U(s+T,s)-\lambda I)^{m_{ \lambda}}v=0,\] which proves that \(U_{\lambda}(t,s)v\in E_{\lambda}(t)\) since the Floquet multipliers are independent of the starting time. As \(U(t,s)\) is invertible, its restriction \(U_{\lambda}(t,s)\) is invertible as well and this proves the first claim. To prove the second claim, let \(\zeta_{1}(s),\ldots,\zeta_{m_{\lambda}}(s)\) be a basis of \(E_{\lambda}(s)\) and define for all \(i\in\{1,\ldots,m_{\lambda}\}\) the \(C^{k}\)-smooth maps \(\zeta_{i}:\mathbb{R}\to\mathbb{C}^{n}\) by \(\zeta_{i}(t):=U_{\lambda}(t,s)\zeta_{i}(s)\). By the first claim, it is clear that \(\zeta_{1}(t),\ldots,\zeta_{m_{\lambda}}(t)\) is a basis of \(E_{\lambda}(t)\) for all \(t\in\mathbb{R}\), and this completes the proof. **Lemma 4** ([38, Proposition III.2]).: _Let \(\lambda\) be a Floquet multiplier, then there exists a \(T\)-periodic \(C^{k}\)-smooth map \(\pi_{\lambda}:\mathbb{R}\to\mathbb{C}^{n\times n}\) such that \(\pi_{\lambda}(t)\) is the projection from \(\mathbb{C}^{n}\) onto \(E_{\lambda}(t)\) for all \(t\in\mathbb{R}\) and satisfies the periodic linear ODE_ \[\dot{\pi}_{\lambda}(t)=A(t)\pi_{\lambda}(t)-\pi_{\lambda}(t)A(t). \tag{6}\] It will be convenient in the sequel to introduce the sets \(\Lambda_{-}:=\{\lambda\in\sigma(U(s+T,s)):|\lambda|<1\},\Lambda_{0}:=\{ \lambda\in\sigma(U(s+T,s)):|\lambda|=1\}\) and \(\Lambda_{+}:=\{\lambda\in\sigma(U(s+T,s)):|\lambda|>1\}\), where the elements of these sets have to be counted with algebraic multiplicity. We say that the cycle \(\Gamma\) is _nonhyperbolic_ if there are at least \(n_{0}+1\geq 2\) Floquet multipliers on the unit circle that are counted with algebraic multiplicity, i.e. the cardinality of \(\Lambda_{0}\) is at least \(2\). **Proposition 5**.: _The following properties hold._ 1. _For each_ \(s\in\mathbb{R}\)_, the Euclidean_ \(n\)_-space admits a direct sum decomposition_ \[\mathbb{R}^{n}=E_{-}(s)\oplus E_{0}(s)\oplus E_{+}(s)\] (7) _in a_ stable subspace_,_ center subspace_, and_ unstable subspace (_at time_ \(s\))_, respectively._ 2. _There exist three_ \(T\)_-periodic_ \(C^{k}\)_-smooth projectors_ \(\pi_{i}:\mathbb{R}\to\mathbb{R}^{n\times n}\) _with_ \(\operatorname{ran}(\pi_{i}(s))=E_{i}(s)\) _for all_ \(s\in\mathbb{R}\) _and_ \(i\in\{-,0,+\}\)_._ 3. _There exists a constant_ \(N\geq 0\) _such that_ \(\sup_{s\in\mathbb{R}}(\|\pi_{-}(s)\|+\|\pi_{0}(s)\|+\|\pi_{+}(s)\|)=N<\infty\)_._ 4. _The projections satisfy:_ \(\pi_{i}(s)\pi_{j}(s)=0\) _for all_ \(s\in\mathbb{R}\) _and_ \(i\neq j\) _with_ \(i,j\in\{-,0,+\}\)_._ 5. _The projections commute with the fundamental matrix:_ \(U(t,s)\pi_{i}(s)=\pi_{i}(t)U(t,s)\) _for all_ \((t,s)\in\mathbb{R}^{2}\) _and_ \(i\in\{-,0,+\}\)_._ 6. _The restrictions_ \(U_{i}(t,s):E_{i}(s)\to E_{i}(t)\) _are well-defined and invertible for all_ \((t,s)\in\mathbb{R}^{2}\) _and_ \(i\in\{-,0,+\}\)_._ 7. 
_The decomposition (_7_) is an exponential trichotomy on_ \(\mathbb{R}\) _meaning that there exist_ \(a<0<b\) _such that for every_ \(\varepsilon>0\) _there exists a_ \(C_{\varepsilon}>0\) _such that_ \[\|U_{-}(t,s)\| \leq C_{\varepsilon}e^{a(t-s)},\quad t\geq s,\] \[\|U_{0}(t,s)\| \leq C_{\varepsilon}e^{\varepsilon|t-s|},\quad t,s\in\mathbb{R},\] \[\|U_{+}(t,s)\| \leq C_{\varepsilon}e^{b(t-s)},\quad t\leq s.\] Proof.: We verify the seven properties step by step. 1. From the generalized eigenspace decomposition theorem, we have that \[\mathbb{R}^{n}=\bigoplus_{\lambda\in\sigma(U(s+T,s))}E_{\lambda}(s),\] and if we define \(E_{i}(s):=\oplus_{\lambda\in\Lambda_{i}}E_{\lambda}(s)\) for \(i\in\{-,0,+\}\), the result follows. Notice that \(E_{i}(s)\) can be regarded as real vector space since, if \(\lambda\in\Lambda_{i}\), then \(\overline{\lambda}\in\Lambda_{i}\) because \(U\) is a real operator. 2. Define for \(i\in\{-,0,+\}\) the map \(\pi_{i}\) by \(\pi_{i}(s):=\sum_{\lambda\in\Lambda_{i}}\pi_{\lambda}(s)\). It follows from linearity and Lemma 4 that \(\pi_{i}\) is \(T\)-periodic and \(C^{k}\)-smooth. By construction, the range of \(\pi_{i}(s)\) is \(E_{i}(s)\) for all \(s\in\mathbb{R}\). The same argument as in the first assertion shows that \(\pi_{i}\) is a real operator. 3. Because \(\pi_{i}\) and the norm \(\|\cdot\|\) are continuous, we have that the map \(\|\pi_{i}(\cdot)\|:\mathbb{R}\to\mathbb{R}\) is \(T\)-periodic and continuous. The claim follows now from applying three times the extreme value theorem. 4. For \(y\in\mathbb{R}^{n}\) the direct sum (7) admits a unique decomposition \(y=y_{-}+y_{0}+y_{+}\) with \(y_{i}\in E_{i}(s)\). Hence, \(\pi_{i}(s)\pi_{j}(s)y=\pi_{i}(s)y_{j}=0\) if \(i\neq j\) for all \(s\in\mathbb{R}\). 5. Differentiating \(t\mapsto U(t,s)\pi_{i}(s)\) and \(t\mapsto\pi_{i}(t)U(t,s)\) while using (6), one sees that they both satisfy (4). Since they coincide at time \(t=s\), we have by uniqueness that they must be equal. 6. Define for \(i\in\{-,0,+\}\) the map \(U_{i}(t,s)\) by \(U_{i}(t,s):=\oplus_{\lambda\in\Lambda_{i}}U_{\lambda}(t,s)\) for all \((t,s)\in\mathbb{R}^{2}\). The claim follows now from linearity and Lemma 3. 7. We will only prove the \(U_{-}(t,s)\) estimate as the other ones can be proven similarly. Since the spectrum of \(U_{-}(s+T,s)\) lies inside the unit disk, it follows from the spectral radius formula that \[\lim_{m\to\infty}\|U_{-}(s+T,s)^{m}\|^{\frac{1}{m}}=\max_{\lambda\in\sigma(U_ {-}(s+T,s))}|\lambda|<1.\] Hence, there exists an \(a<0\) and an integer \(m>0\) such that \[\|U_{-}(s+T,s)^{m}\|<(1+aT)^{m},\] and by continuity of the map \(t\mapsto U_{-}(t,s)\), there is some \(L>0\) such that \(\sup_{s\leq t\leq s+T}\|U_{-}(t,s)\|\leq L\). Denote \(L_{m}:=L\max_{j=0,\ldots,m-1}\|U_{-}(s+T,s)^{j}\|\) and let \(m_{t}\) be the largest nonnegative integer such that \(s+m_{t}mT\leq t\) and let \(0\leq m_{t}^{\star}\leq m-1\) be the largest integer such that \(s+m_{t}mT+m_{t}^{\star}T\leq t\). 
Using (5), one obtains \[U_{-}(t,s)=U_{-}(t-m_{t}mT-m_{t}^{\star}T,s)U_{-}(s+T,s)^{m_{t}^{\star}}U_{-}(s+T,s)^{m_{t}m}.\] By the maximum property of \(m_{t}^{\star}\): \(s\leq t-m_{t}mT-m_{t}^{\star}T\leq s+m_{t}mT+(m_{t}^{\star}+1)T-m_{t}mT-m_{t}^{\star}T=s+T\), and so \[\|U_{-}(t,s)\|\leq L_{m}\|U_{-}(s+T,s)^{m}\|^{m_{t}}\leq L_{m}(1+aT)^{m_{t}m}\leq L_{m}[(1+aT)^{\frac{1}{aT}}]^{a(t-s)}\leq L_{m}e^{a(t-s)},\] where we used in the last inequality the fact that the map \(x\mapsto(1+\frac{1}{x})^{x}\) is monotonically increasing on \((-\infty,0)\) and converges to \(e\) as \(x\to-\infty\). For the other estimates, for a given \(\varepsilon>0\) and sufficiently large \(m^{\prime}\in\mathbb{N}\), one finds that there exist \(M_{\varepsilon}\) and \(N_{m^{\prime}}\) such that \(\|U_{0}(t,s)\|\leq M_{\varepsilon}e^{\varepsilon|t-s|}\) for all \((t,s)\in\mathbb{R}^{2}\) and \(\|U_{+}(t,s)\|\leq N_{m^{\prime}}e^{b(t-s)}\) for \(t\leq s\). Choosing \(C_{\varepsilon}:=\max\{L_{m},M_{\varepsilon},N_{m^{\prime}}\}\) proves the claim.

## 3 Existence of a Lipschitz center manifold

The aim of this section is to prove the existence of a (local) center manifold for (2) around the origin. The proof consists of four steps. In the first step, we show that we can formulate (2) equivalently as an integral equation. In the second step, we determine a pseudo-inverse for solutions of this integral equation on a suitable Banach space. In the third step, we modify our nonlinearity \(R\) outside a ball of radius \(\delta>0\) such that it becomes globally Lipschitz continuous with a small Lipschitz constant when \(\delta\) is chosen small enough. In the last step, we construct a (family of) fixed point operators using the pseudo-inverse and the modified nonlinearity for a sufficiently small \(\delta\). The fixed points of these contractions constitute the center manifold.

**Lemma 6**.: _The ordinary differential equation (2) is equivalent to the integral equation_ \[u(t)=U(t,s)u(s)+\int_{s}^{t}U(t,\tau)R(\tau,u(\tau))d\tau. \tag{8}\]

Proof.: Any \(u\) satisfying (8) is clearly differentiable, and it follows from the Leibniz integral rule that \[\dot{u}(t)=A(t)U(t,s)u(s)+U(t,t)R(t,u(t))+\int_{s}^{t}A(t)U(t,\tau)R(\tau,u(\tau))d\tau=A(t)\left[U(t,s)u(s)+\int_{s}^{t}U(t,\tau)R(\tau,u(\tau))d\tau\right]+R(t,u(t))=A(t)u(t)+R(t,u(t)),\] which proves that \(u\) satisfies (2). Conversely, let \(u\) satisfy (2) and let \(w(t)=U(s,t)u(t)\). Then \[\dot{w}(t)=\bigg{(}\frac{\partial}{\partial t}U(s,t)\bigg{)}u(t)+U(s,t)\dot{u}(t).\] The second equality in Lemma 2, together with the fact that \(u\) satisfies (2), shows that \(\dot{w}(t)=U(s,t)R(t,u(t))\). Integrating both sides from \(s\) to \(t\) yields \[u(t)=U(t,s)u(s)+U(t,s)\int_{s}^{t}U(s,\tau)R(\tau,u(\tau))d\tau=U(t,s)u(s)+\int_{s}^{t}U(t,\tau)R(\tau,u(\tau))d\tau,\] where we used (5) in the last equality.
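The variation-of-constants identity of Lemma 6 can be checked numerically on a toy example; a minimal sketch, assuming `scipy`, for the scalar equation \(\dot{u}=-u+u^{2}\) (our own illustrative choice, for which \(U(t,\tau)=e^{-(t-\tau)}\) and \(R(t,u)=u^{2}\)):

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Toy instance of Lemma 6: udot = -u + u^2, i.e. A(t) = -1, so that
# U(t, tau) = exp(-(t - tau)) and R(t, u) = u^2. We solve the ODE and then
# verify the variation-of-constants identity (8) at t = t1.
def f(t, u):
    return -u + u**2

s, t1, u_s = 0.0, 1.0, 0.3
sol = solve_ivp(f, [s, t1], [u_s], dense_output=True, rtol=1e-10, atol=1e-12)

def integrand(tau):
    # U(t1, tau) R(tau, u(tau))
    return np.exp(-(t1 - tau)) * sol.sol(tau)[0] ** 2

lhs = sol.sol(t1)[0]                                   # u(t1)
rhs = np.exp(-(t1 - s)) * u_s + quad(integrand, s, t1)[0]
print(lhs, rhs)   # the two values agree up to quadrature/integration error
```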
Let \(C_{b}(\mathbb{R},\mathbb{R}^{n})\) denote the Banach space of \(\mathbb{R}^{n}\)-valued continuous bounded functions defined on \(\mathbb{R}\), equipped with the supremum norm \(\|\cdot\|_{\infty}\). If we want to study solutions of (4) (or equivalently (8) with \(R=0\)) in the center subspace, it turns out that such solutions can be unbounded, and so we cannot work in the space \(C_{b}(\mathbb{R},\mathbb{R}^{n})\). Instead, we must work in a function space that allows limited (sub)exponential growth both at plus and minus infinity. Therefore, define for any \(\eta,s\in\mathbb{R}\) the normed space \[\mathrm{BC}^{\eta}_{s}:=\bigg{\{}f\in C(\mathbb{R},\mathbb{R}^{n}):\sup_{t\in\mathbb{R}}e^{-\eta|t-s|}\|f(t)\|<\infty\bigg{\}},\] with the weighted supremum norm \[\|f\|_{\eta,s}:=\sup_{t\in\mathbb{R}}e^{-\eta|t-s|}\|f(t)\|.\] Since the linear map \(\iota:(C_{b}(\mathbb{R},\mathbb{R}^{n}),\|\cdot\|_{\infty})\to(\mathrm{BC}^{\eta}_{s},\|\cdot\|_{\eta,s})\) defined by \(\iota(f)(t):=e^{\eta|t-s|}f(t)\) is an isometric isomorphism, it is clear that \((\mathrm{BC}^{\eta}_{s},\|\cdot\|_{\eta,s})\) is a Banach space. The following result proves that all solutions of (4) on the center subspace belong to \(\mathrm{BC}^{\eta}_{s}\).

**Proposition 7**.: _Let \(\eta\in(0,\min\{-a,b\})\) and \(s\in\mathbb{R}\). Then_ \[E_{0}(s)=\{y_{0}\in\mathbb{R}^{n}:\text{there exists a solution of (4) through }y_{0}\text{ belonging to }\mathrm{BC}^{\eta}_{s}\}.\]

Proof.: Choose \(y_{0}\in E_{0}(s)\) and define \(y(t)=U_{0}(t,s)y_{0}\), which is indeed a solution of (4) through \(y_{0}\). Let \(\varepsilon\in(0,\eta]\) be given. The exponential trichotomy from Proposition 5 shows that \[e^{-\eta|t-s|}\|y(t)\|=e^{-\eta|t-s|}\|U_{0}(t,s)y_{0}\|\leq C_{\varepsilon}e^{(\varepsilon-\eta)|t-s|}\|y_{0}\|\leq C_{\varepsilon}\|y_{0}\|,\quad\forall t\in\mathbb{R}.\] Taking the supremum over \(t\in\mathbb{R}\) yields \(y\in\mathrm{BC}^{\eta}_{s}\). Conversely, let \(y_{0}\in\mathbb{R}^{n}\) be such that \(y\), defined by \(y(t)=U(t,s)y_{0}\), is in \(\mathrm{BC}^{\eta}_{s}\). For \(t\geq\max\{s,0\}\) and \(\varepsilon\in(0,\eta]\), we get \[\|\pi_{+}(s)y_{0}\|=\|U_{+}(s,t)\pi_{+}(t)y(t)\|\leq C_{\varepsilon}e^{b(s-t)}N\|y(t)\|,\] which shows that \[e^{-\eta|t-s|}\|y(t)\|\geq\frac{e^{(b-\eta)(t-s)}}{C_{\varepsilon}N}\|\pi_{+}(s)y_{0}\|\to\infty\] as \(t\to\infty\), unless \(\pi_{+}(s)y_{0}=0\); since \(y\in\mathrm{BC}^{\eta}_{s}\), this forces \(\pi_{+}(s)y_{0}=0\). Similarly, one can prove that \(\pi_{-}(s)y_{0}=0\), and so \(y_{0}=(\pi_{-}(s)+\pi_{0}(s)+\pi_{+}(s))y_{0}=\pi_{0}(s)y_{0}\), i.e. \(y_{0}\in E_{0}(s)\).

### Bounded solutions of the linear inhomogeneous equation

Let \(f:\mathbb{R}\to\mathbb{R}^{n}\) be a continuous function and consider the linear inhomogeneous integral equation \[u(t)=U(t,s)u(s)+\int_{s}^{t}U(t,\tau)f(\tau)d\tau, \tag{9}\] for all \((t,s)\in\mathbb{R}^{2}\). To prove the existence of a center manifold, we need a pseudo-inverse for (exponentially) bounded solutions of (9). To this end, define (formally) for any \(\eta\in(0,\min\{-a,b\})\) and \(s\in\mathbb{R}\) the operator \(\mathcal{K}^{\eta}_{s}:\mathrm{BC}^{\eta}_{s}\to\mathrm{BC}^{\eta}_{s}\) as \[(\mathcal{K}^{\eta}_{s}f)(t):=\int_{s}^{t}U(t,\tau)\pi_{0}(\tau)f(\tau)d\tau+\int_{\infty}^{t}U(t,\tau)\pi_{+}(\tau)f(\tau)d\tau+\int_{-\infty}^{t}U(t,\tau)\pi_{-}(\tau)f(\tau)d\tau.\] The following proposition shows that this operator is well-defined and that it is precisely the pseudo-inverse we are looking for.

**Proposition 8**.: _Let \(\eta\in(0,\min\{-a,b\})\) and \(s\in\mathbb{R}\). The following properties hold._

1. \(\mathcal{K}_{s}^{\eta}\) _is a well-defined bounded linear operator. Moreover, the operator norm \(\|\mathcal{K}_{s}^{\eta}\|\) is bounded above independently of \(s\)._
2. \(\mathcal{K}_{s}^{\eta}f\) _is the unique solution of (9) in \(\mathrm{BC}_{s}^{\eta}\) with vanishing \(E_{0}(s)\)-component at time \(s\)._

Proof.: We start by proving the first assertion. Clearly, \(\mathcal{K}_{s}^{\eta}\) is linear.
Let \(\varepsilon\in(0,\eta]\) be given and notice that for a given \(f\in\mathrm{BC}_{s}^{\eta}\), we can write \(\mathcal{K}_{s}^{\eta}f\) as the sum of three integrals, i.e. \(\mathcal{K}_{s}^{\eta}f=I_{0}(\cdot,s)+I_{+}+I_{-}\). We now estimate the norm of each integral step by step.

\(I_{0}(\cdot,s)\): The straightforward estimate \[\|I_{0}(t,s)\|\leq C_{\varepsilon}N\|f\|_{\eta,s}\frac{e^{\eta|t-s|}}{\eta-\varepsilon}<\infty,\quad\forall t\in\mathbb{R},\] implies that the norm of \(I_{0}(\cdot,s)\) is bounded above.

\(I_{+}\): Notice that \[\|I_{+}(t)\|\leq C_{\varepsilon}N\|f\|_{\eta,s}e^{bt}\int_{t}^{\infty}e^{-b\tau+\eta|\tau-s|}d\tau,\quad\forall t\in\mathbb{R},\] and to prove norm boundedness of \(I_{+}\), we have to evaluate the integral above. A calculation shows that \[\int_{t}^{\infty}e^{-b\tau+\eta|\tau-s|}d\tau=\begin{cases}\dfrac{e^{-bt}}{b-\eta}e^{\eta(t-s)},&t\geq s,\\ \dfrac{e^{-bt}}{b+\eta}e^{\eta(s-t)}-\dfrac{e^{-bs}}{b+\eta}+\dfrac{e^{-bs}}{b-\eta},&t\leq s.\end{cases} \tag{10}\] We want to estimate the \(t\leq s\) case. Notice that for real numbers \(\alpha\geq\beta\) we have \[(\alpha-\beta)\bigg{(}\frac{1}{b+\eta}-\frac{1}{b-\eta}\bigg{)}=\frac{-2\eta(\alpha-\beta)}{(b+\eta)(b-\eta)}\leq 0,\] since \(\eta<b\) by assumption. Hence, \[\frac{\alpha}{b+\eta}+\frac{\beta}{b-\eta}\leq\frac{\alpha}{b-\eta}+\frac{\beta}{b+\eta}.\] We want to replace \(\alpha\) by \(e^{-bt+\eta s-\eta t}\) and \(\beta\) by \(e^{-bs}\), and therefore we have to show that \(-bt+\eta s-\eta t+bs\geq 0\), which is true because \(-bt+\eta s-\eta t+bs=(s-t)(b+\eta)\geq 0\) since \(s-t\geq 0\). Substituting this into (10) yields \[\int_{t}^{\infty}e^{-b\tau+\eta|\tau-s|}d\tau\leq\frac{e^{-bt}}{b-\eta}e^{\eta|t-s|},\] which shows that \[\|I_{+}(t)\|\leq C_{\varepsilon}N\|f\|_{\eta,s}\frac{e^{\eta|t-s|}}{b-\eta}<\infty,\quad\forall t\in\mathbb{R},\] and so we conclude that \(I_{+}\) is well-defined.

\(I_{-}\): A similar estimate as in the \(I_{+}\)-case shows that \[\|I_{-}(t)\|\leq C_{\varepsilon}N\|f\|_{\eta,s}\frac{e^{\eta|t-s|}}{-a-\eta}<\infty,\quad\forall t\in\mathbb{R},\] and so it follows that the operator norm \[\|\mathcal{K}_{s}^{\eta}\|\leq C_{\varepsilon}N\left(\frac{1}{\eta-\varepsilon}+\frac{1}{b-\eta}+\frac{1}{-a-\eta}\right)<\infty\] is bounded above independently of \(s\). We conclude that \(\mathcal{K}_{s}^{\eta}\) is a bounded linear operator on \(\mathrm{BC}_{s}^{\eta}\).

Let us now prove the second assertion by first showing that \(\mathcal{K}_{s}^{\eta}f\) is indeed a solution of (9). Let \(f\in\mathrm{BC}_{s}^{\eta}\) and set \(u=\mathcal{K}_{s}^{\eta}f\). Then, a straightforward computation shows that \[U(t,s)u(s)+\int_{s}^{t}U(t,\tau)f(\tau)d\tau=u(t),\] and so \(u\) is indeed a solution of (9). Let us now prove that \(u\) has vanishing \(E_{0}(s)\)-component at time \(s\), i.e. \(\pi_{0}(s)u(s)=0\). The mutual orthogonality of the projectors (Proposition 5) implies \[\pi_{0}(s)u(s)=\int_{\infty}^{s}U(s,\tau)\pi_{0}(\tau)\pi_{+}(\tau)f(\tau)d\tau+\int_{-\infty}^{s}U(s,\tau)\pi_{0}(\tau)\pi_{-}(\tau)f(\tau)d\tau=0.\] It only remains to show that \(u\) is the unique such solution of (9) in \(\mathrm{BC}_{s}^{\eta}\). Let \(v\in\mathrm{BC}_{s}^{\eta}\) be another solution of (9) with vanishing \(E_{0}(s)\)-component at time \(s\). Then the function \(w:=u-v\) is an element of \(\mathrm{BC}_{s}^{\eta}\) and satisfies \(w(t)=U(t,s)w(s)\) for all \((t,s)\in\mathbb{R}^{2}\).
Proposition 7 shows us that \(w(s)\in E_{0}(s)\), and notice that \(\pi_{0}(s)w(s)=0\) since \(u\) and \(v\) both have vanishing \(E_{0}(s)\)-component at time \(s\). From Proposition 5 we know that \(w(t)=U_{0}(t,s)w(s)\) is in \(E_{0}(t)\) and \[\pi_{0}(t)w(t)=\pi_{0}(t)U_{0}(t,s)w(s)=U_{0}(t,s)\pi_{0}(s)w(s)=0,\] so \(u=v\).

### Modification of the nonlinearity

To prove the existence of a center manifold, a key step will be to apply the Banach fixed point theorem to a specific fixed point operator. This operator will of course be linked to the inhomogeneous equation (9). However, we cannot expect that an arbitrary nonlinearity \(R(t,\cdot):\mathbb{R}^{n}\to\mathbb{R}^{n}\), for fixed \(t\in\mathbb{R}\), will make the fixed point operator to be constructed a contraction. As we are only interested in the local behavior of solutions of (2) near the origin, we may modify the nonlinearity \(R(t,\cdot)\) outside a ball of radius \(\delta>0\) so that the fixed point operator eventually becomes a contraction. To modify this nonlinearity, introduce a \(C^{\infty}\)-smooth cut-off function \(\xi:[0,\infty)\to\mathbb{R}\) with \[\xi(s)\in\begin{cases}\{1\},&0\leq s\leq 1,\\ [0,1],&1\leq s\leq 2,\\ \{0\},&s\geq 2,\end{cases}\] and define for any \(\delta>0\) the _\(\delta\)-modification_ of \(R\) as the operator \(R_{\delta}:\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}^{n}\) with action \[R_{\delta}(t,u):=R(t,u)\xi\bigg{(}\frac{\|\pi_{0}(t)u\|}{N\delta}\bigg{)}\xi\bigg{(}\frac{\|(\pi_{-}(t)+\pi_{+}(t))u\|}{N\delta}\bigg{)}.\] Since \(R\) is of the class \(C^{k}\), the cut-off function \(\xi\) is \(C^{\infty}\)-smooth, the Euclidean norm \(\|\cdot\|\) is \(C^{\infty}\)-smooth on \(\mathbb{R}^{n}\setminus\{0\}\) and the projectors \(\pi_{-},\pi_{0},\pi_{+}\) are \(C^{k}\)-smooth (Proposition 5), it is clear that \(R_{\delta}\) is \(C^{k}\)-smooth. This \(\delta\)-modification of \(R\) will ensure that the nonlinearity eventually becomes globally Lipschitz, as will be proven in the upcoming two statements.

**Lemma 9**.: _There exist a \(\delta_{1}>0\) and a map \(l:[0,\delta_{1}]\to[0,\infty)\), continuous at \(0\), such that \(l(0)=0\) and \(l(\delta)=:l_{\delta}\) is a Lipschitz constant for \(R(t,\cdot)\) on the open ball \(B(0,\delta)\) for every \(t\in\mathbb{R}\) and \(\delta\in(0,\delta_{1}]\)._

Proof.: Recall that \(R\) is of the class \(C^{k}\) and that \(R(t,0)=D_{2}R(t,0)=0\) for all \(t\in\mathbb{R}\). By continuity and \(T\)-periodicity of \(R\) in its first argument, choose \(\delta_{1}>0\) such that \(\sup\{\|D_{2}R(t,y)\|:t\in\mathbb{R},\,y\in B(0,\delta_{1})\}\leq 1\) and define the map \(l\) as \[l(\delta):=\begin{cases}0,&\delta=0,\\ \sup\{\|D_{2}R(t,y)\|:t\in\mathbb{R},\,y\in B(0,\delta)\},&\delta\in(0,\delta_{1}].\end{cases}\] By the mean value theorem, \(l(\delta)\) is a Lipschitz constant for \(R(t,\cdot)\) on \(B(0,\delta)\). Moreover, \(l\) is monotonically increasing, and observe that for each \(\varepsilon>0\) there exists a \(0<\delta_{\varepsilon}\leq\delta_{1}\) such that \(\sup\{\|D_{2}R(t,y)\|:t\in\mathbb{R},\,y\in B(0,\delta_{\varepsilon})\}\leq\varepsilon\). Then for \(0<\delta\leq\delta_{\varepsilon}\) we have \(0\leq l(\delta)\leq l(\delta_{\varepsilon})\leq\varepsilon\), and so the map \(l\) is continuous at zero.
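Such cut-off functions are standard objects; the paper only uses the plateau properties displayed above. A minimal sketch of one concrete choice, assuming `numpy` (the particular formula below is our own, not taken from the paper):

```python
import numpy as np

def phi(s):
    # C^infinity building block: phi(s) = exp(-1/s) for s > 0, and 0 otherwise
    return np.where(s > 0, np.exp(-1.0 / np.maximum(s, 1e-300)), 0.0)

def xi(s):
    # C^infinity cut-off: xi = 1 on [0,1], monotone in [0,1] on [1,2], 0 for s >= 2
    s = np.asarray(s, dtype=float)
    return phi(2.0 - s) / (phi(2.0 - s) + phi(s - 1.0))

print(xi(np.array([0.5, 1.0, 1.5, 2.0, 3.0])))   # [1.  1.  0.5 0.  0. ]
```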
**Proposition 10**.: _For \(\delta>0\) sufficiently small, \(R_{\delta}(t,\cdot)\) is globally Lipschitz continuous for all \(t\in\mathbb{R}\) with a Lipschitz constant \(L_{\delta}\to 0\) as \(\delta\downarrow 0\)._

Proof.: Define for any \(\delta>0\) and \(t\in\mathbb{R}\) the maps \(\xi_{\delta},\Xi_{\delta,t}:\mathbb{R}^{n}\to\mathbb{R}\) by \[\xi_{\delta}(y):=\xi\left(\frac{\|y\|}{N\delta}\right),\quad\Xi_{\delta,t}(y):=\xi_{\delta}(\pi_{0}(t)y)\xi_{\delta}(\pi_{-}(t)y+\pi_{+}(t)y),\] so that \(R_{\delta}(t,y)=\Xi_{\delta,t}(y)R(t,y)\). Note that \(\xi_{\delta},\Xi_{\delta,t}\leq 1\) and let \(C\geq 0\) be a global Lipschitz constant of \(\xi\). Then, by composition of Lipschitz functions, \(\xi_{\delta}\) has global Lipschitz constant \(C/N\delta\). For \(y,z\in\mathbb{R}^{n}\): \[|\Xi_{\delta,t}(y)-\Xi_{\delta,t}(z)|=|[\xi_{\delta}(\pi_{0}(t)y)\xi_{\delta}(\pi_{-}(t)y+\pi_{+}(t)y)-\xi_{\delta}(\pi_{0}(t)y)\xi_{\delta}(\pi_{-}(t)z+\pi_{+}(t)z)]-[\xi_{\delta}(\pi_{0}(t)z)\xi_{\delta}(\pi_{-}(t)z+\pi_{+}(t)z)-\xi_{\delta}(\pi_{0}(t)y)\xi_{\delta}(\pi_{-}(t)z+\pi_{+}(t)z)]|\leq\xi_{\delta}(\pi_{0}(t)y)|\xi_{\delta}(\pi_{-}(t)y+\pi_{+}(t)y)-\xi_{\delta}(\pi_{-}(t)z+\pi_{+}(t)z)|+\xi_{\delta}(\pi_{-}(t)z+\pi_{+}(t)z)|\xi_{\delta}(\pi_{0}(t)y)-\xi_{\delta}(\pi_{0}(t)z)|\leq\frac{2C}{\delta}\|y-z\|.\] Now, note that \(\|y\|\leq\|\pi_{0}(t)y\|+\|(\pi_{-}(t)+\pi_{+}(t))y\|\) for all \(y\in\mathbb{R}^{n}\). If \(\|y\|\geq 4N\delta\), then \(\max\{\|\pi_{0}(t)y\|,\|(\pi_{-}(t)+\pi_{+}(t))y\|\}\geq 2N\delta\), so that \(\Xi_{\delta,t}(y)=0\). Let \(\delta_{1}>0\) be as in Lemma 9 and fix \(\delta>0\) such that \(4N\delta\leq\delta_{1}\). For \(y,z\in\mathbb{R}^{n}\): \[\|R_{\delta}(t,y)-R_{\delta}(t,z)\|=\|[\Xi_{\delta,t}(y)R(t,y)-\Xi_{\delta,t}(y)R(t,z)]-[\Xi_{\delta,t}(z)R(t,z)-\Xi_{\delta,t}(y)R(t,z)]\|\leq\Xi_{\delta,t}(y)\|R(t,y)-R(t,z)\|+|\Xi_{\delta,t}(y)-\Xi_{\delta,t}(z)|\|R(t,z)\|\leq\begin{cases}l(4N\delta)\|y-z\|+8CNl(4N\delta)\|y-z\|,&\|y\|,\|z\|<4N\delta,\\ 0,&\|y\|,\|z\|\geq 4N\delta,\\ 8CNl(4N\delta)\|y-z\|,&\|y\|\geq 4N\delta,\|z\|<4N\delta,\end{cases}\leq l(4N\delta)(1+8CN)\|y-z\|.\] Hence, \(L_{\delta}:=l(4N\delta)(1+8CN)\) is a Lipschitz constant for \(R_{\delta}(t,\cdot)\) for all \(t\in\mathbb{R}\), and \(L_{\delta}\to 0\) as \(\delta\downarrow 0\) by Lemma 9.

**Corollary 11**.: _For \(\delta>0\) sufficiently small, \(\|R_{\delta}(t,y)\|\leq 4NL_{\delta}\delta\) for all \((t,y)\in\mathbb{R}\times\mathbb{R}^{n}\)._

Proof.: Note that \(R_{\delta}(t,0)=0\), so Proposition 10 gives \(\|R_{\delta}(t,y)\|\leq L_{\delta}\|y\|\). Hence, the claim holds if \(\|y\|\leq 4N\delta\). On the other hand, if \(\|y\|>4N\delta\), then \(R_{\delta}(t,y)=0\), and so the proof is complete.

Let us introduce, for any \(\eta\in(0,\min\{-a,b\})\), \(s\in\mathbb{R}\) and a given \(\delta\)-modification of \(R\), the _substitution operator_ \(\tilde{R}_{\delta}:\mathrm{BC}_{s}^{\eta}\to\mathrm{BC}_{s}^{\eta}\) as \[\tilde{R}_{\delta}(u):=R_{\delta}(\cdot,u(\cdot)),\] and show that this operator inherits the same properties as \(R_{\delta}\).

**Lemma 12**.: _Let \(\eta\in(0,\min\{-a,b\})\), \(s\in\mathbb{R}\) and let \(\delta>0\) be sufficiently small. Then the substitution operator \(\tilde{R}_{\delta}\) is well-defined and inherits the Lipschitz properties of \(R_{\delta}\)._

Proof.: It follows from Proposition 10 that \[\|\tilde{R}_{\delta}(u)(t)\|=\|R_{\delta}(t,u(t))\|\leq L_{\delta}\|u(t)\|,\] for all \(u\in\mathrm{BC}^{\eta}_{s}\).
Hence, \(\|\tilde{R}_{\delta}(u)\|_{\eta,s}\leq L_{\delta}\|u\|_{\eta,s}<\infty\), i.e. \(\tilde{R}_{\delta}(u)\in\mathrm{BC}^{\eta}_{s}\). The Lipschitz property follows immediately from Proposition 10, since \[\|\tilde{R}_{\delta}(u)-\tilde{R}_{\delta}(v)\|_{\eta,s}\leq L_{\delta}\|u-v\|_{\eta,s},\] for all \(u,v\in\mathrm{BC}^{\eta}_{s}\). Moreover, Corollary 11 yields the uniform bound \(\|\tilde{R}_{\delta}(u)\|_{\eta,s}\leq 4NL_{\delta}\delta\).

Define for any \(\eta\in(0,\min\{-a,b\})\) and \(s\in\mathbb{R}\) the linear operator \(U^{\eta}_{s}:E_{0}(s)\to\mathrm{BC}^{\eta}_{s}\) by \[(U^{\eta}_{s}y_{0})(t):=U(t,s)y_{0}.\]

**Lemma 13**.: _Let \(\eta\in(0,\min\{-a,b\})\) and \(s\in\mathbb{R}\). Then the operator \(U^{\eta}_{s}\) is well-defined and bounded._

Proof.: Let \(\varepsilon\in(0,\eta]\) be given. It follows from Proposition 5 that \[\|U^{\eta}_{s}y_{0}\|_{\eta,s}\leq C_{\varepsilon}\|y_{0}\|\sup_{t\in\mathbb{R}}e^{(\varepsilon-\eta)|t-s|}=C_{\varepsilon}\|y_{0}\|,\] for all \(y_{0}\in E_{0}(s)\), and so \(U^{\eta}_{s}\) is well-defined and bounded.

### Existence of a Lipschitz center manifold

Our next goal is to define a parameterized fixed point operator such that its fixed points correspond to (exponentially) bounded solutions on \(\mathbb{R}\) of the modified equation \[u(t)=U(t,s)u(s)+\int_{s}^{t}U(t,\tau)R_{\delta}(\tau,u(\tau))d\tau, \tag{11}\] for all \((t,s)\in\mathbb{R}^{2}\) and some small \(\delta>0\). For a given \(\eta\in(0,\min\{-a,b\})\), \(s\in\mathbb{R}\) and sufficiently small \(\delta>0\), we define the fixed point operator \(\mathcal{G}^{\eta}_{s}:\mathrm{BC}^{\eta}_{s}\times E_{0}(s)\to\mathrm{BC}^{\eta}_{s}\) as \[\mathcal{G}^{\eta}_{s}(u,y_{0}):=U^{\eta}_{s}y_{0}+\mathcal{K}^{\eta}_{s}(\tilde{R}_{\delta}(u)).\] It follows from Proposition 8, Lemma 12 and Lemma 13 that \(\mathcal{G}^{\eta}_{s}\) is well-defined. We first show that \(\mathcal{G}^{\eta}_{s}(\cdot,y_{0})\) admits a unique fixed point and is globally Lipschitz for all \(y_{0}\in E_{0}(s)\).

**Proposition 14**.: _Let \(\eta\in(0,\min\{-a,b\})\) and \(s\in\mathbb{R}\). If \(\delta>0\) is sufficiently small, then the following statements hold._

1. _For every \(y_{0}\in E_{0}(s)\), the map \(\mathcal{G}^{\eta}_{s}(\cdot,y_{0})\) has a unique fixed point \(\hat{u}^{\eta}_{s}(y_{0})\)._
2. _The map \(\hat{u}^{\eta}_{s}:E_{0}(s)\to\mathrm{BC}^{\eta}_{s}\) is globally Lipschitz and \(\hat{u}^{\eta}_{s}(0)=0\)._

Proof.: Fix \(\varepsilon\in(0,\eta]\). For \(u,v\in\mathrm{BC}^{\eta}_{s}\) and \(y_{0},z_{0}\in E_{0}(s)\), we have \[\|\mathcal{G}^{\eta}_{s}(u,y_{0})-\mathcal{G}^{\eta}_{s}(v,z_{0})\|_{\eta,s}\leq\sup_{t\in\mathbb{R}}e^{-\eta|t-s|}\|U_{0}(t,s)(y_{0}-z_{0})\|+L_{\delta}\|\mathcal{K}^{\eta}_{s}\|\|u-v\|_{\eta,s}\leq C_{\varepsilon}\|y_{0}-z_{0}\|+L_{\delta}\|\mathcal{K}^{\eta}_{s}\|\|u-v\|_{\eta,s}.\] To prove the first assertion, set \(y_{0}=z_{0}\) and choose \(\delta>0\) small enough such that \(L_{\delta}\|\mathcal{K}^{\eta}_{s}\|\leq\frac{1}{2}\) (Proposition 10), since then \[\|\mathcal{G}^{\eta}_{s}(u,y_{0})-\mathcal{G}^{\eta}_{s}(v,y_{0})\|_{\eta,s}\leq\frac{1}{2}\|u-v\|_{\eta,s}.\] Since \(\mathrm{BC}^{\eta}_{s}\) is a Banach space, the contraction mapping principle applies, and so the contraction \(\mathcal{G}^{\eta}_{s}(\cdot,y_{0})\) has a unique fixed point, say \(\hat{u}^{\eta}_{s}(y_{0})\).
To prove the second assertion, let \(\hat{u}_{s}^{\eta}(y_{0})\) and \(\hat{u}_{s}^{\eta}(z_{0})\) be the unique fixed points of \(\mathcal{G}_{s}^{\eta}(\cdot,y_{0})\) and \(\mathcal{G}_{s}^{\eta}(\cdot,z_{0})\) respectively. Then, \[\|\hat{u}_{s}^{\eta}(y_{0})-\hat{u}_{s}^{\eta}(z_{0})\|_{\eta,s}=\|\mathcal{G}_{ s}^{\eta}(\hat{u}_{s}^{\eta}(y_{0}),y_{0})-\mathcal{G}_{s}^{\eta}(\hat{u}_{s}^{ \eta}(z_{0}),z_{0})\|_{\eta,s}\leq C_{\varepsilon}\|y_{0}-z_{0}\|+\frac{1}{2} \|\hat{u}_{s}^{\eta}(y_{0})-\hat{u}_{s}^{\eta}(z_{0})\|_{\eta,s}.\] This implies that \(\|\hat{u}_{s}^{\eta}(y_{0})-\hat{u}_{s}^{\eta}(z_{0})\|_{\eta,s}\leq 2C_{ \varepsilon}\|y_{0}-z_{0}\|\), and so \(\hat{u}_{s}^{\eta}\) is globally Lipschitz. Since \(\hat{u}_{s}^{\eta}(0)=\mathcal{G}_{s}^{\eta}(\hat{u}_{s}^{\eta}(0),0)=0\), the second assertion follows. In order to construct a center manifold, define the _center bundle_\(E_{0}:=\{(s,y_{0})\in\mathbb{R}\times\mathbb{R}^{n}:y_{0}\in E_{0}(s)\}\) and the map \(\mathcal{C}:E_{0}\to\mathbb{R}^{n}\) by \[\mathcal{C}(s,y_{0}):=\hat{u}_{s}^{\eta}(y_{0})(s). \tag{12}\] **Definition 15**.: A _global center manifold_ of (11) is defined as the image \(\mathcal{W}^{c}:=\mathcal{C}(E_{0})\), whose _s-fibers_ are defined as \(\mathcal{W}^{c}(s):=\{\mathcal{C}(s,y_{0})\in\mathbb{R}^{n}:y_{0}\in E_{0}(s)\}\). Recall from Proposition 14 that for a fixed \(s\in\mathbb{R}\), the map \(\hat{u}_{s}^{\eta}\) is globally Lipschitz. Hence, the map \(\mathcal{C}(s,\cdot):E_{0}(s)\to\mathbb{R}^{n}\) is globally Lipschitz, where the Lipschitz constant depends on \(s\), i.e. \(\mathcal{C}\) is only _fiberwise Lipschitz_. The following result shows that the Lipschitz constant can be chosen independently of the fiber, and so we can say that \(\mathcal{W}^{c}\) is a Lipschitz global center manifold of (11). **Lemma 16**.: _There exists a constant \(L>0\) such that \(\|\mathcal{C}(s,y_{0})-\mathcal{C}(s,z_{0})\|\leq L\|y_{0}-z_{0}\|\) for all \((s,y_{0}),(s,z_{0})\in E_{0}\)._ Proof.: Let \((s,y_{0}),(s,z_{0})\in E_{0}\) be given. It follows from Lemma 12 and Proposition 14 that \[\|\mathcal{C}(s,y_{0})-\mathcal{C}(s,z_{0})\| =\|\mathcal{G}_{s}^{\eta}(\hat{u}_{s}^{\eta}(y_{0}),y_{0})(s)- \mathcal{G}_{s}^{\eta}(\hat{u}_{s}^{\eta}(z_{0}),z_{0})(s)\|\] \[\leq\|y_{0}-z_{0}\|+\|\mathcal{K}_{s}^{\eta}\|\|\tilde{R}_{ \delta}(\hat{u}_{s}^{\eta}(y_{0}))-\tilde{R}_{\delta}(\hat{u}_{s}^{\eta}(z_{0 }))\|_{\eta,s}\] \[\leq\|y_{0}-z_{0}\|+L_{\delta}\|\mathcal{K}_{s}^{\eta}\|\|\hat{u} _{s}^{\eta}(y_{0})-\hat{u}_{s}^{\eta}(z_{0})\|_{\eta,s}\leq(1+2C_{\varepsilon }L_{\delta}\|\mathcal{K}_{s}^{\eta}\|)\|y_{0}-z_{0}\|.\] Hence, \(L:=1+2C_{\varepsilon}L_{\delta}\|\mathcal{K}_{s}^{\eta}\|\) is a Lipschitz constant that is independent of \(s\) by Proposition 8. Recall from the definition of the \(\delta\)-modification of \(R\) that \(R_{\delta}=R\) on \(\mathbb{R}\times B(0,\delta)\). Hence, the modified integral equation (11) is equivalent to the original integral equation (8), and by Lemma 6 to the ordinary differential equation (2), on \(B(0,\delta)\). **Definition 17**.: A _local center manifold_ of (2) is defined as the image \[\mathcal{W}^{c}_{\rm loc}:=\mathcal{C}(\{(s,y_{0})\in E_{0}:\mathcal{C}(s,y_{0} )\in B(0,\delta)\}).\] In the definitions of the center manifolds and their associated fiber bundles (Definition 15 and Definition 17), we used the map \(\mathcal{C}\) to explicitly construct these objects. However, sometimes one likes to think of the center manifold as the graph of a function. 
To obtain such a representation, define the map \(\mathcal{H}:E_{0}\to\mathbb{R}^{n}\) as \(\mathcal{H}(s,y_{0}):=(I-\pi_{0}(s))\mathcal{C}(s,y_{0})\) and notice from Proposition 8 that we have the decomposition \(\mathcal{C}(s,y_{0})=y_{0}+\mathcal{H}(s,y_{0})\) into the nonhyperbolic and hyperbolic parts, respectively. Hence, we can write, for example, \[\mathcal{W}^{c}(s)=\{y_{0}+\mathcal{H}(s,y_{0}):y_{0}\in E_{0}(s)\}\cong\{(y_{0},\mathcal{H}(s,y_{0})):y_{0}\in E_{0}(s)\}=\mathrm{graph}(\mathcal{H}(s,\cdot)), \tag{13}\] and since \(E_{0}(s)\) and \(E_{+}(s)\oplus E_{-}(s)\) have only zero in their intersection (Proposition 5), this identification makes sense. Notice that the map \(\mathcal{H}\), identified as a graph in (13), is strictly speaking a map that takes values in \(E_{+}(s)\oplus E_{-}(s)\). Similar graph-like representations can be obtained for \(\mathcal{W}^{c}\) and \(\mathcal{W}^{c}_{\mathrm{loc}}\).

## 4 Properties of the center manifold

In this section, we prove that \(\mathcal{W}^{c}_{\mathrm{loc}}\) is locally invariant and consists of slow dynamics. Moreover, we prove that the center manifold inherits the same finite order of smoothness as the nonlinearity \(R\) and that its tangent bundle is precisely the center bundle \(E_{0}\). Lastly, we prove that the center manifold is \(T\)-periodic in a neighborhood of the origin. At the end of the section, we combine all these results to prove Theorem 1.

Our first aim is to prove the local invariance property of \(\mathcal{W}^{c}_{\mathrm{loc}}\). Therefore, let \(S_{\delta}(t,s,\cdot):\mathbb{R}^{n}\to\mathbb{R}^{n}\) denote the (time-dependent) flow of \[\dot{y}(t)=A(t)y(t)+R_{\delta}(t,y(t)). \tag{14}\] Moreover, Lemma 6 still holds when \(R\) is replaced by \(R_{\delta}\), and so the ordinary differential equation (14) is equivalent to the integral equation (11). By (local) uniqueness of solutions, (3) still holds with \(S\) replaced by \(S_{\delta}\). The following result is the nonlinear analogue of Proposition 7 and is preliminary to proving the local invariance property of the center manifold in Proposition 19.

**Proposition 18**.: _Let \(\eta\in(0,\min\{-a,b\})\) and \(s\in\mathbb{R}\). Then_ \[\mathcal{W}^{c}(s)=\{y_{0}\in\mathbb{R}^{n}:\text{there exists a solution of (14) through }y_{0}\text{ belonging to }\mathrm{BC}^{\eta}_{s}\}.\]

Proof.: Choose \(y_{0}\in\mathcal{W}^{c}(s)\); then \(y_{0}=\mathcal{C}(s,z_{0})=\hat{u}^{\eta}_{s}(z_{0})(s)\) for some \(z_{0}\in E_{0}(s)\). Proposition 8 shows that \(\mathcal{K}^{\eta}_{s}\tilde{R}_{\delta}(u)\) is the unique solution of (9) with \(f=\tilde{R}_{\delta}(u)\). Since \(u=\hat{u}^{\eta}_{s}(z_{0})\) is a fixed point of \(\mathcal{G}^{\eta}_{s}(\cdot,z_{0})\), we get \[u(t)=U(t,s)z_{0}+(\mathcal{K}^{\eta}_{s}\tilde{R}_{\delta}(u))(t)=U(t,s)z_{0}+U(t,s)(\mathcal{K}^{\eta}_{s}\tilde{R}_{\delta}(u))(s)+\int_{s}^{t}U(t,\tau)R_{\delta}(\tau,u(\tau))d\tau=U(t,s)u(s)+\int_{s}^{t}U(t,\tau)R_{\delta}(\tau,u(\tau))d\tau\] for all \((t,s)\in\mathbb{R}^{2}\). Hence, \(u=\hat{u}^{\eta}_{s}(z_{0})\) is a solution of (11), and so of (14), through \(u(s)=y_{0}\), which belongs to \(\mathrm{BC}^{\eta}_{s}\). Conversely, let \(y_{0}\in\mathbb{R}^{n}\) be such that there exists a solution \(u\) of (14), and so of (11), in \(\mathrm{BC}^{\eta}_{s}\) satisfying \(u(s)=y_{0}\).
It follows from Proposition 8 that \[u(t)=U(t,s)\pi_{0}(s)u(s)+(\mathcal{K}^{\eta}_{s}\tilde{R}_{\delta}(u))(t).\] Hence, \(u=\mathcal{G}^{\eta}_{s}(u,\pi_{0}(s)u(s))\), so \(y_{0}=u(s)=\mathcal{C}(s,\pi_{0}(s)y_{0})\in\mathcal{W}^{c}(s)\) by uniqueness of the fixed point.

**Proposition 19**.: _The local center manifold \(\mathcal{W}^{c}_{\mathrm{loc}}\) has the following properties._

1. \(\mathcal{W}^{c}_{\mathrm{loc}}\) _is_ locally invariant_: if \((s,y_{0})\in\mathbb{R}\times\mathcal{W}^{c}_{\mathrm{loc}}\) and \(t_{-},t_{+}\in\mathbb{R}\) with \(s\in(t_{-},t_{+})\) are such that \(S(t,s,y_{0})\in B(0,\delta)\) for all \(t\in(t_{-},t_{+})\), then \(S(t,s,y_{0})\in\mathcal{W}^{c}_{\mathrm{loc}}\)._
2. \(\mathcal{W}^{c}_{\mathrm{loc}}\) _contains every solution of (2) that exists on \(\mathbb{R}\) and remains sufficiently small for all positive and negative time: if \(u:\mathbb{R}\to B(0,\delta)\) is a solution of (2), then \(u(t)\in\mathcal{W}^{c}_{\mathrm{loc}}\) for all \(t\in\mathbb{R}\)._
3. _If \((s,y_{0})\in\mathbb{R}\times\mathcal{W}^{c}_{\mathrm{loc}}\), then \(S(t,s,y_{0})=\hat{u}^{\eta}_{t}(\pi_{0}(t)S(t,s,y_{0}))(t)=\mathcal{C}(t,\pi_{0}(t)S(t,s,y_{0}))\) for all \(t\in(t_{-},t_{+})\)._
4. \(0\in\mathcal{W}^{c}_{\mathrm{loc}}\) _and \(\mathcal{C}(t,0)=0\) for all \(t\in\mathbb{R}\)._

Proof.: We prove the four assertions step by step.

1. By Proposition 18, choose a solution \(u\in\mathrm{BC}^{\eta}_{s}\) of (14) such that \(u(s)=y_{0}\). Note that \(u(s)=S_{\delta}(s,s,y_{0})\), so by uniqueness \(u(t)=S_{\delta}(t,s,y_{0})\) for all \(t\in(t_{-},t_{+})\). Then \(S_{\delta}(t,s,y_{0})\in\mathcal{W}^{c}(t)\subset\mathcal{W}^{c}\). Since \(S_{\delta}(t,s,y_{0})\in B(0,\delta)\), it follows that \(S(t,s,y_{0})=S_{\delta}(t,s,y_{0})\in\mathcal{W}^{c}_{\mathrm{loc}}\).
2. Recall that (2) and (14) coincide on \(B(0,\delta)\). If \(u\) is such a solution, then \(u\in\mathrm{BC}^{\eta}_{s}\). The assumption that \(u\) takes values in \(B(0,\delta)\), Proposition 18 and the first assertion together imply the result.
3. In the proof of Proposition 18 it is shown that \(y_{0}=\mathcal{C}(s,\pi_{0}(s)y_{0})\) for any \(y_{0}\in\mathcal{W}^{c}(s)\). So it is certainly true for \(y_{0}\in\mathcal{W}^{c}_{\mathrm{loc}}\) that \(S_{\delta}(s,s,y_{0})=y_{0}=\hat{u}^{\eta}_{s}(\pi_{0}(s)S_{\delta}(s,s,y_{0}))(s)=\mathcal{C}(s,\pi_{0}(s)S_{\delta}(s,s,y_{0}))\). Because \(\mathcal{W}^{c}_{\mathrm{loc}}\) is locally invariant, we have that \(S_{\delta}(t,s,y_{0})\in\mathcal{W}^{c}_{\mathrm{loc}}\) for all \(t\in\mathbb{R}\) sufficiently close to \(s\), and by uniqueness of solutions, \(S_{\delta}(t,s,y_{0})=\hat{u}^{\eta}_{t}(\pi_{0}(t)S_{\delta}(t,s,y_{0}))(t)=\mathcal{C}(t,\pi_{0}(t)S_{\delta}(t,s,y_{0}))\). Since we are on the local center manifold, we may replace \(S_{\delta}\) by \(S\).
4. Notice that \(\mathcal{C}(t,0)=\hat{u}^{\eta}_{t}(0)(t)=0\) for all \(t\in\mathbb{R}\), where the last equality follows from Proposition 14. Clearly, \(0=\mathcal{C}(t,0)\in\mathcal{W}^{c}_{\mathrm{loc}}\), and so the proof is complete.

It is now possible to explain why the dynamics on the center manifold is rather slow. Indeed, the local invariance of \(\mathcal{W}^{c}_{\mathrm{loc}}\) (Proposition 19) in combination with Proposition 18 shows that solutions on the center manifold are in \(\mathrm{BC}^{\eta}_{s}\) for some sufficiently small \(\eta>0\), i.e. forward and backward in time they grow at most at an arbitrarily small exponential rate.
The next step is to show that the map \(\mathcal{C}\) inherits the same order of smoothness as the time-dependent nonlinear perturbation \(R\). Proving additional smoothness of center manifolds requires work. A well-known technique to increase smoothness of center manifolds is via the theory of contractions on scales of Banach spaces [59]. Since this part of the theory is rather technical, it is delegated to Appendix A. The main result is presented in Theorem 34 and simply states that the map \(\mathcal{C}\) is \(C^{k}\)-smooth, and so \(\mathcal{W}^{c}\) and \(\mathcal{W}^{c}_{\mathrm{loc}}\) are both \(C^{k}\)-smooth manifolds in \(\mathbb{R}^{n}\). The additional regularity of the center manifold allows us to study its tangent bundle.

**Proposition 20**.: _The tangent bundle of \(\mathcal{W}^{c}\) and \(\mathcal{W}^{c}_{\mathrm{loc}}\) is \(E_{0}\): \(D_{2}\mathcal{C}(s,0)y_{0}=y_{0}\) for all \((s,y_{0})\in E_{0}\)._

Proof.: Let \(\eta\in[\eta_{-},\eta_{+}]\subset(0,\min\{-a,b\})\) be such that \(k\eta_{-}<\eta_{+}\). Differentiating \[\hat{u}^{\eta_{-}}_{s}(y_{0})=U^{\eta_{-}}_{s}y_{0}+\mathcal{K}^{\eta_{-}}_{s}\circ\tilde{R}_{\delta}(\hat{u}^{\eta_{-}}_{s}(y_{0}))\] with respect to \(y_{0}\) yields \[D\hat{u}^{\eta_{-}}_{s}(y_{0})=U^{\eta_{-}}_{s}+\mathcal{K}^{\eta_{-}}_{s}\circ\tilde{R}^{(1)}_{\delta}(\hat{u}^{\eta_{-}}_{s}(y_{0}))\circ D\hat{u}^{\eta_{-}}_{s}(y_{0}).\] Setting \(y_{0}=0\) and recalling that \(\hat{u}^{\eta_{-}}_{s}(0)=\tilde{R}^{(1)}_{\delta}(0)=0\) shows that \(D\hat{u}^{\eta_{-}}_{s}(0)=U^{\eta_{-}}_{s}\). If \(\mathrm{ev}_{s}:\mathrm{BC}^{\eta}_{s}\to\mathbb{R}^{n}:f\mapsto f(s)\) denotes the bounded linear evaluation operator (at time \(s\)), then \[D_{2}\mathcal{C}(s,0)=\mathrm{ev}_{s}(D(\mathcal{J}^{\eta,\eta_{-}}_{s}\circ\hat{u}^{\eta_{-}}_{s})(0))=\mathrm{ev}_{s}(U^{\eta}_{s})=I,\] which proves the claim.

Since our original system (2) is \(T\)-periodic, it is not surprising that the center manifold itself is \(T\)-periodic in a neighborhood of zero. To prove this, let us define for all \(s\in\mathbb{R}\) and sufficiently small \(\delta>0\) the map \(N_{s}:E_{0}(s)\to E_{0}(s)\) by \[N_{s}(y_{0}):=\pi_{0}(s)S_{\delta}(s+T,s,\mathcal{C}(s,y_{0})).\]

**Lemma 21**.: _The function \(N_{s}\) is invertible in a neighborhood of the origin. Moreover, this neighborhood can be written as \(U\cap E_{0}(s)\) for some open neighborhood \(U\subset\mathbb{R}^{n}\) of zero, independent of \(s\)._

Proof.: Recall the well-known standard result \(D[S_{\delta}(t,s,\cdot)](0)=U(t,s)\) for all \((t,s)\in\mathbb{R}^{2}\), see for instance [33, Section 17.6]. The differential of \(N_{s}\) at \(0\) is given by \[DN_{s}(0)=\pi_{0}(s)\circ D[S_{\delta}(s+T,s,\cdot)](\mathcal{C}(s,0))\circ D[\mathcal{C}(s,\cdot)](0)=\pi_{0}(s)\circ D[S_{\delta}(s+T,s,\cdot)](0)=\pi_{0}(s)U_{0}(s+T,s)=U_{0}(s+T,s),\] where we used Proposition 5, Proposition 20, and the fact that \(U_{0}(s+T,s)y_{0}\in E_{0}(s+T)=E_{0}(s)\). It follows from Proposition 5 that \(DN_{s}(0)=U_{0}(s+T,s)\) is a bounded linear isomorphism, and so \(N_{s}\) is locally invertible by the inverse function theorem.
To prove that the neighborhood may be written as claimed, let us first observe that for a given \(\varepsilon>0\) there holds \[\|DN_{s}(y_{0})-DN_{s}(0)\| \leq\|U_{0}(s+T,s)\pi_{0}(s)\|\|D\mathcal{C}(s,y_{0})-D\mathcal{C}(s,0)\|\] \[\leq NC_{\varepsilon}e^{cT}\|D\mathcal{C}(s,y_{0})-D\mathcal{C}(s,0)\|\] \[\leq NC_{\varepsilon}e^{cT}L(1)\|y_{0}\|\to 0,\quad\text{ as }y_{0}\to 0,\] due to Proposition 5 and Corollary 35. Hence, \(DN_{s}(y_{0})\to DN_{s}(0)\) as \(y_{0}\to 0\), uniformly in the variable \(s\), and so the local inverse of \(N_{s}\) may be defined on a neighborhood that does not depend on \(s\). **Proposition 22**.: _There exists a \(\delta>0\) such that \(\mathcal{C}(s+T,y_{0})=\mathcal{C}(s,y_{0})\) for all \((s,y_{0})\in E_{0}\) satisfying \(\|y_{0}\|\leq\delta\)._ Proof.: Let \((s,y_{0})\in E_{0}\) be given. By Lemma 21, choose \(\delta>0\) such that if \(\|y_{0}\|\leq\delta\), it is possible to write \(y_{0}=N_{s}(z_{0})\). It follows from Proposition 5, Proposition 19 and (3) that \[\mathcal{C}(s+T,y_{0}) =\mathcal{C}(s+T,\pi_{0}(s)S_{\delta}(s+T,s,\mathcal{C}(s,z_{0})))\] \[=S_{\delta}(s+T,s,\mathcal{C}(s,z_{0}))\] \[=S_{\delta}(s,s-T,\mathcal{C}(s,z_{0}))\] \[=\mathcal{C}(s,\pi_{0}(s)S_{\delta}(s,s-T,\mathcal{C}(s,z_{0})))\] \[=\mathcal{C}(s,\pi_{0}(s)S_{\delta}(s+T,s,\mathcal{C}(s,z_{0})))=\mathcal{C}(s,y_{0}).\] This proves the \(T\)-periodicity of the center manifold. Recall that (2) was just a time-dependent translation of (ODE) via the given periodic solution. Hence, if \(x\) is a solution of (ODE) then \(y=x-\gamma\) is a solution of (2) and so \[\mathcal{W}^{c}_{\mathrm{loc}}(\Gamma):=\{\gamma(s)+\mathcal{C}(s,y_{0})\in\mathbb{R}^{n}:(s,y_{0})\in E_{0}\text{ and }\mathcal{C}(s,y_{0})\in B(0,\delta)\} \tag{15}\] is a \(T\)-periodic \(C^{k}\)-smooth \((n_{0}+1)\)-dimensional manifold in \(\mathbb{R}^{n}\) defined in the vicinity of \(\Gamma\) for a sufficiently small \(\delta>0\). To see this, recall that \(\gamma\) is \(T\)-periodic and \(C^{k}\)-smooth, and that \(\mathcal{C}\) is \(T\)-periodic in its first component and \(C^{k}\)-smooth. Recall from Proposition 19 that \(\mathcal{C}(t,0)=0\) and so \(\Gamma\subset\mathcal{W}^{c}_{\mathrm{loc}}(\Gamma)\). We call \(\mathcal{W}^{c}_{\mathrm{loc}}(\Gamma)\) a _local center manifold around \(\Gamma\)_ and notice that this manifold inherits all the properties of \(\mathcal{W}^{c}_{\mathrm{loc}}\), which proves Theorem 1.

## 5 Examples and counterexamples

It is widely known that center manifolds for equilibria have interesting qualitative properties [27, 43, 53]. For example, such center manifolds are not necessarily unique and are not necessarily of the class \(C^{\infty}\) even if the vector field is \(C^{\infty}\)-smooth. Of course, for \(C^{\infty}\)-smooth systems there always exists an open neighborhood \(U_{k}\) around the equilibrium such that a center manifold is \(C^{k}\)-smooth on \(U_{k}\). However, the neighborhood \(U_{k}\) may shrink towards the equilibrium as \(k\to\infty\), see [56, 54, 53] for explicit examples. When the vector field is analytic, a non-analytic \(C^{\infty}\)-smooth center manifold may exist, see [40, 53] for explicit examples. It is studied in [53] under which conditions a unique, \(C^{\infty}\)-smooth or analytic center manifold exists. The aim of this section is to provide several explicit examples illustrating similar behavior for the periodic center manifolds.
For example, we provide in Example 23 an analytic \(2\pi\)-periodic two-dimensional center manifold near a nonhyperbolic cycle that is a cylinder. To complete the periodic two-dimensional center manifold theory, we provide in Example 24 a system that admits a \(2\pi\)-periodic two-dimensional center manifold near a nonhyperbolic cycle that is a Möbius band. Both examples are minimal polynomial vector fields admitting a cylinder or Möbius band as a periodic center manifold. To illustrate the existence of a non-unique and non-analytic \(C^{\infty}\)-smooth periodic center manifold, we will study the analytic nonlinear periodically driven system \[\begin{cases}\dot{x}=-x^{2},\\ \dot{y}=-y+\sin(t)x^{2}.\end{cases} \tag{16}\] This vector field is a modification of the vector field used in [40] to illustrate the non-unique behavior of center manifolds for equilibria. Note that the system (16) is not of the form (ODE) but already written in the style of (2) where \(A\) is autonomous and \(R\) is \(2\pi\)-periodic in the first argument. The assumption of an autonomous linear part is not a restriction. Indeed, the Floquet normal form \(U(t,0)=Q(t)e^{Bt}\), where \(Q\) is \(T\)-periodic with \(Q(0)=I\) and \(Q(t)\) invertible for all \(t\in\mathbb{R}\), and where the matrix \(B\in\mathbb{C}^{n\times n}\) satisfies \(U(T,0)=e^{BT}\), shows that a general system of the form (2) is equivalent to the nonlinear periodically driven system \[\dot{z}(t)=Bz(t)+G(t,z(t)), \tag{17}\] where \(G(t,z):=Q(t)^{-1}R(t,Q(t)z)\), and \(z\) is related to \(y\) via the Lyapunov-Floquet transformation \(y(t)=Q(t)z(t)\). Indeed, differentiating the Floquet normal form and using \(\partial_{t}U(t,0)=A(t)U(t,0)\) gives \(\dot{Q}(t)=A(t)Q(t)-Q(t)B\), so that substituting \(y=Qz\) into (2) yields (17). Clearly (17) has an autonomous linear part and \(G\) is \(T\)-periodic in the first component since \(R\) and \(Q\) are both \(T\)-periodic. Notice that the whole periodic center manifold construction from previous subsections still applies for systems of the form (17). The reason we study systems of the form (17) instead of a general system of the form (ODE) is to keep the calculations rather simple. Indeed, if one would like to cook up an explicit non-trivial example of a periodic center manifold near a nonhyperbolic cycle, one needs to be able to compute explicitly the periodic solution, the fundamental matrix and its associated Floquet multipliers, which is rather difficult for general systems of the form (ODE). We remark that the computations for the periodic center manifolds of simple periodically driven systems considered in this section are rather tedious compared with their equilibrium analogues. In Example 25 we show that (16) admits a non-analytic \(2\pi\)-periodic center manifold. Next, we show in Example 26 that (16) admits a \(2\pi\)-periodic center manifold that is _locally (non)-unique_, i.e. there exist subneighborhoods of any neighborhood of \(\mathbb{R}\times\{0\}\) where the center manifold is unique and others where it is not unique. This freedom of different center manifolds allows us to choose in Example 27 a particular \(2\pi\)-periodic center manifold for (16) that is \(C^{\infty}\)-smooth. Hence, we have shown that analytic vector fields can admit non-analytic \(C^{\infty}\)-smooth periodic center manifolds. To complete this list of examples, we will show in Example 28 that the \(C^{\infty}\)-smooth (analytic) nonlinear periodically driven system \[\begin{cases}\dot{x}=xz-x^{3},\\ \dot{y}=y+(1+\sin(t))x^{2},\\ \dot{z}=0.\end{cases} \tag{18}\] admits a \(2\pi\)-periodic non-\(C^{\infty}\)-smooth center manifold.
This vector field is a modification of the vector field used in [56] to illustrate the non-\(C^{\infty}\)-smoothness of center manifolds near equilibria. Hence, we have proven that there exist analytic vector fields admitting locally (non)-unique, (non)-\(C^{\infty}\)-smooth and (non)-analytic periodic center manifolds. **Example 23**.: _The analytic system_ \[\begin{cases}\dot{x}_{1}=x_{1}-x_{2}-x_{1}(x_{1}^{2}+x_{2}^{2}),\\ \dot{x}_{2}=x_{1}+x_{2}-x_{2}(x_{1}^{2}+x_{2}^{2}),\\ \dot{x}_{3}=0,\end{cases} \tag{19}\] _admits an analytic \(2\pi\)-periodic two-dimensional center manifold that is a cylinder._ Proof.: Notice that (19) admits a \(2\pi\)-periodic solution \(\gamma(t)=(\cos(t),\sin(t),0)\). The system around \(\Gamma=\gamma(\mathbb{R})\) can be written in coordinates \(x=\gamma+y\) as \[\begin{cases}\dot{y}_{1}=-2\cos^{2}(t)y_{1}-(1+2\sin(t)\cos(t))y_{2}-3\cos^{2}(t)y_{1}^{2}-2\sin(t)y_{1}y_{2}-\cos(t)y_{2}^{2}-y_{1}^{3}-y_{1}y_{2}^{2},\\ \dot{y}_{2}=(1-2\sin(t)\cos(t))y_{1}-2\sin^{2}(t)y_{2}-\sin(t)y_{1}^{2}-2\cos(t)y_{1}y_{2}-3\sin^{2}(t)y_{2}^{2}-y_{1}^{2}y_{2}-y_{2}^{3},\\ \dot{y}_{3}=0,\end{cases} \tag{20}\] where \(y:=(y_{1},y_{2},y_{3})\) and \(x:=(x_{1},x_{2},x_{3})\). The linearization around the origin of (20) reads \[A(t)=\begin{pmatrix}-2\cos^{2}(t)&-2\sin(t)\cos(t)-1&0\\ -2\sin(t)\cos(t)+1&-2\sin^{2}(t)&0\\ 0&0&0\end{pmatrix}\] and so the solution of the variational equation around \(\Gamma\) is generated by the fundamental matrix \(U(t,s)=V(t)V(s)^{-1}\) where \[V(t)=\begin{pmatrix}e^{-2t}\cos(t)&-\sin(t)&0\\ e^{-2t}\sin(t)&\cos(t)&0\\ 0&0&1\end{pmatrix}.\] The Floquet multipliers are given by \(\lambda_{1}=1,\lambda_{2}=e^{-4\pi}\) and \(\lambda_{3}=1\). Hence, the center subspace and stable subspace (at time \(t\)) can be obtained as \(E_{0}(t)=\operatorname{span}\{\zeta_{1}(t),\zeta_{3}(t)\}\) and \(E_{-}(t)=\operatorname{span}\{\zeta_{2}(t)\}\) respectively, where \(\zeta_{1}(t)=(-\sin(t),\cos(t),0),\zeta_{2}(t)=(\cos(t),\sin(t),0)\) and \(\zeta_{3}(t)=(0,0,1)\). The center bundle \(E_{0}\) parametrizes a cylinder as a ruled surface since \[(x_{1}(t,v),x_{2}(t,v),x_{3}(t,v))=\gamma(t)+v\zeta_{3}(t),\] for all \(t,v\in\mathbb{R}\). It follows from Section 4 that for any \(k\geq 1\) there exists a \(2\pi\)-periodic \(C^{k}\)-smooth two-dimensional locally invariant center manifold \(\mathcal{W}^{c}_{\text{loc}}\) for (20) around the origin that is tangent to \(E_{0}\). To obtain this center manifold, let us transform (20) into eigenbasis, i.e. we perform the change of variables from \(y\) to \(z:=(z_{1},z_{2},z_{3})\) as \[z_{1} =-\sin(t)y_{1}+\cos(t)y_{2},\] \[z_{2} =\cos(t)y_{1}+\sin(t)y_{2}, \tag{21}\] \[z_{3} =y_{3},\] to obtain the autonomous system \[\begin{cases}\dot{z}_{1}=-z_{1}(z_{1}^{2}+z_{2}^{2}+2z_{2}),\\ \dot{z}_{2}=-(z_{2}+1)(z_{1}^{2}+z_{2}^{2}+2z_{2}),\\ \dot{z}_{3}=0.\end{cases} \tag{22}\] The \(z_{1}z_{3}\)-plane corresponds to the center subspace while the \(z_{2}\)-axis corresponds to the stable subspace. Therefore, the center manifold is parametrized by \(z_{2}(t)=\mathcal{H}(t,z_{1},z_{3})\), where \(\mathcal{H}\) is \(2\pi\)-periodic in the first variable and consists solely of nonlinear terms in the last two variables. Because (22) is an autonomous system, we have that \(\mathcal{H}\) is constant in the first variable, and so we can write \(\mathcal{H}(t,z_{1},z_{3})=H(z_{1},z_{3})\) for all \(t\in\mathbb{R}\).
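Although not needed for the proof, the Floquet multipliers computed above can be cross-checked numerically by integrating the variational equation \(\dot{\Phi}(t)=A(t)\Phi(t)\), \(\Phi(0)=I\), over one period and computing the eigenvalues of the resulting monodromy matrix \(U(2\pi,0)\). A minimal Python sketch, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # Linearization of (20) around the origin, as displayed above
    c, s = np.cos(t), np.sin(t)
    return np.array([[-2.0 * c * c, -2.0 * s * c - 1.0, 0.0],
                     [-2.0 * s * c + 1.0, -2.0 * s * s, 0.0],
                     [0.0, 0.0, 0.0]])

def rhs(t, phi):
    # Variational equation dPhi/dt = A(t) Phi, with Phi flattened to a 9-vector
    return (A(t) @ phi.reshape(3, 3)).ravel()

sol = solve_ivp(rhs, [0.0, 2.0 * np.pi], np.eye(3).ravel(),
                rtol=1e-12, atol=1e-12)
monodromy = sol.y[:, -1].reshape(3, 3)  # approximates U(2*pi, 0)
print(np.linalg.eigvals(monodromy))     # eigenvalues close to {exp(-4*pi), 1, 1}
print(np.exp(-4.0 * np.pi))             # 3.4873e-06
```

The computed eigenvalues agree with \(\lambda_{1}=1\), \(\lambda_{2}=e^{-4\pi}\approx 3.49\times 10^{-6}\) and \(\lambda_{3}=1\) up to the integration tolerance.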
Because \(\mathcal{W}^{c}_{\mathrm{loc}}\) is locally invariant, we must have that \[z_{1}(z_{1}^{2}+H(z_{1},z_{3})^{2}+2H(z_{1},z_{3}))\frac{\partial}{\partial z_{1}}H(z_{1},z_{3})=(H(z_{1},z_{3})+1)(z_{1}^{2}+H(z_{1},z_{3})^{2}+2H(z_{1},z_{3})).\] If \(z_{1}^{2}+H(z_{1},z_{3})^{2}+2H(z_{1},z_{3})\neq 0\), then \(H\) must satisfy \[\frac{\partial}{\partial z_{1}}H(z_{1},z_{3})=\frac{1}{z_{1}}(H(z_{1},z_{3})+1),\quad H(0,0)=0,\] which obviously has no solution. Hence \(z_{1}^{2}+(H(z_{1},z_{3})+1)^{2}=1\) and so \(H(z_{1},z_{3})=\sqrt{1-z_{1}^{2}}-1\) since \(H(0,0)=0\). Clearly \(H\) is analytic on \((-1,1)\times\mathbb{R}\) since \[H(z_{1},z_{3})=\sum_{k=1}^{\infty}(-1)^{k}\binom{\frac{1}{2}}{k}z_{1}^{2k},\] for all \((z_{1},z_{3})\in(-1,1)\times\mathbb{R}\) due to the binomial series. Hence, \(\mathcal{H}\) is analytic on \(\mathbb{R}\times(-1,1)\times\mathbb{R}\), which proves the claim. Transforming the map \(\mathcal{H}\) back into original \(y\)-coordinates using (21) shows that the center manifold \(\mathcal{W}^{c}_{\mathrm{loc}}\) is parametrized as \[(y_{1}(t)+\cos(t))^{2}+(y_{2}(t)+\sin(t))^{2}=1,\quad y_{3}(t)=c_{3},\quad c_{3}\in\mathbb{R}. \tag{23}\] Writing this back into \(x\)-coordinates yields \[(x_{1}(t),x_{2}(t),x_{3}(t))=(y_{1}(t)+\cos(t),y_{2}(t)+\sin(t),c_{3}),\quad c_{3}\in\mathbb{R},\] but due to (23) we have that \(x_{1}(t)^{2}+x_{2}(t)^{2}=1\) and so \[\mathcal{W}^{c}_{\mathrm{loc}}(\Gamma)=\{(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}:x_{1}^{2}+x_{2}^{2}=1\}.\] Notice that \(\mathcal{W}^{c}_{\mathrm{loc}}\) and \(\mathcal{W}^{c}_{\mathrm{loc}}(\Gamma)\) do not depend on a choice of \(\delta>0\). The reason is clear as for example the cylinder \(\mathcal{W}^{c}_{\mathrm{loc}}(\Gamma)\) is an invariant manifold of (19) since the function \(V:\mathbb{R}^{3}\to\mathbb{R}\) defined by \[V(x_{1},x_{2},x_{3}):=x_{1}^{2}+x_{2}^{2}-1\] is constant along the trajectories whose points are contained in \(\mathcal{W}^{c}_{\mathrm{loc}}(\Gamma)=V^{-1}(\{0\})\). **Example 24**.: _The analytic system_ \[\begin{cases}\dot{x}_{1}=-x_{2}+x_{1}\Phi(x_{1},x_{2}),\\ \dot{x}_{2}=x_{1}+x_{2}\Phi(x_{1},x_{2}),\\ \dot{x}_{3}=\frac{1}{4}(1-\sigma x_{2})(x_{1}^{2}+x_{2}^{2}-1)+\frac{\sigma x_{3}}{2}(1+x_{1}),\end{cases} \tag{24}\] _where_ \[\Phi(x_{1},x_{2}):=\frac{\sigma}{4}(1-x_{1})(x_{1}^{2}+x_{2}^{2}-1)-\frac{x_{3}}{2}(1+\sigma x_{2}),\] _admits for \(\sigma\neq 0\) a \(2\pi\)-periodic two-dimensional \(C^{k}\)-smooth center manifold that is locally diffeomorphic to a Möbius band for every \(k\geq 1\). If \(\sigma=0\), then the \(2\pi\)-periodic center manifold is the whole state space \(\mathbb{R}^{3}\) and (24) admits a family of invariant tori._ Proof.: Notice that (24) admits for all \(\sigma\in\mathbb{R}\) a \(2\pi\)-periodic solution \(\gamma_{\sigma}(t)=(\cos(t),\sin(t),0)\). Hence, the solution of the variational equation around \(\Gamma_{\sigma}\) is generated by the fundamental matrix \(U_{\sigma}(t,s)=V_{\sigma}(t)V_{\sigma}(s)^{-1}\) where \[V_{\sigma}(t)=\begin{pmatrix}\cos(t)\cos\bigl{(}\tfrac{t}{2}\bigr{)}&-\sin(t)&-e^{\sigma t}\sin\bigl{(}\tfrac{t}{2}\bigr{)}\cos(t)\\ \sin(t)\cos\bigl{(}\tfrac{t}{2}\bigr{)}&\cos(t)&-e^{\sigma t}\sin\bigl{(}\tfrac{t}{2}\bigr{)}\sin(t)\\ \sin\bigl{(}\tfrac{t}{2}\bigr{)}&0&e^{\sigma t}\cos\bigl{(}\tfrac{t}{2}\bigr{)}\end{pmatrix}.\] The Floquet multipliers are given by \(\lambda_{1}=1,\lambda_{2}=-1\) and \(\lambda_{3,\sigma}=-e^{2\pi\sigma}\).
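These multipliers can be read off directly from the expression for \(V_{\sigma}\) above: since \(V_{\sigma}(0)=I\), the monodromy matrix over one period is \[U_{\sigma}(2\pi,0)=V_{\sigma}(2\pi)V_{\sigma}(0)^{-1}=V_{\sigma}(2\pi)=\begin{pmatrix}-1&0&0\\ 0&1&0\\ 0&0&-e^{2\pi\sigma}\end{pmatrix},\] whose eigenvalues are \(1\), \(-1\) and \(-e^{2\pi\sigma}\), confirming the stated values.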
Let \(E_{0}^{\sigma}(t)\) and \(E_{\pm}^{\sigma}(t)\) denote the center and (un)stable subspace (at time \(t\)) at parameter value \(\sigma\) respectively. For the center subspace, we have that \(E_{0}^{\sigma}(t)=\operatorname{span}\{\zeta_{1}(t),\zeta_{2}(t)\}\) for \(\sigma\neq 0\) and \(E_{0}^{0}(t)=\operatorname{span}\{\zeta_{1}(t),\zeta_{2}(t),\zeta_{3}(t)\}\). For the (un)stable subspace we obtain \(E_{-}^{\sigma}(t)=\operatorname{span}\{\zeta_{3}(t)\}\) for \(\sigma<0\) and \(E_{+}^{\sigma}(t)=\operatorname{span}\{\zeta_{3}(t)\}\) for \(\sigma>0\) where \[\zeta_{1}(t) =(-\sin(t),\cos(t),0),\] \[\zeta_{2}(t) =\biggl{(}\cos(t)\cos\biggl{(}\frac{t}{2}\biggr{)},\sin(t)\cos\biggl{(}\frac{t}{2}\biggr{)},\sin\biggl{(}\frac{t}{2}\biggr{)}\biggr{)},\] \[\zeta_{3}(t) =\biggl{(}-\cos(t)\sin\biggl{(}\frac{t}{2}\biggr{)},-\sin(t)\sin\biggl{(}\frac{t}{2}\biggr{)},\cos\biggl{(}\frac{t}{2}\biggr{)}\biggr{)}.\] Notice that the eigenvector \(\zeta_{3}(t)\) is perpendicular to the plane spanned by \(\zeta_{1}(t)\) and \(\zeta_{2}(t)\). Let us first discuss the case \(\sigma\neq 0\). Observe that the center bundle \(E_{0}^{\sigma}\) at parameter value \(\sigma\) parametrizes locally a Möbius band as a ruled surface since \[(x_{1}(t,v),x_{2}(t,v),x_{3}(t,v))=\gamma_{\sigma}(t)+v\zeta_{2}(t),\] for all \(t\in\mathbb{R}\) and \(v\in[-1,1]\). Theorem 1 provides us for any \(k\geq 1\) a \(2\pi\)-periodic \(C^{k}\)-smooth two-dimensional locally invariant center manifold \(\mathcal{W}_{\text{loc}}^{c}(\Gamma_{\sigma})\) for (24) around \(\Gamma_{\sigma}\) tangent to the center bundle \(E_{0}^{\sigma}\) that is locally diffeomorphic to a Möbius band, see Figure 2. When \(\sigma=0\), it is clear that the Floquet multipliers are all on the unit circle where \(1\) is simple and \(-1\) has algebraic multiplicity \(2\). Hence, the \(2\pi\)-periodic center manifold is \(3\)-dimensional, i.e. the whole state space \(\mathbb{R}^{3}\). Moreover, (24) admits a family of invariant tori \(\{\mathbb{T}_{l}:l\geq 0\}\) at \(\sigma=0\) with major radius \(1\) and minor radius \(r_{l}\) since the function \(V_{l}:\mathbb{R}^{3}\to\mathbb{R}\) defined by \[V_{l}(x_{1},x_{2},x_{3}):=\biggl{(}\sqrt{x_{1}^{2}+x_{2}^{2}}-1\biggr{)}^{2}+x_{3}^{2}-r_{l}^{2}\biggl{(}\sqrt{x_{1}^{2}+x_{2}^{2}}\biggr{)}\] where \[r_{l}^{2}(u):=l+\frac{1}{2}u(u-4)+\ln(u)+\frac{3}{2},\] is constant along the trajectories whose points are contained in \(\mathbb{T}_{l}:=V_{l}^{-1}(\{0\})\). We claim that the family of tori is rooted at the cycle, i.e. \(\mathbb{T}_{0}=\Gamma_{0}\). It is clear that solving \(V_{0}(x_{1},x_{2},x_{3})=0\) is equivalent to \[\frac{1}{2}u^{2}-\ln(u)=\frac{1}{2}-x_{3}^{2},\quad u=\sqrt{x_{1}^{2}+x_{2}^{2}},\] which has only one real solution at \(x_{3}=0\) since \(\frac{1}{2}u^{2}-\ln(u)\geq\frac{1}{2}\) for all \(u>0\). Clearly this solution corresponds to the cycle \(\Gamma_{0}\). Examples of such invariant tori can be found in Figure 2.

Figure 2: The left figure represents several forward orbits on a local \(2\pi\)-periodic two-dimensional center manifold (Möbius band) around \(\Gamma_{\sigma}\) for (24) at parameter value \(\sigma=-1\). The right figure represents two forward orbits on two different invariant tori for (24) at parameter value \(\sigma=0\). The forward orbits are obtained by numerical integration and each orbit is represented by a different color.

**Example 25**.: _The analytic system (16) admits a non-analytic \(2\pi\)-periodic center manifold._ Proof.: The fundamental matrix for the linearization around the origin of (16) reads \[U(t,s)=\begin{pmatrix}1&0\\ 0&e^{s-t}\end{pmatrix}\] for all \((t,s)\in\mathbb{R}^{2}\). The Floquet multipliers are given by \(\lambda_{1}=1\) and \(\lambda_{2}=e^{-2\pi}\) and the center space (at time \(t\)) is given by \(E_{0}(t)=\text{span}\{(1,0)\}\) while the stable space (at time \(t\)) is given by \(E_{-}(t)=\text{span}\{(0,1)\}\).
Hence, the \(x\)-axis corresponds to the center space and so the center manifold can be parametrized by \(y(t)=\mathcal{H}(t,x)\), where \(\mathcal{H}\) is \(2\pi\)-periodic in the first argument and consists solely of nonlinear terms. Because the center manifold is locally invariant, the map \(\mathcal{H}\) must satisfy \[\frac{\partial\mathcal{H}}{\partial t}(t,x)-x^{2}\frac{\partial\mathcal{H}}{\partial x}(t,x)=-\mathcal{H}(t,x)+\sin(t)x^{2}. \tag{25}\] Assume that \(\mathcal{H}\) is analytic on an open neighborhood of \(\mathbb{R}\times\{0\}\); then we can write locally \(\mathcal{H}(t,x)=\sum_{n\geq 2}a_{n}(t)x^{n}\) for \(2\pi\)-periodic functions \(a_{n}\). Substituting this expansion into (25) and comparing terms in \(x^{n}\) shows that the \(2\pi\)-periodic functions \(a_{n}\) must satisfy \[\begin{cases}\dot{a}_{2}(t)+a_{2}(t)=\sin(t),&n=2,\\ \dot{a}_{n}(t)+a_{n}(t)=(n-1)a_{n-1}(t),&n\geq 3.\end{cases}\] Hence, \(a_{2}(t)=\alpha_{2}\sin(t)+\beta_{2}\cos(t)\), where \(\alpha_{2}=\frac{1}{2}\) and \(\beta_{2}=-\frac{1}{2}\), and \[a_{n}(t)=\frac{(n-1)e^{-t}}{e^{2\pi}-1}\bigg{(}\int_{t}^{2\pi}e^{\tau}a_{n-1}(\tau)d\tau+e^{2\pi}\int_{0}^{t}e^{\tau}a_{n-1}(\tau)d\tau\bigg{)}.\] Let us prove by induction for \(n\geq 2\) that \(a_{n}\) is a linear combination of sines and cosines. If \(n=2\), then the result is clear. Assume that the claim holds for \(n-1\) for some \(n\geq 3\). It follows from the induction hypothesis, applying integration by parts twice on both integrals, that \[a_{n}(t) =\frac{(n-1)e^{-t}}{e^{2\pi}-1}\bigg{(}\int_{t}^{2\pi}e^{\tau}(\alpha_{n-1}\sin(\tau)+\beta_{n-1}\cos(\tau))d\tau+e^{2\pi}\int_{0}^{t}e^{\tau}(\alpha_{n-1}\sin(\tau)+\beta_{n-1}\cos(\tau))d\tau\bigg{)}\] \[=\frac{n-1}{2}((\alpha_{n-1}+\beta_{n-1})\sin(t)+(-\alpha_{n-1}+\beta_{n-1})\cos(t)),\] which proves the claim. From the proof of the induction step, we obtain \[\begin{pmatrix}\alpha_{n}\\ \beta_{n}\end{pmatrix}=\frac{n-1}{2}\begin{pmatrix}1&1\\ -1&1\end{pmatrix}\begin{pmatrix}\alpha_{n-1}\\ \beta_{n-1}\end{pmatrix},\quad n\geq 3.\] This is a linear system of difference equations and can be solved explicitly by computing the diagonalization of the associated matrix. The final result reads \[\begin{pmatrix}\alpha_{n}\\ \beta_{n}\end{pmatrix}=\frac{(n-1)!}{2^{\frac{n-1}{2}}}\begin{pmatrix}\cos(\frac{(n-1)\pi}{4})\\ -\sin(\frac{(n-1)\pi}{4})\end{pmatrix}.\] Hence, the \(2\pi\)-periodic functions \(a_{n}\) are given by \[a_{n}(t)=\frac{(n-1)!}{2^{\frac{n-1}{2}}}\bigg{(}\cos(\frac{(n-1)\pi}{4})\sin(t)-\sin(\frac{(n-1)\pi}{4})\cos(t)\bigg{)},\quad n\geq 2.\] Using the angle subtraction formula for the sine yields the center manifold expansion \[\mathcal{H}(t,x)=\sum_{n\geq 2}\frac{(n-1)!}{2^{\frac{n-1}{2}}}\sin(t-\frac{(n-1)\pi}{4})x^{n}. \tag{26}\] We will prove that the radius of convergence \(R(t)\) at time \(t\in\mathbb{R}\) of (26) is zero. Let us first observe that for any \(n\geq 2\) one has \[\sup_{k\geq n}\bigg{(}k!\bigg{|}\sin(t-\frac{k\pi}{4})\bigg{|}\bigg{)}^{\frac{1}{k}}\geq((4n)!|\sin(t)|)^{\frac{1}{4n}},\] for \(t\neq l\pi\) and \(l\in\mathbb{Z}\).
To bound this supremum from below when \(t=l\pi\) for some \(l\in\mathbb{Z}\), choose \(m\in\mathbb{Z}\) with \(l+m+1\) odd such that \(r=2(l-1-m)\geq n\), because then \(|\sin(l\pi-\frac{r\pi}{4})|=|\sin(\frac{\pi}{2}(l+m+1))|=1\) and \[\sup_{k\geq n}\bigg{(}k!\bigg{|}\sin(l\pi-\frac{k\pi}{4})\bigg{|}\bigg{)}^{\frac{1}{k}}\geq(r!)^{\frac{1}{r}}\geq(n!)^{\frac{1}{n}}.\] The Cauchy-Hadamard theorem tells us that \[\frac{1}{R(t)}=\limsup_{n\to\infty}|a_{n}(t)|^{\frac{1}{n}}\geq\frac{1}{\sqrt{2}}\min\{\lim_{n\to\infty}(n!|\sin(t)|)^{\frac{1}{n}},\lim_{n\to\infty}(n!)^{\frac{1}{n}}\}=\infty,\] where in the first argument of the minimum it is assumed that \(t\neq l\pi\) for all \(l\in\mathbb{Z}\). This proves \(R(t)=0\) for all \(t\in\mathbb{R}\), i.e. \(\mathcal{H}\) is not analytic. **Example 26**.: _The analytic system (16) admits a locally (non)-unique family of \(2\pi\)-periodic center manifolds._ Proof.: Recall from Example 25 that the parametrization \(\mathcal{H}\) of the center manifold must satisfy (25) in an open neighborhood of \(\mathbb{R}\times\{0\}\). To construct the map \(\mathcal{H}\) explicitly, let us first introduce (formally) for arbitrary constants \(\alpha,\beta\in\mathbb{R}\) the family of functions \(I_{\alpha,\beta}:\mathbb{R}\times\mathbb{R}\setminus\{0\}\to\mathbb{R}\) as \[I_{\alpha,\beta}(t,x):=-\sqrt{2}e^{-\frac{1}{x}}\bigg{(}\cos\biggl{(}\frac{1}{x}-t\biggr{)}I_{\alpha}^{1}(x)+\sin\biggl{(}\frac{1}{x}-t\biggr{)}I_{\beta}^{2}(x)\biggr{)}, \tag{27}\] where the functions \(I_{\alpha}^{1}\) and \(I_{\beta}^{2}\) are defined by \[I_{\alpha}^{1}(x):=\int_{\alpha}^{x}\frac{e^{\frac{1}{s}}}{s}\sin\biggl{(}\frac{1}{s}+\frac{\pi}{4}\biggr{)}ds,\quad I_{\beta}^{2}(x):=\int_{\beta}^{x}\frac{e^{\frac{1}{s}}}{s}\sin\biggl{(}\frac{1}{s}-\frac{\pi}{4}\biggr{)}ds. \tag{28}\] It turns out that the higher order derivatives of \(I_{\alpha,\beta}(t,\cdot)\) will be important for the construction of the map \(\mathcal{H}\). Therefore, let us first determine all values of \(\alpha\) and \(\beta\) for which (27) is well-defined on \(\mathbb{R}\times(-\infty,0)\). Clearly \(I_{\alpha,\beta}\) is ill-defined on \(\mathbb{R}\times(-\infty,0)\) whenever \(\alpha,\beta>0\) due to the singularities at zero for the functions defined in (28). Hence, we must have that \(\alpha,\beta\leq 0\). We will show that \(\alpha=\beta=0\) are the only values for which \(I_{\alpha,\beta}\) is well-defined on \(\mathbb{R}\times(-\infty,0)\). Notice that \[\biggl{|}e^{-\frac{1}{x}}\cos\biggl{(}\frac{1}{x}-t\biggr{)}I_{0}^{1}(x)\biggr{|}\leq e^{-\frac{1}{x}}\int_{x}^{0}\frac{e^{\frac{1}{s}}}{|s|}ds\to 0, \tag{29}\] as \(x\uparrow 0\) by an application of L'Hôpital's rule. A similar computation for the second term in (27) shows that \(I_{0,0}\) is well-defined. To show that the function \(I_{\alpha,\beta}\) is ill-defined on \(\mathbb{R}\times(-\infty,0)\) for all \(\alpha,\beta<0\), consider for a fixed \(t\in\mathbb{R}\) the sequence \((x_{m})_{m\geq m_{0}}\) defined by \(x_{m}:=\frac{1}{t-m\pi}\), where the integer \(m_{0}\geq 0\) is chosen large enough to guarantee that \(t-m_{0}\pi<0\). Hence, \[e^{-\frac{1}{x_{m}}}\cos\biggl{(}\frac{1}{x_{m}}-t\biggr{)}I_{\alpha}^{1}(x_{m})=(-1)^{m}e^{-\frac{1}{x_{m}}}\int_{\alpha}^{x_{m}}\frac{e^{\frac{1}{s}}}{s}\sin\biggl{(}\frac{1}{s}+\frac{\pi}{4}\biggr{)}ds. \tag{30}\] Because the integrand is continuous and bounded above on \([\alpha,x_{m}]\subset[\alpha,0)\) for large enough \(m\geq m_{0}\), it can be extended from the left continuously at zero such that it attains the value \(M_{\alpha}<\infty\).
If we denote the integral in (30) by \(M_{\alpha}^{m}\), then \(M_{\alpha}^{m}\to M_{\alpha}\) when \(m\to\infty\). Hence, \[e^{-\frac{1}{x_{m}}}\cos\biggl{(}\frac{1}{x_{m}}-t\biggr{)}I_{\alpha}^{1}(x_{m})=e^{-\frac{1}{x_{m}}}(-1)^{m}M_{\alpha}^{m},\] which has no limit as \(m\to\infty\). A similar reasoning shows that the second term in (27) is ill-defined on \(\mathbb{R}\times(-\infty,0)\) when \(\beta<0\) and so \(I_{\alpha,\beta}\) is only well-defined on \(\mathbb{R}\times(-\infty,0)\) whenever \(\alpha=\beta=0\). It can be proven similarly as in (29) that \(I_{\alpha,\beta}\) is well-defined on \(\mathbb{R}\times(0,\infty)\) and that \(\lim_{x\downarrow 0}I_{\alpha,\beta}(\cdot,x)=0\) for all \(\alpha,\beta>0\). Our next goal is to determine the higher order partial derivatives of \(I_{\alpha,\beta}\) with respect to the second variable, evaluated at zero. A straightforward computation shows already that \[\frac{\partial}{\partial x}I_{\alpha,\beta}(t,x)=\frac{\sqrt{2}}{x^{2}}\biggl{(}I_{\alpha,\beta}\biggl{(}t+\frac{\pi}{4},x\biggr{)}-x\sin\Bigl{(}t+\frac{\pi}{4}\Bigr{)}\biggr{)}. \tag{31}\] Let us write \(I_{\alpha,\beta}(t,x)=\sum_{n=0}^{N}b_{n}(t)x^{n}+R_{N}(t,x)\) as a Taylor polynomial where \(R_{N}\) is the remainder for some \(N\in\mathbb{N}\). Substituting this Taylor polynomial into (31), we see that \(b_{0}\) is the zero function, \(b_{1}(t)=\sin(t)\) and \(b_{n}(t)=\frac{n-1}{\sqrt{2}}b_{n-1}(t-\frac{\pi}{4})\) for all \(n=2,\ldots,N\). This recurrence relation shows that \[\frac{\partial^{n}}{\partial x^{n}}I_{\alpha,\beta}(t,0)=n!b_{n}(t)=n!\frac{(n-1)!}{2^{\frac{n-1}{2}}}\sin\biggl{(}t-\frac{(n-1)\pi}{4}\biggr{)}, \tag{32}\] for all \(n=1,\ldots,N\), where \(N\) can be taken arbitrarily large. Notice that \(b_{n}=a_{n}\) with \(a_{n}\) as in Example 25, so the formal Taylor series of \(I_{\alpha,\beta}(t,\cdot)\) at \(x=0\) coincides with the divergent expansion (26). The construction of the map \(\mathcal{H}\) will consist of two parts, namely \(x\in(-\infty,0]\) and \(x\in[0,\infty)\). For the first part, let \(\phi:\mathbb{R}\to\mathbb{R}\) be any \(2\pi\)-periodic differentiable function and observe that the map \(\mathcal{H}_{-}:\mathbb{R}\times(-\infty,0]\to\mathbb{R}\) defined by \[\mathcal{H}_{-}(t,x):=\begin{cases}e^{-\frac{1}{x}}\phi\bigg{(}\frac{1}{x}-t\bigg{)}+I_{0,0}(t,x)-\sin(t)x,&t\in\mathbb{R},\ x\in(-\infty,0),\\ 0,&t\in\mathbb{R},\ x=0,\end{cases}\] satisfies the local invariance equation (25). However, for \(\mathcal{H}_{-}\) to be a parametrization of a center manifold on \(\mathbb{R}\times(-\infty,0]\), we must have that \(\lim_{x\uparrow 0}\mathcal{H}_{-}(\cdot,x)=0\). Since \(\lim_{x\uparrow 0}I_{0,0}(\cdot,x)=0\), this forces \[e^{-\frac{1}{x}}\phi\bigg{(}\frac{1}{x}-t\bigg{)}\to 0,\quad x\uparrow 0.\] We claim \(\phi\) must be the zero function. Consider for fixed \(t\in\mathbb{R}\) and \(r\in\mathbb{Q}\) the sequence \((y_{m})_{m\geq m_{1}}\) defined by \(y_{m}:=\frac{1}{t-r-2m\pi}\) where the integer \(m_{1}\geq 0\) is chosen large enough to guarantee that \(t-r-2m_{1}\pi<0\). The \(2\pi\)-periodicity of \(\phi\) implies that \[e^{-\frac{1}{y_{m}}}\phi\bigg{(}\frac{1}{y_{m}}-t\bigg{)}=\phi(-r)e^{-\frac{1}{y_{m}}}\to\infty,\quad m\to\infty,\] unless \(\phi(-r)=0\). As \(r\in\mathbb{Q}\) is arbitrary we have that \(\phi\) is the zero function on \(\mathbb{Q}\) and because \(\phi\) is (at least) continuous, we have that \(\phi\) is the zero function on \(\mathbb{R}\). Moreover, we obtain from (32) directly that \(\lim_{x\uparrow 0}\frac{\partial}{\partial x}\mathcal{H}_{-}(\cdot,x)=0\) and so \(\mathcal{H}_{-}\) is indeed a parametrization of a center manifold on \(\mathbb{R}\times(-\infty,0]\).
In addition, it follows directly from (31) that \(\mathcal{H}_{-}\) is \(C^{k}\)-smooth on \(\mathbb{R}\times(-\infty,0]\). For the second part, let \(\phi:\mathbb{R}\to\mathbb{R}\) be any \(2\pi\)-periodic \(C^{k}\)-smooth function and observe that for any \(\alpha,\beta>0\) and sufficiently small \(\delta_{k}>0\) the map \(\mathcal{H}_{+,\phi}^{\alpha,\beta}:\mathbb{R}\times[0,\delta_{k})\to\mathbb{R}\) defined by \[\mathcal{H}_{+,\phi}^{\alpha,\beta}(t,x):=\begin{cases}e^{-\frac{1}{x}}\phi\bigg{(}\frac{1}{x}-t\bigg{)}+I_{\alpha,\beta}(t,x)-\sin(t)x,&t\in\mathbb{R},\ x\in(0,\delta_{k}),\\ 0,&t\in\mathbb{R},\ x=0,\end{cases}\] satisfies the local invariance equation (25). Since \(\phi\) is \(C^{k}\)-smooth and \(2\pi\)-periodic, we have for any \(l=0,\ldots,k\) that its \(l\)th derivative is bounded above by some real number \(0<M_{l}<\infty\). Hence, \[\bigg{|}e^{-\frac{1}{x}}\phi\bigg{(}\frac{1}{x}-t\bigg{)}\bigg{|}\leq M_{0}e^{-\frac{1}{x}}\to 0,\quad x\downarrow 0,\] which already proves that \(\lim_{x\downarrow 0}\mathcal{H}_{+,\phi}^{\alpha,\beta}(\cdot,x)=0\). To prove that \(\mathcal{H}_{+,\phi}^{\alpha,\beta}\) is tangent to the center bundle and \(C^{k}\)-smooth (at the origin), note for any \(l=0,\ldots,k\) that \[\bigg{|}\frac{d^{l}}{dx^{l}}\bigg{[}e^{-\frac{1}{x}}\phi\bigg{(}\frac{1}{x}-t\bigg{)}\bigg{]}\bigg{|}\leq e^{-\frac{1}{x}}\sum_{q=0}^{l}p_{q}\bigg{(}\frac{1}{|x|}\bigg{)}\to 0,\] as \(x\downarrow 0\) due to the general Leibniz rule, Faà di Bruno's formula and the fact that \(p_{q}\), dependent on \(M_{0},\ldots,M_{q}\), is a polynomial for all \(q=0,\ldots,l\). Hence, \(\mathcal{H}_{+,\phi}^{\alpha,\beta}\) is a \(C^{k}\)-smooth function on \(\mathbb{R}\times[0,\delta_{k})\) for some \(\delta_{k}>0\). As a consequence of the results derived above, the map \(\mathcal{H}_{\phi}^{\alpha,\beta}:\mathbb{R}\times(-\infty,\delta_{k})\to\mathbb{R}\) defined by \[\mathcal{H}_{\phi}^{\alpha,\beta}(t,x):=\begin{cases}\mathcal{H}_{-}(t,x),&t\in\mathbb{R},\ x\in(-\infty,0],\\ \mathcal{H}_{+,\phi}^{\alpha,\beta}(t,x),&t\in\mathbb{R},\ x\in[0,\delta_{k}),\end{cases} \tag{33}\] parametrizes a locally (non)-unique family of \(2\pi\)-periodic \(C^{k}\)-smooth center manifolds around \(\mathbb{R}\times\{0\}\) of (16). Two different \(2\pi\)-periodic center manifolds for (16) are visualized in Figure 3.

Figure 3: Two different \(C^{\infty}\)-smooth non-analytic \(2\pi\)-periodic center manifolds around \(\mathbb{R}\times\{0\}\) for the analytic system (16) parametrized by the maps \(\mathcal{H}_{0}^{1,1}\) (left) and \(\mathcal{H}_{0}^{2,3}\) (right), respectively.

**Example 27**.: _The analytic system (16) admits a \(2\pi\)-periodic \(C^{\infty}\)-smooth center manifold._ Proof.: The map \(\mathcal{H}_{0}^{1,1}\) from (33) provides us with a \(2\pi\)-periodic \(C^{\infty}\)-smooth center manifold for (16) on \((-\infty,\delta_{\infty})\) with \(\delta_{\infty}>0\) since \(\mathcal{H}_{0}^{1,1}\) is \(C^{\infty}\)-smooth in an open neighborhood of \(\mathbb{R}\times\{0\}\). **Example 28**.: _The analytic system (18) admits for any \(k\geq 0\) a \(2\pi\)-periodic \(C^{k}\)-smooth center manifold, but not a \(2\pi\)-periodic \(C^{\infty}\)-smooth center manifold._ Proof.: The fundamental matrix for the linearization around the origin of (18) reads \[U(t,s)=\begin{pmatrix}1&0&0\\ 0&e^{t-s}&0\\ 0&0&1\end{pmatrix}\] for all \((t,s)\in\mathbb{R}^{2}\). The Floquet multipliers are given by \(\lambda_{1}=1,\lambda_{2}=e^{2\pi}\) and \(\lambda_{3}=1\). The center space (at time \(t\)) is given by \(E_{0}(t)=\mathrm{span}\{(1,0,0),(0,0,1)\}\) while the unstable space (at time \(t\)) is given by \(E_{+}(t)=\mathrm{span}\{(0,1,0)\}\).
Hence, the \(xz\)-plane corresponds to the center space and so the center manifold can be parametrized by \(y(t)=\mathcal{H}(t,x,z)\), where \(\mathcal{H}\) is \(2\pi\)-periodic in the first argument and only consists of nonlinear terms. It follows from Theorem 1 that there exists for any \(k\geq 1\) an open neighborhood \(U_{2k}\) of \(\mathbb{R}\times\{0\}\times\{0\}\) such that the map \(\mathcal{H}\) is \(C^{2k}\)-smooth on \(U_{2k}\). Hence, one can write \[\mathcal{H}(t,x,z)=\sum_{n=2}^{2k}a_{n}(t,z)x^{n}+\mathcal{O}(x^{2k+1}) \tag{34}\] in the neighborhood \(U_{2k}\). Because the center manifold is locally invariant, the map \(\mathcal{H}\) must satisfy \[\frac{\partial\mathcal{H}}{\partial t}(t,x,z)+x(z-x^{2})\frac{\partial\mathcal{H}}{\partial x}(t,x,z)=\mathcal{H}(t,x,z)+(1+\sin(t))x^{2}. \tag{35}\] Substituting (34) into (35) and comparing terms in \(x^{n}\) for \(n=2,\ldots,2k\) shows that the functions \(a_{n}\), which are \(2\pi\)-periodic in the first component, must satisfy \[\begin{cases}\frac{\partial a_{2}}{\partial t}(t,z)+(2z-1)a_{2}(t,z)=1+\sin(t),&n=2,\\ \frac{\partial a_{n}}{\partial t}(t,z)+(nz-1)a_{n}(t,z)=(n-2)a_{n-2}(t,z),&n=3,\ldots,2k.\end{cases} \tag{36}\] Because \(\mathcal{H}\) consists only of nonlinear terms, we can assume that \(a_{1}=0\). Solving the \(2\pi\)-periodic boundary value problem (36) for \(n=2\) yields \[a_{2}(t,z)=\frac{1}{2z-1}+\frac{2z-1}{(2z-1)^{2}+1}\sin(t)-\frac{1}{(2z-1)^{2}+1}\cos(t),\quad z\neq\frac{1}{2}. \tag{37}\] Furthermore, at \(z=\frac{1}{2}\) one verifies easily from (36) that \(a_{2}(\cdot,\frac{1}{2})\) does not admit a \(2\pi\)-periodic solution. Moreover, all odd coefficients \(a_{1},a_{3},\ldots,a_{2k-1}\) are zero and the even coefficients \(a_{2},a_{4},\ldots,a_{2k}\) are recursively given by \[a_{2n}(t,z) =\frac{2(n-1)e^{-(2nz-1)t}}{e^{2\pi(2nz-1)}-1}\bigg{(}\int_{t}^{2\pi}e^{(2nz-1)\tau}a_{2(n-1)}(\tau,z)d\tau\] \[+e^{2\pi(2nz-1)}\int_{0}^{t}e^{(2nz-1)\tau}a_{2(n-1)}(\tau,z)d\tau\bigg{)},\quad z\neq\frac{1}{2n}.\] To obtain a semi-explicit representation for \(a_{2n}\), we will prove by induction on \(n=1,2,\ldots,k\) that \[a_{2n}(t,z)=2^{n-1}(n-1)!\prod_{l=1}^{n}\frac{1}{2lz-1}+\alpha_{n}(z)\sin(t)+\beta_{n}(z)\cos(t),\quad z\neq\frac{1}{2n}, \tag{38}\] where \(\alpha_{n}\) and \(\beta_{n}\) are well-defined rational functions on \(\mathbb{R}\). Clearly, the claim holds for \(n=1\) due to (37). Assume that the claim holds for \(n-1\) for some \(n\geq 2\). Along the same lines as the induction step in Example 25, one derives \[a_{2n}(t,z)=2^{n-1}(n-1)!\prod_{l=1}^{n}\frac{1}{2lz-1} +\frac{2(n-1)[(2nz-1)\alpha_{n-1}(z)+\beta_{n-1}(z)]}{(2nz-1)^{2}+1}\sin(t)\] \[+\frac{2(n-1)[(2nz-1)\beta_{n-1}(z)-\alpha_{n-1}(z)]}{(2nz-1)^{2}+1}\cos(t).\] It remains to show that the coefficients in front of the sine and cosine are well-defined rational functions on \(\mathbb{R}\).
Clearly, \[\begin{pmatrix}\alpha_{n}(z)\\ \beta_{n}(z)\end{pmatrix}=\frac{2(n-1)}{(2nz-1)^{2}+1}\begin{pmatrix}2nz-1&1\\ -1&2nz-1\end{pmatrix}\begin{pmatrix}\alpha_{n-1}(z)\\ \beta_{n-1}(z)\end{pmatrix},\quad n\geq 3,\] with initial condition \[\alpha_{2}(z)=\frac{2z-1}{(2z-1)^{2}+1},\quad\beta_{2}(z)=\frac{1}{(2z-1)^{2}+1}.\] Solving this linear system of difference equations semi-explicitly yields \[\begin{pmatrix}\alpha_{n}(z)\\ \beta_{n}(z)\end{pmatrix}=2^{n-1}(n-1)!\bigg{[}\prod_{l=2}^{n}\frac{1}{(2lz-1)^{2}+1}\begin{pmatrix}2lz-1&1\\ -1&2lz-1\end{pmatrix}\bigg{]}\begin{pmatrix}\alpha_{2}(z)\\ \beta_{2}(z)\end{pmatrix}.\] Hence, \(\alpha_{n}\) and \(\beta_{n}\) are both rational functions that are well-defined on \(\mathbb{R}\) since \((2lz-1)^{2}+1\geq 1>0\) for all \(l=1,\ldots,n\). This concludes the induction step. On the other hand, if \(z=\frac{1}{2n}\), then one can verify rather easily from (36) that \(a_{2n}(\cdot,\frac{1}{2n})\) has no \(2\pi\)-periodic solution. Using (38) in combination with (34), we see that \(\mathcal{H}(t,x,\cdot)\) is not \(C^{2k}\)-smooth on \((-\frac{1}{2k},\frac{1}{2k}]\) since \(a_{2k}(\cdot,\frac{1}{2k})\) is simply undefined. Suppose now that \(\mathcal{H}\) is \(C^{\infty}\)-smooth on \(\mathbb{R}\times\{0\}\times\{0\}\); then for fixed \(t\in\mathbb{R}\) and non-zero \(x\in\mathbb{R}\) there exists an \(\varepsilon>0\) such that \(\mathcal{H}(t,x,\cdot)\) is \(C^{\infty}\)-smooth on \((-\varepsilon,\varepsilon)\). Now, if \(k\geq 1\) is an integer that satisfies \(k>\frac{1}{2\varepsilon}\), then \(\mathcal{H}(t,x,\cdot)\) is not \(C^{2k}\)-smooth on \((-\varepsilon,\varepsilon)\). This contradicts the assumption that (18) admits a \(C^{\infty}\)-smooth \(2\pi\)-periodic center manifold at the origin. To illustrate the cascade of singularities of the periodic center manifold towards the origin, second- and fourth-order approximations at different time steps of \(\mathcal{H}\) are presented in Figure 4.

Figure 4: In \((a)\) a second-order approximation of \(\mathcal{H}(0,\cdot,\cdot)\) and in \((b)\) a second-order approximation of \(\mathcal{H}(\pi,\cdot,\cdot)\). In \((c)\) a fourth-order approximation of \(\mathcal{H}(0,\cdot,\cdot)\) and in \((d)\) a fourth-order approximation of \(\mathcal{H}(\pi,\cdot,\cdot)\). The red vertical planes indicate the singularities at \(z=\frac{1}{4}\) and \(z=\frac{1}{2}\).

## 6 Conclusion and outlook

We have proven the existence of a periodic smooth locally invariant center manifold near a nonhyperbolic cycle in the setting of finite-dimensional ordinary differential equations. Our results are based on rather simple consequences of Floquet theory in combination with a fixed point argument on the easily available variation of constants formula for periodic (nonlinear) ODEs. In addition, we have provided several examples of (non)-unique, (non)-\(C^{\infty}\)-smooth and (non)-analytic periodic center manifolds to illustrate that periodic center manifolds admit similar interesting qualitative properties as center manifolds for equilibria. Although the illustrations in Section 5 are very insightful regarding the nature of periodic center manifolds, it is not clear under which conditions a periodic center manifold is unique, non-unique or locally (non)-unique. To answer the first question, we believe that one must generalize techniques from [53] towards periodic center manifolds to state and prove a similar result as in [53, Theorem 3.2]. Moreover, if a periodic center manifold is not uniquely determined, how much can two periodic center manifolds differ from each other? Such results have already been established in [53, Section 4] for center manifolds for equilibria, but the question remains unanswered for periodic center manifolds.
However, we have already seen in Example 26 that periodic center manifolds may differ from each other by a factor of \(e^{-\frac{1}{x}}\phi(\frac{1}{x}-t)\), where \(\phi\) is any \(T\)-periodic (at least) differentiable function. Furthermore, it is not clear under which conditions a \(C^{\infty}\)-smooth or analytic periodic center manifold may exist, while this question is addressed and answered in [53, Section 5 and 6] for center manifolds for equilibria. In particular, recall from Example 25 that the \(C^{\infty}\)-smooth periodic center manifold is not analytic for all \(t\in\mathbb{R}\). However, is it possible to construct an example where a \(C^{\infty}\)-smooth periodic center manifold may change periodically from non-analytic to analytic? Or is there a possibility that the (non)-analyticity of a periodic center manifold is time-independent?

## Acknowledgements

The authors would like to thank Prof. Renato Huzak (Hasselt University), Prof. Peter De Maesschalck (Hasselt University) and Dr. Heinz Hansmann (Utrecht University) for helpful discussions and suggestions.

## Appendix A Smoothness of the center manifold

In this appendix, we will prove that the map \(\mathcal{C}\) inherits the same finite order of smoothness as the nonlinearity \(R\). Our results are based on the theory of contraction on scales of Banach spaces, see [24, Section IX.6 and Appendix IV] and [59, 35, 36, 11, 45] for applications of this theory to ordinary differential equations and (mixed) functional differential equations. Our arguments here are based on the strategy developed in the mentioned references and closely follow [45]. To prove additional smoothness of the map \(\mathcal{C}\), let us first observe that we are only interested in pairs \((s,y_{0})\in E_{0}\) due to (12). Therefore, let us incorporate the starting time \(s\) inside the domain of the fixed point operator \(\mathcal{G}_{s}^{\eta}\) from Section 3.3. Hence, define for \(\eta\in(0,\min\{-a,b\})\) and sufficiently small \(\delta>0\) the map \(\mathcal{G}^{\eta}\) by \[\mathrm{BC}_{s}^{\eta}\times E_{0}\ni(u,s,y_{0})\mapsto U_{s}^{\eta}y_{0}+\mathcal{K}_{s}^{\eta}(\tilde{R}_{\delta}(u))\in\mathrm{BC}_{s}^{\eta},\] and following the same steps from Section 3.3, we have that \(\mathcal{G}^{\eta}(\cdot,s,y_{0})\) has a unique fixed point \(\hat{u}^{\eta}:E_{0}\to\mathrm{BC}_{s}^{\eta}\) such that \(\hat{u}^{\eta}(s,\cdot)\) is globally Lipschitz and satisfies \(\hat{u}^{\eta}(s,0)=0\) for all \(s\in\mathbb{R}\). It turns out that the space \(\mathrm{BC}_{s}^{\eta}\) is not really suited to increase smoothness of the center manifold. The main idea is to work with another \(\eta\)-exponent that makes a trade-off between ensuring smoothness while not losing the contraction property. To make this construction, choose an interval \([\eta_{-},\eta_{+}]\subset(0,\min\{-a,b\})\) such that \(k\eta_{-}<\eta_{+}\) and \(\delta>0\) small enough to guarantee that \[L_{\delta}\|\mathcal{K}_{s}^{\eta}\|_{\eta,s}<\frac{1}{4},\quad\forall\eta\in[\eta_{-},\eta_{+}],\ s\in\mathbb{R}, \tag{39}\] which is possible since \(L_{\delta}\to 0\) as \(\delta\downarrow 0\), as proven in Proposition 10.
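For orientation, note what the condition \(k\eta_{-}<\eta_{+}\) buys us: each differentiation of the fixed point will cost one factor \(\eta_{-}\) in the exponential weight, and Proposition 33 below yields \(C^{l}\)-smoothness of \(\mathcal{J}_{s}^{\eta,\eta_{-}}\circ\hat{u}^{\eta_{-}}\) precisely for \[\eta\in(l\eta_{-},\eta_{+}],\qquad l=1,\ldots,k,\] a range which is non-empty for every \(l\leq k\) exactly because \(k\eta_{-}<\eta_{+}\).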
From this construction, it is clear that we would like to switch back and forth between, for example, the fixed points \(\hat{u}^{\eta}(s,y_{0})\) and \(\hat{u}^{\eta_{-}}(s,y_{0})\). Therefore, introduce for any \(0<\eta_{1}\leq\eta_{2}<\min\{-a,b\}\) the linear embedding \(\mathcal{J}_{s}^{\eta_{2},\eta_{1}}:\mathrm{BC}_{s}^{\eta_{1}}\hookrightarrow\mathrm{BC}_{s}^{\eta_{2}}\) and notice that this map is bounded since \[\|u\|_{\eta_{2},s}=\sup_{t\in\mathbb{R}}e^{-\eta_{2}|t-s|}\|u(t)\|\leq\sup_{t\in\mathbb{R}}e^{-\eta_{1}|t-s|}\|u(t)\|=\|u\|_{\eta_{1},s}<\infty,\] for any \(u\in\mathrm{BC}_{s}^{\eta_{1}}\). Hence, \(\mathcal{J}_{s}^{\eta_{2},\eta_{1}}\) is \(C^{\infty}\)-smooth and \(\mathrm{BC}_{s}^{\eta_{1}}\) can be considered as a subspace of \(\mathrm{BC}_{s}^{\eta_{2}}\). The following lemma shows that we can switch back and forth between the fixed points of interest. **Lemma 29**.: _Let \(0<\eta_{1}\leq\eta_{2}<\min\{-a,b\}\) and \(s\in\mathbb{R}\). Assume that \(\hat{u}^{\eta_{1}}(s,y_{0})\) is the fixed point of \(\mathcal{G}^{\eta_{1}}(\cdot,s,y_{0})\) for some \((s,y_{0})\in E_{0}\). Then \(\hat{u}^{\eta_{2}}(s,y_{0})=\mathcal{J}_{s}^{\eta_{2},\eta_{1}}\hat{u}^{\eta_{1}}(s,y_{0})\)._ Proof.: Note that the definition of the fixed point operator does not depend explicitly on the choice of \(\eta\in(0,\min\{-a,b\})\), since \(\mathcal{K}_{s}^{\eta_{1}}u=\mathcal{K}_{s}^{\eta_{2}}u\) for all \(u\in\mathrm{BC}_{s}^{\eta_{1}}\). Then by uniqueness of the fixed point and since \(\mathrm{BC}_{s}^{\eta_{1}}\) is continuously embedded in \(\mathrm{BC}_{s}^{\eta_{2}}\), it is clear that \(\hat{u}^{\eta_{2}}(s,y_{0})=\mathcal{J}_{s}^{\eta_{2},\eta_{1}}\hat{u}^{\eta_{1}}(s,y_{0})\). A first step in increasing smoothness of the center manifold is to show that \(\tilde{R}_{\delta}\) is sufficiently smooth. Recall from Section 3.2 that \(R_{\delta}\) is \(C^{k}\)-smooth. Consider now for any pair of integers \(p,q\geq 0\) with \(p+q\leq k\) the map defined by \[\tilde{R}_{\delta}^{(p,q)}(u)(v_{1},\ldots,v_{q})(t):=D_{1}^{p}D_{2}^{q}R_{\delta}(t,u(t))(v_{1}(t),\ldots,v_{q}(t)).\] Here \(\mathcal{L}^{q}(Y,Z)\) denotes the space of \(q\)-linear mappings from \(Y^{q}:=Y\times\cdots\times Y\) into \(Z\) for Banach spaces \(Y\) and \(Z\). The following three lemmas, adapted from the literature towards the finite-dimensional ODE-setting, will be crucial in the proof of Proposition 33. **Lemma 30** ([24, Lemma XII.7.3] and [36, Proposition 8.1]).: _Let \(p,q\geq 0\) be integers with \(p+q\leq k\) and \(\eta\geq q\mu>0\). Then for any \(u\in C(\mathbb{R},\mathbb{R}^{n})\) we have \(\tilde{R}_{\delta}^{(p,q)}(u)\in\mathcal{L}^{q}(\mathrm{BC}_{s}^{\mu},\mathrm{BC}_{s}^{\eta})\), where the norm is bounded by_ \[\|\tilde{R}_{\delta}^{(p,q)}(u)\|\leq\sup_{t\in\mathbb{R}}e^{-(\eta-q\mu)|t-s|}\|D_{1}^{p}D_{2}^{q}R_{\delta}(t,u(t))\|<\infty.\] _Furthermore, consider any \(0\leq l\leq k-(p+q)\) and \(\sigma>0\). If \(\eta>q\mu+l\sigma\), then the map \(u\mapsto\tilde{R}_{\delta}^{(p,q)}(u)\) from \(\mathrm{BC}_{s}^{\sigma}\) into \(\mathcal{L}^{q}(\mathrm{BC}_{s}^{\mu},\mathrm{BC}_{s}^{\eta})\) is \(C^{l}\)-smooth with \(D^{l}\tilde{R}_{\delta}^{(p,q)}=\tilde{R}_{\delta}^{(p,q+l)}\)._ **Lemma 31** ([24, Lemma XII.7.6] and [36, Proposition 8.2]).: _Let \(p,q\geq 0\) be integers with \(p+q<k\) and let \(\eta>q\mu+\sigma\) for some \(\mu,\sigma>0\). Consider a map \(\Phi\in C^{1}(E_{0},\mathrm{BC}_{s}^{\sigma})\).
_Then the map \(\tilde{R}_{\delta}^{(p,q)}\circ\Phi:E_{0}\to\mathcal{L}^{q}(\mathrm{BC}_{s}^{\mu},\mathrm{BC}_{s}^{\eta})\) is \(C^{1}\)-smooth with_ \[D(\tilde{R}_{\delta}^{(p,q)}\circ\Phi)(s_{0},y_{0})(v_{1},\ldots,v_{q},(s_{1},y_{1}))=\tilde{R}_{\delta}^{(p,q+1)}(\Phi(s_{0},y_{0}))(v_{1},\ldots,v_{q},D\Phi(s_{0},y_{0})(s_{1},y_{1})).\] **Lemma 32** ([24, Lemma XII.6.6 and XII.6.7]).: _Let \(Y_{0},\,Y,\,Y_{1},\) and \(\Lambda\) be Banach spaces with continuous embeddings \(J_{0}:Y_{0}\hookrightarrow Y\) and \(J:Y\hookrightarrow Y_{1}\). Consider the fixed point problem \(y=f(y,\lambda)\) for \(f:Y\times\Lambda\to Y\). Suppose that the following conditions hold._ 1. _The function_ \(g:Y_{0}\times\Lambda\to Y_{1}\) _defined by_ \(g(y_{0},\lambda):=Jf(J_{0}y_{0},\lambda)\) _is of the class_ \(C^{1}\) _and there exist mappings_ \(f^{(1)}:J_{0}Y_{0}\times\Lambda\to\mathcal{L}(Y)\) _and_ \(f_{1}^{(1)}:J_{0}Y_{0}\times\Lambda\to\mathcal{L}(Y_{1})\) _such that_ \(D_{1}g(y_{0},\lambda)\xi=Jf^{(1)}(J_{0}y_{0},\lambda)J_{0}\xi\) _for all_ \((y_{0},\lambda,\xi)\in Y_{0}\times\Lambda\times Y_{0}\) _and_ \(Jf^{(1)}(J_{0}y_{0},\lambda)y=f_{1}^{(1)}(J_{0}y_{0},\lambda)Jy\) _for all_ \((y_{0},\lambda,y)\in Y_{0}\times\Lambda\times Y\)_._ 2. _There exists a_ \(\kappa\in[0,1)\) _such that for all_ \(\lambda\in\Lambda\) _the map_ \(f(\cdot,\lambda):Y\to Y\) _is Lipschitz continuous with Lipschitz constant_ \(\kappa\)_, independent of_ \(\lambda\)_. Furthermore, for any_ \(\lambda\in\Lambda\) _the maps_ \(f^{(1)}(\cdot,\lambda)\) _and_ \(f_{1}^{(1)}(\cdot,\lambda)\) _are uniformly bounded by_ \(\kappa\)_._ 3. _Under the previous condition, the unique fixed point_ \(\Psi:\Lambda\to Y\) _satisfies_ \(\Psi(\lambda)=f(\Psi(\lambda),\lambda)\) _and can be written as_ \(\Psi=J_{0}\circ\Phi\) _for some continuous_ \(\Phi:\Lambda\to Y_{0}\)_._ 4. _The function_ \(f_{0}:Y_{0}\times\Lambda\to Y\) _defined by_ \(f_{0}(y_{0},\lambda)=f(J_{0}y_{0},\lambda)\) _has continuous partial derivative_ \(D_{2}f_{0}:Y_{0}\times\Lambda\to\mathcal{L}(\Lambda,Y)\)_._ 5. _The mapping_ \(Y_{0}\times\Lambda\ni(y,\lambda)\mapsto J\circ f^{(1)}(J_{0}y,\lambda)\in\mathcal{L}(Y,Y_{1})\) _is continuous._ _Then the map \(J\circ\Psi\) is of the class \(C^{1}\) and \(D(J\circ\Psi)(\lambda)=J\circ\mathcal{A}(\lambda)\) for all \(\lambda\in\Lambda\), where \(A=\mathcal{A}(\lambda)\in\mathcal{L}(\Lambda,Y)\) is the unique solution of the fixed point equation \(A=f^{(1)}(\Psi(\lambda),\lambda)A+D_{2}f_{0}(\Psi(\lambda),\lambda)\)._ Now we can prove the main results of this appendix. **Proposition 33**.: _For each \(l\in\{1,\ldots,k\}\) and \(\eta\in(l\eta_{-},\eta_{+}]\subset(0,\min\{-a,b\})\), the map \(\mathcal{J}_{s}^{\eta,\eta_{-}}\circ\hat{u}^{\eta_{-}}:E_{0}\to\mathrm{BC}_{s}^{\eta}\) is \(C^{l}\)-smooth provided that \(\delta>0\) is sufficiently small._ Proof.: To begin, we choose \(\delta>0\) small enough so that (39) holds. We prove the assertion by induction on \(l\). Let \(l=k=1\) and \(\eta\in(\eta_{-},\eta_{+}]\) be given. We show that Lemma 32 applies with the Banach spaces \(Y_{0}=Y=\mathrm{BC}_{s}^{\eta_{-}},Y_{1}=\mathrm{BC}_{s}^{\eta}\) and \(\Lambda=E_{0}\), and operators \[f(u,s,y_{0}) =\mathcal{G}^{\eta_{-}}(u,s,y_{0}),\] \[f^{(1)}(u,s,y_{0}) =\mathcal{K}_{s}^{\eta_{-}}\circ\tilde{R}_{\delta}^{(0,1)}(u),\] \[f_{1}^{(1)}(u,s,y_{0}) =\mathcal{K}_{s}^{\eta}\circ\tilde{R}_{\delta}^{(0,1)}(u),\] with embeddings \(J=\mathcal{J}_{s}^{\eta,\eta_{-}}\) and \(J_{0}\) the identity map.
In the context of Lemma 32, the map \(g\) is given by \(\mathcal{G}^{\eta}\) due to the linearity of the embedding \(J\). Because \((s,y_{0})\mapsto U(\cdot,s)y_{0}\), \(s\mapsto\mathcal{K}_{s}^{\eta}\) and \(u\mapsto\tilde{R}_{\delta}(u)\) are \(C^{1}\)-smooth (Section 2, Proposition 8 and Lemma 30), the map \(g\) is \(C^{1}\)-smooth and one can easily verify the additional equalities. The second condition follows from (39) and the fact that the Lipschitz constant is independent of \(s\in\mathbb{R}\) due to Proposition 8. The third condition follows from the fact that \(\Psi\) is given by \(\hat{u}^{\eta_{-}}\) and therefore well-defined due to Proposition 14. The mentioned results show that the fourth condition is satisfied. It follows from Proposition 8 and Lemma 30 that the fifth condition is satisfied as well. Hence, we conclude that the map \(\mathcal{J}_{s}^{\eta,\eta_{-}}\circ\hat{u}^{\eta_{-}}\) is of the class \(C^{1}\) and that \(D(\mathcal{J}_{s}^{\eta,\eta_{-}}\circ\hat{u}^{\eta_{-}})=\mathcal{J}_{s}^{\eta,\eta_{-}}\circ\hat{u}^{\eta_{-},(1)}\in\mathcal{L}(E_{0},\mathrm{BC}_{s}^{\eta})\), where \(\hat{u}^{\eta_{-},(1)}(s,y_{0})\) is the unique solution of \[w^{(1)}=\mathcal{K}_{s}^{\eta_{-}}\circ\tilde{R}_{\delta}^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_{0}))w^{(1)}+U_{s}^{\eta_{-}}=:F_{\eta_{-}}^{(1)}(w^{(1)},s,y_{0})\] in the space \(\mathcal{L}(E_{0},\mathrm{BC}_{s}^{\eta_{-}})\). Here \[F_{\eta_{-}}^{(1)}:\mathcal{L}(E_{0},\mathrm{BC}_{s}^{\eta_{-}})\times E_{0}\to\mathcal{L}(E_{0},\mathrm{BC}_{s}^{\eta_{-}})\] and notice that \(F_{\eta_{-}}^{(1)}(\cdot,s,y_{0})\) is a uniform contraction (Lemma 30), which proves the uniqueness of the fixed point. To specify the induction hypothesis, consider any integer \(1\leq l<k\) and suppose for all \(1\leq q\leq l\) and all \(\eta\in(q\eta_{-},\eta_{+}]\) that the map \(\mathcal{J}_{s}^{\eta,\eta_{-}}\circ\hat{u}^{\eta_{-}}\) is \(C^{q}\)-smooth with \(D^{q}(\mathcal{J}_{s}^{\eta,\eta_{-}}\circ\hat{u}^{\eta_{-}})=\mathcal{J}_{s}^{\eta,\eta_{-}}\circ\hat{u}^{\eta_{-},(q)}\in\mathcal{L}^{q}(E_{0},\mathrm{BC}_{s}^{\eta})\), where \(\hat{u}^{\eta_{-},(q)}\) is the unique solution of \[w^{(q)}=\mathcal{K}_{s}^{q\eta_{-}}\circ\tilde{R}_{\delta}^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_{0}))w^{(q)}+H_{\eta_{-}}^{(q)}(s,y_{0})=:F_{q\eta_{-}}^{(q)}(w^{(q)},s,y_{0})\] in the space \(\mathcal{L}^{q}(E_{0},\mathrm{BC}_{s}^{q\eta_{-}})\). Here \(H_{\eta_{-}}^{(1)}(s,y_{0})=U_{s}^{\eta_{-}}\) and for \(\nu\in[\eta_{-},\eta_{+}]\) and \(l\geq 2\) we have that \(H_{\nu}^{(l)}(s,y_{0})\) is a finite sum of terms of the form \[\mathcal{K}_{s}^{l\nu}\circ\tilde{R}_{\delta}^{(0,q)}(\hat{u}^{\eta_{-}}(s,y_{0}))(\hat{u}^{\eta_{-},(r_{1})}(s,y_{0}),\ldots,\hat{u}^{\eta_{-},(r_{q})}(s,y_{0})),\] with \(2\leq q\leq l\) and \(1\leq r_{i}<l\) for \(i=1,\ldots,q\) such that \(r_{1}+\cdots+r_{q}=l\). Here \(F_{l\eta}^{(l)}:\mathcal{L}^{l}(E_{0},\mathrm{BC}_{s}^{l\eta})\times E_{0}\to\mathcal{L}^{l}(E_{0},\mathrm{BC}_{s}^{l\eta})\) is a uniform contraction (Lemma 30) for any \(\eta\in[\eta_{-},\eta_{+}]\), which guarantees the uniqueness of the fixed point. For the induction step, fix some \(\eta\in((l+1)\eta_{-},\eta_{+}]\) and choose \(\sigma,\mu>0\) such that \(\eta_{-}<\sigma<(l+1)\sigma<\mu<\eta\).
We show that Lemma 32 applies with the Banach spaces \(Y_{0}=\mathcal{L}^{l}(E_{0},\mathrm{BC}_{s}^{l\sigma}),Y=\mathcal{L}^{l}(E_{0},\mathrm{BC}_{s}^{\mu}),Y_{1}=\mathcal{L}^{l}(E_{0},\mathrm{BC}_{s}^{\eta})\) and \(\Lambda=E_{0}\), and operators \[f(u,s,y_{0}) =\mathcal{K}_{s}^{\mu}\circ\tilde{R}_{\delta}^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_{0}))u+H_{\mu/l}^{(l)}(s,y_{0}),\] \[f^{(1)}(u,s,y_{0}) =\mathcal{K}_{s}^{\mu}\circ\tilde{R}_{\delta}^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_{0}))\in\mathcal{L}(\mathcal{L}^{l}(E_{0},\mathrm{BC}_{s}^{\mu})),\] \[f_{1}^{(1)}(u,s,y_{0}) =\mathcal{K}_{s}^{\eta}\circ\tilde{R}_{\delta}^{(0,1)}(\hat{u}^{\eta_{-}}(s,y_{0}))\in\mathcal{L}(\mathcal{L}^{l}(E_{0},\mathrm{BC}_{s}^{\eta})).\] To verify the first condition, we have to check that \(g:\mathcal{L}^{l}(E_{0},\mathrm{BC}^{l\sigma}_{s})\times E_{0}\to\mathcal{L}^{l}(E_{0},\mathrm{BC}^{\eta}_{s})\) given by \[g(u,s,y_{0})=\mathcal{K}^{\eta}_{s}\circ\tilde{R}^{(0,1)}_{\delta}(\hat{u}^{\eta_{-}}(s,y_{0}))u+\mathcal{J}^{\eta,\mu}_{s}\circ H^{(l)}_{\mu/l}(s,y_{0})\] is \(C^{1}\)-smooth, where now \(\mathcal{J}^{\eta,\mu}_{s}:\mathcal{L}^{l}(E_{0},\mathrm{BC}^{\mu}_{s})\hookrightarrow\mathcal{L}^{l}(E_{0},\mathrm{BC}^{\eta}_{s})\) is the continuous embedding. Clearly, \(g\) is \(C^{1}\)-smooth in the first variable since it is linear. For the second variable, notice that the map \((s,y_{0})\mapsto\mathcal{K}^{\eta}_{s}\circ\tilde{R}^{(0,1)}_{\delta}(\hat{u}^{\eta_{-}}(s,y_{0}))u\) is \(C^{1}\)-smooth due to Lemma 31 with \(\mu>(l+1)\sigma\) and the \(C^{1}\)-smoothness of \((s,y_{0})\mapsto\mathcal{J}^{\sigma,\eta_{-}}_{s}\hat{u}^{\eta_{-}}(s,y_{0})\) for any \(\sigma\geq\eta_{-}\). For the \(C^{1}\)-smoothness of the map \(H^{(l)}_{\mu/l}\), we get differentiability from Lemma 31 and so we have that the derivative of this map is a finite sum of terms of the form \[\mathcal{K}^{\mu}_{s}\circ\tilde{R}^{(0,q+1)}_{\delta}(\hat{u}^{\eta_{-}}(s,y_{0}))(\hat{u}^{\eta_{-},(r_{1})}(s,y_{0}),\dots,\hat{u}^{\eta_{-},(r_{q})}(s,y_{0}))\] \[+\sum_{j=1}^{q}\mathcal{K}^{\mu}_{s}\circ\tilde{R}^{(0,q)}_{\delta}(\hat{u}^{\eta_{-}}(s,y_{0}))(\hat{u}^{\eta_{-},(r_{1})}(s,y_{0}),\dots,\hat{u}^{\eta_{-},(r_{j}+1)}(s,y_{0}),\dots,\hat{u}^{\eta_{-},(r_{q})}(s,y_{0}))\] and each \(\hat{u}^{\eta_{-},(r_{j})}(s,y_{0})\) is a map from \(E_{0}\) into \(\mathrm{BC}^{j\sigma}_{s}\) for \(j=1,\dots,q\). An application of Lemma 30 with \(\mu>(l+1)\sigma\) ensures the continuity of \(DH^{(l)}_{\mu/l}(s,y_{0})\) and consequently that of \(\mathcal{J}^{\eta,\mu}_{s}DH^{(l)}_{\mu/l}(s,y_{0})\). The remaining calculations from the first condition are then easily checked, and condition four can be proven similarly. The Lipschitz condition and boundedness for the second condition follow from the choice of \(\delta>0\) made at the beginning of the proof and the contractivity of \(H^{(l)}_{\mu/l}\) described above. To prove the third condition, observe that one can write \[\mathcal{K}^{\eta}_{s}\circ\tilde{R}^{(0,1)}_{\delta}(\hat{u}^{\eta_{-}}(s,y_{0}))=\mathcal{J}^{\eta,\mu}_{s}\mathcal{K}^{\mu}_{s}\circ\tilde{R}^{(0,1)}_{\delta}(\hat{u}^{\eta_{-}}(s,y_{0}))\] and apply Lemma 30 together with the \(C^{1}\)-smoothness of \(\hat{u}^{\eta_{-}}\) to obtain the continuity of \((s,y_{0})\mapsto\tilde{R}^{(0,1)}_{\delta}(\hat{u}^{\eta_{-}}(s,y_{0}))\).
This also proves the fifth condition, and so we conclude that \(\hat{u}^{\eta_{-},(l)}:E_{0}\to\mathcal{L}^{l}(E_{0},\mathrm{BC}^{\eta}_{s})\) is of the class \(C^{1}\) with derivative \(\hat{u}^{\eta_{-},(l+1)}=D\hat{u}^{\eta_{-},(l)}\in\mathcal{L}^{l+1}(E_{0},\mathrm{BC}^{\eta}_{s})\) that is the unique solution of \[w^{(l+1)}=\mathcal{K}^{\mu}_{s}\circ\tilde{R}^{(0,1)}_{\delta}(\hat{u}^{\eta_{-}}(s,y_{0}))w^{(l+1)}+H^{(l+1)}_{\mu/(l+1)}(s,y_{0}),\] where \[H^{(l+1)}_{\mu/(l+1)}(s,y_{0})=\mathcal{K}^{\mu}_{s}\circ\tilde{R}^{(0,2)}_{\delta}(\hat{u}^{\eta_{-}}(s,y_{0}))(\hat{u}^{\eta_{-},(l)}(s,y_{0}),\hat{u}^{\eta_{-},(1)}(s,y_{0}))+DH^{(l)}_{\mu/l}(s,y_{0}).\] A similar argument as in the proof of the \(l=k=1\) case shows that the unique fixed point \(\hat{u}^{\eta_{-},(l+1)}\) is also contained in \(\mathcal{L}^{l+1}(E_{0},\mathrm{BC}^{(l+1)\eta_{-}}_{s})\). Hence, the map \(\mathcal{J}^{\eta,\eta_{-}}_{s}\circ\hat{u}^{\eta_{-}}\) is of the class \(C^{l+1}\) provided that \(\eta\in((l+1)\eta_{-},\eta_{+}]\) and \(\delta>0\) is sufficiently small. **Theorem 34**.: _The map \(\mathcal{C}:E_{0}\to\mathbb{R}^{n}\) from (12) is \(C^{k}\)-smooth._ Proof.: Let \(\eta\in[\eta_{-},\eta_{+}]\subset(0,\min\{-a,b\})\) be such that \(k\eta_{-}<\eta_{+}\). Let \(\mathrm{ev}_{s}\) denote the bounded linear evaluation operator (at time \(s\)) defined in the proof of Proposition 20. Recall that \(\mathcal{C}(s,y_{0})=\hat{u}^{\eta}(s,y_{0})(s)=\mathrm{ev}_{s}(\hat{u}^{\eta}(s,y_{0}))\), and so \(\mathcal{C}(s,y_{0})=\mathrm{ev}_{s}(\mathcal{J}^{\eta,\eta_{-}}_{s}\hat{u}^{\eta_{-}}(s,y_{0}))\). The result follows now from Proposition 33. To study in Proposition 20 the tangent bundle of the center manifold, we have to use the partial derivative of the map \(\mathcal{C}\) in the second component. The following result shows that such (higher order) partial derivatives are uniformly Lipschitz continuous. **Corollary 35**.: _For each \(l\in\{0,\dots,k\}\), there exists a constant \(L(l)>0\) such that_ \[\|D^{l}_{2}\mathcal{C}(s,y_{0})-D^{l}_{2}\mathcal{C}(s,z_{0})\|\leq L(l)\|y_{0}-z_{0}\|\] _for all \((s,y_{0}),(s,z_{0})\in E_{0}\)._ Proof.: For \(l=0\), the result is already proven in Lemma 16. Now let \(l\in\{1,\dots,k\}\). Then, from the proof of Proposition 33 we see that \(\hat{u}^{\eta_{-},(l)}\) is the unique solution of a fixed point problem, where the right-hand side is a contraction with a Lipschitz constant \(L(l)\) independent of \(s\). Using the same strategy as in the proof of Lemma 16, we obtain the desired result.
2309.16910
Magnetic Sublevel Independent Magic and Tune-out Wavelengths of the Alkaline-earth Ions
The light shift of a state due to an applied laser in an atomic system vanishes at the tune-out wavelengths ($\lambda_T$s). Similarly, the differential light shift of a transition vanishes at the magic wavelengths ($\lambda_{magic}$s). In many earlier studies, values of electric dipole (E1) matrix elements were inferred precisely by combining measurements of $\lambda_{magic}$ with their calculated values. Similarly, the $\lambda_T$ values of an atomic state can be used to infer E1 matrix elements, as they involve dynamic electric dipole polarizability ($\alpha$) values of only one state, whereas the $\lambda_{magic}$ values involve $\alpha$ values of two states. However, both the $\lambda_T$ and $\lambda_{magic}$ values depend on the angular momenta of the states and their magnetic components ($M$). Here, we report $\lambda_T$ and $\lambda_{magic}$ values of many $S_{1/2}$ and $D_{3/2,5/2}$ states, and of transitions among these states, of the Mg$^{+}$, Ca$^{+}$, Sr$^{+}$ and Ba$^{+}$ ions that are independent of the $M$ values. By measuring these wavelengths in a special set-up, as discussed in the paper, it could be possible to infer a large number of E1 matrix elements of the above ions accurately.
Jyoti, Harpreet Kaur, Bindiya Arora, B. K. Sahoo
2023-09-29T00:31:06Z
http://arxiv.org/abs/2309.16910v1
# Magnetic Sublevel Independent Magic and Tune-out Wavelengths of the Alkaline-earth Ions ###### Abstract The light shift of a state due to an applied laser in an atomic system vanishes at the tune-out wavelengths (\(\lambda_{T}\)s). Similarly, the differential light shift of a transition vanishes at the magic wavelengths (\(\lambda_{magic}\)s). In many earlier studies, values of electric dipole (E1) matrix elements were inferred precisely by combining measurements of \(\lambda_{magic}\) with their calculated values. Similarly, the \(\lambda_{T}\) values of an atomic state can be used to infer E1 matrix elements, as they involve dynamic electric dipole polarizability (\(\alpha\)) values of only one state, whereas the \(\lambda_{magic}\) values involve \(\alpha\) values of two states. However, both the \(\lambda_{T}\) and \(\lambda_{magic}\) values depend on the angular momenta of the states and their magnetic components (\(M\)). Here, we report \(\lambda_{T}\) and \(\lambda_{magic}\) values of many \(S_{1/2}\) and \(D_{3/2,5/2}\) states, and of transitions among these states, of the Mg\({}^{+}\), Ca\({}^{+}\), Sr\({}^{+}\) and Ba\({}^{+}\) ions that are independent of the \(M\) values. By measuring these wavelengths in a special set-up, as discussed in the paper, it could be possible to infer a large number of E1 matrix elements of the above ions accurately. ## I Introduction Singly charged alkaline-earth ions are among the most suitable candidates for high-precision measurements due to several advantages [1]. Except for Be\({}^{+}\) and Mg\({}^{+}\), the alkaline-earth ions have two metastable states, and most of the transitions among the ground and metastable states are accessible by lasers. This is why these ions are considered for carrying out high-precision measurements such as tests of Lorentz symmetry violation [2; 3; 4], parity nonconservation effects [5], non-linear isotope shift effects [6], quantum information processing [7; 8] and many more, including optical atomic clock experiments [9]. One of the major systematics in these measurements is the Stark shift due to the employed laser, which depends on the frequency of the laser. A solution to this problem was suggested by Katori et al. [10], who proposed that the trapping laser can be tuned to wavelengths at which the differential ac Stark shifts of the transitions vanish. These wavelengths were coined magic wavelengths (\(\lambda_{magic}\)s) and are now widely used in optical lattice clocks. There are also applications of the magic wavelengths for carrying out measurements of atoms trapped inside high-Q cavities in the strong-coupling regime [11]. In quantum state engineering [12], magic wavelengths provide an opportunity to extract accurate values of oscillator strengths [13], which are particularly important for correct stellar modeling and for the analysis of spectral lines identified in the spectra of stars and other celestial bodies so as to infer fundamental stellar parameters [14; 15]. Apart from the magic trapping condition, where the light shifts of two internal states are identical, another well-known limiting case is where the light shift of a single state vanishes. This case is known as the tune-out condition [16]. Applications of such tune-out wavelengths (\(\lambda_{T}\)) lie in novel cooling techniques for atoms [17], selective addressing and manipulation of quantum states [18; 19; 20], precision measurements of atomic structures [21; 22; 23; 24; 25; 26] and precise estimation of oscillator strength ratios [27]. 
Additionally, tune-out conditions are powerful tools for evaporative cooling in optical lattices [16] and hence are important for experimental explorations. In one of the experiments pertaining to magic wavelengths of alkaline-earth ions, Liu et al. demonstrated the existence of magic wavelengths for a single trapped \({}^{40}\)Ca\({}^{+}\) ion [28], whereas Jiang et al. evaluated magic wavelengths of the Ca\({}^{+}\) ion for linearly and circularly polarized light using the relativistic configuration interaction plus core polarization (RCICP) approach [29; 30]. Recently, Chanu et al. proposed a model to trap the Ba\({}^{+}\) ion by inducing an ac Stark shift using a 653 nm linearly polarized laser [31]. Kaur et al. reported magic wavelengths for the \(nS_{1/2}-nP_{1/2,3/2}\) and \(nS_{1/2}-mD_{3/2,5/2}\) transitions in alkaline-earth-metal ions using linearly polarized light [32], whereas Jiang et al. located magic and tune-out wavelengths for the Ba\({}^{+}\) ion using the RCICP approach [33]. Despite having a large number of applications, these magic wavelengths suffer a setback because of their dependency on the magnetic sublevels (\(M\)) of the atomic systems. Linearly polarized light has been widely used for the trapping of atoms and ions, as it is free from the contribution of the vector component in the interaction between atomic states and electric fields. However, the magic wavelengths thus identified are again magnetic-sublevel dependent for transitions involving states with angular momenta greater than 1/2. On the other hand, the implementation of circularly polarized light for trapping purposes requires magnetic-sublevel selective trapping. In order to circumvent this \(M\) dependency of magic wavelengths, a magnetic-sublevel independent strategy for trapping of atoms and ions was proposed by Sukhjit et al. [34]. Later on, Kaur et al. implemented a similar technique to compute magic and tune-out wavelengths independent of the magnetic sublevels \(M\) for the \(nS_{1/2}\)-\((n-1)D_{3/2,5/2}\) transitions in the Ca\({}^{+}\), Sr\({}^{+}\) and Ba\({}^{+}\) ions, with \(n=4\) for Ca\({}^{+}\), \(n=5\) for Sr\({}^{+}\) and \(n=6\) for Ba\({}^{+}\) [35]. In addition to the applications of \(\lambda_{magic}\) in removing the differential Stark shift of a transition, magic wavelengths are also being used to infer the electric dipole (E1) matrix elements of many allowed transitions in different atomic systems [J. A. Sherman, T. W. Koerber, A. Markhotok, W. Nagourney, and E. N. Fortson, Phys. Rev. Lett. 94, 243001 (2005); B. K. Sahoo, L. W. Wansbeek, K. Jungmann, and R. G. E. Timmermans, Phys. Rev. A 79, 052512 (2009); Liu et al., Phys. Rev. Lett. 114, 223001 (2015); Jun Jiang, Yun Ma, Xia Wang, Chen-Zhong Dong, and Z. W. Wu, Phys. Rev. A 103, 032803 (2021); etc.]. The basic procedure of these studies is that the \(\lambda_{magic}\) values are calculated by fine-tuning the magnitudes of the dominantly contributing E1 matrix elements so as to reproduce the measured \(\lambda_{magic}\) values. The set of E1 matrix elements that gives rise to the best-matched \(\lambda_{magic}\) values is then taken as the recommended E1 matrix elements. However, calculations of the \(\lambda_{magic}\) values of a transition demand the determination of the dynamic E1 polarizabilities (\(\alpha\)) of both states. In view of this, the use of \(\lambda_{T}\) values of a given atomic state can be advantageous, as they involve the dynamic \(\alpha\) values of only one state. 
Furthermore, both the \(\lambda_{T}\) and \(\lambda_{magic}\) values depend on the angular momenta of the atomic states and their magnetic components (\(M\)). This requires the evaluation of the scalar, vector and tensor components of the \(\alpha\) values for states with angular momenta greater than 1/2, which is very cumbersome. To circumvent this problem, we present here \(M\)-sublevel independent \(\lambda_{T}\) and \(\lambda_{magic}\) values of many states and transitions involving a number of \(S_{1/2}\) and \(D_{3/2,5/2}\) states in the alkaline-earth-metal ions from Mg\({}^{+}\) through Ba\({}^{+}\), from which the E1 matrix elements can be inferred more precisely. We have used E1 matrix elements from an all-order relativistic atomic many-body method to report the \(M\)-independent \(\lambda_{T}\) and \(\lambda_{magic}\) values, so that these values can be searched for in experiments; once they are measured precisely, the E1 matrix elements can be fine-tuned in order to minimize their uncertainties. This can be achieved by setting up the experiment such that the polarization and quantization angles of the applied lasers are fixed suitably. To validate our results for the transitions involving high-lying states, we have compared our \(\lambda_{T}\) and \(\lambda_{magic}\) values for the ground to metastable state transitions of the considered alkaline-earth ions with previously reported values. The paper is organized as follows: In Sec. II, we provide the underlying theory, and Sec. III describes the method of evaluation of the calculated quantities. Sec. IV discusses the obtained results, while the study is concluded in Sec. V. Unless stated explicitly otherwise, physical quantities are given in atomic units (a.u.). ## II Theory The electric field \(\mathcal{E}(\mathbf{r},t)\) associated with a general plane electromagnetic wave can be represented in terms of the complex polarization vector \(\hat{\chi}\) and the real wave vector \(\mathbf{k}\) by the following expression [36] \[\mathcal{E}(\mathbf{r},t)=\frac{1}{2}\mathcal{E}\hat{\chi}e^{-\iota(\omega t-\mathbf{k}\cdot\mathbf{r})}+c.c., \tag{1}\] where \(c.c.\) is the complex conjugate of the preceding term. Assuming \(\hat{\chi}\) to be real and adopting the coordinate system presented in Fig. 1, the polarization vector can be expressed as [34] \[\hat{\chi}=e^{i\sigma}(\cos\phi\ \hat{\chi}_{maj}+\iota\ \sin\phi\ \hat{\chi}_{min}), \tag{2}\] where \(\hat{\chi}_{maj}\) and \(\hat{\chi}_{min}\) denote the real components of the polarization vector \(\hat{\chi}\), \(\sigma\) is a real quantity denoting an arbitrary phase, and \(\phi\) is related to the degree of polarization \(A\) through \(A=\sin(2\phi)\). For linearly polarized light, \(\phi=0\), whereas \(\phi\) takes the value \(\pi/4\) or \(3\pi/4\) for circularly polarized light; accordingly, \(A=0\) for linearly polarized and \(A=1(-1)\) for right-hand (left-hand) circularly polarized light [36]. As shown in Fig. 1, this coordinate system obeys \[\cos^{2}\theta_{p}=\cos^{2}\phi\ \cos^{2}\theta_{maj}+\sin^{2}\phi\ \sin^{2}\theta_{min} \tag{3}\] and \[\theta_{maj}+\theta_{min}=\frac{\pi}{2}. \tag{4}\] Here, \(\theta_{p}\) is the angle between the quantization axis \(\hat{\chi}_{B}\) and the direction of the polarization vector \(\hat{\chi}\), and the parameters \(\theta_{maj}\) and \(\theta_{min}\) are the angles between the respective unit vectors and \(\hat{\chi}_{B}\). 
When an atomic system is subjected to the above electric field and the magnitude of \(\mathcal{E}\) is small, the shift in the energy of its \(n^{th}\) level (Stark shift) can be given by \[\delta E_{n}^{K}\simeq-\frac{1}{2}\alpha_{n}^{K}(\omega)|\mathcal{E}|^{2}, \tag{5}\] where \(\alpha_{n}^{K}(\omega)\) is known as the second-order electric dipole (E1) polarizability and the superscript \(K\) denotes the angular momentum of the state, which can be the atomic angular momentum \(J\) or the hyperfine-level angular momentum \(F\). Depending upon the polarization, the dynamic dipole polarizability \(\alpha_{n}^{K}(\omega)\) can be expressed as \[\alpha_{n}^{K}(\omega)=\alpha_{nS}^{K}(\omega)+\beta(\chi)\frac{M_{K}}{2K}\alpha_{nV}^{K}(\omega)+\gamma(\chi)\frac{3M_{K}^{2}-K(K+1)}{K(2K-1)}\alpha_{nT}^{K}(\omega), \tag{6}\] where \(\alpha_{nS}^{K}\), \(\alpha_{nV}^{K}\) and \(\alpha_{nT}^{K}\) are the scalar, vector and tensor components of the polarizability, respectively. The parameters \(\beta(\chi)\) and \(\gamma(\chi)\) in this expression can be defined on the basis of the coordinate system provided in Fig. 1. Geometrically, the values of \(\beta(\chi)\) and \(\gamma(\chi)\) in their elliptical form are given as [34; 36] \[\beta(\chi)=\iota(\hat{\chi}\times\hat{\chi}^{*})\cdot\hat{\chi}_{B}=A\cos\theta_{k} \tag{7}\] and \[\gamma(\chi)=\frac{1}{2}\left[3(\hat{\chi}^{*}\cdot\hat{\chi}_{B})(\hat{\chi}\cdot\hat{\chi}_{B})-1\right]=\frac{1}{2}\left(3\cos^{2}\theta_{p}-1\right), \tag{8}\] where \(\theta_{k}\) is the angle between the direction of propagation \(\mathbf{k}\) and \(\hat{\chi}_{B}\). Substitution of \(\beta(\chi)\) and \(\gamma(\chi)\) from Eqs. (7) and (8) recasts the expression for the dipole polarizability as \[\alpha_{n}^{K}(\omega)=\alpha_{nS}^{K}(\omega)+A\cos\theta_{k}\frac{M_{K}}{2K}\alpha_{nV}^{K}(\omega)+\left(\frac{3\cos^{2}\theta_{p}-1}{2}\right)\frac{3M_{K}^{2}-K(K+1)}{K(2K-1)}\alpha_{nT}^{K}(\omega) \tag{9}\] with the azimuthal quantum number \(M_{K}\) of the respective angular momentum \(K\). Thus, it is obvious from Eq. (5) that the \(\alpha_{n}^{K}\) values of the two states have to be the same if we intend to find a \(\lambda_{magic}\) for the transition involving both states. Since the above expression for \(\alpha_{n}^{K}\) has an \(M_{K}\) dependency, the \(\lambda_{magic}\) values become \(M_{K}\) dependent. In order to remove the \(M_{K}\) dependency, one could choose the \(M_{K}=0\) sublevels, but such sublevels do not exist for the atomic states of the alkaline-earth ions (which have half-integral \(J\)), and for isotopes with integer nuclear spin the hyperfine \(M_{K}\) values are again non-zero. To address this, a suitable combination of the \(\beta(\chi)\) and \(\gamma(\chi)\) parameters needs to be chosen such that \(\cos\theta_{k}=0\) and \(\cos^{2}\theta_{p}=\frac{1}{3}\), which is feasible to achieve in an experiment by setting the \(\theta_{k}\), \(\hat{\chi}_{maj}\) and \(\phi\) values as demonstrated in Ref. [34]. In such a scenario, the \(\lambda_{magic}\) values depend on the scalar part only, the vector and tensor components of \(\alpha_{n}^{K}\) being suppressed; i.e. the net differential Stark shift of a transition occurring between the \(J\) and \(J^{\prime}\) states is given by \[\delta E_{JJ^{\prime}}=-\frac{1}{2}\left[\alpha_{nS}^{J}(\omega)-\alpha_{nS}^{J^{\prime}}(\omega)\right]\mathcal{E}^{2}. \tag{10}\] This has the additional advantage that the differential Stark shifts at an arbitrary electric field become independent of the choice of atomic or hyperfine levels in a given atomic system, as the scalar components of \(\alpha_{n}^{J}\) and \(\alpha_{n}^{F}\) are the same. 
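As a numerical illustration of Eqs. (7)-(9), the short Python sketch below (with arbitrary, made-up values for the scalar, vector and tensor polarizability components) verifies that choosing \(\cos\theta_{k}=0\) and \(\cos^{2}\theta_{p}=1/3\) reduces \(\alpha_{n}^{K}(\omega)\) to its scalar part for every magnetic sublevel \(M_{K}\):

```python
import numpy as np

def alpha_K(alpha_s, alpha_v, alpha_t, K, M_K, A, theta_k, theta_p):
    """Dynamic dipole polarizability of one sublevel, following Eq. (9)."""
    beta = A * np.cos(theta_k)                        # Eq. (7)
    gamma = 0.5 * (3.0 * np.cos(theta_p)**2 - 1.0)    # Eq. (8)
    return (alpha_s
            + beta * M_K / (2.0 * K) * alpha_v
            + gamma * (3.0 * M_K**2 - K * (K + 1.0)) / (K * (2.0 * K - 1.0)) * alpha_t)

# Made-up component values for a K = 3/2 state:
a_s, a_v, a_t, K = 100.0, 20.0, -5.0, 1.5
theta_k = np.pi / 2.0                    # cos(theta_k) = 0 kills the vector term
theta_p = np.arccos(1.0 / np.sqrt(3.0))  # cos^2(theta_p) = 1/3 kills the tensor term
for M_K in (0.5, 1.5):
    print(M_K, alpha_K(a_s, a_v, a_t, K, M_K, A=1.0, theta_k=theta_k, theta_p=theta_p))
# Both sublevels return alpha_s = 100.0 (up to rounding), i.e. M_K independence.
```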
Again, the same choice of \(\lambda_{magic}\) values will be applicable to both the atomic and hyperfine levels in a high-precision experiment. ## III Method of evaluation The determination of \(\alpha_{n}^{J}\) values requires accurate calculations of E1 matrix elements. For the computation of E1 matrix elements, we need accurate atomic wave functions of the alkaline-earth ions. We have employed here a relativistic all-order method to determine the atomic wave functions of the considered atomic systems, whose atomic states have a closed-core configuration with an unpaired electron in the valence orbital. Detailed descriptions of our all-order method can be found in Refs. [37; 38; 39; 40]; however, a brief outline is also provided here for completeness. Our all-order method follows the relativistic coupled-cluster (RCC) theory ansatz \[|\psi_{v}\rangle=e^{S}|\phi_{v}\rangle, \tag{11}\] where \(|\phi_{v}\rangle\) represents the mean-field wave function of the state \(v\), constructed as [41] \[|\phi_{v}\rangle=a_{v}^{\dagger}|0_{c}\rangle, \tag{12}\] where \(|0_{c}\rangle\) represents the Dirac-Hartree-Fock (DHF) wave function of the closed core and the subscript \(v\) represents the valence orbital of the considered state. In our calculations, we consider only linear terms in the singles and doubles approximation of the RCC theory (SD method) by expressing [41] \[|\psi_{v}\rangle=(1+S_{1}+S_{2}+...)|\phi_{v}\rangle, \tag{13}\] where \(S_{1}\) and \(S_{2}\) denote the single and double excitation terms, respectively, which can further be written in terms of second-quantization creation and annihilation operators as [42] \[S_{1}=\sum_{ma}\rho_{ma}a_{m}^{\dagger}a_{a}+\sum_{m\neq v}\rho_{mv}a_{m}^{\dagger}a_{v} \tag{14}\] and \[S_{2}=\frac{1}{2}\sum_{mnab}\rho_{mnab}a_{m}^{\dagger}a_{n}^{\dagger}a_{b}a_{a}+\sum_{mna}\rho_{mnva}a_{m}^{\dagger}a_{n}^{\dagger}a_{a}a_{v}, \tag{15}\] where the indices \(m\) and \(n\) range over all possible virtual orbitals and the indices \(a\) and \(b\) range over all occupied core orbitals. The coefficients \(\rho_{ma}\) and \(\rho_{mv}\) represent the single-excitation coefficients for the core and the valence electrons, respectively, whereas \(\rho_{mnab}\) and \(\rho_{mnva}\) denote the double-excitation coefficients for the core and the valence electrons, respectively. These amplitudes are calculated in an iterative procedure [43], due to which they include electron correlation effects to all orders. Hence, the atomic wave functions of the considered states of the alkaline-earth ions are expressed as [41; 44] \[|\psi_{v}\rangle_{SD}=\left[1+\sum_{ma}\rho_{ma}a_{m}^{\dagger}a_{a}+\frac{1}{2}\sum_{mnab}\rho_{mnab}a_{m}^{\dagger}a_{n}^{\dagger}a_{b}a_{a}+\sum_{m\neq v}\rho_{mv}a_{m}^{\dagger}a_{v}+\sum_{mna}\rho_{mnva}a_{m}^{\dagger}a_{n}^{\dagger}a_{a}a_{v}\right]|\phi_{v}\rangle. \tag{16}\] Figure 1: Representation of the elliptically polarized laser beam swept out by the laser’s polarization vector in one period, with \(\hat{\chi}\) representing the laser’s complex polarization vector and \(\hat{k}\) the laser wave vector perpendicular to the quantization axis \(\hat{\chi}_{B}\). The vectors \(\hat{\chi}_{maj}\), \(\hat{\chi}_{min}\) and \(\hat{k}\) are mutually perpendicular to each other. To improve the calculations further and to understand the importance of contributions from the triple excitations in the RCC theory, we take into account important core and valence triple excitations through a perturbative approach on top of the SD method (SDpT method) by redefining the wave function as [41] \[|\psi_{v}\rangle_{SDpT}=|\psi_{v}\rangle_{SD}+\left[\frac{1}{6}\sum_{mnrab}\rho_{mnrvab}a_{m}^{\dagger}a_{n}^{\dagger}a_{r}^{\dagger}a_{b}a_{a}a_{v}+\frac{1}{18}\sum_{mnrabc}\rho_{mnrabc}a_{m}^{\dagger}a_{n}^{\dagger}a_{r}^{\dagger}a_{c}a_{b}a_{a}\right]|\phi_{v}\rangle. \tag{17}\] After obtaining the wave functions of the atomic states of interest, we evaluate the E1 matrix elements between states \(|\psi_{v}\rangle\) and \(|\psi_{w}\rangle\) as [42] \[D_{wv}=\frac{\langle\psi_{w}|D|\psi_{v}\rangle}{\sqrt{\langle\psi_{w}|\psi_{w}\rangle\langle\psi_{v}|\psi_{v}\rangle}}, \tag{18}\] where \(D=-e\Sigma_{j}\mathbf{r}_{j}\) is the E1 operator with \(\mathbf{r}_{j}\) being the position of the \(j^{th}\) electron [44]. The resulting expression for the numerator of Eq. (18) includes the sum of the DHF matrix element \(z_{wv}\), twenty correlation terms of the SD method that are linear or quadratic functions of the excitation coefficients \(\rho_{mv}\), \(\rho_{ma}\), \(\rho_{mnva}\) and \(\rho_{mnab}\), and their core counterparts [38]. In the sum-over-states approach, the expression for the scalar dipole polarizability is given by \[\alpha_{v}(\omega)=\frac{2}{3(2J_{v}+1)}\sum_{w\neq v}\frac{(E_{w}-E_{v})|\langle\psi_{v}||D||\psi_{w}\rangle|^{2}}{(E_{w}-E_{v})^{2}-\omega^{2}}, \tag{19}\] where \(\langle\psi_{v}||D||\psi_{w}\rangle\) is the reduced matrix element for the transition between the states involving the valence orbitals \(v\) and \(w\). Here, we have dropped the superscript \(J\) in the dipole polarizability notation for brevity. For convenience, we divide the entire contribution to \(\alpha_{v}(\omega)\) into three parts as \[\alpha_{n}=\alpha_{n,c}+\alpha_{n,vc}+\alpha_{n,v}, \tag{20}\] where \(c\), \(vc\) and \(v\) correspond to the core, valence-core and valence contributions arising from the correlations among the core orbitals, the core-valence orbitals and the valence-virtual orbitals, respectively [45]. Owing to their very small magnitudes, the core and core-valence contributions are calculated using the DHF method. The dominant contributions arise from the valence part \(\alpha_{n,v}\) due to the small energy denominators. Conversely, the high-lying states contribute little to \(\alpha_{n,v}\) owing to their large energy denominators. Thus, we calculate E1 matrix elements only among the low-lying excited states and refer to these contributions as ‘Main’. Contributions from the weakly contributing high-lying states are referred to as ‘Tail’ and are estimated using the DHF method. To reduce the uncertainties in the estimation of the Main contributions, we have used experimental energies of the states from the National Institute of Standards and Technology atomic database (NIST AD) [46]. ## IV Results and discussion The precise computation of magic and tune-out wavelengths requires the accurate determination of E1 matrix elements as well as dipole polarizabilities. 
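To make the sum-over-states evaluation of Eqs. (19) and (20) concrete, the following minimal Python sketch computes \(\alpha_{v}(\omega)\) from a small list of transitions; the energies and reduced matrix elements are placeholders, not computed values for any of the considered ions:

```python
# Scalar dipole polarizability via the sum-over-states formula, Eq. (19),
# in atomic units.  Transition data below are illustrative placeholders.
transitions = [
    # (E_w - E_v in a.u., |<psi_v||D||psi_w>| in a.u.)
    (0.10, 2.5),
    (0.15, 1.0),
]
J_v = 0.5

def alpha_scalar(omega, alpha_core=0.0, alpha_vc=0.0, alpha_tail=0.0):
    main = sum(2.0 * dE * d * d / (3.0 * (2.0 * J_v + 1.0) * (dE * dE - omega * omega))
               for dE, d in transitions)
    # Eq. (20): 'Main' valence part plus core, valence-core and 'Tail' parts
    return main + alpha_core + alpha_vc + alpha_tail

print(alpha_scalar(omega=0.05))   # the static limit is recovered as omega -> 0
```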
In our work, we have used the E1 matrix elements and energies of the different states available at the Portal for High-Precision Atomic Data and Computation [47] and the NIST Atomic Spectra Database [46], respectively. We have listed the resonance transitions, magic wavelengths and their corresponding polarizabilities for the magnetic-sublevel independent \(nS\)-\(mD\) transitions of the alkaline-earth ions from Mg\({}^{+}\) through Ba\({}^{+}\), along with their comparison with the available literature, in Tables 1 through 4. Further discussion of the magic wavelengths is provided in subsection IV.1 for the considered alkaline-earth ions. Our results for the tune-out wavelengths are discussed in subsection IV.2, along with a comparison against the available theoretical data. ### Magic Wavelengths #### iv.1.1 Mg\({}^{+}\) In Table 1, we have tabulated our results for the magic wavelengths and their corresponding dipole polarizabilities for the \((3,4)S_{1/2}\)-\(3D_{3/2,5/2}\) and \(4S_{1/2}\)-\(4D_{3/2}\) transitions. Fig. 2(a) shows the scalar dipole polarizabilities of the \(3S_{1/2}\) and \(3D_{3/2,5/2}\) states of the Mg\({}^{+}\) ion as functions of the wavelength of the external field. It can be seen from the figure that a number of magic wavelengths, located at the crossings of the scalar polarizability curves of the corresponding states, are predicted for these transitions. As can be seen from Table 1, a total of four magic wavelengths have been found for the \(3S\)-\(3D_{3/2}\) transition, whereas the \(3S\)-\(3D_{5/2}\) transition shows a total of three magic wavelengths in the range \(300-1250\) nm, out of which none lies in the visible spectrum. However, the magic wavelengths enlisted in Table 1 for these two transitions are not the same, although all of them 
\begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{2}{c}{\(3S_{1/2}-3D_{3/2}\)} & \multicolumn{5}{c}{\(3S_{1/2}-3D_{5/2}\)} \\ Resonance & \(\lambda_{res}\) & \(\lambda_{magic}\) & \(\alpha_{magic}\) & Resonance & \(\lambda_{res}\) & \(\lambda_{magic}\) & \(\alpha_{magic}\) \\ \hline \(3D_{3/2}\to 6P_{3/2}\) & 292.92 & & & \(3D_{5/2}\to 5F_{5/2}\) & 310.56 & & \\ \(3D_{3/2}\to 5P_{1/2}\) & 385.15 & & & \(3D_{5/2}\to 5P_{3/2}\) & 384.920 & & 313.86 & 168.85 \\ & & 385.30 & 73.51 & & & & 385.10 & 73.59 \\ & & 757.79 & 40.43 & & & & \\ \(3D_{3/2}\to 5P_{3/2}\) & 1091.83 & & & \(3D_{5/2}\to 4F_{5/2}\) & 448.24 & & \\ & 1092.44 & 37.41 & & & & 756.72 & 40.45 \\ \(3D_{3/2}\to 4P_{1/2}\) & 1095.48 & & & \(3D_{5/2}\to 4P_{3/2}\) & 1091.72 & & \\ \hline & \multicolumn{2}{c}{\(4S_{1/2}-3D_{3/2}\)} & \multicolumn{5}{c}{\(4S_{1/2}-3D_{5/2}\)} \\ \hline \(4S_{1/2}\to 5P_{3/2}\) & 361.48 & & & \(4S_{1/2}\to 5F_{7/2}\) & 310.56 & & \\ & & 361.26 & \(-202.05\) & & & 344.87 & \(-160.00\) \\ & & 1092.38 & 1976.85 & & & 361.63 & \(-202.81\) \\ & & 1132.53 & 1681.41 & & & 361.62 & \(-203.15\) \\ & & & & & 4S_{1/2}\to 5P_{1/2}\) & 361.66 & & \\ & & 923.81 & \(-140.74\) & & & & \\ \(4S_{1/2}\to 4P_{1/2}\) & 924.68 & & & & \(3D_{5/2}\to 5P_{3/2}\) & 384.93 & & \\ \(3D_{3/2}\to 4P_{3/2}\) & 1091.83 & & & & & 385.41 & \(-163.67\) \\ & & 1092.38 & 1976.85 & & & & \\ \(3D_{3/2}\to 4P_{1/2}\) & 1095.48 & & & & \(4S_{1/2}\to 4P_{3/2}\) & 922.08 & & \\ & & 1132.53 & 1681.41 & & & 923.81 & \(-144.72\) \\ & & & & & \(3D_{5/2}\to 4P_{3/2}\) & 1091.72 & & \\ & & & & & & 1128.42 & 1706.33 \\ \hline & \multicolumn{2}{c}{\(4S_{1/2}-4D_{3/2}\)} & \multicolumn{5}{c}{\(4S_{1/2}-4D_{5/2}\)} \\ \hline \(4D_{3/2}\to 7F_{5/2}\) & 526.58 & & & & \(4D_{5/2}\to 7F_{5/2,7/2}\) & 526.57 & & \\ & & 591.48 & \(-422.01\) & & & 591.34 & \(-421.69\) \\ \(4D_{3/2}\to 7P_{3/2}\) & 591.83 & & & \(4D_{5/2}\to 7P_{3/2}\) & 591.81 & & \\ \(4D_{3/2}\to 7P_{1/2}\) & 591.98 & & & & & & \\ & & 616.11 & \(-482.44\) & & & 616.02 & \(-482.19\) \\ \(4D_{3/2}\to 6F_{5/2}\) & 634.87 & & & & \(4D_{5/2}\to 6F_{7/2}\) & 634.85 & & \\ \(4P_{1/2}\to 4D_{3/2}\) & 787.92 & & & & & & \\ & & 789.50 & \(-1578.93\) & & & & \\ \(4P_{3/2}\to 4D_{3/2}\) & 789.82 & & & \(4P_{3/2}\to 4D_{5/2}\) & 789.85 & & \\ \(4D_{3/2}\to 6P_{3/2}\) & 811.78 & & & & \(4D_{5/2}\to 6P_{3/2}\) & 811.75 & & \\ & & 811.82 & \(-1973.57\) & & & & 812.12 & \(-1979.96\) \\ & & 812.83 & & & & & 844.63 & \(-2964.83\) \\ & & 812.65 & \(-1991.34\) & & & & \\ & & 843.61 & \(-2921.35\) & & & & \\ \(4S_{1/2}\to 4P_{3/2}\) & 922.08 & & & & \(4S_{1/2}\to 4P_{3/2}\) & 922.08 & & \\ & & 923.84 & \(-4957.89\) & & & & 923.84 & \(-4975.83\) \\ \(4S_{1/2}\to 4P_{1/2}\) & 924.68 & & & & \(4S_{1/2}\to 4P_{1/2}\) & 924.68 & & \\ \(4D_{3/2}\to 5F_{5/2}\) & 963.51 & & & & \(4D_{5/2}\to 5F_{7/2}\) & 963.45 & & \\ & & 1006.101 & 3585.53 & & & & 1005.86 & 3594.76 \\ \hline \hline \end{tabular} \end{table} Table 1: Magic wavelengths \(\lambda_{magic}\) (in nm) with the corresponding polarizability \(\alpha_{n}(\omega)\) (in a.u.) for \(3S_{1/2}\)–\(3D_{3/2,5/2}\) transitions in Mg\({}^{+}\) ion. support red-detuned trap. Fig. 2(b) represents the plot of scalar dipole polarizabilities of \(4S\) and \(4D_{3/2,5/2}\) states against wavelength of the external field. 
It can also be assessed from Table 1 that there exist a total of nine magic wavelengths in the considered wavelength range for the \(4S\)-\(4D_{3/2}\) transition, whereas only five magic wavelengths are spotted for the \(4S\)-\(4D_{5/2}\) transition. However, in both cases, all the magic wavelengths except those around 616 nm, 844 nm and 1006 nm are close to resonances, thereby making them unsuitable for further use. Out of these three values, the \(\lambda_{magic}\) at 616 nm lies in the visible region and is far-detuned with a considerably deep potential. Hence, we recommend this magic wavelength for the trapping of the Mg\({}^{+}\) ion for both \(4S\)-\(4D_{3/2,5/2}\) transitions in further experiments on optical clock applications. Fig. 2(c) demonstrates the magic wavelengths of the \(M_{J}\)-independent scheme for the \(4S\)-\(3D_{3/2,5/2}\) transitions of the Mg\({}^{+}\) ion along with their corresponding scalar dynamic polarizabilities. According to Table 1, none of the magic wavelengths for these transitions lies within the visible spectrum of electromagnetic radiation. All of these magic wavelengths support red-detuned traps, except those at 1132.53 nm and 1128.42 nm for the \(4S\)-\(3D_{3/2}\) and \(4S\)-\(3D_{5/2}\) transitions, respectively, which support far blue-detuned traps and are found to be useful for experimental demonstrations. #### iv.1.2 Ca\({}^{+}\) We have considered the \(4S\)-\(3D_{3/2,5/2}\) and \(5S\)-\((4,3)D_{3/2,5/2}\) transitions for locating the magic wavelengths in the Ca\({}^{+}\) ion. We have tabulated the magic wavelengths for these transitions, along with a comparison of the \(\lambda_{magic}\)s with the only available results for \(4S\)-\(3D_{3/2,5/2}\), in Table 2. Also, we have plotted the scalar dipole polarizabilities against wavelength for these transitions in Figs. 2(d), 2(e) and 2(f), correspondingly. According to Table 2, it is ascertained that three and two magic wavelengths, respectively, exist between 393 nm and 1030 nm for the \(4S\)-\(3D_{3/2,5/2}\) transitions. In both cases, except for the far-detuned magic wavelengths at 1029.97 nm and 1011.90 nm, all other magic wavelengths are close to resonances and are not suitable for laser trapping. During the analysis, six and five magic wavelengths are located for the \(5S\)-\((3,4)D_{3/2}\) and \(5S\)-\((3,4)D_{5/2}\) transitions, respectively. It is also found that all the magic wavelengths are approximately the same for both the \(5S\)-\(4D_{3/2}\) and \(5S\)-\(4D_{5/2}\) transitions. Moreover, the \(\lambda_{magic}\)s around 845 nm, 847 nm and 860 nm share deep trapping potentials for blue-detuned traps and hence are recommended for configuring feasible traps. The \(\lambda_{magic}\) at 1191.56 nm, identified in the infrared region for both \(5S\)-\(4D_{3/2,5/2}\) transitions, is the only magic wavelength that supports a red-detuned trap. Besides, the polarizability at this wavelength is sufficient for creating an ion trap at reasonable laser power. To validate our results, we have also compared them with the results provided for \(4S\)-\(3D_{3/2,5/2}\) in Ref. [35] and noticed that the results for these transitions are in good agreement, with less than 1% variation with respect to the obtained values. #### iv.1.3 Sr\({}^{+}\) Figs. 3(a), 3(b) and 3(c) demonstrate the \(M_{J}\)-independent dynamic dipole polarizability versus wavelength plots for the \((6,5)S_{1/2}\)-\(4D_{3/2,5/2}\) and \(6S_{1/2}\)-\(5D_{3/2,5/2}\) transitions of the Sr\({}^{+}\) ion. 
The results corresponding to these figures are listed in Table 3. Only two magic wavelengths have been traced for the \(5S\)-\(4D_{3/2}\) transition, whereas only one magic wavelength exists for the \(5S\)-\(4D_{5/2}\) transition. According to Table 3, for the \(6S\)-\(4D_{3/2}\) transition, three magic wavelengths exist below 480 nm with dynamic polarizabilities of less than 15 a.u., while the other three \(\lambda_{magic}\)s lie between 1000 nm and 1231 nm. The \(\lambda_{magic}\)s at 1002.401 nm and 1087.35 nm support blue-detuned traps with sufficiently high polarizabilities for the experimental trapping of the Sr\({}^{+}\) ion. For the \(6S\)-\(4D_{5/2}\) transition, five magic wavelengths have been located between 420 nm and 1250 nm, out of which the magic wavelengths at 421.47 nm, 474.61 nm, 477.56 nm and 1239.05 nm follow red-detuned traps, whereas the only magic wavelength at 1025.19 nm, with corresponding \(\alpha=-2857.98\) a.u., supports a blue-detuned trap, which can be useful for experimental purposes. We recommend this magic wavelength of the Sr\({}^{+}\) ion for the \(6S\)-\(4D_{5/2}\) transition. Moreover, it is also observed that all the magic wavelengths for these two transitions lie between the same resonance transitions and are close to each other. It should therefore be possible to trap the Sr\({}^{+}\) ion for both of these transitions with the same magic wavelength. Table 3 also shows that there are four magic wavelengths within the wavelength range of 640 nm to 1450 nm for the \(6S\)-\(5D_{3/2}\) transition. Three of these four magic wavelengths support blue-detuned traps; the \(\lambda_{magic}=1233.61\) nm at \(\alpha_{magic}=-6755.64\) a.u. is recommended for experimental purposes, as it is far-detuned and its high dipole polarizability indicates a deep trapping potential. On the other hand, only three magic wavelengths have been identified for the \(6S\)-\(5D_{5/2}\) transition in the Sr\({}^{+}\) ion, with two supporting blue-detuned traps. Two of these \(\lambda_{magic}\)s, i.e., 1233.06 nm and 1448.40 nm, are located in the higher wavelength range, with deep potentials for their respective favourable blue- and red-detuned traps. Therefore, both of these values are recommended for further experimental studies. Moreover, we have compared our magic wavelengths for the \(5S\)-\(4D_{3/2,5/2}\) transitions with the available literature in the same table. Our reported values are in excellent agreement with the results obtained by Kaur et al. [35], with a variation of less than 0.05%. Unfortunately, we could not find any data for the other transitions with which to compare. Hence, it can be concluded from the comparison of the available data that our results are promising and can be used for further prospective calculations of atomic structures and atomic properties of this ion. #### iv.1.4 Ba\({}^{+}\) The results for the magic wavelengths of the \(6S\)-\(5D_{3/2,5/2}\), \(7S\)-\(5D_{3/2,5/2}\) and \(7S\)-\(6D_{3/2,5/2}\) transitions in the Ba\({}^{+}\) ion are tabulated in Table 4. As per Fig. 3(d) and Table 4, the largest number of magic wavelengths has been located between 480 and 700 nm. It is also observed that the magic wavelengths lying between the \(6S\)-\(6P_{1/2}\) and \(6S\)-\(6P_{3/2}\) resonant transitions support blue-detuned traps; however, the dynamic dipole polarizabilities corresponding to these magic wavelengths are too small to trap the Ba\({}^{+}\) ion at these wavelengths. 
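Throughout these subsections, magic wavelengths are located as crossings of two scalar-polarizability curves, i.e. as roots of \(\alpha_{1}(\omega)-\alpha_{2}(\omega)\) bracketed away from resonances. A minimal sketch of this root-finding step (with made-up single-resonance polarizabilities, not data for any ion discussed here):

```python
from scipy.optimize import brentq

# Two hypothetical scalar polarizabilities, each dominated by one resonance,
# mimicking the structure of Eq. (19); all numbers are illustrative only.
def alpha_1(w):
    return 2.0 * 0.10 * 2.5**2 / (3.0 * 2.0 * (0.10**2 - w**2))

def alpha_2(w):
    return 2.0 * 0.08 * 3.0**2 / (3.0 * 4.0 * (0.08**2 - w**2))

def magic_wavelength_nm(w_lo, w_hi):
    """Root of alpha_1 - alpha_2 in a bracket free of resonance poles."""
    w = brentq(lambda x: alpha_1(x) - alpha_2(x), w_lo, w_hi)
    return 45.5634 / w   # convert photon energy (a.u.) to wavelength (nm)

print(f"magic wavelength ~ {magic_wavelength_nm(0.01, 0.079):.1f} nm")
```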
A total of six magic wavelengths are found for the \(7S\)-\(5D_{3/2}\) transition, out of which two lie in the vicinity of 526 nm. Sharp intersections of the polarizability curves of the states involved in the transition lie at 583.76 nm, 638.75 nm and 1380.83 nm. Similarly, four magic wavelengths have been identified for the \(7S\)-\(5D_{5/2}\) transition; however, unlike the \(7S\)-\(5D_{3/2}\) transition, no magic wavelength has been identified in the region from 600 to 1300 nm. Three of these four \(\lambda_{magic}\)s support blue-detuned traps, although the trapping potentials of these traps are not deep enough for further experimental consideration. Table 4 also compiles the magic wavelengths for the \(7S\)-\(5D_{3/2,5/2}\) transitions and shows that there exist six and four magic wavelengths for the \(7S\)-\(5D_{3/2}\) and \(7S\)-\(5D_{5/2}\) transitions, respectively. It is also seen that the magic wavelengths between the \(6P_{3/2}\)-\(7S\) and \(7S\)-\(8P_{3/2}\) as well as the \(5D_{3/2}\)-\(6P_{1/2}\) and \(7S\)-\(7P_{3/2}\) transitions seem to be missing, as shown in Fig. 3(e). The magic wavelengths at 466.95 nm and 1380.83 nm are slightly red-shifted; nevertheless, the \(\lambda_{magic}\) at 638.75 nm, which lies in the visible region and supports a blue-detuned trap, can provide sufficient trap depth at reasonable laser power. Similarly, the magic wavelengths and their corresponding dynamic dipole polarizabilities, along with their comparison with the available literature, are also provided in the same table for the \(7S\)-\(6D_{3/2,5/2}\) transitions. The same are demonstrated graphically in Fig. 3(f), which includes a total of thirteen magic wavelengths for the considered transitions. It is also found that no magic wavelength exists between the \(6D_{3/2}\)-\(6F_{5/2}\) and \(6D_{3/2}\)-\(8P_{1/2}\) resonances. Unlike for the \(7S\)-\(6D_{3/2}\) transition, around eight magic wavelengths have been located between the \(7S\)-\(8P_{3/2}\) and \(7S\)-\(7P_{1/2}\) resonances, and all of them support blue-detuned traps. Moreover, the magic wavelengths at 532.80 nm, 735.65 nm and 1381.39 nm are expected to be the most promising for experiments due to their sufficient trap depths at reasonable laser powers. However, on comparing our results for the \(6S\)-\(5D_{3/2,5/2}\) transitions of the Ba\({}^{+}\) ion, we have observed that all the magic wavelengths agree well with the results obtained by Kaur et al. 
in \begin{table} \begin{tabular}{c c c c c c c} & \multicolumn{2}{c}{\(5S_{1/2}-4D_{3/2}\)} & \multicolumn{4}{c}{\(5S_{1/2}-4D_{5/2}\)} \\ Resonance & \(\lambda_{res}\) & \(\lambda_{magic}\) & \(\alpha_{magic}\) & Resonance & \(\lambda_{res}\) & \(\lambda_{magic}\) & \(\alpha_{magic}\) \\ \hline \(5S_{1/2}\to 5P_{3/2}\) & 407.89 & & & \(5S_{1/2}\to 5P_{3/2}\) & 407.89 & & \\ & & 417.00 & 15.28 & & & 417.00 & 15.18 \\ & & 416.9(3) [35] & 14.47 [35] & & & 416.9(3) [35] & 13.3 [35] \\ \(5S_{1/2}\to 5P_{1/2}\) & 421.67 & & & \(5S_{1/2}\to 5P_{1/2}\) & 421.67 & & \\ \(4D_{3/2}\to 5P_{3/2}\) & 1003.94 & & & \(4D_{5/2}\to 5P_{3/2}\) & 1003.01 & & \\ & & 1014.68 & 108.70 & & & \\ & & 1014.6(2) [35] & 108.35 [35] & & & & \\ \(4D_{3/2}\to 5P_{1/2}\) & 1091.79 & & & & & \\ \hline & \multicolumn{2}{c}{\(6S_{1/2}-4D_{3/2}\)} & \multicolumn{4}{c}{\(6S_{1/2}-4D_{5/2}\)} \\ \hline \(5P_{1/2}\to 6S_{1/2}\) & 416.27 & & & \(5P_{1/2}\to 6S_{1/2}\) & 416.30 & & \\ & & 421.47 & 14.98 & & & 421.47 & 14.85 \\ \(5P_{3/2}\to 6S_{1/2}\) & 430.67 & & & & \(5P_{3/2}\to 6S_{1/2}\) & 430.67 & & \\ \(6S_{1/2}\to 7P_{3/2}\) & 474.37 & & & & \(6S_{1/2}\to 7P_{3/2}\) & 474.37 & & \\ & & 474.61 & 11.35 & & & 474.61 & 10.95 \\ \(6S_{1/2}\to 7P_{1/2}\) & 477.49 & & & \(6S_{1/2}\to 7P_{1/2}\) & 477.49 & & \\ & & 477.55 & 11.14 & & & 477.56 & 10.72 \\ & & 1002.40 & \(-\)2470.01 & & & 1025.19 & \(-\)2857.98 \\ \(4D_{3/2}\to 5P_{3/2}\) & 1003.94 & & & & \(4D_{5/2}\to 5P_{3/2}\) & 1033.01 & & \\ & & 1087.35 & \(-\)4653.36 & & & & \\ \(4D_{3/2}\to 5P_{1/2}\) & 1091.79 & & & & \(6S_{1/2}\to 6P_{3/2}\) & 1201.73 & & \\ \(6S_{1/2}\to 6P_{3/2}\) & 1201.73 & & & & & 1230.05 & 170.22 \\ & & 1230.02 & 223.42 & & & & \\ \(6S_{1/2}\to 6P_{1/2}\) & 1244.84 & & & & \(6S_{1/2}\to 6P_{1/2}\) & 1244.84 & & \\ & & 1233.61 & \(-\)6755.64 & & & & \\ \(6S_{1/2}\to 6P_{1/2}\) & 1244.84 & & & & \(6S_{1/2}\to 6P_{1/2}\) & 1244.84 & & \\ \(5D_{3/2}\to 4F_{5/2}\) & 1297.85 & & & & \(5D_{5/2}\to 4F_{5/2}\) & 1312.62 & & \\ & & 1411.88 & 4381.56 & & & & \\ & & & & & 1448.40 & 3812.06 \\ \end{tabular} \end{table} Table 3: Magic wavelengths \(\lambda_{magic}\) (in nm) with the corresponding polarizability \(\alpha_{n}(\omega)\) (in a.u.) along with their comparison with available literature for \(5S_{1/2}\)–\(4D_{3/2,5/2}\) transitions in Sr\({}^{+}\) ion. Ref. [35], except the last magic wavelengths that are identified at 693 nm and 653 nm for 6\(S\)-5\(D_{3/2}\) and 6\(S\)-5\(D_{5/2}\) transitions. ### Tune-out Wavelengths We have illustrated tune-out wavelengths for different states of the considered transitions in the alkaline-earth ions along with their comparison with already available literature in Table 5. To locate these M\({}_{J}\)-independent tune-out wavelengths, we have evaluated scalar dipole dynamic polarizabilities of these states for considered alkaline-earth ions and identified those values of \(\lambda\) for which polarizability vanished. It is also accentuated that in Mg\({}^{+}\) ion, all the tune-out wavelengths identified for \(3S_{1/2}\) and \(4S_{1/2}\) states lie in UV region, whereas for \((3,4)D_{3/2,5/2}\) states, a few tune-out wavelengths are located in visible range. Moreover, the largest \(\lambda_{T}\) is identified for \(4D_{3/2}\) state at 1331.527 nm. Furthermore, only one tune-out wavelength,i.e., \(\lambda_{T}=280.11\) nm for \(3S_{1/2}\) could be compared with the result presented by Kaur et al. in Ref. 
[48] and it is seen that our result is in \begin{table} \begin{tabular}{c c c c c c c} & \multicolumn{2}{c}{\(6S_{1/2}-5D_{3/2}\)} & \multicolumn{4}{c}{\(6S_{1/2}-5D_{5/2}\)} \\ Resonance & \(\lambda_{res}\) & \(\lambda_{magic}\) & \(\alpha_{magic}\) & Resonance & \(\lambda_{res}\) & \(\lambda_{magic}\) & \(\alpha_{magic}\) \\ \hline \(6S_{1/2}\to 6P_{3/2}\) & 455.53 & & & \(6S_{1/2}\to 6P_{3/2}\) & 455.53 & & \\ & & 480.710 & \(-\)4.10 & & & 480.76 & \(-\)8.32 \\ & & 480.6(5) [35] & \(-\)2.89 [35] & & & & \\ \(6S_{1/2}\to 6P_{1/2}\) & 493.55 & & & \(6S_{1/2}\to 6P_{1/2}\) & 493.55 & & \\ \(5D_{3/2}\to 6P_{3/2}\) & 585.53 & & & \(5D_{3/2}\to 6P_{3/2}\) & 614.34 & & \\ & & 588.32 & 330.15 & & & 653.17 & 247.90 \\ & & 588.4(3) [35] & 329.33 [35] & & & 695.7(3) [35] & 219.4 [35] \\ \(5D_{3/2}\to 6P_{1/2}\) & 649.87 & & & & & \\ & & 693.46 & 221.91 & & & & \\ & & 655.50(3) [35] & 244.89 [35] & & & & \\ \hline \multicolumn{6}{c}{\(7S_{1/2}-5D_{3/2}\)} & \multicolumn{4}{c}{\(7S_{1/2}-5D_{5/2}\)} \\ \hline \(6P_{1/2}\to 7S_{1/2}\) & 452.62 & & & \(6P_{1/2}\to 7S_{3/2}\) & 452.62 & & \\ & & 466.952 & 0.52 & & & 466.883 & \(-\)2.63 \\ \(6P_{3/2}\to 7S_{1/2}\) & 490.13 & & & \(6P_{3/2}\to 7S_{1/2}\) & 490.13 & & \\ \(7S_{1/2}\to 8P_{3/2}\) & 518.49 & & & \(7S_{1/2}\to 8P_{3/2}\) & 518.49 & & \\ & & 518.79 & \(-\)22.85 & & & 518.79 & \(-\)31.91 \\ \(7S_{1/2}\to 8P_{1/2}\) & 526.75 & & & \(7S_{1/2}\to 8P_{1/2}\) & 526.75 & & \\ & & 526.78 & \(-\)28.72 & & & 601.37 & \(-\)548.78 \\ & & 583.76 & \(-\)548.592 & & & & \\ \(5D_{3/2}\to 6P_{3/2}\) & 585.53 & & & \(5D_{5/2}\to 6P_{3/2}\) & 614.34 & & \\ \(5D_{3/2}\to 6P_{1/2}\) & 649.87 & & & & & \\ \(7S_{1/2}\to 7P_{3/2}\) & 1306.14 & & & \(7S_{1/2}\to 7P_{3/2}\) & 1306.14 & & \\ & & 1380.83 & 59.75 & & & 1380.83 & 59.15 \\ \(7S_{1/2}\to 7P_{1/2}\) & 1421.54 & & & & \(7S_{1/2}\to 7P_{1/2}\) & 1421.54 & & \\ \hline \multicolumn{6}{c}{\(7S_{1/2}-6D_{3/2}\)} & \multicolumn{4}{c}{\(7S_{1/2}-8P_{3/2}\)} \\ \hline \(7S_{1/2}\to 8P_{3/2}\) & 518.49 & & & \(7S_{1/2}\to 8P_{3/2}\) & 518.49 & & \\ \(7S_{1/2}\to 8P_{1/2}\) & 526.750 & & & \(7S_{1/2}\to 8P_{1/2}\) & 526.75 & & \\ & & 526.84 & \(-\)484.17 & & & 526.839 & \(-\)479.94 \\ & & 530.98 & \(-\)664.18 & & & 532.797 & \(-\)655.19 \\ \(6D_{3/2}\to 6F_{5/2}\) & 536.28 & & & \(6D_{5/2}\to 6F_{7/2}\) & 539.31 & & \\ & & & & \(6D_{5/2}\to 6F_{5/2}\) & 542.26 & & 542.17 & \(-\)613.07 \\ & & & & & \(6D_{5/2}\to 8P_{3/2}\) & 645.33 & \(-\)580.80 \\ \(6D_{3/2}\to 8P_{3/2}\) & 637.25 & & & & \(6D_{5/2}\to 8P_{3/2}\) & 645.70 & & \\ \(6D_{3/2}\to 8P_{1/2}\) & 649.77 & & & & \(6D_{5/2}\to 5F_{7/2}\) & 871.32 & & \\ & & 743.97 & \(-\)746.39 & & & 889.20 & \(-\)1211.62 \\ \(6D_{3/2}\to 5F_{5/2}\) & 874.02 & & & & \(6D_{5/2}\to 5F_{5/2}\) & 889.99 & & \\ \(7S_{1/2}\to 7P_{3/2}\) & 1306.14 & & & & \(7S_{1/2}\to 7P_{3/2}\) & 1306.14 & & \\ \(7S_{1/2}\to 7P_{1/2}\) & 1421.54 & & & & \(7S_{1/2}\to 7P_{1/2}\) & 1421.54 & & \\ \end{tabular} \end{table} Table 4: Magic wavelengths \(\lambda_{magic}\) (in nm) with the corresponding polarizability \(\alpha_{n}(\omega)\) (in a.u.) along with their comparison with available literature for \(6S_{1/2}\)–\(5D_{3/2,5/2}\) transitions in Ba\({}^{+}\) ion. good accord with this value. Similarly, we have pointed out tune-out wavelengths for \(nS_{1/2}\) and \((n-1)D_{3/2}\), \(n=(4,5),(5,6)\) and \((6,7)\) states for Ca\({}^{+}\), Sr\({}^{+}\) and Ba\({}^{+}\) ions, by identifying \(\lambda\)s at which their corresponding \(\alpha\)s tend to zero. 
Hence, it has been perceived that out of the 25 tune-out wavelengths for all the states of the Ca\({}^{+}\) ion, only seven lie within the visible spectrum. On comparing the different tune-out wavelengths for the \(4S_{1/2}\) and \(3D_{3/2}\) states of the Ca\({}^{+}\) ion, it is found that all of these results are supported by the results obtained in Refs. [35; 48]. However, one of the tune-out wavelengths, located at 493.13 nm for the \(3D_{5/2}\) state of the Ca\({}^{+}\) ion, seems to have a 2% variation from the wavelength obtained by Kaur et al. in Ref. [35]. This may be due to the fact that our study incorporates highly precise E1 matrix elements as well as energies of the states available at the Portal for High-Precision Atomic Data and Computation [47], which appear to be missing in previous studies. For the Sr\({}^{+}\) ion, the maximum number of tune-out wavelengths among all the considered alkaline-earth ions has been identified. Most of these \(\lambda\)s lie within the visible spectrum of electromagnetic radiation and mostly comprise the \(\lambda_{T}\) values corresponding to the \(5S_{1/2}\), \(5D_{3/2}\) and \(5D_{5/2}\) states. Additionally, on comparing these values with the results published in Refs. [35; 48], it is found that the tune-out wavelength at 417.04 nm for the \(5S_{1/2}\) state as well as the \(\lambda_{T}=1018.91\) nm for the \(4D_{3/2}\) state agree well with the available results; however, the tune-out wavelengths at 606.50 nm and 594.03 nm for the \(4D_{3/2}\) and \(4D_{5/2}\) states, respectively, show a discrepancy of less than 2%, which lies within the quoted error limit. In the case of the Ba\({}^{+}\) ion, we have located 24 tune-out wavelengths, which comprise 10, 10 and 4 wavelengths in the visible, UV and infrared regions, respectively. It is also noted that all the tune-out wavelengths in the visible region lie within the range from 480 nm to 550 nm. We have also compared our tune-out wavelengths for the \(6S_{1/2}\) and \(5D_{3/2,5/2}\) states against the available theoretical data in Refs. [48; 35; 49], and it is found that all the \(\lambda_{T}\)s, except those at 468.61 nm and 459.570 nm for the \(5D_{3/2}\) and \(5D_{5/2}\) states, respectively, show a disparity of less than 1%, which lies within the considered error limit. ## V Conclusion We have identified a number of reliable magnetic-sublevel independent tune-out wavelengths of many \(S_{1/2}\) and \(D_{3/2,5/2}\) states, and magic wavelengths of different combinations of \(S_{1/2}\)-\(D_{3/2,5/2}\) transitions, in the alkaline-earth ions from Mg\({}^{+}\) through Ba\({}^{+}\). If these can be measured precisely, accurate values of many electric dipole matrix elements can be inferred by combining the experimental values of these quantities with our theoretical results. Most of the magic wavelengths found in this study can be realized using red- and blue-detuned traps. In fact, it will be possible to perform many high-precision measurements, applicable to different metrological studies, by trapping the ions at the reported tune-out and magic wavelengths of the considered transitions in the future.
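The inverse step advocated above, inferring an E1 matrix element from a precisely measured wavelength, can be sketched as follows: at a tune-out frequency \(\omega_{T}\) one has \(\alpha(\omega_{T})=0\), so if the remaining contributions are known, the dominant reduced matrix element can be solved for. All transition data and the "measured" \(\omega_{T}\) below are hypothetical placeholders:

```python
from scipy.optimize import brentq

# Infer the dominant reduced E1 matrix element d0 from a measured tune-out
# frequency w_T (a.u.) by requiring alpha(w_T; d0) = 0.  Placeholder data.
rest = [(0.15, 1.0), (0.20, 0.8)]   # (E_w - E_v, |<v||D||w>|) of the remainder
dE0, J_v = 0.10, 0.5                # energy of the dominant transition

def alpha(w, d0):
    terms = [(dE0, d0)] + rest
    return sum(2.0 * dE * d * d / (3.0 * (2.0 * J_v + 1.0) * (dE * dE - w * w))
               for dE, d in terms)

w_T = 0.12                          # hypothetical measured tune-out frequency
d0 = brentq(lambda d: alpha(w_T, d), 0.1, 10.0)
print(f"inferred |<v||D||w_0>| = {d0:.3f} a.u.")
```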
2310.20319
GACE: Geometry Aware Confidence Enhancement for Black-Box 3D Object Detectors on LiDAR-Data
Widely-used LiDAR-based 3D object detectors often neglect fundamental geometric information readily available from the object proposals in their confidence estimation. This is mostly due to architectural design choices, which were often adopted from the 2D image domain, where geometric context is rarely available. In 3D, however, considering the object properties and its surroundings in a holistic way is important to distinguish between true and false positive detections, e.g. occluded pedestrians in a group. To address this, we present GACE, an intuitive and highly efficient method to improve the confidence estimation of a given black-box 3D object detector. We aggregate geometric cues of detections and their spatial relationships, which enables us to properly assess their plausibility and consequently, improve the confidence estimation. This leads to consistent performance gains over a variety of state-of-the-art detectors. Across all evaluated detectors, GACE proves to be especially beneficial for the vulnerable road user classes, i.e. pedestrians and cyclists.
David Schinagl, Georg Krispel, Christian Fruhwirth-Reisinger, Horst Possegger, Horst Bischof
2023-10-31T09:55:04Z
http://arxiv.org/abs/2310.20319v1
# GACE: Geometry Aware Confidence Enhancement for Black-Box 3D Object Detectors on LiDAR-Data ###### Abstract Widely-used LiDAR-based 3D object detectors often neglect fundamental geometric information readily available from the object proposals in their confidence estimation. This is mostly due to architectural design choices, which were often adopted from the 2D image domain, where geometric context is rarely available. In 3D, however, considering the object properties and its surroundings in a holistic way is important to distinguish between true and false positive detections, e.g. occluded pedestrians in a group. To address this, we present GACE, an intuitive and highly efficient method to improve the confidence estimation of a given black-box 3D object detector. We aggregate geometric cues of detections and their spatial relationships, which enables us to properly assess their plausibility and consequently, improve the confidence estimation. This leads to consistent performance gains over a variety of state-of-the-art detectors. Across all evaluated detectors, GACE proves to be especially beneficial for the vulnerable road user classes, i.e. pedestrians and cyclists. ## 1 Introduction Three-dimensional perception of surrounding objects is a critical component for autonomous vehicles and robots. Many modern perception systems use point cloud data from LiDAR (Light Detection and Ranging) sensors for this task, since they can provide accurate 3D information even over long distances. The popularity of these sensors can be seen in the increased research interest in LiDAR-based 3D object detection approaches, _e.g_. [10, 29, 38, 40, 45, 47] and the large number of recently published autonomous driving datasets that include LiDAR data, _e.g_. [1, 6, 20, 33]. However, the characteristics of LiDAR data impose significant challenges for object detection. Unlike pixels in an image, which are aligned in a regular grid, point clouds represent 3D data as a collection of individual points in space, each with its own set of coordinates. In addition to the unstructured nature of the data, the highly variable point density poses a major challenge. Due to the angular offset of the LiDAR beams, the density is highly dependent on the distance to the object and can be altered by occlusions in the foreground. This often requires detecting objects based on very few data points, which is especially true for classes with smaller spatial dimensions, such as pedestrians and cyclists. For example, in the Waymo Open Dataset [33], one of the most widely used and challenging datasets to date, about 30 percent of all annotated pedestrians consist of less than merely 20 points. Detecting these sparsely sensed objects naturally leads to a large number of false positive detections at test time. For this reason, determining a meaningful confidence value for the detections is critical to find a trade-off between precision and recall that adequately distinguishes true positives from false positives. Figure 1: Context matters: a baseline detector struggles at detecting true positive objects confidently if the sampling pattern is atypical, _e.g_. the occluded pedestrian in the orange bounding box (close-up top-left). GACE exploits the geometric properties of the detection and its surrounding objects to significantly increase the score for this detection, which is intuitively correct for a human observer considering this scene. 
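To make this trade-off concrete, the following toy sketch (a simplified average precision, not the official Waymo evaluation protocol) computes precision, recall and AP from confidence-ranked detections, which is the quantity a better confidence estimate directly improves:

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """Simplified AP: area under the precision-recall curve obtained by
    sweeping the confidence threshold over ranked detections."""
    order = np.argsort(-scores)          # rank detections by confidence
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    recall = tp / n_gt
    precision = tp / (tp + fp)
    return np.sum(np.diff(np.concatenate(([0.0], recall))) * precision)

# Toy data: five detections matched against four ground-truth objects.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
is_tp = np.array([True, True, False, True, False])
print(average_precision(scores, is_tp, n_gt=4))
# A re-scoring that ranks every TP above every FP (an "oracle") maximizes
# this AP, which is the headroom quantified in the next paragraph.
```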
The potential that could be exploited by improving the confidence score is considerable: Suppose we have an oracle that could correctly classify the detections of a SECOND [38] model on the Waymo dataset into true and false positives. This would increase the LEVEL_1 average precision for vehicles by \(+3.96\)AP and, more importantly, for pedestrians and cyclists by as much as \(+10.71\)AP and \(+13.74\)AP, respectively. Existing 3D object recognition pipelines, including confidence estimation approaches, were largely inspired by 2D image-based object recognition models and then gradually adapted to point cloud processing. However, the conventional _backbone - neck - head_ architecture of the 2D detection model was largely retained [8, 10, 27, 38, 45]. After features (point-based, voxel-based, or region-based) are extracted over multiple levels in the backbone, they are fused in the neck module and then passed to the detection head, where bounding box regression and confidence estimation are performed on a dense feature representation. Typically, separate branches are used within the head for bounding box regression and confidence estimation, each consisting of one or more fully connected layers on top of the common feature representation. Unlike image-based object detection, there are several highly relevant geometric properties inherent to objects in 3D point clouds that have been largely unexploited in assessing the confidence of a detection. In the image case, it is usually not possible to easily derive geometric properties for the objects, such as height or orientation, unless a static and fully calibrated camera and a known or constant scene are given. In contrast, in 3D object detection, many geometric features are directly available in the object properties and associated 3D data points to better assess the presence of a real object. On the one hand, these are **instance-specific properties**, such as the dimension of the object, the heading direction, the position, or the point distribution within the bounding box. For example, as shown in Figure 2, the precision of a SECOND [38] model for detecting a vehicle is highly dependent on the size of the object and from which side it is detected by the LiDAR. It can be seen that vehicle categories between passenger cars and heavy-duty vehicles (_i.e_. vehicles with a length of 6 to 13 m) are harder to detect, potentially because they are underrepresented in the dataset, and that vehicles are easier to detect from behind (_i.e_. viewing angle between \(\pm 45\) degrees), presumably because of the highly reflective license plates, as shown in [25]. On the other hand, **contextual properties**, _i.e_. geometric relationships to neighboring objects, can contribute to a more reliable estimation of the confidence. For example, as shown in Figure 1, a pedestrian that appears atypical due to occlusions can be assessed more reliably by additionally taking neighboring vehicles and pedestrians into consideration. Nevertheless, these simple but highly informative metric properties are neglected when estimating the confidence score in current detector architectures for the following reasons: First, in grid-based models, information such as the exact point distribution within the bounding box or the number of points is already partially discarded by the discretization (voxelization) in the preprocessing phase. 
Second, the bounding box properties (object dimensions & rotation) are determined in the parallel and separate box regression head and are therefore not accessible to the confidence estimation head. Finally, confidence estimation is usually performed using only features within a small area around the object (depending on the receptive field and detector), and no explicit information about neighboring objects and their geometric properties or confidence values is used, preventing a holistic estimation. Inspired by these observations, we present _GACE_, an intuitive and highly effective method to improve the confidence estimation of any black-box detector using geometric information. Given a set of detections from the base detector, we explicitly use these neglected features to enhance the expressiveness of the confidence values with the help of these additional cues. Our model-agnostic approach is intentionally applied after the actual detector training process to perform an auxiliary geometric assessment independent of the initial features. In a detailed evaluation on the Waymo dataset, we show that GACE consistently improves the performance of several state-of-the-art detection pipelines. Furthermore, we demonstrate the generalizability and transferability of our method by applying it to other datasets and even other detectors. Without retraining them, we achieve highly compelling performance gains. Figure 2: Precision of a SECOND [38] model for the Waymo [33] vehicle class as a function of the object length (top) and of the viewing angle (bottom), indicating from which side the vehicle is seen by the LiDAR, where 0 degrees corresponds to the rear view. These examples illustrate the strong dependence of the precision on simple geometric object properties. ## 2 Related Work **LiDAR-based 3D Object Detection:** Depending on how existing methods for 3D object detection in single frame LiDAR data deal with the unstructured nature of point clouds, they can be broadly categorized into point-based, grid-based, and hybrid approaches. **Point-based methods** extract information directly from the individual raw 3D points [23, 24, 30, 32, 40, 44]. The pioneering works, PointNet [23] and PointNet++ [24], used shared multilayer perceptrons in combination with global pooling functions to directly extract features from the irregular point cloud data. In Point-RCNN [30] features extracted in this manner are used to segment foreground points and generate proposals based on them. The advantages of point-based 3D object detectors are that there is no loss of point information due to discretization and the large receptive field, but at the cost of high computational demands. Instead of processing the points directly, **grid-based methods**[4, 15, 21, 27, 34, 38, 39, 43, 46] discretize the non-uniform 3D points into regular grids that can then be processed with 2D/3D convolutions. Voxelnet [46], a pioneering method, divides the point cloud into uniformly spaced 3D voxels, aggregates information from the points within them and generates predictions using 3D convolutions. To better handle the large number of empty voxels, SECOND [38] introduced sparse 3D convolutions. To reduce complexity, PointPillars [15] and PillarNet [27] use a 2D grid on the ground plane to create a column representation, a single pillar-shaped voxel per location, that can be processed using 2D convolutions. 
As an alternative to such anchor-based methods, Centerpoint [43] predicts a bird's-eye view heat map and detects the object center using a keypoint detector. Recently, transformer-based backbones have also been used to model long-range relationships between voxels [5, 21, 34, 47]. Object relations within a frame and across multiple frames are captured by Ret3D [37] using a graph and a transformer. The advantage of grid-based methods is that they can process data faster due to the regular format, but they are limited by the loss of point information during the initial discretization phase.

In order to obtain both multi-scale features and fine-grained information, **hybrid methods** process voxel and point information jointly [22, 26, 28, 29, 31, 41, 42]. PV-RCNN [28] uses a set abstraction module that combines surrounding point and voxel features at keypoints to improve the detections. Part-A\({}^{2}\) [31] predicts the position of parts within an object based on point features to improve the accuracy, while LiDAR R-CNN [17] uses features of a PointNet [23] model that processes points within and around box proposals. Pyramid R-CNN [19] creates point features using a pyramid grid structure to acquire fine-grained and long-range contextual information.

**Confidence Estimation:** In the usual _backbone - neck - head_ detector architecture, the box regression and confidence estimation are performed after the feature extraction and aggregation. This is usually done in two separate branches based on a common dense feature representation [15, 38, 40, 46], with the disadvantage that the accuracy of the localization is hardly reflected in the confidence score. Inspired by 2D object detection methods [11, 12, 36], IoU-guided supervision is frequently used to obtain a better correlation between the classification result and the localization accuracy [9, 10, 16, 27, 28, 31, 41, 45]. Thereby, the IoU between the predicted box and the ground truth is learned in a third branch during training and then incorporated into the final confidence score at test time. Hu et al. [9] leverage the inherent relationship between object distance and point density to better assess a detection. Inspired by the 2D approach [3], a spatial transformation on the feature maps is performed by He et al. [8] to better align the confidence prediction and bounding box regression. Related to confidence estimation are also calibration methods [7, 14], where the score should represent a true probability, _i.e_. how likely it is that a detection is correct. Detection pipelines, however, aim at the best possible separation of true and false positives as the optimization goal for the confidence prediction. Our confidence estimation method pursues the same objective, but in contrast to existing approaches, we refine the confidence values for a given set of detections by exploiting the rich geometric information contained directly in the detections as well as in the underlying 3D points. While these useful cues for assessing the plausibility of a detection have been largely untapped due to the architecture of common detection pipelines, they allow us to increase the expressiveness of the confidence values.

## 3 Geometry-Aware Confidence Enhancement

Our goal is to optimize the confidence scores for a given set of detections from a base detector in order to better separate true positives from false positives, thus increasing the overall detection performance. We use the detector in a pure black-box manner, _i.e_.
we do not assume any knowledge about the architecture of the base model, nor access to its internals such as parameters, features, or gradients. This black-box optimization, taking only the point cloud and the set of objects detected in it as input, enables universal applicability and easy transferability of our enhancement module to any base 3D object detection pipeline. The basic idea is to revalidate the detections by exploiting as much as possible the geometric information they inherently contain. Our proposed approach, called _GACE_ (**G**eometry **A**ware **C**onfidence **E**nhancement), exploits two types of geometric information, as shown in Figure 3:

* **Instance-specific Geometric Properties (Section 3.1):** Attributes of the bounding box itself, combined with the point data inside. For example, how well is the size of the object or the heading angle supported by the point distribution?
* **Contextual Geometric Properties (Section 3.2):** Relationships to surrounding objects can provide useful information to better validate an uncertain detection, _e.g._, a partially occluded vehicle moving in the same lane and same direction as the surrounding vehicles.

These useful cues for estimating the plausibility of an object are usually not used for the confidence estimation in common detection pipelines. The reasons are that information is often already discarded during preprocessing (discretization), essential properties such as the estimated size of the object are not available (separate box regression in a parallel branch), or the objects are only evaluated individually and not more holistically. Therefore, we explicitly use these easily accessible and rich sources of information as input to our enhancement module. After merging the instance-specific and contextual features, we determine the new confidence value of each proposed object via an auxiliary task (Section 3.3). To generate the training data for our enhancement module, we use the black-box base model in a single inference run on the training set. The resulting set of all detections from the base model represents the training data to learn our improved confidence estimator. Formally, this can be described as follows.

**Definitions & Notations:** Let \(\widetilde{\mathcal{X}}=\{\widetilde{\mathbf{x}}_{k}\}_{k=1\ldots K}\) be a LiDAR point cloud, where each of the \(K\) unordered points \(\widetilde{\mathbf{x}}_{k}\in\mathbb{R}^{5}\) consists of the 3D coordinates, the intensity/reflectance and the elongation value. Furthermore, let \(\widetilde{\mathcal{Y}}=\{\widetilde{\mathbf{y}}_{j}\}_{j=1\ldots M}\) be the set of corresponding ground truth objects. Each object annotation \(\widetilde{\mathbf{y}}=[\widetilde{\mathbf{b}},\widetilde{y}]\) includes the bounding box parameters \(\widetilde{\mathbf{b}}=[\widetilde{c}_{x},\widetilde{c}_{y},\widetilde{c}_{z},\widetilde{d}_{x},\widetilde{d}_{y},\widetilde{d}_{z},\widetilde{\Theta}]\) and the corresponding class label \(\widetilde{y}\). For a given black-box 3D object detector \(F\), let \(F(\widetilde{\mathcal{X}})=\mathcal{Y}=\{\mathbf{y}_{i}\}_{i=1\ldots N}\) be the set of proposed detections for this input point cloud, where \(\mathbf{y}=[\mathbf{b},y,s]\). In addition to the box properties \(\mathbf{b}\) and the corresponding class label \(y\), the detector predicts a confidence value \(s\) that should ideally indicate how likely the detection is a true positive.
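For concreteness, this notation maps onto plain Python structures as in the following minimal sketch (the field names are our own, not taken from the authors' code):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Detection:
    """One proposed detection y = [b, y, s]."""
    box: np.ndarray   # b = [c_x, c_y, c_z, d_x, d_y, d_z, theta]
    label: int        # class label y (e.g. 0=vehicle, 1=pedestrian, 2=cyclist)
    score: float      # confidence s predicted by the base detector

# point cloud: K unordered points, each [x, y, z, intensity, elongation]
points = np.zeros((1000, 5))
detections = [Detection(box=np.zeros(7), label=0, score=0.9)]
```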
**Confidence Optimization:** Based on the known ground truth objects, a category label \(\{u_{i}\in\{0,1\}\}_{i=1\ldots N}\) can be assigned to each detection, indicating whether it is a true positive or a false positive. Moreover, for each object we know the IoU with a possibly matching ground truth bounding box, which we denote as \(\{v_{i}\}_{i=1\ldots N}\). We aim to improve the confidence estimate of the original detector by focusing exclusively on the binary classification of detections into true positive and false positive examples. We determine the revised confidence scores \(\{\widehat{s}_{i}\}=H(\mathcal{Y},\mathcal{X})\) using our module \(H\), where only the set of detections \(\mathcal{Y}\) and the points they include \(\mathcal{X}\subset\widetilde{\mathcal{X}}\) are used as input.

Figure 3: Schematic of GACE: To re-evaluate the confidence score of a detection (orange), we aggregate geometric properties of the detection itself and the points it contains into a feature vector (top). To capture the context of the detection, geometric relationships to neighboring detections are aggregated using a shared MLP and a subsequent pooling function (bottom). By merging both features (right), we obtain a new confidence score that takes into account the underlying geometric properties of the detection.

### Instance-specific Geometric Properties

In common 2D and 3D object detection architectures, the bounding box regression and the confidence estimation are performed completely separately in different branches. This is well suited for 2D, where objects should be detected in the image regardless of their scale. For 3D, however, important cues for plausibility estimation remain largely unused. Consider the available context directly provided by an object proposal: the position of the possible object in combination with its dimensions indicates, for example, which point density is to be expected. Furthermore, the heading angle provides an indication of the expected point distribution within the box. This information can be refined even further by knowing the class of the object. This directly available geometric knowledge about an object allows for a more informed estimation of the confidence score.

We extract these basic properties and transform them into a compact representation using a multilayer perceptron (MLP) \(H_{I}\). As input parameters, we first use the object parameters, _i.e_. the position \((c_{x},c_{y},c_{z})\), the size \((d_{x},d_{y},d_{z})\), and the heading angle \(\Theta\) of the bounding box, as well as the initial confidence value \(s\) of the detection estimated by the base detector. Additionally, we use the distance \(\|\mathbf{c}\|\) from the LiDAR sensor to the object center, and the angle \(\alpha\) between the line of sight to the object and the heading angle of the object:

\[\alpha=\Theta-\text{atan2}(c_{y},c_{x}). \tag{1}\]

This angle describes from which side an object is seen from the LiDAR center, independent of the position of the object relative to the LiDAR, _e.g_. a vehicle driving directly towards the LiDAR always has the same angle \(\alpha\), no matter from which direction the vehicle approaches.
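For illustration, Eq. (1) translates directly into code; the following minimal sketch (our own, using the `Detection` layout introduced above) also wraps the angle and encodes it as a direction vector, as is done for all angular inputs of the model:

```python
import numpy as np

def viewing_angle(box):
    """Relative viewing angle alpha = Theta - atan2(c_y, c_x), Eq. (1).

    box: [c_x, c_y, c_z, d_x, d_y, d_z, theta] in sensor coordinates.
    alpha is independent of where the object is located, e.g. a vehicle
    driving straight towards the sensor always yields the same alpha.
    """
    alpha = box[6] - np.arctan2(box[1], box[0])
    return np.arctan2(np.sin(alpha), np.cos(alpha))  # wrap to (-pi, pi]

def encode_angle(angle):
    """All angles are passed to the MLPs as direction vectors (cos, sin)."""
    return np.array([np.cos(angle), np.sin(angle)])
```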
We complement these cues with information about the points \(\mathcal{X}_{\mathbf{b}}\subset\mathcal{X}\) inside the bounding box \(\mathbf{b}\). Besides the overall number of points \(|\mathcal{X}_{\mathbf{b}}|\), we extract elementary low-level statistics: we scale the box and the associated points to a uniform size and align them w.r.t. their center and yaw angle. In this canonical representation, we compute the mean, standard deviation, minimum and maximum of \(\mathcal{X}_{\mathbf{b}}\) along all axes, denoted as \(\mathcal{X}_{\mathbf{b}}^{\text{mean}},\mathcal{X}_{\mathbf{b}}^{\text{std}},\mathcal{X}_{\mathbf{b}}^{\text{min}},\mathcal{X}_{\mathbf{b}}^{\text{max}}\). We then aggregate these attributes into one feature vector representing the instance-specific plausibility per object as

\[\mathbf{f}^{I}=H_{I}\left(\left[\mathbf{b},\alpha,\|\mathbf{c}\|,|\mathcal{X}_ {\mathbf{b}}|,\mathcal{X}_{\mathbf{b}}^{\text{mean}},\mathcal{X}_{\mathbf{b} }^{\text{std}},\mathcal{X}_{\mathbf{b}}^{\text{min}},\mathcal{X}_{\mathbf{b} }^{\text{max}}\right]^{\top}\right), \tag{2}\]

where we pass all angles as direction vectors \((\cos(\cdot),\sin(\cdot))\) and normalize all metric properties to unit length using the corresponding maximum value range.

### Contextual Geometric Properties

Especially in the case of uncertain detections, _e.g_. detections that are far away and therefore consist of only a few 3D points, or detections that are partially occluded and therefore appear atypical, geometric contextual information can be very useful in assessing a confidence score. Examples include a vehicle that is heavily occluded but drives in a convoy with other vehicles, or cyclists moving in a group. However, this information is usually not available for the confidence estimation, since the object proposals are evaluated individually within the receptive field, without explicit knowledge of the objects detected in their vicinity and their properties.

In order to assess the plausibility of a detection \(\mathbf{y}\) in a more holistic way, we use the geometric relations to surrounding objects in the scene. We consider all neighbors within a certain radius \(r\) around the object to be evaluated and create a per-object representation that captures the relationships to these neighbors. In order to be independent of the number of neighboring objects, we first create a feature vector for each neighbor, which we then combine into a unified representation using a symmetric pooling function. As input parameters per neighbor we use the distance to the object to be evaluated \(\|\mathbf{c}-\mathbf{c}_{n}\|\), the direction vector \(\mathbf{c}-\mathbf{c}_{n}\), the difference between the two heading angles \(\Theta-\Theta_{n}\), and the neighbor's class label \(y_{n}\). To incorporate the validity of the neighbor, we leverage \(\mathbf{f}_{n}^{I}\), which represents the instance-specific properties of the neighbor. This information is encoded via the shared-weight MLP \(H_{C}\) to form the feature vector \(\mathbf{f}_{n}^{C}\) for each individual neighbor,

\[\mathbf{f}_{n}^{C}=H_{C}\left(\left[\|\mathbf{c}-\mathbf{c}_{n}\|,\mathbf{c} -\mathbf{c}_{n},\Theta-\Theta_{n},y_{n},\mathbf{f}_{n}^{I}\right]^{\top} \right), \tag{3}\]

where the angle \(\Theta-\Theta_{n}\) is provided as a direction vector and the metric features are normalized to unit length. Finally, we aggregate the information from the individual neighbors into a unified representation for later processing, as shown in Figure 3 and sketched in the code below. This requires accumulating features of an unknown (and varying) number of neighbors. To impose no constraint on the number of neighbors, we take inspiration from the pooling step of point-based detectors, _e.g_. [23], and employ a symmetric max pooling function to form a plausibility signature \(\mathbf{f}^{C}\) over the feature vectors of all neighbors. Thus, in this representation the geometric relations to the surrounding objects are accumulated for the subsequent confidence estimation, taking into account the respective local properties of the neighbors.
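A minimal PyTorch sketch of this neighbor encoding and pooling; the layer sizes follow the implementation details in Section 4, but the code is otherwise our own simplification (the class label is passed as a plain scalar and the unit-length normalization of the metric features is omitted for brevity):

```python
import torch
import torch.nn as nn

class NeighborEncoder(nn.Module):
    """Shared MLP H_C (Eq. 3) followed by symmetric max pooling."""

    def __init__(self, inst_dim=128, hidden=256, out_dim=64, radius=40.0):
        super().__init__()
        self.radius, self.out_dim = radius, out_dim
        # per-neighbor input: distance (1) + direction c - c_n (3)
        # + heading difference as (cos, sin) (2) + class label (1)
        # + neighbor instance features f^I_n (inst_dim)
        self.mlp = nn.Sequential(
            nn.Linear(1 + 3 + 2 + 1 + inst_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim), nn.ReLU(),
        )

    def forward(self, centers, headings, labels, f_inst, idx):
        """centers (N, 3), headings (N,), labels (N,), f_inst (N, inst_dim);
        idx is the index of the detection whose context is encoded."""
        rel = centers[idx] - centers                 # c - c_n
        dist = rel.norm(dim=1)                       # ||c - c_n||
        mask = (dist > 0) & (dist < self.radius)     # neighbors within r
        if not mask.any():                           # isolated detection
            return torch.zeros(self.out_dim)
        dtheta = headings[idx] - headings[mask]      # Theta - Theta_n
        feats = torch.cat([
            dist[mask].unsqueeze(1), rel[mask],
            torch.cos(dtheta).unsqueeze(1), torch.sin(dtheta).unsqueeze(1),
            labels[mask].float().unsqueeze(1), f_inst[mask],
        ], dim=1)
        return self.mlp(feats).max(dim=0).values     # signature f^C
```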
### Data Fusion & Confidence Prediction

To estimate the new confidence value for a detection, we merge the instance-specific and contextual geometric features. The instance-specific information encoded in \(\mathbf{f}^{I}\), as well as the contextual information in \(\mathbf{f}^{C}\), are concatenated and processed using \(H_{F}\), an MLP with a sigmoid output function, to estimate the new confidence score

\[\widehat{s}=H_{F}\left(\left[\mathbf{f}^{I},\mathbf{f}^{C}\right]^{\top}\right). \tag{4}\]

We train the whole module, including the two feature encoding networks \(H_{I}\) and \(H_{C}\), using an end-to-end training strategy. As the loss function \(\mathcal{L}_{\text{conf}}(\widehat{s},u)\) for this task we use the focal loss [18] to focus learning on hard examples. The goal is to divide the set of detections as well as possible into true and false positive examples, using the binary category label \(u\in\{0,1\}\) as target. As an auxiliary task during training, we estimate the IoU with the ground truth bounding box in a further output \(\widehat{v}\) of \(H_{F}\). Therefore, we add an IoU-guidance [16, 41, 45] L1-loss term \(\mathcal{L}_{\text{IoU}}(\widehat{v},v)\). This increases the importance of the point distribution statistics within the features, as they provide evidence of the bounding box accuracy, from which the confidence estimate also benefits. The overall loss function is therefore

\[\mathcal{L}=\mathcal{L}_{\text{conf}}(\widehat{s},u)\ +\ \lambda_{\text{ IoU}}\ \mathcal{L}_{\text{IoU}}(\widehat{v},v), \tag{5}\]

where \(\lambda_{\text{IoU}}\) is a hyper-parameter to adjust the influence of the auxiliary task; a sketch of this objective is given below.
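A compact sketch of Eq. (5); we assume torchvision's sigmoid focal loss as the concrete focal-loss implementation, since the paper only specifies focal loss [18], and apply it to the pre-sigmoid logit of \(H_{F}\), which is equivalent to penalizing the sigmoid output of Eq. (4):

```python
import torch.nn.functional as F
from torchvision.ops import sigmoid_focal_loss

def gace_loss(s_logit, v_hat, u, v, lambda_iou=0.5):
    """Overall training objective of Eq. (5).

    s_logit: raw confidence output of H_F (before the sigmoid of Eq. 4),
    v_hat:   predicted IoU (auxiliary output of H_F),
    u:       binary true/false-positive labels in {0, 1},
    v:       IoU of each detection with its matched ground-truth box.
    """
    l_conf = sigmoid_focal_loss(s_logit, u.float(), reduction="mean")
    l_iou = F.l1_loss(v_hat, v)          # IoU-guidance L1 term
    return l_conf + lambda_iou * l_iou
```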
## 4 Experiments

To demonstrate the benefits of GACE, we evaluate our black-box confidence optimization method on several well-known state-of-the-art 3D object detection pipelines. In particular, we apply it to the pillar-based PointPillars [15], the voxel-based SECOND [38] and Centerpoint [43], as well as the hybrid methods Part-A\({}^{2}\) [31], PV-RCNN [28] and PV-RCNN++ [29].

**Datasets:** For the evaluation of our approach we use the Waymo Open Dataset [33] as well as the KITTI dataset [6]. The Waymo dataset is one of the largest and most challenging public datasets for autonomous driving research. It provides 798 training scenes and 202 validation scenes, where each scene consists of about 200 LiDAR samples covering the full 360\({}^{\circ}\) field-of-view. In total, the dataset consists of over 8.9M annotated objects classified into vehicles, pedestrians and cyclists. We follow the common evaluation protocol using the standard metrics average precision (AP) and average precision with heading (APH), where true positives are weighted by their heading accuracy. In addition, we use the KITTI dataset, one of the most popular datasets for 3D object detection, with the standard split [2] into a training set (3712 samples) and a validation set (3769 samples). We also follow the common evaluation practice using average precision with 40 recall points. For both datasets, the 3D IoU threshold for a true positive sample is 0.7 for vehicles and 0.5 for pedestrians and cyclists.

**Implementation Details:** Our experimental setup is based on the open-source toolbox OpenPCDet [35] (code: [https://github.com/dschinagl/gace](https://github.com/dschinagl/gace)). To ensure the reproducibility of our results, all base models used in the experiments have been trained on the training set using the default configuration and default training policies of OpenPCDet. This includes augmenting the data by randomly rotating, scaling and flipping the point cloud, as well as ground truth sampling [38], _i.e_. the placement of objects from other training examples in the current frame. To create the training data for our module, we use the base model as a black-box detection pipeline. In a single inference run on the training set, we collect the output of the base model; this set of all detections represents our training data. The actual training process of our module is therefore entirely decoupled from the base model.

Our sub-networks for feature transformation, \(H_{I}\) and the shared network \(H_{C}\), as well as the confidence prediction network \(H_{F}\), are two-layer MLPs with 256 feature dimensions. The feature vector \(\mathbf{f}^{I}\), in which the instance-specific information is encoded, is 128-dimensional, and \(\mathbf{f}^{C}\) for the contextual information is 64-dimensional. When determining the contextual geometric properties, we accumulate the neighboring objects that are within a radius of \(r=40\,\)m. We train our model end-to-end with the Adam [13] optimizer and a learning rate of 0.001. The weighting of the auxiliary loss term during training is set to \(\lambda_{\text{IoU}}=0.5\). In all experiments we use the same architecture as well as hyperparameters, regardless of the underlying black-box base model. We train our module over 5 epochs on a single NVIDIA® GeForce® RTX 3090 GPU, which takes less than 10 minutes due to our favorable lightweight model size and low feature dimensions.

### Main Results

Table 1 summarizes the results of our confidence optimization on the Waymo dataset for different baseline detection pipelines. For each detector architecture, we report the performance of the base detector as well as the results after applying our GACE module. Please note that for better traceability, the results of the base detectors correspond to the respective OpenPCDet implementations (see the OpenPCDet model zoo). Some reported baseline scores are even higher than in the original papers due to the augmentation techniques used in OpenPCDet. For all baseline detectors and all classes, adjusting the confidence with our approach leads to an increase in performance without exception. The overall performance gains range from +1.02mAPH (LEVEL_2) for PV-RCNN++ to +4.94mAPH (LEVEL_2) for SECOND, demonstrating the significance of this geometric information in estimating a confidence score for a detection.

**Object Classes:** The performance improvements are significantly higher for the classes of pedestrians and cyclists. Intuitively, objects of these vulnerable road users contain significantly fewer 3D points due to their smaller spatial extent compared to vehicles. This effect is further aggravated by possible occlusions. Furthermore, the class of cyclists in particular is underrepresented in the training data.
These properties make objects of these classes more difficult to detect, which leads to a higher number of false positives, but also makes the confidence estimation more complex. Especially in these cases, our confidence enhancement method benefits from the additional geometric information when separating false positives from true positives, resulting in a higher impact of GACE.

**Base Detectors:** The overall performance gain is highest when applying GACE to the pillar-based (PointPillars [15]) and voxel-based methods (SECOND [38] and Centerpoint [43]). These methods lose valuable point information already in their preprocessing stage, _i.e_. by voxelization. Thus, the optimization potential for these methods is higher than for the hybrid methods, which explicitly incorporate point-level information at detection locations. However, by explicitly exploiting both instance-specific and contextual geometric properties of the detections, GACE also leads to a significant performance improvement for these methods, _i.e_. Part-A\({}^{2}\) [31], PV-RCNN [28] and PV-RCNN++ [29], most notably for the classes of pedestrians and cyclists.

**Precision/Recall Plots:** To better illustrate the impact of our method, Figure 4 shows the precision/recall plots before and after applying GACE with a SECOND [38] model as the base detector (see the supplemental material for the other base detectors). Since the recall remains unaffected, the significant performance gains of GACE are entirely due to an increase in precision. The better separation of true and false positives is mainly seen in the more challenging classes of pedestrians and cyclists, where the number of false detections is significantly higher and confidence estimation is more difficult. Furthermore, we see that the precision gains are higher in regions of higher recall, _i.e_. for detections with a lower confidence score. Especially these initially underrated detections of objects of the vulnerable classes represent a safety risk, which can be reduced by GACE.

**Range-based Evaluation:** Table 2 summarizes our evaluation across the different distance ranges of Waymo. It shows that GACE consistently improves the performance of all baseline detectors in all sub-ranges, especially in the far range. Detections at long distances are more challenging due to the lower point density, making it harder to distinguish true positives from false positives. Therefore, the geometric information is even more valuable in these cases, _e.g_. from +2.93mAPH (LEVEL_2) for PV-RCNN++ up to +4.97mAPH (LEVEL_2) for PointPillars. The detailed evaluation results for all subclasses can be found in the supplemental material.
\begin{table} \begin{tabular}{l||c c c c|c c c c|c c c c|c c c c} \hline \hline \multirow{3}{*}{Method} & \multicolumn{4}{c|}{Overall} & \multicolumn{4}{c|}{Vehicles (IoU=0.7)} & \multicolumn{4}{c|}{Pedestrians (IoU=0.5)} & \multicolumn{4}{c}{Cyclists (IoU=0.5)} \\ & \multicolumn{2}{c|}{LEVEL\_1} & \multicolumn{2}{c|}{LEVEL\_2} & \multicolumn{2}{c|}{LEVEL\_1} & \multicolumn{2}{c|}{LEVEL\_2} & \multicolumn{2}{c|}{LEVEL\_1} & \multicolumn{2}{c|}{LEVEL\_2} & \multicolumn{2}{c|}{LEVEL\_1} & \multicolumn{2}{c}{LEVEL\_2} \\ & mAP & mAPH & mAP & mAPH & AP & APH & AP & APH & AP & APH & AP & APH & AP & APH & AP & APH \\ \hline PointPillars [15] & 64.72 & 57.07 & 58.57 & 51.73 & 70.99 & 70.35 & 62.79 & 62.20 & 66.36 & 47.15 & 58.27 & 41.32 & 56.81 & 53.71 & 54.66 & 51.67 \\ + GACE (Ours) & 69.25 & 61.24 & 62.98 & 55.73 & 71.92 & 71.28 & 63.63 & 63.04 & 72.18 & 51.97 & 64.06 & 45.96 & 63.64 & 60.47 & 61.25 & 58.20 \\ \hline _Improvement_ & **+4.53** & **+4.17** & **+4.41** & **+4.00** & **+0.93** & **+0.93** & **+0.84** & **+0.84** & **+5.82** & **+4.82** & **+5.79** & **+4.64** & **+6.83** & **+6.76** & **+6.59** & **+6.53** \\ \hline SECOND [38] & 65.13 & 60.81 & 59.01 & 55.12 & 70.93 & 70.30 & 62.65 & 62.07 & 65.67 & 54.96 & 57.78 & 48.25 & 58.78 & 57.85 & 57.18 & 56.59 & 55.05 \\ + GACE (Ours) & 70.17 & 66.13 & 63.74 & 60.06 & 71.56 & 70.92 & 63.22 & 62.63 & 77.11 & 61.87 & 63.27 & 54.37 & 67.22 & 65.99 & 64.73 & 63.16 \\ _Improvement_ & **+5.04** & **+5.32** & **+4.73** & **+4.94** & **+0.63** & **+0.62** & **+0.57** & **+0.56** & **+6.04** & **+6.91** & **+5.49** & **+6.12** & **+8.44** & **+8.11** & **+8.11** \\ \hline Part-A\({}^{2}\) [31] & 70.30 & 66.66 & 63.53 & 60.27 & 73.35 & 72.81 & 64.73 & 64.24 & 70.20 & 61.01 & 60.83 & 52.85 & 67.53 & 66.18 & 65.03 & 63.73 \\ + GACE (Ours) & 73.07 & 69.21 & 66.24 & 62.77 & 73.99 & 73.43 & 65.58 & 64.87 & 72.36 & 62.93 & 62.31 & 54.81 & 72.84 & 71.28 & 70.13 & 68.63 \\ _Improvement_ & **+2.77** & **+2.55** & **+2.71** & **+2.50** & **+0.64** & **+0.62** & **+0.65** & **+0.63** & **+2.34** & **+1.92** & **+2.38** & **+1.96** & **+5.31** & **+5.10** & **+5.10** & **+4.90** \\ \hline Centerpoint [43] & 73.01 & 70.35 & 66.79 & 64.33 & 72.87 & 72.33 & 64.76 & 64.27 & 74.48 & 68.22 & 66.55 & 60.81 & 71.69 & 70.50 & 69.06 & 67.92 \\ + GACE (Ours) & 75.88 & 72.98 & 69.19 & 66.76 & 74.49 & 73.99 & 61.69 & 65.73 & 78.62 & 72.48 & 70.44 & 64.73 & 73.64 & 72.48 & 70.94 & 69.83 \\ _Improvement_ & **+2.57** & **+2.63** & **+2.40** & **+2.43** & **+1.62** & **+1.66** & **+4.13** & **+4.16** & **+4.14** & **+4.26** & **+3.89** & **+3.92** & **+1.95** & **+1.98** & **+1.88** & **+1.91** \\ \hline PV-RCNN [28] & 71.11 & 66.78 & 64.40 & 60.50 & 74.79 & 74.17 & 61.67 & 65.61 & 72.06 & 61.46 & 63.00 & 53.56 & 66.48 & 64.72 & 64.03 & 62.34 \\ + GACE (Ours) & 73.38 & 68.89 & 66.65 & 62.57 & 75.20 & 74.55 & 66.57 & 65.97 & 73.84 & 62.97 & 64.88 & 55.14 & 71.12 & 69.61 & 68.50 & 66.60 \\ \hline _Improvement_ & **+2.27** & **+2.11** & **+2.25** & **+2.07** & **+0.41** & **+0.38** & **+0.40** & **+0.36** & **+1.78** & **+1.51** & **+1.88** & **+1.58** & **+4.64** & **+4.47** & **+4.26** \\ \hline PV-RCNN++ [29] & 75.72 & 73.05 & 69.22 & 66.73 & 77.30 & 76.81 & 68.92 & 68.47 & 78.91 & 72.42 & 70.43 & 64.41 & 70.95 & 69.90 & 68.31 & 67.31 \\ + GACE (Ours) & 76.76 & 74.02 & 70.31 & 67.75 & 77. \\ \hline \hline \end{tabular} \end{table} Table 1: Results of our confidence optimization on the Waymo validation set for different baseline detection pipelines (AP/APH at LEVEL_1 and LEVEL_2).
### Ablation Studies

**Main Components:** We evaluate the contribution of our proposed instance-specific and contextual properties, as well as the influence of the auxiliary IoU loss, for a SECOND [38] model as baseline in Table 3. Incorporating the instance-specific geometric properties already leads to a significant performance increase of +3.65AP/+3.32APH. Including also the contextual information, _i.e_. incorporating the relationships to the neighboring detection hypotheses, further increases the performance by another +0.71AP/+1.14APH. Compared to the contributions of these two components, the impact of incorporating the IoU guidance is minor, leading to an additional +0.37AP/+0.48APH. Note that the degradation when adding \(\mathcal{L}_{\text{IoU}}\) to only contextual features is caused by the lack of point information, which does not allow a reasonable estimation of the IoU. Overall, the instance-specific geometric information contributes more strongly than the contextual geometric information.

**Instance-Specific Properties:** We analyze the contribution of each feature group within the instance-specific properties, namely _box properties_ \((\mathbf{b},\|\mathbf{c}\|)\), _number of points_ \((|\mathcal{X}_{\mathbf{b}}|)\), _viewing angle_ \((\alpha)\), and _point statistics_ \((\mathcal{X}_{\mathbf{b}}^{\text{mean}},\mathcal{X}_{\mathbf{b}}^{\text{std}},\mathcal{X}_{\mathbf{b}}^{\text{min}},\mathcal{X}_{\mathbf{b}}^{\text{max}})\). Table 4 shows the impact of each group on the overall performance when used exclusively, indicating a high contribution of the box properties and the point statistics. A complete list of the combinations can be found in the supplemental material.

**Contextual Properties Radius:** The dependence of the performance on the chosen context radius \(r\) is shown in Figure 5. It can be seen that the performance increases significantly up to \(\sim 15\,\)m, followed by a slight degradation starting at \(\sim 40\,\)m, which illustrates the importance of neighboring objects in the near and middle ranges.

Table 4: Impact of each instance-specific feature group (box properties, number of points, viewing angle, point statistics) on the overall performance when used exclusively (SECOND base model / Waymo).

### Generalization and Transferability

Since we use only the detection attributes and corresponding properties of the underlying point cloud as input, a GACE module trained on base detector A can be directly applied to another base detector B. Furthermore, since all metric features are normalized to their maximum possible value and the statistical parameters (point distribution) are computed from a normalized unit-length box, GACE can be applied not only to a different detector, but also directly to a different detector on a different dataset. Therefore, to demonstrate the general applicability of GACE, we freeze a GACE module trained on detections from a SECOND detector on Waymo and apply it directly to detections of a SECOND detector on the KITTI dataset [6]. As shown in the first row of Table 5, this also leads to considerable performance improvements despite the distribution shift to a different dataset (different LiDAR sensor, different country). Even more remarkably, significant performance gains are also obtained when the same GACE module is applied to a different base detector, _e.g_. +7.42AP for PV-RCNN on pedestrians, while maintaining the same performance for cars.
The results demonstrate the excellent generalization capability of GACE and the general validity of the geometric information regardless of the dataset.

### Runtime Analysis

In Table 6 we show the runtime analysis of GACE for a 360\({}^{\circ}\) field-of-view Waymo point cloud using a single NVIDIA® GeForce® RTX 3090 GPU. The computationally most intensive part is the feature extraction, especially extracting the statistical features for all detections. However, this can be computed efficiently via PyTorch einsum. During inference, we first compute the instance-specific plausibility \(\mathbf{f}^{I}\) for each detection via \(H_{I}\), which is then used as input to \(H_{C}\) for the corresponding neighbors. Overall, GACE is capable of processing \(\sim 490\) Waymo point clouds per second with \(\sim 100\) detections each.

## 5 Limitations

While our method has proven effective in improving the object detection performance for vulnerable classes such as pedestrians and cyclists, it is worth noting that it may not offer equally significant benefits for simpler classes such as vehicles. This is because all detectors typically produce few false positives for these _easier-to-detect_ objects, so there is less room for improvement for this class. However, it is important to consider the context in which our method is applied. Even if the performance improvement is not as large for simpler classes, the overall impact of our method on safety should not be underestimated. In many real-world scenarios, pedestrians and cyclists are particularly vulnerable, and any improvement in their detection can make a significant contribution to safety.

## 6 Conclusion

We proposed GACE, a method to better evaluate object hypotheses from black-box LiDAR-based 3D object detectors by explicitly assessing numerous geometric properties inherent in the detections. This enables a better separation between true and false positive detections and thus significantly improves the performance. In a comprehensive analysis, we demonstrated the performance of our method for several state-of-the-art detectors and also the generalization capabilities of GACE. This underlines the importance and general validity of the geometric information inherent in 3D detections, which has been largely neglected in the past.

**Acknowledgements** This work was partially funded by the Austrian FFG project iLIDS4SAM (878713) and by the Christian Doppler Laboratory for Embedded Machine Learning.

\begin{table} \begin{tabular}{l|l|l} \multirow{3}{*}{**Feature Extraction**} & Points in Boxes Query & \(0.56\) ms \\ & Geometric \& Statistical Features & \(0.98\) ms \\ & Neighboring Object IDs Query & \(0.07\) ms \\ \hline \multirow{3}{*}{**Network Inference**} & \(H_{I}\) (Instance-specific) & \(0.14\) ms \\ & \(H_{C}\) (Contextual) & \(0.17\) ms \\ & \(H_{F}\) (Confidence Estimation) & \(0.12\) ms \\ \hline \multicolumn{2}{l|}{**Overall**} & \(\mathbf{2.04}\) **ms** \\ \end{tabular} \end{table} Table 6: GACE runtime analysis per 360\({}^{\circ}\) field-of-view Waymo point cloud on a single NVIDIA® GeForce® RTX 3090 GPU.

Figure 5: Performance of purely context-based GACE for different radii \(r\) (SECOND base model / Waymo overall).
\begin{table} \begin{tabular}{l|c|c|c} Method & Car & Pedestrian & Cyclist \\ \hline SECOND (KITTI) & \(81.61\) & \(51.14\) & \(66.74\) \\ +GACE (SECOND Waymo) & \(82.04^{+0.43}\) & \(57.11^{+5.97}\) & \(70.99^{+4.25}\) \\ \hline PointPillars (KITTI) & \(78.39\) & \(51.41\) & \(62.94\) \\ +GACE (SECOND Waymo) & \(78.51^{+0.12}\) & \(55.30^{+3.89}\) & \(67.94^{+5.00}\) \\ \hline Part-A\({}^{2}\) (KITTI) & \(82.92\) & \(59.73\) & \(70.10\) \\ +GACE (SECOND Waymo) & \(82.94^{+0.02}\) & \(64.21^{+4.48}\) & \(72.16^{+2.06}\) \\ \hline PV-RCNN (KITTI) & \(82.86\) & \(53.64\) & \(70.42\) \\ +GACE (SECOND Waymo) & \(82.84^{-0.02}\) & \(61.06^{+7.42}\) & \(72.70^{+2.28}\) \\ \end{tabular} \end{table} Table 5: Model Transfer: Applying a GACE module trained on SECOND detections on Waymo (without LiDAR elongation, which is not available on KITTI) to different detectors on the KITTI dataset (moderate difficulty / @R40).
2309.13868
Charge-Density-Wave State in Extremely Overdoped Cuprates Driven by Phonons
Recent resonant x-ray scattering (RXS) experiments revealed a novel charge order in extremely overdoped La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) [Phys. Rev. Lett. 131,116002]. The observed charge order appears around the $(\pi/3,0)$ wavevector, distinct from the well-known stripe fluctuations near 1/8 doping, and persists from cryogenic temperatures to room temperature. To investigate the origin of this charge order in the overdoped regime, we use determinant quantum Monte Carlo (DQMC) simulations to examine correlated models with various interactions. We demonstrate that this distinctive CDW originates from remnant correlations in extremely overdoped cuprates, with its specific pattern shaped by interactions beyond the Hubbard model, particularly electron-phonon couplings. The persistence of the $(\pi/3,0)$ wavevector across different doping levels indicates the presence of nonlocal couplings. Our study reveals the significant role of phonons in cuprates, which assist correlated electrons in the formation of unconventional phases.
Jiarui Liu, Shaozhi Li, Edwin Huang, Yao Wang
2023-09-25T04:40:48Z
http://arxiv.org/abs/2309.13868v2
# Charge-Density Wave in Overdoped Cuprates Driven by Electron-Phonon Couplings

###### Abstract

Recent resonant x-ray scattering (RXS) experiments revealed a novel charge order in highly overdoped La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\) (LSCO) [1]. The observed charge order appears around the \((\pi/3,0)\) wavevector and remains robust from cryogenic temperatures to room temperature. To investigate the origin of this charge order in the overdoped region, we use determinant quantum Monte Carlo (DQMC) simulations to examine models with various interactions. We demonstrate that this CDW originates from remnant correlations in overdoped cuprates. The doping-independent wavevector \((\pi/3,0)\) further reflects the presence of nonlocal electron-phonon couplings. Our study reveals the importance of phonons in the cuprates, which assist correlated electrons in the formation of exotic phases.

Unconventional superconductivity (SC) in cuprates has attracted extensive experimental and theoretical studies [2; 3; 4; 5; 6]. In addition to its promising applications in energy and quantum technology, the exploration of cuprates has been driven by their numerous complex phases [7]. These phases can coexist or compete with SC and have proven challenging to address within traditional solid-state frameworks. One prominent phase among these is the charge density wave (CDW). In conventional BCS superconductors, phonons mediate an effective electron-electron attraction that gives rise to both mobile Cooper pairs and immobile charge modulations. These two states compete against each other, as evidenced by investigations into Holstein-like models [8; 9; 10]. In cuprates, despite having distinct pairing symmetry and likely different pairing mechanisms from BCS theory, many experiments have unveiled the presence of CDW orders or fluctuations near the SC phase. Their proximity suggests an intertwined origin of CDW and high-\(T_{\rm c}\) SC [11; 12; 13; 14; 15]. Furthermore, recent observations of CDW in infinite-layer nickelates [16; 17; 18; 19] provide an additional example of the relation between these two states in unconventional superconductors analogous to cuprates.

Previous studies of CDW have focused on underdoped and optimally doped cuprates [20; 21; 22], drawing significant attention to the intricate interplay between SC, the pseudogap, and CDW [23; 24; 25]. In particular, near 12.5% hole doping, charge order has been detected with a unidirectional stripe behavior and a periodicity of 4 unit cells [see Fig. 1]. Advanced numerical many-body methods have successfully captured and described this stripe order in the context of the Hubbard model [26; 27]. Remarkably, the CDW appearing in these simulations around 12.5% hole doping exhibits a notable competition with \(d\)-wave superconductivity [28; 29; 30].

Recently, CDWs in overdoped cuprates have been observed in BSCCO [31], prompting a novel exploration of their role in high-\(T_{c}\) cuprates. This CDW phenomenon was subsequently observed in extremely overdoped LSCO, in the form of a charge modulation with a period of 6 unit cells in the antinodal direction [see Fig. 1], which persists up to room temperature [1]. (Here, we refer to the \((H,0)\) and \((H,H)\) directions as "antinodal" and "nodal", respectively.) This CDW order starts to develop at 35% doping and maximizes its intensity at approximately 50% doping. Despite the expected screening of Coulomb interactions in the overdoped regime, Fermi-surface instabilities cannot explain the origin of this CDW [1].
Consequently, it is likely driven by interactions. Residing away from the SC and pseudogap phases, the overdoped regime may experience less influence from the dominant spin fluctuations, providing a unique opportunity to reveal intrinsic yet subleading interactions in cuprates. For this purpose, we examine the extremely overdoped cuprates using various correlated models, including the Hubbard model, the Hubbard-Holstein model, and their variants. Our findings reveal that while the Hubbard interaction correctly produces a correlation-induced charge instability maximized at 50% doping, it fails to capture the charge pattern with wavevector \((\pi/3,0)\). The inclusion of electron-phonon coupling (EPC), particularly nonlocal EPC, shifts the ordering wavevector from \((\pi,\pi)\) towards the anticipated \((\pi/3,0)\). Building upon this framework, we quantify the strength and distribution of the EPC, addressing the experimental observations in LSCO. This work underscores the essential contributions of EPC to correlated phases in cuprates.

Figure 1: Schematic illustrating different types of charge fluctuations or order observed from experiments across various doping regimes of cuprates. The origin of the overdoped period-6 CDW serves as the primary focus of this work.

To capture the correlations, our theoretical study starts from the single-band Hubbard model [32; 33]:

\[\mathcal{H}_{\text{Hubbard}}=-\sum_{ij\sigma}t_{ij}c_{i\sigma}^{\dagger}c_{j \sigma}-\mu\sum_{i\sigma}n_{i\sigma}+\sum_{i}Un_{i\uparrow}n_{i\downarrow}\,,\]

where \(c_{i\sigma}\) (\(c_{i\sigma}^{\dagger}\)) annihilates (creates) an electron at site \(i\) with spin \(\sigma\) and \(n_{i\sigma}=c_{i\sigma}^{\dagger}c_{i\sigma}\) is the density operator. The hopping term is governed by the one-electron integral \(t_{ij}\) of Wannier wavefunctions at sites \(i\) and \(j\). Under the tight-binding assumption, we constrain the hopping to the nearest and next-nearest neighbors (denoted as \(t\) and \(t^{\prime}\), respectively). The Coulomb interaction is simplified to the on-site Hubbard interaction \(U\), and the chemical potential \(\mu\) determines the electron filling within the grand canonical ensemble. To reflect the electronic structure of LSCO, we use \(t^{\prime}=-0.15t\) and \(U=8t\) throughout the main text [34; 35], while the \(t^{\prime}\) dependence is discussed in the Supplementary Material (SM) [36].

We employ the determinant quantum Monte Carlo (DQMC) algorithm to simulate the Hubbard model, considering that the overdoped CDW persists at high temperatures [1; 31]. DQMC is an unbiased quantum many-body method, which maps the thermal density matrix into a summation over Hubbard-Stratonovich field configurations that is estimated stochastically through importance sampling [37; 38]. This work aims to address the charge order seen in the RXS experiments and primarily focuses on the charge susceptibility \(\chi_{\text{c}}(\mathbf{q},\omega)\) at \(\omega=0\),

\[\begin{split}\chi_{\text{c}}(\mathbf{q},\omega=0)=& \int_{0}^{\beta}d\tau\sum_{i,j}e^{-i\mathbf{q}\cdot(\mathbf{r}_{i}- \mathbf{r}_{j})}\\ &\times\left(\left\langle n_{i}(\tau)n_{j}(0)\right\rangle-\left \langle n_{i}(\tau)\right\rangle\left\langle n_{j}(0)\right\rangle\right). \end{split} \tag{1}\]

Here, \(\beta=1/T\) is the inverse temperature and \(n_{i}=n_{i,\uparrow}+n_{i,\downarrow}\). If interactions are neglected, \(\chi_{\text{c}}\) can be evaluated using the Lindhard response function, which fails to explain the observed CDW, as discussed in Ref. [1].
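For reference, the noninteracting Lindhard evaluation mentioned above is straightforward to reproduce; the sketch below (our own minimal implementation) assumes the standard tight-binding dispersion with the \(t'\) of the main text, and the chemical potential must be tuned separately to the desired filling:

```python
import numpy as np

def lindhard_chi0(q, t=1.0, tp=-0.15, mu=0.0, T=0.4, L=120):
    """Static Lindhard function chi_0(q) for the 2D tight-binding band
    eps(k) = -2t(cos kx + cos ky) - 4 t' cos kx cos ky - mu,
    evaluated on an L x L k-grid at temperature T (units of t)."""
    k = 2 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(k, k, indexing="ij")

    def eps(kx, ky):
        return (-2 * t * (np.cos(kx) + np.cos(ky))
                - 4 * tp * np.cos(kx) * np.cos(ky) - mu)

    f = lambda e: 1.0 / (np.exp(e / T) + 1.0)      # Fermi function
    e_k, e_kq = eps(kx, ky), eps(kx + q[0], ky + q[1])
    num, den = f(e_k) - f(e_kq), e_kq - e_k
    safe = np.abs(den) > 1e-9
    # degenerate denominators: use the limit -df/de = f(1-f)/T
    chi = np.where(safe, num / np.where(safe, den, 1.0),
                   f(e_k) * (1.0 - f(e_k)) / T)
    return 2.0 * chi.mean()                        # factor 2 for spin

# example: the two momenta discussed in the text
print(lindhard_chi0(q=(np.pi / 3, 0.0)), lindhard_chi0(q=(np.pi, np.pi)))
```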
With the Hubbard model, we examine the impact of strong electron-electron correlations on overdoped cuprates. Figure 2 shows the doping dependence and momentum dependence of \(\chi_{\text{c}}\) at \(T=0.4t\). We employ a \(12\times 12\) square cluster to preserve the \(D_{4h}\) symmetry, which is necessary for an unbiased comparison of charge instabilities along the nodal and antinodal directions. Since the overall charge susceptibility varies from almost zero at half filling to substantial values at large dopings, in our figures we normalize \(\chi_{\text{c}}(\mathbf{q},\omega)\) at each doping by its maximal intensity to reflect the relative distribution of spectral weight in momentum space.

Figure 2: The charge susceptibility \(\chi_{\text{c}}(\mathbf{q},\omega=0)\) obtained from the Hubbard model on a \(12\times 12\) square lattice at temperature \(T=0.4t\). The susceptibility is normalized for each doping (panel). The white circles mark the maximal instability among all momenta for each doping. The upper ribbons indicate the corresponding phase. The last panel summarizes the doping dependence of the charge susceptibility at \(\mathbf{q}=(\pi,\pi)\) and \((\pi/3,0)\) [the experimentally relevant momentum].

As the doping increases, the Hubbard model exhibits three discernible charge fluctuation behaviors. At and near half filling, the dominant antiferromagnetic (AFM) order is insulating and therefore suppresses any charge fluctuations, leading to a reduction in \(\chi_{\text{c}}\). Its residual intensity concentrates around the wavevector \((\pi,\pi)\), corresponding to the subleading doublon-hole fluctuations at nearest neighbors [39]. Upon reaching \(\sim 5\%\) doping, the system transforms from the AFM state to stripe fluctuations. The locking of charge and spin stripes results in a prevailing wavevector along the antinodal direction (see the SM [36]) [26; 40]. Beyond \(25\%\) hole doping, the stripe fluctuations are gradually supplanted by a checkerboard pattern of charge fluctuations with \(\mathbf{q}=(\pi,\pi)\). This short-range fluctuation stems from the strong local correlation caused by the repulsive Hubbard \(U\), which tends to correlate one hole with a neighboring doublon. As the hole doping increases from \(25\%\) to \(50\%\), the momentum distribution of the charge fluctuations remains qualitatively unchanged, accompanied by a rapid increase in the overall intensity. This intensification is attributed to charge carriers released from the AFM background. However, this upward trend stops and the susceptibility starts to drop at around quarter filling (50% doping), where a singly-occupied checkerboard pattern develops. Further doping beyond 50% breaks this checkerboard pattern, leading to a drop in intensity.

The evolution of the charge susceptibility obtained from the Hubbard model successfully captures the doping dependence of the overall intensity observed in experiments [1]. The maximum at quarter filling reflects the tendency towards a singly-occupied checkerboard pattern, since double occupancy is suppressed by the Hubbard interaction. Therefore, the experimentally observed doping dependence reflects the remnant correlations present in heavily doped cuprates [41; 42; 43; 44; 45; 46], not predictable by the Fermi-surface instability. Such an instability is universal in Hubbard models irrespective of specific band structures, as discussed in the SM [36]. It is therefore not surprising that the overdoped CDW is observed in multiple cuprate materials [1; 31].
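For concreteness, once the unequal-time density correlator has been measured in DQMC, Eq. (1) reduces to a trapezoidal integration over imaginary time followed by a lattice Fourier transform; the following sketch assumes a translation-averaged correlator on a uniform \(\tau\) grid and a uniform average density (input names are ours):

```python
import numpy as np

def charge_susceptibility(C, n_avg, beta):
    """Assemble chi_c(q, omega=0) of Eq. (1) from DQMC measurements.

    C:     array (L_tau, Lx, Ly), translation-averaged correlator
           <n_i(tau) n_j(0)> as a function of r = r_i - r_j on a
           uniform imaginary-time grid tau = 0 ... beta.
    n_avg: average density <n_i> (assumed uniform), whose square is
           the disconnected part subtracted in Eq. (1).
    beta:  inverse temperature.
    Returns chi_c(q) on the (Lx, Ly) momentum grid.
    """
    connected = C - n_avg**2                      # <n n> - <n><n>
    tau = np.linspace(0.0, beta, C.shape[0])
    chi_tau = np.trapz(connected, tau, axis=0)    # integral over tau
    chi_q = np.fft.fft2(chi_tau).real             # sum_r e^{-iq.r} chi(r)
    return chi_q

# per-doping normalization, as in Fig. 2:  chi_q / chi_q.max()
```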
Nonetheless, the charge susceptibilities of the Hubbard model fail to accurately reproduce the momentum distribution seen in experiments [1]. Once the Hubbard model is doped beyond the stripe regime, its charge response is consistently dominated by \(\mathbf{q}=(\pi,\pi)\). Excluding band-structure effects (different \(t^{\prime}\)), explained in the SM [36], we conclude that the Hubbard model is _insufficient_ to address the experimentally observed overdoped CDW.

A natural extension of the Hubbard model involves the inclusion of additional interactions. For example, recent studies of 1D cuprate chains have revealed the significance of attractive nonlocal interactions mediated by phonons [47; 48; 49; 50; 51; 52]. Experimental results on underdoped or optimally doped cuprates have further indicated the necessity to include long-range effective interactions, especially attractive ones, to account for the charge order pattern [53; 54]. With these considerations, we introduce EPCs into the Hubbard model and investigate their influence on the overdoped CDW. We restrict ourselves to site phonons coupled to the electron density (\(n_{i\sigma}\)) in this paper due to their direct impact on the charge density (see the SM for a brief discussion of other types of phonons [36]). In the case where the interaction is local, the model corresponds to the Hubbard-Holstein model, with the Hamiltonian

\[\mathcal{H}_{\rm HH}=\mathcal{H}_{\rm Hubbard}+\sum_{i}\left[\frac{M}{2}\omega _{\rm ph}^{2}X_{i}^{2}+\frac{P_{i}^{2}}{2M}\right]-\sum_{i\sigma}gX_{i}n_{i \sigma}\,.\]

Here, \(X_{i}\) (\(P_{i}\)) is the lattice displacement (momentum) at lattice site \(i\), \(g\) is the onsite EPC strength, \(M\) is the phonon oscillator mass (set as \(1t^{-1}\)), and \(\omega_{\rm ph}\) is the phonon frequency (set as \(1t\)).

Building upon the Hubbard-model results in Fig. 2, we focus on the momentum distribution of the charge susceptibilities at quarter filling (\(n=0.5\)) (see the SM [36] for doping-dependent results). Due to the numerical complexity associated with the phonon degrees of freedom, we employ a \(6\times 6\) square lattice here, corresponding to the smallest square cluster capable of accommodating the experimentally observed \((\pi/3,0)\) momentum.

Figure 3: (a) Charge susceptibility for the quarter-filled HEH model (\(U=8t\) and \(g=0.5t\)) with different \(g^{\prime}\)s. (b) Charge susceptibility for \(T=0.4t\) (dashed curves) and \(T=0.2t\) (solid curves) as a function of \(g^{\prime}\) with \(U=8t\). The blue and red curves denote results at the momenta \((\pi/3,0)\) and \((\pi,\pi)\), respectively. (c) Phonon-mediated effective interaction \(|V_{\rm eff}(\mathbf{q})|\) for \(\omega=0\) as a function of \(g^{\prime}\) at specific momentum points marked in the inset. (d) The \(\chi_{\rm c}(\mathbf{q})\) obtained by RPA (lower-left) and DQMC (upper-right insets) simulation for the extended-Holstein model with \(U=0\) and \(T=0.2t\). (e) Schematic illustrating that the nonlocal EPC \(g^{\prime}\) favors longer-period charge modulation, while the onsite EPC \(g\) favors short-period charge modulation.

The leftmost spectrum in Fig. 3(a) shows \(\chi_{\rm c}(\mathbf{q},\omega=0)\) for the Hubbard-Holstein model with \(g=0.5t\) [55], a value determined for 1D cuprate chains (see the SM for different \(g\) values [36]). This result suggests that the presence of the Holstein phonon has a marginal impact on the momentum distribution of \(\chi_{\rm c}\), and the susceptibility is still
dominated by the nodal direction near \((\pi,\pi)\). The on-site EPC, with a realistic strength, does not qualitatively change the conclusion derived from the Hubbard model. Thus, we further include nonlocal EPCs. This type of coupling has been found essential in explaining the recently observed attractive interactions in 1D cuprates [47; 48; 49]. Due to the distance-dependent decay of electrostatic couplings [48], our primary attention is on the EPC between the nearest-neighbor electron density (\(n_{i\sigma}\)) and lattice displacement (\(X_{i\pm\hat{x}}\) or \(X_{i\pm\hat{y}}\)), denoted as \(g^{\prime}\). This generalized model is referred to as the Hubbard-extended-Holstein (HEH) model.

Starting from the quarter-filled Hubbard-Holstein model (\(g^{\prime}=0\)), we gradually increase \(g^{\prime}\) while fixing \(g=0.5t\) [see Fig. 3(a)]. In the presence of this nearest-neighbor coupling, a small-momentum charge susceptibility begins to emerge at the zone center. At the same time, the checkerboard charge fluctuations at large momenta maintain their intensities. As \(g^{\prime}\) increases and exceeds \(\sim 0.25t\), the small-wavevector \(\chi_{\rm c}\), represented by \({\bf q}=(\pi/3,0)\), becomes dominant over the \((\pi,\pi)\) susceptibility [see Fig. 3(b)]. This critical coupling strength is slightly larger than the geometric estimation based on octahedral symmetry, which yields \(g^{\prime}\sim g/\sqrt{5}\) [48]. Considering the Jahn-Teller effect, the strengths of \(g^{\prime}\) and \(g\) can be closer in actual cuprates. Importantly, the intensities at these key wavevectors and their doping dependence are robust against temperature changes [see Fig. 3(b)], consistent with the RXS results [1]. Furthermore, while these nonlocal EPCs change the dominant wavevector, they do not affect the doping dependence of the charge susceptibility, which is still maximal near quarter filling (see the SM [36]).

The rise of the small-momentum susceptibilities can be elucidated by analyzing the phonon-mediated effective interaction between electrons. The \(g^{\prime}\) term contributes a modulation of the EPC in momentum space, i.e., \(g_{\bf q}=g+2g^{\prime}(\cos q_{x}+\cos q_{y})\). When \(g^{\prime}\) shares the same sign as \(g\), the coupling is predominantly projected onto small \({\bf q}\)'s. As the phonon-mediated dynamical interaction \(V_{\rm eff}({\bf q},\omega)=g_{\bf q}^{2}/M(\omega^{2}-\omega_{\rm ph}^{2})\) scales with \(g_{\bf q}^{2}\), this modulation further affects the charge susceptibility at the corresponding momenta [see Figs. 3(b) and (c)]; a numerical sketch of this modulation is given below. As a side effect, the momentum dependence of the attractive \(V_{\rm eff}\) also adjusts the \((\pi,\pi)\) charge instability. Since \(g_{\bf q=(\pi,\pi)}\) decreases and eventually becomes negative with rising \(g^{\prime}\), \(V_{\rm eff}\propto g_{\bf q}^{2}\) displays a nonmonotonic dependence on \(g^{\prime}\). Such a nonmonotonic relationship is also mirrored in the evolution of \(\chi_{\rm c}({\bf q}=(\pi,\pi))\).

Alternatively, the impact of the nonlocal EPC can be interpreted in a real-space picture. As shown in Fig. 3(e), the presence of \(g^{\prime}\) clusters electrons around an individual lattice displacement, which turns the short-range doublon-hole fluctuations into a longer-range charge pattern. The association of \(V_{\rm eff}\) with the small-momentum charge instabilities suggests an independent origin of these intensities and their coexistence with the (large-momentum) doublon-hole fluctuations induced by \(U\).
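The sketch below evaluates \(g_{\bf q}\) and \(|V_{\rm eff}({\bf q},\omega=0)|\) on a dense momentum grid and reports the maximizing wavevector. The assignment of the longer-range couplings to coordination shells is our own assumption (the paper defines them in its SM), so the located peak need not coincide exactly with the \((\pi/3,0)\) obtained in the text:

```python
import numpy as np

def v_eff(qx, qy, g=0.5, gp=0.2, gpp=None, gppp=None, gpppp=None,
          M=1.0, w_ph=1.0):
    """|V_eff(q, omega=0)| = g_q^2 / (M w_ph^2), with g_q the lattice
    Fourier transform of shell couplings (energies in units of t).

    Assumed shells: g on-site, gp at (1,0)-type, gpp at (1,1)-type,
    gppp at (2,0)-type, gpppp at (2,1)-type bonds; defaults follow the
    parameter set quoted in the text.
    """
    gpp = -0.1 * g if gpp is None else gpp
    gppp = gpp / np.sqrt(2) if gppp is None else gppp
    gpppp = gpp / 2 if gpppp is None else gpppp
    g_q = (g
           + 2 * gp * (np.cos(qx) + np.cos(qy))
           + 4 * gpp * np.cos(qx) * np.cos(qy)
           + 2 * gppp * (np.cos(2 * qx) + np.cos(2 * qy))
           + 4 * gpppp * (np.cos(2 * qx) * np.cos(qy)
                          + np.cos(qx) * np.cos(2 * qy)))
    return g_q**2 / (M * w_ph**2)

# scan the Brillouin zone for the maximizing wavevector
q = np.linspace(-np.pi, np.pi, 241)
QX, QY = np.meshgrid(q, q, indexing="ij")
V = v_eff(QX, QY)
i, j = np.unravel_index(V.argmax(), V.shape)
print("peak at q =", (QX[i, j] / np.pi, QY[i, j] / np.pi), "(units of pi)")
```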
These two interactions manifest minimal overlap in momentum space. As a demonstration, we set the Hubbard \(U\) to zero in the insets of Fig. 3(d) and find that the DQMC-simulated charge susceptibility intensifies only near the zone center, with little intensity at \((\pi,\pi)\) compared to that of the HEH model [i.e., Fig. 3(a)]. With EPC of the same strength, these small-momentum susceptibilities are consistent between the models with and without \(U\). The Hubbard \(U\) plays the role of determining the absolute value of the charge susceptibility and its doping dependence, without which the maximum cannot be reached at \(50\%\) doping.

While the dominant wavevector of the charge susceptibility has reached the experimentally observed \((\pi/3,0)\) in Fig. 3(a), we avoid over-interpreting the quantitative value of this wavevector, since it is the smallest nonzero wavevector in the \(6\times 6\) square lattice. As discussed in the SM [36], simulating larger systems (but at a higher temperature) reflects a dominant wavevector closer to \((0,0)\). Moreover, the phonon-mediated \(|V_{\rm eff}|\) cannot peak at nonzero wavevectors if only positive onsite and nearest-neighbor couplings (\(g\) and \(g^{\prime}\)) are considered. Therefore, it is necessary to determine a parameter regime with longer-range couplings (e.g., the next-nearest-neighbor coupling \(g^{\prime\prime}\), the next-next-nearest-neighbor coupling \(g^{\prime\prime\prime}\), and the longer-range coupling \(g^{\prime\prime\prime\prime}\)), where the \((\pi/3,0)\) CDW can be more precisely pinned. These terms stem from the electrostatic origin of the site-phonon couplings and contribute to the momentum-space modulation \(g_{\bf q}\), which determines the small-momentum charge distributions.

To provide a quick and infinite-resolution estimation of phonon-induced CDW orders, we find that the momentum distribution of \(\chi_{\rm c}({\bf q})\) can be well estimated by the random phase approximation (RPA), using \(V_{\rm eff}({\bf q},\omega)\) as the vertex. This is reflected by the comparison of the major parts (RPA) and the insets (DQMC) in Fig. 3(d).

Figure 4: (a) Effective dynamical interaction \(|V_{\rm eff}({\bf q},\omega=0)|\) mediated by phonons with longer-range interactions (\(g^{\prime}=0.2t\), \(g^{\prime\prime}=-0.1g\), \(g^{\prime\prime\prime}=g^{\prime\prime}/\sqrt{2}\), \(g^{\prime\prime\prime\prime}=g^{\prime\prime}/2\)). (b) DQMC-simulated charge susceptibility \(\chi_{\rm c}\) for the quarter-filled HEH model (\(U=8t\) and \(g=0.5t\)) with the interactions in (a), obtained at \(T=0.5t\). (c) Comparison of the background-removed scattering results obtained by RXS experiments (shaded gray peak), the antinodal charge susceptibility distribution for the Hubbard model (green), and that for the HEH model (blue).
As expected, the charge wavevector is precisely pinned at \((\pi/3,0)\) for \(12\times 12\) systems. We use this example to demonstrate the feasibility of inducing a robust charge order consistent with experiments by including long-range EPCs. While these couplings are determined by the material's crystal and electronic structures, a slight deviation from this commensurate wavevector may in reality be suppressed by domain boundaries. Our simulation ignores the long-range electronic Coulomb repulsion, whose bare form contributes an additional \(\sim 1/|\mathbf{q}|^{2}\) interaction. However, the single-band Hubbard model already incorporates corrections from these Coulomb interactions when projected from its multiband prototype [56; 57]. Thus, the effective single-band wavefunction is a composite of copper and oxygen wavefunctions [33], in which the in-plane Coulomb interaction has been largely screened by the copper-oxygen covalent bond. This screening is more severe in the overdoped regime, with its large Fermi surface. In conclusion, we have explained the emergence of the period-6 CDW in overdoped cuprates. With a Fermi-surface instability ruled out by the Lindhard analysis, the overdoped Hubbard model correctly captures the doping dependence of this CDW, reflecting the remnant correlations even in the overdoped regime. Accounting for the constant \((\pi/3,0)\) wavevector of this CDW then requires further including EPCs, especially the nonlocal EPC, whose strength is set by geometric relations. The phonon-mediated effective interaction pins the overdoped CDW at \((\pi/3,0)\) for a wide range of doping, as seen in RXS experiments. Since the overdoped regime is far from the pseudogap and AFM phases, where spin fluctuations dominate, the phonon-driven nature of this CDW points to a subleading interaction in cuprates. This interaction is overwhelmed by the Hubbard \(U\) in underdoped and optimally doped cuprates, as the comparisons in this paper show. However, there is increasing evidence that phonons contribute, in addition to strong correlations, to \(d\)-wave superconductivity [58; 59; 60; 61; 62]. With its impact separable from that of correlations in this overdoped regime, the EPC extracted here is crucial for the theory of unconventional superconductivity. We thank Yingying Peng for experimental input and Ilya Esterlis for insightful discussions. This work is supported by the Air Force Office of Scientific Research Young Investigator Program under grant FA9550-23-1-0153. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, using NERSC award BES-ERCAP0023810.
2309.05039
The inverse limit topology and profinite descent on Picard groups in $K(n)$-local homotopy theory
In this paper, we study profinite descent theory for Picard groups in $K(n)$-local homotopy theory through their inverse limit topology. Building upon Burklund's result on the multiplicative structures of generalized Moore spectra, we prove that the module category over a $K(n)$-local commutative ring spectrum is equivalent to the limit of its base changes by a tower of generalized Moore spectra of type $n$. As a result, the $K(n)$-local Picard groups are endowed with a natural inverse limit topology. This topology allows us to identify the entire $E_1$ and $E_2$-pages of a descent spectral sequence for Picard spaces of $K(n)$-local profinite Galois extensions. Our main examples are $K(n)$-local Picard groups of homotopy fixed points $E_n^{hG}$ of the Morava $E$-theory $E_n$ for all closed subgroups $G$ of the Morava stabilizer group $\mathbb{G}_n$. The $G=\mathbb{G}_n$ case has been studied by Heard and Mor. At height $1$, we compute Picard groups of $E_1^{hG}$ for all closed subgroups $G$ of $\mathbb{G}_1=\mathbb{Z}_p^\times$ at all primes as a Mackey functor.
Guchuan Li, Ningchuan Zhang
2023-09-10T14:26:48Z
http://arxiv.org/abs/2309.05039v2
# The inverse limit topology and profinite descent on Picard groups in \(K(n)\)-local homotopy theory

###### Abstract.

In this paper, we study profinite descent theory for Picard groups in \(K(n)\)-local homotopy theory through their inverse limit topology. Building upon Burklund's result on the multiplicative structures of generalized Moore spectra, we prove that the module category over a \(K(n)\)-local commutative ring spectrum is equivalent to the limit of its base changes by a tower of generalized Moore spectra of type \(n\). As a result, the \(K(n)\)-local Picard groups are endowed with a natural inverse limit topology. This topology allows us to identify the entire \(E_{1}\) and \(E_{2}\)-pages of a descent spectral sequence for Picard spaces of \(K(n)\)-local profinite Galois extensions. Our main examples are \(K(n)\)-local Picard groups of homotopy fixed points \(E_{n}^{\text{h}G}\) of the Morava \(E\)-theory \(E_{n}\) for all closed subgroups \(G\) of the Morava stabilizer group \(\mathbb{G}_{n}\). The \(G=\mathbb{G}_{n}\) case has been studied by Heard and Mor. At height \(1\), we compute Picard groups of \(E_{1}^{\text{h}G}\) for all closed subgroups \(G\) of \(\mathbb{G}_{1}=\mathbb{Z}_{p}^{\times}\) at all primes as a Mackey functor.

2020 Mathematics Subject Classification: Primary 14C22, 55P43; Secondary 20E18, 55N22, 55T25

###### Contents

* 1 Recollections on Picard groups and spaces of \(K(n)\)-local ring spectra
* 2 The inverse limit topology on \(K(n)\)-local Picard groups
* 3 Profinite descent for \(K(n)\)-local Picard groups
* 4 Computations of \(K(1)\)-local Picard Mackey functors

One observation is that the \(K(1)\)-local Picard group \(\operatorname{Pic}\big{(}\mathsf{Sp}_{K(1)}\big{)}\) is a _profinite_ abelian group. This holds for \(\operatorname{Pic}\big{(}\mathsf{Sp}_{K(n)}\big{)}\) in general for any \(n\) by [11, Proposition 14.3.(d)] (see also [10]). The main purpose of this paper is to lift the inverse limit topology (see Footnote 1) on Picard groups of \(K(n)\)-local ring spectra to the level of Picard spaces. This topology allows us to identify the entire \(E_{1}\) and \(E_{2}\)-pages of a descent spectral sequence for Picard spaces of \(K(n)\)-local profinite Galois extensions.

Footnote 1: Not necessarily profinite. Recall that the limit of (discrete) sets \(X=\lim_{i}X_{i}\) can be modelled as a subset of \(\prod X_{i}\), which has a non-trivial product topology. The limit \(X\) then inherits a subspace topology. This is also the weakest topology on \(X\) making the projection maps \(X\to X_{i}\) continuous with each \(X_{i}\) discrete. We say \(X\) has an inverse limit topology. It is called profinite if the \(X_{i}\)'s are all finite sets.

In [11], Hovey-Strickland described the inverse limit topology on \(\operatorname{Pic}\big{(}\mathsf{Sp}_{K(n)}\big{)}\) as follows. They first constructed a tower of generalized Moore spectra \(\{M_{j}\}\) of type \(n\) such that \(X\simeq\lim_{j}L_{K(n)}(X\wedge M_{j})\) for any \(K(n)\)-local spectrum \(X\). Then they defined a basis \(\{V_{j}\}\) for closed neighborhoods of the identity element \(S^{0}_{K(n)}\) in the Picard group by setting \(V_{j}=\big{\{}X\in\operatorname{Pic}\big{(}\mathsf{Sp}_{K(n)}\big{)}\,|\,L_{K(n)}(X\wedge M_{j})\simeq L_{K(n)}M_{j}\big{\}}\). Recent work of Burklund in [12] implies the \(M_{j}\)'s in this tower can be chosen to have compatible \(\mathbb{E}_{j}\)-algebra structures (Proposition 2.1.4).
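For orientation, the simplest instance of this construction occurs at height \(n=1\) and odd \(p\), where one may take the classical Moore spectra \(M_{j}=S^{0}/p^{j}\), which are finite complexes of type \(1\) (this standard choice is recorded here only as an illustration):

\[X\simeq\lim_{j}L_{K(1)}\big(X\wedge S^{0}/p^{j}\big),\qquad V_{j}=\big\{X\in\operatorname{Pic}\big(\mathsf{Sp}_{K(1)}\big)\;\big|\;L_{K(1)}(X\wedge S^{0}/p^{j})\simeq L_{K(1)}S^{0}/p^{j}\big\}.\]

Thus an invertible \(K(1)\)-local spectrum lies in \(V_{j}\) exactly when it becomes trivial after smashing with the mod-\(p^{j}\) Moore spectrum, in rough analogy with the congruence filtration \(\{1+p^{j}\mathbb{Z}_{p}\}\) of \(\mathbb{Z}_{p}^{\times}\).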
For a ring (spectrum) \(R\), its Picard group and space are defined to be the Picard group and space, respectively, of its module category \(\mathsf{Mod}(R)\). Our first main result in this paper is the following Grothendieck existence theorem (Remark 2.2.3) for \(K(n)\)-local ring spectra in Section 2.

**Theorem** (Main Theorem A).: _Let \(R\) be a \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectrum. Then the limit of base change functors induces an equivalence of symmetric monoidal \(\infty\)-categories_

\[\mathsf{Mod}_{K(n)}(R)\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}\mathsf{Mod}_{K(n)}(R\wedge M_{j})\stackrel{{\simeq}}{{\longleftarrow}}\lim_{j}\mathsf{Mod}(R\wedge M_{j}).\]

_This yields an equivalence of group-like \(\mathbb{E}_{\infty}\)-spaces and an isomorphism of abelian groups:_

\[\mathfrak{pic}_{K(n)}(R)\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}\mathfrak{pic}(R\wedge M_{j}),\quad\operatorname{Pic}_{K(n)}(R)\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}\operatorname{Pic}(R\wedge M_{j}).\]

The isomorphism above endows \(\operatorname{Pic}_{K(n)}(R)\) with a natural _inverse limit topology_. We show in Proposition 2.3.3 that this topology is independent of the choice of the tower of generalized Moore spectra of type \(n\) in Proposition 2.1.4. When \(R=S^{0}_{K(n)}\), our result gives an \(\infty\)-categorical explanation of the profinite topology on \(\operatorname{Pic}\big{(}\mathsf{Sp}_{K(n)}\big{)}\) described in [10, 11]. It is well-established that the Picard group functor \(\operatorname{Pic}\colon\mathsf{CAlg}(\mathsf{Sp})\to\mathsf{Ab}\) for \(\mathbb{E}_{\infty}\)-ring spectra commutes with filtered colimits and _finite_ products. In Proposition 2.3.6 and Proposition 2.3.7, we show that the \(K(n)\)-local Picard group functor \(\operatorname{Pic}_{K(n)}\colon\mathsf{CAlg}\big{(}\mathsf{Sp}_{K(n)}\big{)}\to\mathsf{Pro}(\mathsf{Ab})\) commutes with \(K(n)\)-local filtered colimits and _profinite_ products (see Definition 2.3.4). Our computations at height \(1\) in Section 4 give concrete examples where the _discrete_ \(K(1)\)-local Picard group functor \(\operatorname{Pic}_{K(1)}\colon\mathsf{CAlg}\big{(}\mathsf{Sp}_{K(1)}\big{)}\to\mathsf{Pro}(\mathsf{Ab})\xrightarrow{\lim}\mathsf{Ab}\) does not commute with filtered colimits (Remark 4.2.8 and Remark 4.3.6). Building on the descent theory for \(K(n)\)-local module categories in [13], the structure of units of a ring spectrum in [11, §5], and Main Theorem A, we prove our second main result in Section 3.3.

**Theorem** (Main Theorem B).: _Let \(A\to B\) be a descendable \(K(n)\)-local profinite \(G\)-Galois extension of \(\mathbb{E}_{\infty}\)-ring spectra. Suppose the inverse system \(\{\pi_{t}(B\wedge M_{j})\}_{j\geq 1}\) satisfies the Mittag-Leffler condition for any \(t\in\mathbb{Z}\). Then there are continuous homotopy fixed point spectral sequences:_

\[{}^{\mathsf{HFP}}E_{1}^{s,t}=\operatorname{Map}_{c}(G^{\times s},\pi_{t}(B)),\qquad{}^{\mathsf{HFP}}E_{2}^{s,t}=H_{c}^{s}(G;\pi_{t}(B))\Longrightarrow\pi_{t-s}(A);\]

\[{}^{\mathsf{pic}}E_{1}^{s,t}=\operatorname{Map}_{c}\big(G^{\times s},\pi_{t}\mathfrak{pic}_{K(n)}(B)\big),\qquad{}^{\mathsf{pic}}E_{2}^{s,t}=H_{c}^{s}\big(G;\pi_{t}\mathfrak{pic}_{K(n)}(B)\big)\Longrightarrow\pi_{t-s}\mathfrak{pic}_{K(n)}(A),\qquad t-s\geq 0.\]

_In particular, the second spectral sequence abuts to \(\operatorname{Pic}_{K(n)}\left(A\right)\) when \(t=s\)._
_The differentials in both spectral sequences are of the form \(d_{r}^{s,t}\colon E_{r}^{s,t}\to E_{r}^{s+r,t+r-1}\). When \(t-s>0\) and \(s>0\), or \(2\leq r\leq t-1\), we have \({}^{\mathsf{pic}}d_{r}^{s,t+1}={}^{\mathsf{HFP}}d_{r}^{s,t}\)._

Having set up the general theory, we mainly focus on \(K(n)\)-local Picard groups of homotopy fixed points \(E_{n}^{hG}\) of the height \(n\) Morava \(E\)-theory, where \(G\) is a _closed_ subgroup of the Morava stabilizer group \(\mathbb{G}_{n}\). These fixed points were explicitly constructed by Devinatz-Hopkins in [10]. To apply Main Theorem B, we first show in Corollary 3.2.10 that \(E_{n}^{hG}\to E_{n}\) admits descent, using the following facts: from descent theory, we know that if \(A\to B\to C\) is descendable, then so is \(A\to B\); if in addition \(A\to B\to C\) is a composition of profinite Galois extensions, we prove in Proposition 3.2.6 that \(B\to C\) is also descendable. The descent spectral sequence for \(\mathsf{pic}_{K(n)}\left(E_{n}^{hG}\right)\) has the \(E_{1}\)-page:

\[E_{1}^{s,t}=\pi_{t}\mathsf{pic}_{K(n)}\left[L_{K(n)}\left(\bigwedge_{E_{n}^{hG}}^{s+1}E_{n}\right)\right]\simeq\pi_{t}\mathsf{pic}_{K(n)}\operatorname{Map}_{c}\left(G^{\times s},E_{n}\right).\]

When \(t\geq 1\), we have \(\pi_{t}\mathsf{pic}_{K(n)}\operatorname{Map}_{c}\left(G^{\times s},E_{n}\right)\cong\operatorname{Map}_{c}\left(G^{\times s},\pi_{t}\mathsf{pic}_{K(n)}(E_{n})\right)\), which was essentially computed in [10]. When \(t=0\), the \(E_{1}^{s,0}\)-term is the \(K(n)\)-local Picard group of a profinite product of \(E_{n}\). Computing it is the main technical difficulty in identifying the entire \(E_{1}\) and \(E_{2}\)-pages of the profinite descent spectral sequence for \(K(n)\)-local Picard groups. From the aforementioned property of the \(K(n)\)-local Picard groups in Proposition 2.3.6, we obtain:

\[E_{1}^{s,0}=\operatorname{Pic}_{K(n)}\operatorname{Map}_{c}\left(G^{\times s},E_{n}\right)\cong\operatorname{Map}_{c}\left(G^{\times s},\operatorname{Pic}_{K(n)}(E_{n})\right).\]

Since \(\operatorname{Pic}_{K(n)}(E_{n})\cong\mathbb{Z}/2\) with necessarily trivial \(G\)-action (Theorem 1.2.5; the action fixes the unit \(E_{n}\) and hence also the only other class \(\Sigma E_{n}\)), this row reduces to \(E_{1}^{s,0}\cong\operatorname{Map}_{c}(G^{\times s},\mathbb{Z}/2)\), and so \(E_{2}^{s,0}\cong H_{c}^{s}(G;\mathbb{Z}/2)\).

The case when \(G=\mathbb{G}_{n}\) (note that \(E_{n}^{h\mathbb{G}_{n}}\simeq S_{K(n)}^{0}\)) has been recently studied by Heard and Mor. Heard computes the \(E_{1}\) and \(E_{2}\)-pages in the range \(t>0\) or \(s=t=0\) in [11, Example 6.18]. Mor uses the pro-étale (condensed) method to identify the entire \(E_{2}\)-page in [14]. In [13, §3.3] and [12, §1.3], the authors described a descent filtration (see Construction 3.4.3) on \(\operatorname{Pic}_{K(n)}\left(E_{n}^{h\mathbb{G}_{n}}\right)\) using the homotopy fixed point spectral sequence. We next study this filtration using the descent spectral sequence of Main Theorem B for \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) in Section 3.4.

**Proposition** (3.4.9).: _The descent filtration in Construction 3.4.3 on \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) agrees with the filtration associated to the \(t-s=0\) stem of the descent spectral sequence for \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) in Main Theorem B._

This comparison allows us to prove the following algebraicity result for \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\).

**Theorem** (Main Theorem C).: _Fix a prime \(p>2\)._
_Let \(G\leq\mathbb{G}_{n}\) be a closed subgroup such that \(G\cap\left(\mathbb{Z}/p\right)^{\times}\) is cyclic of order \(m\), where \(\left(\mathbb{Z}/p\right)^{\times}\leq\mathbb{Z}_{p}^{\times}=Z(\mathbb{G}_{n})\) is the torsion subgroup of the center \(\mathbb{Z}_{p}^{\times}\) of \(\mathbb{G}_{n}\). Denote the \(p\)-adic cohomological dimension of \(G\) by \(\operatorname{cd}_{p}G\)._

1. _When \(2m+1>\operatorname{cd}_{p}G\), the exotic Picard group \(\kappa\left(E_{n}^{hG}\right)\) vanishes and the descent filtration on the \(K(n)\)-local Picard group \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) is:_
\[0\longrightarrow H_{c}^{1}(G;\pi_{0}(E_{n})^{\times})\longrightarrow\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\stackrel{{\phi_{0}}}{{\longrightarrow}}\operatorname{Pic}_{K(n)}(E_{n})=\mathbb{Z}/2\longrightarrow 0,\]
_where \(\phi_{0}\) is induced by base change along the ring extension \(E_{n}^{hG}\to E_{n}\)._
2. _When \(2m+1=\operatorname{cd}_{p}G\), the map \(\phi_{1}\colon\operatorname{Pic}_{K(n)}^{0}\left(E_{n}^{hG}\right)\to\operatorname{Pic}_{K(n)}^{alg,0}\left(E_{n}^{hG}\right)=H_{c}^{1}(G;\pi_{0}(E_{n})^{\times})\) is surjective._

When \(G=\mathbb{G}_{n}\) and \((p-1)\nmid n\), we have \(\operatorname{cd}_{p}(\mathbb{G}_{n})=n^{2}\) and \(m=\left|\left(\mathbb{Z}/p\right)^{\times}\cap\mathbb{G}_{n}\right|=\left|\left(\mathbb{Z}/p\right)^{\times}\right|=p-1\). In this case, Main Theorem C states that \(\kappa_{n}=0\) if \(2p-1=2m+1>\operatorname{cd}_{p}\mathbb{G}_{n}=n^{2}\) and \((p-1)\nmid n\). This recovers [11, Proposition 7.5]. For instance, when \(n=2\) and \(p=5\), we have \((p-1)=4\nmid 2=n\) and \(2m+1=9>4=\operatorname{cd}_{5}(\mathbb{G}_{2})\), so \(\kappa_{2}=0\).

As an application of Main Theorem B, we compute \(K(1)\)-local Picard groups of homotopy fixed points \(E_{1}^{hG}\) for all closed subgroups \(G\leq\mathbb{Z}_{p}^{\times}\) at all primes. Picard groups of Galois extensions of ring spectra are not only a collection of abelian groups: they are connected by restriction and transfer maps. These data assemble into a Picard _Mackey functor_. Beaudry-Bobkova-Hill-Stojanoska computed \(\operatorname{Pic}_{K(2)}\left(E_{2}\right)\) at the prime \(2\) as a \(C_{4}\)-Mackey functor in [1]. When \(G\leq\mathbb{Z}_{p}^{\times}\) is a pro-cyclic closed subgroup, the homotopy fixed points \(E_{1}^{hG}\) are all \(K(1)\)-local algebraic \(K\)-theory spectra of some finite fields \(\mathbb{F}_{q}\) (Proposition 4.2.2 and Remark 4.3.2).

**Theorem** (Main Theorem D in Section 4.2).: _Let \(p>2\) be an odd prime number, and let \(k\geq 1\) and \(m\mid(p-1)\) be positive integers._
_The Picard groups of \(E_{1}^{hG}\) for all closed subgroups \(G\leq\mathbb{Z}_{p}^{\times}\) are listed below:_

\[\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)=\begin{cases}\mathbb{Z}/(2m)\oplus\mathbb{Z}_{p},&G=\mathbb{Z}/m\times(1+p^{k}\mathbb{Z}_{p});\\ \mathbb{Z}/(2m),&G=\mathbb{Z}/m.\end{cases}\]

For \(G=\mathbb{Z}_{p}^{\times}\) itself (the case \(m=p-1\) and \(k=1\), where \(E_{1}^{hG}\simeq S^{0}_{K(1)}\)), the first case recovers the classical computation \(\operatorname{Pic}\big(\mathsf{Sp}_{K(1)}\big)\cong\mathbb{Z}/(2p-2)\oplus\mathbb{Z}_{p}\) of [HMS94].
The analogous computation at the prime \(2\), for all closed subgroups \(G\leq\mathbb{Z}_{2}^{\times}\), is carried out in Main Theorem E in Section 4.3. When \(p>2\), we observe in Remark 4.2.6 that for an open subgroup \(G=\mathbb{Z}/m\times(1+p^{k}\mathbb{Z}_{p})\leq\mathbb{Z}_{p}^{\times}\), the Picard group \(\operatorname{Pic}_{K(1)}\big{(}E_{1}^{hG}\big{)}\) is topologically generated by \(X=\Sigma^{1/p^{k-1}}E_{1}^{hG}\). This is the unique \(K(1)\)-local invertible spectrum over \(E_{1}^{hG}\) whose \(p^{k-1}\)-th smash power over \(E_{1}^{hG}\) is \(\Sigma E_{1}^{hG}\). However, at the prime \(2\), the Picard groups \(\operatorname{Pic}_{K(1)}\big{(}E_{1}^{hG}\big{)}\) are not topologically cyclic by Main Theorem E for any open subgroup \(G\leq\mathbb{Z}_{2}^{\times}\). Moreover, the element \(\Sigma E_{1}^{hG}\) is not divisible by \(p=2\) in \(\operatorname{Pic}_{K(1)}\big{(}E_{1}^{hG}\big{)}\), as observed in Remark 4.3.4.

### Notation and conventions

* Pic denotes the Picard _group_ of a symmetric monoidal category and \(\mathsf{pic}\) denotes the Picard _space_ (\(\infty\)-groupoid) of a symmetric monoidal \(\infty\)-category. See more details in Section 1.1. Please note our notation for Picard spaces is slightly different from that in [11].
* When \(X\) and \(Y\) are two \(K(n)\)-local spectra, we denote their _\(K(n)\)-local_ smash product by \(X\hat{\wedge}Y:=L_{K(n)}(X\wedge Y)\).
* All categories and \(\infty\)-categories in this paper are assumed to be presentable.
* Let \(\mathcal{C}\) be an \(\infty\)-category. The pro-objects in \(\mathcal{C}\), i.e. cofiltered diagrams in \(\mathcal{C}\), form the \(\infty\)-category \(\mathsf{Pro}(\mathcal{C})\).

## 1. Recollections on Picard groups and spaces of \(K(n)\)-local ring spectra

In this section, we review basic definitions and properties of Picard groups and spaces of \(K(n)\)-local ring spectra. These well-established materials can be found in sources such as [17, 18, 19].

### Picard groups and spaces of symmetric monoidal \(\infty\)-categories

**Definition 1.1.1**.: Let \((\mathcal{C},\otimes,1_{\mathcal{C}})\) be a presentable symmetric monoidal \(1\)-category. An object \(X\in\mathcal{C}\) is called **invertible** if there is an object \(Y\in\mathcal{C}\) such that \(X\otimes Y\cong 1_{\mathcal{C}}\). The **Picard group** of \(\mathcal{C}\) is defined to be:

\[\operatorname{Pic}(\mathcal{C})=\{X\in\mathcal{C}\mid X\text{ is invertible}\}/\cong,\]

where the group multiplication is given by \(\otimes\) and the unit is the equivalence class of \(1_{\mathcal{C}}\).

**Examples 1.1.2**.: Let \(R\) be a commutative ring. The module category of \(R\) has a natural symmetric monoidal structure \((\mathsf{Mod}_{R},\otimes_{R},R)\). Write \(\operatorname{Pic}(R)\) for \(\operatorname{Pic}(\mathsf{Mod}_{R})\). 1.
When \(R=\mathbb{F}\) is a field, \(\operatorname{Pic}(\mathbb{F})=\{\mathbb{F}\}\) is the trivial group. 2. When \(R\) is a Dedekind domain, \(\operatorname{Pic}(R)\cong\operatorname{Cl}(R)\) is isomorphic to the ideal class group of \(R\). This group is trivial iff \(R\) is a unique factorization domain. 3. For the category \(\mathsf{Ch}_{R}\) of chain complexes over \(R\) and its derived category \(\mathcal{D}_{R}\), their Picard groups are isomorphic to \(\mathbb{Z}\oplus\operatorname{Pic}(R)\). The first \(\mathbb{Z}\)-summand corresponds to shifts of invertible \(R\)-modules.

In stable homotopy theory, we study the category \(\mathsf{Sp}\) of spectra. This is an example of a symmetric monoidal \(\infty\)-category.

**Definition 1.1.3**.: Let \((\mathcal{C},\otimes,1_{\mathcal{C}})\) be a presentable \(\mathbb{E}_{k}\)-monoidal \(\infty\)-category for some \(1\leq k\leq\infty\). The **Picard space** \(\mathsf{pic}(\mathcal{C})\) of \(\mathcal{C}\) is the \(\infty\)-groupoid (space) of invertible objects in \(\mathcal{C}\). The Picard group of \(\mathcal{C}\) is the Picard group of its homotopy category \(h\mathcal{C}\). Equivalently, we have \(\operatorname{Pic}(\mathcal{C}):=\operatorname{Pic}(h\mathcal{C})\cong\pi_{0}\mathsf{pic}(\mathcal{C})\).

One can check the following fact directly from the definition.

**Lemma 1.1.4**.: _When \(\mathcal{C}\) is \(\mathbb{E}_{k}\)-monoidal, its Picard space \(\mathsf{pic}(\mathcal{C})\) is a group-like \(\mathbb{E}_{k}\)-space. Moreover, there is a fibration of group-like \(\mathbb{E}_{k}\)-spaces,_

\[\operatorname{BAut}(1_{\mathcal{C}})\longrightarrow\mathsf{pic}(\mathcal{C})\xrightarrow{\pi_{0}}\operatorname{Pic}(\mathcal{C}),\]

_where \(\operatorname{Aut}(1_{\mathcal{C}})\) is the automorphism space of \(1_{\mathcal{C}}\) in \(\mathcal{C}\) and \(\operatorname{Pic}(\mathcal{C})\) is a discrete group (abelian when \(k\geq 2\)). This fibration sequence of spaces splits._

_Remark 1.1.5_.: In [19], the authors denoted the Picard space of a symmetric monoidal \(\infty\)-category \(\mathcal{C}\) by \(\mathsf{Pic}(\mathcal{C})\) and its Picard spectrum by \(\mathsf{pic}(\mathcal{C})\). By Lemma 1.1.4, \(\mathsf{Pic}(\mathcal{C})\) is a group-like \(\mathbb{E}_{\infty}\)-space and hence can be identified with a connective spectrum. However, when \(\mathcal{C}\) is only \(\mathbb{E}_{k}\)-monoidal for some \(k<\infty\), its Picard space cannot be identified with a spectrum. We use the notation \(\mathsf{pic}(\mathcal{C})\) for the Picard _space_ of an \(\mathbb{E}_{1}\)-monoidal \(\infty\)-category \(\mathcal{C}\) to make a distinction between Picard spaces and groups. When there is no ambiguity, we will write \(\mathsf{pic}(R)\) and \(\operatorname{Pic}(R)\) for \(\mathsf{pic}(\mathsf{Mod}_{R}(\mathcal{C}))\) and \(\operatorname{Pic}(\mathsf{Mod}_{R}(\mathcal{C}))\), respectively. (See Notation 1.2.2.)

**Lemma 1.1.6**.: _Let \(R\in\mathsf{Alg}_{\mathbb{E}_{2}}(\mathsf{Sp})\) be an \(\mathbb{E}_{2}\)-ring spectrum. Then \(\operatorname{Aut}_{R}(R)\), the automorphism space of \(R\) as an \(R\)-module spectrum, is equivalent to \(\operatorname{GL}_{1}R\), the space of units in the ring spectrum \(R\) in the sense of [1]. More precisely, we have a pullback diagram of spaces:_

\[\begin{CD}\operatorname{GL}_{1}R@>>>\Omega^{\infty}R\\ @VVV@VVV\\ \pi_{0}(R)^{\times}@>>>\pi_{0}(R)\end{CD}\]

**Corollary 1.1.7**.: _Let \(R\) be an \(\mathbb{E}_{2}\)-ring spectrum._
_Homotopy groups of its Picard space are:_

\[\pi_{t}\left(\mathsf{pic}(R)\right)=\begin{cases}\operatorname{Pic}(R),&t=0;\\ \pi_{0}(R)^{\times},&t=1;\\ \pi_{t-1}(R),&t\geq 2.\end{cases}\]

**Examples 1.1.8**.: 1. When \(R=S^{0}\), we have \(\mathsf{Mod}_{S^{0}}(\mathsf{Sp})=\mathsf{Sp}\), whose Picard space has homotopy groups:

\[\pi_{t}(\mathsf{pic}(\mathsf{Sp}))=\begin{cases}\mathbb{Z},&t=0;\\ \mathbb{Z}^{\times}=\mathbb{Z}/2,&t=1;\\ \pi_{t-1}(S^{0}),&t\geq 2.\end{cases}\]

2. When \(R=HA\) is the Eilenberg-MacLane spectrum for some classical (discrete) commutative ring \(A\), the Dold-Kan Correspondence states that \(\mathsf{Mod}_{HA}(\mathsf{Sp})\simeq\mathcal{D}_{A}\) as symmetric monoidal \(\infty\)-categories. This implies \(\operatorname{Pic}(HA)\cong\operatorname{Pic}(\mathcal{D}_{A})\cong\mathbb{Z}\oplus\operatorname{Pic}(A)\) and

\[\pi_{t}(\mathsf{pic}(HA))=\begin{cases}\mathbb{Z}\oplus\operatorname{Pic}(A),&t=0;\\ A^{\times},&t=1;\\ 0,&t\geq 2.\end{cases}\]

For later purposes, we review the definition and some properties of algebra objects in monoidal \(\infty\)-categories following [15, 16]. Let \(\mathcal{C}\) be a symmetric monoidal \(\infty\)-category. This data is encoded in a co-Cartesian fibration \(p\colon\mathcal{C}^{\otimes}\to N(\mathsf{Fin}_{*})\simeq\mathbb{E}_{\infty}^{\otimes}\) of \(\infty\)-operads.

**Definition 1.1.9**.: Let \(\mathcal{O}\) be an \(\infty\)-operad with a map \(p^{\prime}\colon\mathcal{O}^{\otimes}\to\mathbb{E}_{\infty}^{\otimes}\). An \(\mathcal{O}\)-algebra in \(\mathcal{C}\) is a map of \(\infty\)-operads \(\alpha\colon\mathcal{O}^{\otimes}\to\mathcal{C}^{\otimes}\) such that \(p\circ\alpha\simeq p^{\prime}\). Denote the \(\infty\)-category of \(\mathcal{O}\)-algebras in \(\mathcal{C}\) by \(\mathsf{Alg}_{\mathcal{O}}(\mathcal{C})\). When \(p^{\prime}=\operatorname{id}\colon\mathbb{E}_{\infty}^{\otimes}\to\mathbb{E}_{\infty}^{\otimes}\), the corresponding algebra object in \(\mathcal{C}\) is called commutative and we denote \(\mathsf{CAlg}:=\mathsf{Alg}_{\mathbb{E}_{\infty}}\).

**Proposition 1.1.10** ([15, Corollaries 3.2.2.5 and 3.2.3.2]).: _If a symmetric monoidal \(\infty\)-category \(\mathcal{C}\) has all limits and filtered colimits, then so does \(\mathsf{Alg}_{\mathcal{O}}(\mathcal{C})\). In that case, the forgetful functor \(U\colon\mathsf{Alg}_{\mathcal{O}}(\mathcal{C})\to\mathcal{C}\) preserves and detects limits and filtered colimits._

**Proposition 1.1.11** ([15, Proposition 7.1.2.6]).: _Let \(R\in\mathsf{Alg}_{\mathbb{E}_{k}}(\mathcal{C})\), where \(2\leq k\leq\infty\). Then the \(\infty\)-category \(\mathsf{Mod}_{R}(\mathcal{C})\) of its left modules in \(\mathcal{C}\) is \(\mathbb{E}_{k-1}\)-monoidal._

### Picard spaces of \(K(n)\)-local ring spectra

The symmetric monoidal category we are studying in this paper is \(\mathsf{Sp}_{K(n)}\), the \(\infty\)-category of \(K(n)\)-local spectra [11, §10.2]. It admits the structure of a presentably symmetric monoidal \(\infty\)-category \(\left(\mathsf{Sp}_{K(n)},\hat{\wedge},L_{K(n)}S^{0}\right)\) such that (see [10, §5.1.1]):

* For all \(X,Y\in\mathsf{Sp}_{K(n)}\), the \(K(n)\)-local monoidal product is defined by \(X\hat{\wedge}Y=L_{K(n)}(X\wedge Y)\), where \(\wedge\) is the smash product of spectra.
* The localization functor \(L_{K(n)}\colon\mathsf{Sp}\to\mathsf{Sp}_{K(n)}\) is strong symmetric monoidal.
* The inclusion \(\mathsf{Sp}_{K(n)}\to\mathsf{Sp}\) is lax symmetric monoidal.

Next we compare the Picard spaces and groups of ring spectra with those of \(K(n)\)-local ring spectra.
**Lemma 1.2.1**.: _Let \(\mathcal{O}\) be an \(\infty\)-operad. The lax monoidal fully faithful embedding \(\mathsf{Sp}_{K(n)}\to\mathsf{Sp}\) of \(\infty\)-categories induces a fully faithful embedding \(\mathsf{Alg}_{\mathcal{O}}\left(\mathsf{Sp}_{K(n)}\right)\to\mathsf{Alg}_{\mathcal{O}}(\mathsf{Sp})\). Moreover, there is a pullback diagram:_

\[\begin{CD}\mathsf{Alg}_{\mathcal{O}}\left(\mathsf{Sp}_{K(n)}\right)@>>>\mathsf{Alg}_{\mathcal{O}}(\mathsf{Sp})\\ @V{U}VV@VV{U}V\\ \mathsf{Sp}_{K(n)}@>>>\mathsf{Sp}\end{CD}\]

_where the \(U\) functors take a (\(K(n)\)-local) ring spectrum to its underlying spectrum._

This lemma implies that a \(K(n)\)-local \(\mathcal{O}\)-ring spectrum \(R\) is an \(\mathcal{O}\)-ring spectrum whose underlying spectrum is \(K(n)\)-local. Suppose \(\mathcal{O}=\mathbb{E}_{k}\) for some \(k\geq 2\). A natural question is to compare the Picard spaces \(\mathsf{pic}\left(\mathsf{Mod}_{R}(\mathsf{Sp})\right)\) and \(\mathsf{pic}\left(\mathsf{Mod}_{R}\left(\mathsf{Sp}_{K(n)}\right)\right)\).

**Notation 1.2.2**.: Let \(R\in\mathsf{Alg}_{\mathbb{E}_{k}}\left(\mathsf{Sp}_{K(n)}\right)\) for some \(2\leq k\leq\infty\). From now on, we will denote

\[\begin{aligned}\mathsf{Mod}(R)&\coloneqq\mathsf{Mod}_{R}(\mathsf{Sp}),&\qquad\mathsf{Mod}_{K(n)}(R)&\coloneqq\mathsf{Mod}_{R}\left(\mathsf{Sp}_{K(n)}\right),\\ \mathsf{pic}(R)&\coloneqq\mathsf{pic}\left(\mathsf{Mod}(R)\right),&\mathsf{pic}_{K(n)}(R)&\coloneqq\mathsf{pic}\left(\mathsf{Mod}_{K(n)}(R)\right),\\ \operatorname{Pic}(R)&\coloneqq\operatorname{Pic}\left(\mathsf{Mod}(R)\right),&\operatorname{Pic}_{K(n)}(R)&\coloneqq\operatorname{Pic}\left(\mathsf{Mod}_{K(n)}(R)\right).\end{aligned}\]

As the localization functor \(L_{K(n)}\colon\mathsf{Sp}\to\mathsf{Sp}_{K(n)}\) is strong symmetric monoidal, it induces a map of group-like \(\mathbb{E}_{k-1}\)-spaces \(\lambda\colon\mathsf{pic}(R)\to\mathsf{pic}_{K(n)}(R)\) for any \(R\in\mathsf{Alg}_{\mathbb{E}_{k}}\left(\mathsf{Sp}_{K(n)}\right)\). For a group-like \(\mathbb{E}_{k}\)-space \(X\), its identity component \(\tau_{\geq 1}X\) is also a group-like \(\mathbb{E}_{k}\)-space.

**Proposition 1.2.3**.: _For \(R\in\mathsf{Alg}_{\mathbb{E}_{k}}\left(\mathsf{Sp}_{K(n)}\right)\), the localization functor induces an equivalence of connected group-like \(\mathbb{E}_{k-1}\)-spaces_

\[(\tau_{\geq 1}\lambda)\colon\tau_{\geq 1}\mathsf{pic}(R)\stackrel{{\sim}}{{\longrightarrow}}\tau_{\geq 1}\mathsf{pic}_{K(n)}(R).\]

Proof.: By Lemma 1.1.4, this is equivalent to showing \(\operatorname{Aut}_{\mathsf{Mod}(R)}(R)\simeq\operatorname{Aut}_{\mathsf{Mod}_{K(n)}(R)}(R)\). As \(\mathsf{Sp}_{K(n)}\subseteq\mathsf{Sp}\) is a full subcategory, any endomorphism of \(R\) is automatically \(K(n)\)-local. As a result, the localization functor induces an equivalence of monoids:

\[\operatorname{End}_{\mathsf{Mod}(R)}(R)\stackrel{{\sim}}{{\longrightarrow}}\operatorname{End}_{\mathsf{Mod}_{K(n)}(R)}(R).\]

The claim follows by taking invertible elements in the endomorphism monoids.

One central object in \(\mathsf{Sp}_{K(n)}\) is the height \(n\) **Morava \(E\)-theory** \(E_{n}\). This is a Landweber exact spectrum defined using the Lubin-Tate deformation theory of the height \(n\) Honda formal group \(\Gamma_{n}\). Homotopy groups of \(E_{n}\) are given by

\[\pi_{*}(E_{n})=\mathbb{W}\mathbb{F}_{p^{n}}\llbracket u_{1},\cdots,u_{n-1}\rrbracket[u^{\pm 1}],\qquad|u_{i}|=0,\ |u|=-2.\]

The graded Lubin-Tate deformation ring \(\pi_{*}(E_{n})\) admits an action by the **Morava stabilizer group** \(\mathbb{G}_{n}\coloneqq\operatorname{Aut}(\Gamma_{n}/\mathbb{F}_{p^{n}})\rtimes\operatorname{Gal}(\mathbb{F}_{p^{n}}/\mathbb{F}_{p})\), where \(\operatorname{Aut}(\Gamma_{n}/\mathbb{F}_{p^{n}})\) is the automorphism group of the Honda formal group \(\Gamma_{n}\) (extended to the field \(\mathbb{F}_{p^{n}}\)).
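For the height-\(1\) computations in Section 4, it is worth recalling the standard identifications in this case (recorded here for the reader's convenience):

\[E_{1}\simeq KU_{p},\qquad\pi_{*}(E_{1})=\mathbb{Z}_{p}[u^{\pm 1}],\qquad\mathbb{G}_{1}\cong\mathbb{Z}_{p}^{\times},\]

where \(KU_{p}\) is \(p\)-complete complex \(K\)-theory and \(a\in\mathbb{Z}_{p}^{\times}\) acts by the stable Adams operation \(\psi^{a}\), so that \(a\) acts on \(\pi_{2}(E_{1})\) by multiplication by \(a\) and trivially on \(\pi_{0}(E_{1})=\mathbb{Z}_{p}\).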
**Theorem 1.2.4** (Goerss-Hopkins-Miller, Lurie, [12, 13, 14]).: _The spectrum \(E_{n}\) is a \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectrum. The \(\mathbb{G}_{n}\)-action on \(\pi_{*}(E_{n})\) lifts to \(\mathbb{E}_{\infty}\)-ring automorphisms of \(E_{n}\)._

Following the discussion above, the two Picard spaces \(\mathfrak{pic}(E_{n})\) and \(\mathfrak{pic}_{K(n)}(E_{n})\) of \(E_{n}\) can only differ in their path components. By [14, Lemma 6.7], we have an isomorphism \(\operatorname{Pic}(E_{n})\cong\operatorname{Pic}_{K(n)}(E_{n})\) of Picard groups. This follows from [13, Proposition 10.11].

**Theorem 1.2.5** (Baker-Richter, [13]).: _The Picard group of \(E_{n}\) in \(\mathsf{Sp}\) is algebraic in the sense of an isomorphism_

\[\operatorname{Pic}(\text{graded }(E_{n})_{*}\text{-Mod})\stackrel{{\sim}}{{\longrightarrow}}\operatorname{Pic}(E_{n}).\]

_The former is isomorphic to \(\mathbb{Z}/2\) since \(E_{n}\) is even-periodic and \(\pi_{0}(E_{n})\) is a complete local ring._

By Corollary 1.1.7, homotopy groups of the \(K(n)\)-local Picard space of \(E_{n}\) are:

\[\pi_{t}\left(\mathfrak{pic}_{K(n)}(E_{n})\right)\cong\pi_{t}\left(\mathfrak{pic}(E_{n})\right)=\begin{cases}\mathbb{Z}/2,&t=0;\\ \pi_{0}(E_{n})^{\times},&t=1;\\ \pi_{t-1}(E_{n}),&t\geq 2.\end{cases}\tag{1.2.6}\]

## 2. The inverse limit topology on \(K(n)\)-local Picard groups

The Picard group of the \(K(n)\)-local category \(\mathsf{Sp}_{K(n)}\) has a profinite topology indexed by a tower of generalized Moore spectra. This has been described in [12, 13]. In this section, we show that the \(K(n)\)-local Picard functors \(\mathfrak{pic}_{K(n)}\) and \(\operatorname{Pic}_{K(n)}\) can be lifted to _pro_-spaces and _pro_-groups, respectively. The inverse limit topology on \(\operatorname{Pic}\left(\mathsf{Sp}_{K(n)}\right)\) then recovers the profinite topology in those earlier works. Moreover, it plays an essential role in our proof of the profinite descent spectral sequence for Picard spaces in Main Theorem B, which is in turn the foundation of all other main results of this paper. Here are more details. In Proposition 2.1.4, we incorporate Burklund's result in [15] to refine the tower of generalized Moore spectra \(\{M_{j}\}\) of type \(n\) in [13]. Then we show in Theorem 2.2.1 that the module category of a \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectrum is equivalent to the limit of its base changes by the generalized Moore spectra of type \(n\), each equipped with a suitable \(\mathbb{E}_{k}\)-algebra structure:

\[\mathsf{Mod}_{K(n)}(R)\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}\mathsf{Mod}(R\wedge M_{j}).\]

This can be viewed as a Grothendieck existence theorem (see Remark 2.2.3) for \(K(n)\)-local ring spectra. Once the equivalence is established, it is a formal argument to prove it is symmetric monoidal and hence induces an equivalence of Picard spaces and an isomorphism of Picard groups: (Main Theorem A)

\[\mathfrak{pic}_{K(n)}(R)\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}\mathfrak{pic}(R\wedge M_{j}),\qquad\operatorname{Pic}_{K(n)}(R)\cong\lim_{j}\operatorname{Pic}(R\wedge M_{j}).\]

This isomorphism endows \(K(n)\)-local Picard groups with a natural inverse limit topology. In Proposition 2.3.6 and Proposition 2.3.7, we prove that the \(K(n)\)-local Picard group functor \(\operatorname{Pic}_{K(n)}\colon\mathsf{CAlg}\left(\mathsf{Sp}_{K(n)}\right)\to\mathsf{Pro}(\mathsf{Ab})\) commutes with profinite products (see Definition 2.3.4) and filtered colimits.
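As a warm-up, note that the discrete analogue of the profinite-product statement follows directly by combining the two classical commutation properties just mentioned (we sketch this only for orientation; Proposition 2.3.6 is the \(K(n)\)-local refinement): for a commutative ring \(A\) and a profinite set \(X=\lim_{\alpha}X_{\alpha}\),

\[\operatorname{Pic}\Big(\operatorname*{colim}_{\alpha}\operatorname{Map}(X_{\alpha},A)\Big)\cong\operatorname*{colim}_{\alpha}\operatorname{Map}\big(X_{\alpha},\operatorname{Pic}(A)\big)=\operatorname{Map}_{c}\big(X,\operatorname{Pic}(A)\big),\]

using first that \(\operatorname{Pic}\) commutes with filtered colimits and then that each \(\operatorname{Map}(X_{\alpha},A)=\prod_{X_{\alpha}}A\) is a finite product. In the \(K(n)\)-local setting the colimit must additionally be localized, and this is where the inverse limit topology enters.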
### Generalized Moore spectra of type \(n\)

Generalized Moore spectra are the building blocks of the \(K(n)\)-local category. This is illustrated by the following results of Hovey-Strickland.

**Theorem 2.1.1** (Hovey-Strickland, [14, Propositions 4.22 and 7.10.(e)]).: _For each height \(n\), there is a tower of generalized Moore spectra_

\[...\xrightarrow{g_{3}}M_{2}\xrightarrow{g_{2}}M_{1}\xrightarrow{g_{1}}M_{0},\]

_such that:_

1. _All the \(M_{j}\)'s are finite complexes of type \(n\)._
2. _\((E_{n})_{*}(M_{j})\cong(E_{n})_{*}/J_{j}\) for some open invariant ideal \(J_{j}\unlhd(E_{n})_{*}\)._
3. _\(\bigcap_{j}J_{j}=\{0\}\)._
4. _The \(M_{j}\)'s are \(\mu\)-spectra, i.e. spectra with a left unital multiplication (see [14, Definition 4.8]). Their unit maps \(\eta_{j}\colon S^{0}\to M_{j}\) satisfy \(\eta_{j}=g_{j+1}\circ\eta_{j+1}\) for all \(j\)._

_Denote the Bousfield localization at the height \(n\) Morava \(E\)-theory \(E_{n}\) by \(L_{n}\). For any spectrum \(X\), we have a natural equivalence:_

\[L_{K(n)}X\simeq\operatorname*{holim}_{j}(L_{n}X\wedge M_{j}).\tag{2.1.2}\]

_In particular, if \(X\) is \(E_{n}\)-local, then \(L_{K(n)}X\simeq\operatorname*{holim}_{j}(X\wedge M_{j})\). When \(X\in\mathsf{Sp}_{K(n)}\subseteq\mathsf{Sp}_{E_{n}}\), we further have \(X\simeq\operatorname*{holim}_{j}(X\wedge M_{j})\)._

This shows any \(K(n)\)-local spectrum is a _pro-spectrum_ indexed by a tower of generalized Moore spectra. In the last decade, significant progress has been made in our understanding of multiplicative structures on generalized Moore spectra, for example [1, 10]. Recently, Burklund proved:

**Theorem 2.1.3** (Burklund, [13]).: _Let \(R\) be an \(\mathbb{E}_{m+1}\)-ring spectrum where \(m\geq 2\). Suppose \(v\in\pi_{\text{even}}(R)\) is an element such that the cofiber \(R/v\) is a \(\mu\)-spectrum. Then the tower of \(v^{i}\)-cofibers of \(R\)_

\[...\longrightarrow R/v^{i+1}\longrightarrow R/v^{i}\longrightarrow...\longrightarrow R/v\]

_satisfies that \(R/v^{q}\) is an \(\mathbb{E}_{j}\)-\(R/v^{q+1}\)-algebra when \(j\leq m\) and \(q\geq j\). Moreover, we have:_

1. _The Moore spectrum \(S^{0}/2^{q}\) is an \(\mathbb{E}_{j}\)-\(S^{0}/2^{q+1}\)-algebra when \(q\geq\frac{3}{2}(j+1)\)._
2. _For an odd prime \(p\), the Moore spectrum \(S^{0}/p^{q}\) is an \(\mathbb{E}_{j}\)-\(S^{0}/p^{q+1}\)-algebra when \(q\geq j+1\)._
3. _For any prime \(p\), height \(n\), and natural number \(j\), there is a \(p\)-local generalized Moore spectrum \(M\) of type \(n\) that admits an \(\mathbb{E}_{j}\)-ring spectrum structure._

Burklund's result allows us to lift the equivalence in (2.1.2) to the level of \(K(n)\)-local ring spectra in Theorem 2.1.5. We begin by rigidifying the tower of generalized Moore spectra in Theorem 2.1.1 as follows.

**Proposition 2.1.4**.: _The generalized Moore spectrum \(M_{j}\) in Theorem 2.1.1 can be chosen so that it is an \(\mathbb{E}_{j}\)-algebra over \(M_{k}\) for any \(k\geq j\geq 2\)._

Proof.: The strategy is to incorporate the criterion in Burklund's Theorem 2.1.3 into Hovey-Strickland's proof of the first half of Theorem 2.1.1 in [14, Proposition 4.22]. We will construct the tower \(\{\cdots\to M_{j+1}\to M_{j}\to\cdots\to M_{1}\}\) by induction on the height \(n\). For \(n=1\), this is already stated in Theorem 2.1.3.
Suppose that a tower of type-\((n-1)\) generalized Moore spectra \(\{\cdots\to W_{3}\to W_{2}\}\) has been constructed such that \(W_{j}\) is an \(\mathbb{E}_{j}\)-algebra over \(W_{k}\) for any \(k\geq j\geq 2\). We will construct a type-\(n\) tower \(\{M_{j}\}\) as cofibers of \(v_{n}\)-self maps of \(\{W_{j+1}\}\). It is necessary to start with an \(\mathbb{E}_{j+1}\)-algebra \(W_{j+1}\) since we need \(\mathbb{E}_{j+1}\)-algebras to produce \(\mathbb{E}_{j}\)-quotients by Theorem 2.1.3. Set \(M_{2}=W_{3}/v^{2}\), where \(v\) is some type-\(n\) self map of \(W_{3}\) such that \(W_{3}/v\) is a \(\mu\)-spectrum. The existence of such a map \(v\) is guaranteed by [14, Proposition 4.11]. Then Theorem 2.1.3 implies \(M_{2}\) is an \(\mathbb{E}_{2}\)-\(W_{3}\)-algebra. Suppose we have constructed \(M_{j-1}\to\cdots\to M_{2}\) with the desired properties, and that \(M_{j-1}\) is an \(\mathbb{E}_{j-1}\)-\(W_{j}\)-algebra. The induction step in the proof of [11, Proposition 4.22] produces a map between cofiber sequences, comparing the quotient map \(q_{j+1}\colon W_{j+1}\to M_{j}\) with the quotient map \(q_{j}\colon W_{j}\to M_{j-1}\) via the maps \(g_{j+1}\) and \(f_{j}\). A priori, this right-hand square consists of \(\mu\)-spectrum maps. By our inductive hypothesis, \(g_{j+1}\) is a map of \(\mathbb{E}_{j}\)-rings and \(q_{j}\) is a map of \(\mathbb{E}_{j-1}\)-rings. We need to find a self map \(w_{j+1}\) of \(W_{j+1}\) so that the map \(q_{j+1}\) is an \(\mathbb{E}_{j}\)-ring map and \(f_{j}\) is an \(\mathbb{E}_{j-1}\)-ring map. Theorem 2.1.3 implies that \(q_{j+1}\) can be made \(\mathbb{E}_{j}\) when we replace \(w_{j+1}\) by its \(j\)-th power. Replacing \(N\) with a larger power accordingly, we can make the left square commute as in the proof of [11, Proposition 4.22]. Then we obtain a new \(f_{j}\) as a map between cofibers. To prove \(f_{j}\) is \(\mathbb{E}_{j-1}\), consider the pushout \(P\) of the \(\mathbb{E}_{j}\)-ring maps \(q_{j+1}\) and \(g_{j+1}\) in the diagram above. Then \(\bar{g}_{j+1}\) and \(\bar{q}_{j+1}\) are both \(\mathbb{E}_{j}\)-ring maps. So it suffices to show \(f_{j}^{\prime}\colon P=M_{j}\wedge_{W_{j+1}}W_{j}\to M_{j-1}\) is \(\mathbb{E}_{j-1}\). One can check that \(f_{j}^{\prime}\) sits in a cofiber sequence:

\[M_{j}\wedge_{W_{j+1}}\Sigma^{N|w_{j}|}W_{j}\xrightarrow{1\wedge w_{j}^{N}}M_{j}\wedge_{W_{j+1}}W_{j}\xrightarrow{f_{j}^{\prime}}M_{j-1}.\]

As the quotient-by-\(w_{j}\) map \(q_{j}\) is already \(\mathbb{E}_{j-1}\), the quotient-by-\(w_{j}^{N}\) map \(f_{j}^{\prime}\) must also be \(\mathbb{E}_{j-1}\) by Theorem 2.1.3. Consequently, we have constructed a complex \(M_{j}\) such that \(q_{j+1}\colon W_{j+1}\to M_{j}\) is an \(\mathbb{E}_{j}\)-ring map and \(f_{j}\colon M_{j}\to M_{j-1}\) is an \(\mathbb{E}_{j-1}\)-ring map. This finishes the inductive step.

**Theorem 2.1.5**.: _There is an equivalence of \(K(n)\)-local \(\mathbb{E}_{\infty}\)-rings \(S^{0}_{K(n)}\to\lim_{j}L_{n}M_{j}\)._

Proof.: The equivalence on the level of \(K(n)\)-local spectra follows from Hovey-Strickland's (2.1.2). By Proposition 1.1.10, it remains to show that this is a map of \(\mathbb{E}_{\infty}\)-rings. Proposition 2.1.4 implies the map \(S^{0}_{K(n)}\to L_{n}M_{j}\) is a map of \(\mathbb{E}_{j}\)-rings for any \(j\geq 2\). Then it is a map of \(\mathbb{E}_{\infty}\)-rings by the following Lemma 2.1.6.

**Lemma 2.1.6**.: _Let \(\mathcal{C}\) be a symmetric monoidal \(\infty\)-category whose underlying \(\infty\)-category is complete._

1. _Consider a sequence of algebra objects \(\cdots\to R_{k+1}\to R_{k}\to\cdots\to R_{1}\) in \(\mathcal{C}\)._
_If \(R_{k}\) is an \(\mathbb{E}_{k}\)-algebra over \(R_{k+1}\) for all \(k\), then \(R:=\lim_{k}R_{k}\in\mathsf{CAlg}(\mathcal{C})\)._
2. _Let \(R,R^{\prime}\in\mathsf{CAlg}(\mathcal{C})\). If \(f\colon R\to R^{\prime}\) is a morphism in \(\mathsf{Alg}_{\mathbb{E}_{k}}(\mathcal{C})\) for all \(k\), then \(f\) is a morphism in \(\mathsf{CAlg}(\mathcal{C})\)._

Proof.: By [17, Corollary 3.2.2.5] (Proposition 1.1.10), the \(\infty\)-categories \(\mathsf{Alg}_{\mathbb{E}_{k}}(\mathcal{C})\) are complete for all \(k\) when the underlying \(\infty\)-category \(\mathcal{C}\) is complete. It follows that the limit \(R\) is an \(\mathbb{E}_{k}\)-algebra for all \(k\). This implies \(R\) is \(\mathbb{E}_{\infty}\), as explained below. Recall from Definition 1.1.9 that an \(\mathcal{O}\)-algebra in \(\mathcal{C}\) is a lax monoidal functor of \(\infty\)-operads \(\mathcal{O}^{\otimes}\to\mathcal{C}^{\otimes}\). Denote by \(\mathbb{E}_{k}^{\otimes}\) the \(\infty\)-operad associated to \(\mathbb{E}_{k}\)-algebras. The colimit of the \(\infty\)-operads \(\mathbb{E}_{k}^{\otimes}\) is equivalent to the commutative \(\infty\)-operad \(\mathbb{E}_{\infty}^{\otimes}\) by [17, Corollary 5.1.1.5]. This implies:

\[\mathsf{CAlg}\left(\mathcal{C}\right)=\mathsf{Fun}^{\mathrm{lax}}\left(\mathbb{E}_{\infty}^{\otimes},\mathcal{C}^{\otimes}\right)\simeq\mathsf{Fun}^{\mathrm{lax}}\left(\operatorname*{colim}_{k}\mathbb{E}_{k}^{\otimes},\mathcal{C}^{\otimes}\right)\simeq\lim_{k}\mathsf{Fun}^{\mathrm{lax}}\left(\mathbb{E}_{k}^{\otimes},\mathcal{C}^{\otimes}\right)=\lim_{k}\mathsf{Alg}_{\mathbb{E}_{k}}\left(\mathcal{C}\right).\]

Both claims then follow from the descriptions of objects and morphisms in the limit category on the right hand side.

_Remark 2.1.7_.: Davis-Lawson proved in [14] that any tower of generalized Moore spectra \(\{M_{j}\}\) of type \(n\) in Theorem 2.1.1 is an \(\mathbb{E}_{\infty}\)-algebra in the category of _pro_-spectra. In view of Proposition 2.1.4, Burklund's Theorem 2.1.3 in [13] is a refinement of Davis-Lawson's result.

### Limits of module categories

In the remainder of the paper, fix a tower of generalized Moore spectra \(\{M_{j}\}\) satisfying both Theorem 2.1.1 and Proposition 2.1.4. For \(R\in\mathsf{CAlg}\left(\mathsf{Sp}_{K(n)}\right)\), we will lift the equivalence \(R\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}R\wedge M_{j}\) in (2.1.2) to the level of module categories, first as \(\infty\)-categories in Theorem 2.2.1, then as symmetric monoidal \(\infty\)-categories in Proposition 2.2.7. This lifting is not a formal result, as explained in Remark 2.2.2. Main Theorem A is then obtained by applying the Picard space functor to the equivalence in Theorem 2.2.1.

**Theorem 2.2.1**.: _Let \(R\) be a \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectrum. Denote by \(R_{j}\) the \(\mathbb{E}_{j}\)-ring \(R\wedge M_{j}\). Then the limit of base change functors is an equivalence of \(\infty\)-categories:_

\[F\colon\mathsf{Mod}_{K(n)}R\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}\mathsf{Mod}_{K(n)}R_{j}\stackrel{{\simeq}}{{\longleftarrow}}\lim_{j}\mathsf{Mod}(R_{j}).\]

Before the proof, we make some observations.

_Remark 2.2.2_.: Limits of (discrete) commutative rings do not necessarily commute with taking module (derived) categories.
One counterexample is \(\mathcal{D}\mathbb{Z}_{p}\neq\lim_{k}\mathcal{D}\mathbb{Z}/p^{k}\), where \(\mathbb{Q}_{p}\) is a \(\mathbb{Z}_{p}\)-module but not a limit of \(\mathbb{Z}/p^{k}\)-modules/complexes; indeed, \(\mathbb{Q}_{p}\otimes_{\mathbb{Z}_{p}}\mathbb{Z}/p^{k}=0\) for all \(k\). This discrepancy is fixed if we consider \(p\)-complete (in the derived sense) \(\mathbb{Z}_{p}\)-chain complexes instead. Indeed, there is an equivalence of categories:

\[\left(\mathcal{D}\mathbb{Z}_{p}\right)_{p}^{\wedge}\stackrel{{\sim}}{{\longrightarrow}}\lim_{k}\mathcal{D}\mathbb{Z}/p^{k}.\]

This is essentially the height \(n=1\) case of Theorem 2.2.1, since the Moore spectrum \(S^{0}/p^{k}\) is a finite complex of type \(1\). Alternatively, we can restrict to _coherent_ modules over \(\mathbb{Z}_{p}\). (Note \(\mathbb{Q}_{p}\) is not a coherent \(\mathbb{Z}_{p}\)-module.) The equivalence then follows from the Grothendieck existence theorem below.

_Remark 2.2.3_.: Theorem 2.2.1 can be viewed as a Grothendieck existence theorem for \(K(n)\)-local commutative ring spectra. In algebraic geometry, the Grothendieck existence theorem states:

1. (Coherent sheaf version, [15, Theorem 5.1.4]) Let \(R\) be a Noetherian ring complete with respect to an ideal \(I\), and \(X\) be a proper \(R\)-scheme. Write \(X_{j}\coloneqq X\times_{R}\operatorname{Spec}R/I^{j}\) and \(\mathfrak{X}=\operatorname{colim}_{j}X_{j}\) for the completion of \(X\) at \(I\). Then the restriction functor induces an equivalence of categories of coherent sheaves:
\[\mathsf{Coh}(\mathfrak{X})\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}\mathsf{Coh}(X_{j}).\]
In [1, Theorem 1.3], Alper-Hall-Rydh proved this statement for certain quotient stacks.
2. (Functor of points version, [1, Theorems 2.20 and 2.22]) Let \(\mathfrak{X}\) be an algebraic stack locally of finite type over a field \(k\). Then for every complete local Noetherian \(k\)-algebra \((R,\mathfrak{m})\), the restriction functor induces an equivalence of categories (groupoids):
\[\mathfrak{X}(\operatorname{Spec}R)\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}\mathfrak{X}(\operatorname{Spec}R/\mathfrak{m}^{j}).\]

**Lemma 2.2.4**.: _Let \(R\) be a \(K(n)\)-local ring spectrum. Then any \(R_{k}=R\wedge M_{k}\)-module spectrum \(A\) in \(\mathsf{Sp}\) is \(K(n)\)-local._

Proof.: Note that \(R_{k}\) is \(K(n)\)-local because \(R\) is \(K(n)\)-local and \(M_{k}\) is a finite complex. As an \(M_{k}\)-module spectrum, \(R_{k}\) is a retract of \(M_{k}\wedge R_{k}\), since the composition \(R_{k}\xrightarrow{\eta_{k}\wedge 1}M_{k}\wedge R_{k}\to R_{k}\) is the identity map. [11, Theorem 13.1] then implies \(R_{k}\wedge A\in\mathsf{Sp}_{K(n)}\). From this, we conclude \(A\) is \(K(n)\)-local as a retract of \(R_{k}\wedge A\).

Proof of Theorem 2.2.1.: The isomorphism \(\lim_{j}\mathsf{Mod}_{K(n)}(R_{j})\cong\lim_{j}\mathsf{Mod}(R_{j})\) follows from Lemma 2.2.4. To establish the first equivalence \(\mathsf{Mod}_{K(n)}R\xrightarrow{\ \sim\ }\lim_{j}\mathsf{Mod}_{K(n)}R_{j}\), we define two functors \(F\) and \(G\) inverse to each other. Note that objects in the limit category are towers of \(R_{j}\)-module spectra \(\{A_{j}\}\) with equivalences of \(R_{j-1}\)-modules \(\alpha_{j}\colon R_{j-1}\wedge_{R_{j}}A_{j}\xrightarrow{\ \sim\ }A_{j-1}\). The functor \(F\) sends \(A\) to \(\{A_{j}=R_{j}\wedge_{R}A\simeq M_{j}\wedge A\}\).
The functor \(G\) sends a tower of \(R_{j}\)-modules \(\{A_{j}\}\) to \(\lim_{j}A_{j}\), regarded as an \(R\)-module via the structure maps described below. We need to check \(G\) is well-defined, namely that \(\lim_{j}A_{j}\) is a \(K(n)\)-local \(R\)-module spectrum. The limit is \(K(n)\)-local since \(\mathsf{Sp}_{K(n)}\) is closed under limits. The \(R\)-module structure on \(\lim_{j}A_{j}\) is given by the limit of the maps

\[R\wedge\left(\lim_{j}A_{j}\right)\longrightarrow R_{k}\wedge A_{k}\longrightarrow A_{k}.\]

As \(\lim_{j}A_{j}\in\mathsf{Sp}_{K(n)}\), the structure map of \(\lim_{j}A_{j}\) as an \(R\)-module factors canonically through the \(K(n)\)-local smash product \(R\hat{\wedge}\lim_{j}A_{j}\). This shows \(\lim_{j}A_{j}\in\mathsf{Mod}_{K(n)}R\) and the functor \(G\) is well-defined. Next we prove that \(F\) is an equivalence of \(\infty\)-categories with inverse \(G\). Since any \(K(n)\)-local spectrum \(A\) is \(E_{n}\)-local, (2.1.2) implies the composition \(G\circ F\) is naturally homotopic to \(\operatorname{id}\). The harder direction is to show the composition \(F\circ G\) is also homotopic to \(\operatorname{id}\). For any \(\{A_{j}\}\in\lim_{j}\mathsf{Mod}_{K(n)}R_{j}\), we have

\[F\circ G(\{A_{j}\})=\left\{R_{k}\wedge_{R}\left(\lim_{j}A_{j}\right)\right\}\simeq\left\{\lim_{j}\left(M_{k}\wedge A_{j}\right)\right\}.\]

We now construct a map \(\{f_{j}\}\) between the two towers \(F\circ G(\{A_{j}\})\) and \(\{A_{j}\}\) such that each \(f_{j}\) is an equivalence. By [11, Proposition 4.16], there are splittings when \(j>k\):

\[M_{k}\wedge A_{j}\simeq M_{k}\wedge M_{j}\wedge_{M_{j}}A_{j}\simeq(M_{k}\wedge M_{j})\wedge_{M_{j}}A_{j}\simeq\left(M_{k}\vee\bigvee_{i}\Sigma^{d_{j,i}}M_{k}\right)\wedge_{M_{j}}A_{j}\simeq A_{k}\vee\bigvee_{i}\Sigma^{d_{j,i}}A_{k},\tag{2.2.5}\]

where \(d_{j,i}>0\) and the last equivalence is a wedge of suspensions of the following equivalences:

\[M_{k}\wedge_{M_{j}}A_{j}\simeq R_{k}\wedge_{R_{j}}A_{j}\simeq R_{k}\wedge_{R_{j-1}}R_{j-1}\wedge_{R_{j}}A_{j}\xrightarrow{\ \sim\ }R_{k}\wedge_{R_{j-1}}A_{j-1}\xrightarrow{\ \sim\ }...\xrightarrow{\ \sim\ }R_{k}\wedge_{R_{k+1}}A_{k+1}\xrightarrow{\ \sim\ }A_{k}.\]

Taking the limit of \(M_{k}\wedge A_{j}\) as \(j\to\infty\), the structure map is the identity on the \(A_{k}\)-summand and multiplication by nilpotent elements of \(R_{k}\) on all other summands. This yields an equivalence \(f_{k}\colon A_{k}\xrightarrow{\ \sim\ }\lim_{j}M_{k}\wedge A_{j}\), which is induced by the inclusion of the first summand in (2.2.5). The compatibility of the \(f_{k}\) then follows from a commutative diagram comparing the splittings (2.2.5) for \(j\geq k\). The maps between towers \(\{f_{j}\colon A_{j}\to R_{j}\wedge_{R}\left(\lim_{k}A_{k}\right)\}\) then assemble into an equivalence in \(\lim_{j}\mathsf{Mod}_{K(n)}R_{j}\). Consequently, we have proved the composition \(F\circ G\) is homotopic to \(\operatorname{id}\).

Next we will upgrade Theorem 2.2.1 to an equivalence of symmetric monoidal \(\infty\)-categories. By Proposition 1.1.11, each of the categories \(\mathsf{Mod}(R_{j})\) in the limit system is \(\mathbb{E}_{j-1}\)-monoidal.

**Lemma 2.2.6**.: _Let \(\{\cdots\to\mathcal{C}_{k+1}\to\mathcal{C}_{k}\to\cdots\to\mathcal{C}_{1}\}\) be an inverse system of monoidal \(\infty\)-categories such that \(\mathcal{C}_{k}\) is \(\mathbb{E}_{k}\)-monoidal and the functor \(\mathcal{C}_{k+1}\to\mathcal{C}_{k}\) is \(\mathbb{E}_{k}\)-monoidal._
_Then \(\lim_{k}\mathcal{C}_{k}\) has a natural \(\mathbb{E}_{\infty}\)-monoidal structure such that \(\lim_{k}\mathcal{C}_{k}\to\mathcal{C}_{j}\) is \(\mathbb{E}_{j}\)-monoidal for any \(j\). Moreover, let \(f\colon\mathcal{C}\to\mathcal{D}\) be a functor such that \(\mathcal{C}\) and \(\mathcal{D}\) are both symmetric monoidal. If \(f\) is \(\mathbb{E}_{k}\)-monoidal for any \(k\), then it is a symmetric monoidal functor._

Proof.: Let \(\mathcal{O}^{\otimes}\) be an \(\infty\)-operad. By [17, Corollary 1.4.15], an \(\mathcal{O}\)-monoidal \(\infty\)-category is an \(\mathcal{O}\)-algebra object in \(\mathsf{Cat}_{\infty}\) with the Cartesian symmetric monoidal product. The claim now follows from Lemma 2.1.6 since \(\mathsf{Cat}_{\infty}\) is closed under small limits.

**Proposition 2.2.7**.: _The equivalence in Theorem 2.2.1 is symmetric monoidal._

Proof.: Recall from Proposition 2.1.4 that the generalized Moore spectrum \(M_{j}\) is an \(\mathbb{E}_{j}\)-ring over \(M_{j+1}\). By [17, Remark 7.1.3.7], the base change functors are \(\mathbb{E}_{j-1}\)-monoidal:

\[R_{j}\wedge_{R}-\colon\mathsf{Mod}_{K(n)}R\longrightarrow\mathsf{Mod}_{K(n)}R_{j}\simeq\mathsf{Mod}(R_{j}),\]
\[R_{j}\wedge_{R_{j+1}}-\colon\mathsf{Mod}_{K(n)}R_{j+1}\longrightarrow\mathsf{Mod}_{K(n)}R_{j}\simeq\mathsf{Mod}(R_{j}).\]

As the equivalence in Theorem 2.2.1 is a limit of \(\mathbb{E}_{j}\)-monoidal maps, we conclude from Lemma 2.2.6 that it is also a map of \(\mathbb{E}_{\infty}\)-monoidal categories.

**Lemma 2.2.8**.: _Let \(\{\cdots\to X_{n}\to X_{n-1}\to\cdots\to X_{1}\}\) be an inverse system of group-like spaces such that \(X_{k}\) is a group-like \(\mathbb{E}_{k}\)-space and \(X_{k+1}\to X_{k}\) is a map of group-like \(\mathbb{E}_{k}\)-spaces. Then \(X=\operatorname{holim}X_{k}\) is a group-like \(\mathbb{E}_{\infty}\)-space and the map \(X\to X_{k}\) is a morphism of group-like \(\mathbb{E}_{k}\)-spaces._

Proof.: Recall that a group-like \(\mathbb{E}_{k}\)-space is a group-like \(\mathbb{E}_{k}\)-algebra in \(\mathsf{Top}\). The claim then follows from Lemma 2.1.6. Alternatively, this statement can be proved using the Recognition Principle that \(X_{k}\) is a group-like \(\mathbb{E}_{k}\)-space iff it is equivalent to \(\Omega^{k}Y_{k}\) for some \(Y_{k}\).

**Proposition 2.2.9** (Mathew-Stojanoska, [16, Proposition 2.2.3]).: _The Picard space functor \(\mathsf{Cat}_{\infty}^{\mathbb{E}_{k}}\to\mathsf{Alg}_{\mathbb{E}_{k}}^{\mathrm{gp}}(\mathsf{Top})\) commutes with limits and filtered colimits._

Applying the Picard space functor to the monoidal equivalence in Theorem 2.2.1, we finally conclude:

**Main Theorem A**.: _Let \(R\) be a \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectrum._
The limit of base change maps of Picard spaces is an equivalence of group-like \(\mathbb{E}_{\infty}\)-spaces:_

\[\mathfrak{pic}_{K(n)}(R)\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}\mathfrak{pic}_{K(n)}(R_{j})\stackrel{{\cong}}{{\longleftarrow}}\lim_{j}\mathfrak{pic}(R_{j}).\]

_This induces an isomorphism of Picard groups:_

\[f\colon\operatorname{Pic}_{K(n)}(R)\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}\operatorname{Pic}_{K(n)}(R_{j})\cong\lim_{j}\operatorname{Pic}(R_{j}).\]

Proof.: We have a sequence of equivalences of group-like \(\mathbb{E}_{\infty}\)-spaces:

\[\mathfrak{pic}_{K(n)}(R)\mathrel{\mathop{:}}=\mathfrak{pic}\left(\mathsf{Mod}_{K(n)}(R)\right)\stackrel{{\sim}}{{\longrightarrow}}\mathfrak{pic}\left(\lim_{j}\mathsf{Mod}_{K(n)}(R_{j})\right)\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}\mathfrak{pic}_{K(n)}(R_{j})\stackrel{{\cong}}{{\longrightarrow}}\lim_{j}\mathfrak{pic}(R_{j}),\]

where the first equivalence follows from Theorem 2.2.1 and Proposition 2.2.7, and the second from Proposition 2.2.9. It remains to compute the homotopy groups of \(\lim_{j}\mathfrak{pic}(R_{j})\). The Milnor sequence of the homotopy limit of Picard spaces implies that \(f\) is surjective. To show it is injective, let \(X\) be an element in \(\ker f\). Then for any \(j\), we have \(X\wedge M_{j}\simeq X\wedge_{R}R_{j}\simeq R_{j}\). It follows from Hovey-Strickland's (2.1.2) that \(X\simeq\lim_{j}X\wedge M_{j}\simeq\lim_{j}R_{j}\simeq R\). This proves the injectivity of \(f\). 
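To spell out the surjectivity step in the proof above: the Milnor sequence for the homotopy limit of the tower of Picard spectra reads (a standard sketch, using \(\pi_{1}\mathfrak{pic}(R_{j})\cong(\pi_{0}R_{j})^{\times}\))

\[0\longrightarrow{\lim_{j}}^{1}\,(\pi_{0}R_{j})^{\times}\longrightarrow\pi_{0}\left(\lim_{j}\mathfrak{pic}(R_{j})\right)\longrightarrow\lim_{j}\operatorname{Pic}(R_{j})\longrightarrow 0,\]

where the middle term is \(\operatorname{Pic}_{K(n)}(R)\) by the equivalence above; the injectivity argument then forces the \(\lim^{1}\)-term to vanish.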
_Remark 2.2.10_.: We note that the results in this subsection work for a \(K(n)\)-local \(\mathbb{E}_{k}\)-ring \(R\) for \(k\geq 2\) as well. Recall from Proposition 1.1.11 that the module category \(\mathsf{Mod}_{K(n)}(R)\) is \(\mathbb{E}_{k-1}\)-monoidal. The limit of base change maps induces an equivalence of \(\mathbb{E}_{k-1}\)-monoidal \(\infty\)-categories:

\[\mathsf{Mod}_{K(n)}(R)\stackrel{{\sim}}{{\longrightarrow}}\lim_{j}\mathsf{Mod}(R_{j}).\]

On the level of Picard spaces, this gives rise to an equivalence of group-like \(\mathbb{E}_{k-1}\)-spaces \(\mathfrak{pic}_{K(n)}(R)\simeq\lim_{j}\mathfrak{pic}(R_{j})\), which induces an isomorphism of groups \(\operatorname{Pic}_{K(n)}(R)\cong\lim_{j}\operatorname{Pic}(R_{j})\).

### Properties of \(K(n)\)-local Picard groups

In Main Theorem A, we have lifted the Picard group functor to pro-abelian groups indexed by a tower of generalized Moore spectra.

**Proposition 2.3.1**.: _Let \(R\) be a \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectrum._

1. ([11, Theorem 14.2.(d)]) _The Picard group \(\operatorname{Pic}_{K(n)}(R)\) is a topological abelian group, where a basis for closed neighborhoods of \(R\) as the unit in the Picard group is given by_ \[V_{j}=\{X\in\operatorname{Pic}_{K(n)}(R)\:|\:X\wedge M_{j}\simeq R\wedge M_{j}\}.\]
2. _Let \(f\colon R_{1}\to R_{2}\) be a morphism in \(\mathsf{CAlg}\left(\mathsf{Sp}_{K(n)}\right)\). Then its induced map on \(K(n)\)-local Picard groups is continuous with respect to the inverse limit topology described above._

_Remark 2.3.2_.: When \(R=S^{0}_{K(n)}\), this inverse limit topology on \(\operatorname{Pic}_{K(n)}\mathrel{\mathop{:}}=\operatorname{Pic}_{K(n)}\left(S^{0}_{K(n)}\right)\) has been described in [11] (see also [10, SS1.2]).

**Proposition 2.3.3**.: _Let \(R\) be a \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectrum. The inverse limit topology on \(\operatorname{Pic}_{K(n)}(R)\) described in Proposition 2.3.1 does not depend on the choice of the tower of generalized Moore spectra \(\{M_{j}\}\) in Theorem 2.1.1._

Proof.: This follows from the fact that any two towers of generalized Moore spectra of type \(n\) are equivalent in the homotopy category of \(\mathsf{Pro}(\mathsf{Sp})\) by [11, Proposition 4.22] (see the restatement in [10, Theorem 6.1]). 

One important property of Picard spaces and groups in \(\mathsf{CAlg}\) is that they commute with finite products and filtered colimits of rings. We next show that the \(K(n)\)-local Picard functors \(\mathfrak{pic}_{K(n)}\colon\mathsf{CAlg}\left(\mathsf{Sp}_{K(n)}\right)\to\mathsf{Pro}(\mathsf{Top}_{*})\) and \(\operatorname{Pic}_{K(n)}\colon\mathsf{CAlg}\left(\mathsf{Sp}_{K(n)}\right)\to\mathsf{Pro}(\mathsf{Ab})\) preserve _profinite_ products (see the definition below) and filtered colimits. This is a key step in our study of profinite descent spectral sequences for \(K(n)\)-local Picard groups in Section 3.

**Definition 2.3.4**.: For any cofiltered limit of finite sets \(X=\lim_{\alpha}X_{\alpha}\) and \(R\in\mathsf{CAlg}\left(\mathsf{Sp}_{K(n)}\right)\), define the \(K(n)\)-local _profinite_ product of \(R\) indexed by \(X\) as:

\[\operatorname{Map}_{c}(X,R)\mathrel{\mathop{:}}=L_{K(n)}\operatorname*{colim}_{\alpha}\operatorname{Map}(X_{\alpha},R).\]

By construction, \(\operatorname{Map}_{c}(X,R)\) is a \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectrum. We also have equivalences:

\[\operatorname{Map}_{c}(X,R)\simeq\lim_{j}L_{n}\operatorname*{colim}_{\alpha}\operatorname{Map}(X_{\alpha},R)\wedge M_{j}\simeq\lim_{j}\operatorname*{colim}_{\alpha}\operatorname{Map}(X_{\alpha},R)\wedge M_{j}\simeq\lim_{j}\operatorname*{colim}_{\alpha}\operatorname{Map}(X_{\alpha},R\wedge M_{j}), \tag{2.3.5}\]

where the first equivalence is (2.1.2). The last equivalence justifies the notation \(\operatorname{Map}_{c}(X,R)\).
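For instance, taking \(X=\mathbb{Z}_{p}=\lim_{j}\mathbb{Z}/p^{j}\) viewed as a profinite set (a simple instance of Definition 2.3.4, using only that \(\operatorname{Map}(\mathbb{Z}/p^{j},R)\) is a finite product of copies of \(R\)):

\[\operatorname{Map}_{c}(\mathbb{Z}_{p},R)\simeq L_{K(n)}\operatorname*{colim}_{j}\operatorname{Map}\left(\mathbb{Z}/p^{j},R\right)\simeq L_{K(n)}\operatorname*{colim}_{j}\prod_{\mathbb{Z}/p^{j}}R.\]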
**Proposition 2.3.6**.: _There are natural equivalences of group-like \(\mathbb{E}_{\infty}\)-spaces and isomorphisms of pro-abelian groups:_

\[\mathfrak{pic}_{K(n)}\operatorname{Map}_{c}(X,R)\simeq\operatorname{Map}_{c}\left(X,\mathfrak{pic}_{K(n)}(R)\right)\mathrel{\mathop{:}}=\lim_{j}\operatorname*{colim}_{\alpha}\operatorname{Map}\left(X_{\alpha},\mathfrak{pic}(R\wedge M_{j})\right),\]
\[\operatorname{Pic}_{K(n)}\operatorname{Map}_{c}(X,R)\cong\operatorname{Map}_{c}\left(X,\operatorname{Pic}_{K(n)}(R)\right)\mathrel{\mathop{:}}=\lim_{j}\operatorname*{colim}_{\alpha}\operatorname{Map}\left(X_{\alpha},\operatorname{Pic}(R\wedge M_{j})\right).\]

Proof.: The statement for \(K(n)\)-local Picard groups follows from (2.3.5), Main Theorem A, and [11, Proposition 2.4.1]:

\[\begin{split}\operatorname{Pic}_{K(n)}\operatorname{Map}_{c}(X,R)&\cong\operatorname{Pic}_{K(n)}\left[\lim_{j}\operatorname*{colim}_{\alpha}\operatorname{Map}(X_{\alpha},R\wedge M_{j})\right]&&\text{by (2.3.5)}\\ &\cong\lim_{j}\operatorname{Pic}\left[\operatorname*{colim}_{\alpha}\operatorname{Map}(X_{\alpha},R\wedge M_{j})\right]&&\text{by Main Theorem A}\\ &\cong\lim_{j}\operatorname*{colim}_{\alpha}\operatorname{Pic}\left[\operatorname{Map}(X_{\alpha},R\wedge M_{j})\right]&&\text{by [11, Proposition 2.4.1]}\\ &\cong\lim_{j}\operatorname*{colim}_{\alpha}\operatorname{Map}\left(X_{\alpha},\operatorname{Pic}\left(R\wedge M_{j}\right)\right)&&\text{Pic preserves finite products}\\ &=\operatorname{Map}_{c}\left(\lim_{\alpha}X_{\alpha},\lim_{j}\operatorname{Pic}\left(R\wedge M_{j}\right)\right)&&\\ &\cong\operatorname{Map}_{c}\left(X,\operatorname{Pic}_{K(n)}(R)\right)&&\text{by Main Theorem A}.\end{split}\]

The proof for \(\mathfrak{pic}_{K(n)}\) is entirely parallel. 

By [11, Proposition 2.4.1], the Picard group functor \(\operatorname{Pic}\colon\mathsf{CAlg}(\mathsf{Sp})\to\mathsf{Ab}\) preserves filtered colimits. The \(K(n)\)-local Picard groups satisfy a similar property, but only as _pro_-abelian groups.

**Proposition 2.3.7**.: _The \(K(n)\)-local Picard group functor \(\operatorname{Pic}_{K(n)}\colon\mathsf{CAlg}\left(\mathsf{Sp}_{K(n)}\right)\to\mathsf{Pro}(\mathsf{Ab})\) preserves filtered colimits._

Proof.: Let \(L_{K(n)}\operatorname{colim}_{k}R_{k}\) be a filtered colimit of \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectra \(R_{k}\). Similar to the proof of Proposition 2.3.6 above, we have isomorphisms:

\[\operatorname{Pic}_{K(n)}\left(L_{K(n)}\operatorname{colim}_{k}R_{k}\right)\cong\operatorname{Pic}_{K(n)}\left(\lim_{j}\operatorname{colim}_{k}R_{k}\wedge M_{j}\right)\cong\lim_{j}\operatorname{colim}_{k}\operatorname{Pic}(R_{k}\wedge M_{j}).\]

From the inverse limit topology on \(\operatorname{Pic}_{K(n)}(R_{k})\) in Main Theorem A, we can see that the right hand side of the isomorphism is the colimit of \(\operatorname{Pic}_{K(n)}(R_{k})\) in \(\mathsf{Pro}(\mathsf{Ab})\). 

_Remark 2.3.8_.: The _discrete_ \(K(n)\)-local Picard group functor does not preserve filtered colimits, since the limit functor \(\mathsf{Pro}(\mathsf{Ab})\to\mathsf{Ab}\) does not. We will also see explicit counterexamples in our computations of \(K(1)\)-local Picard groups of Galois extensions of the \(K(1)\)-local sphere in Remark 4.2.8 and Remark 4.3.6.

## 3. Profinite descent for \(K(n)\)-local Picard groups

In this section, we study \(K(n)\)-local Picard groups in the context of profinite descent in \(\mathsf{Sp}_{K(n)}\). We begin by reviewing the theory of \(K(n)\)-local (pro)finite Galois extensions in [10]. Our main examples are the maps between homotopy fixed points \(E_{n}^{hG}\) of \(E_{n}\) constructed by Devinatz-Hopkins in [1]. In Section 3.2, we connect profinite Galois extensions with descent theory in [15, 16]. In particular, we show in Proposition 3.2.6 that if \(A\to B\to C\) is a composition of two profinite Galois extensions such that \(A\to C\) is descendable, then so are \(A\to B\) and \(B\to C\). This implies that \(E_{n}^{hG}\to E_{n}\) admits descent for any closed subgroup \(G\leq\mathbb{G}_{n}\). Building on the preparations above, we set up the profinite descent spectral sequence for \(K(n)\)-local Picard groups and identify its entire \(E_{1}\) and \(E_{2}\)-pages in Main Theorem B. 
A key step is to commute the \(K(n)\)-local Picard groups with profinite products in \(\mathsf{CAlg}\left(\mathsf{Sp}_{K(n)}\right)\) (Proposition 2.3.6), which allows us to compute the \(s=0\) line on the \(E_{1}\)-page of the spectral sequence. Specializing to our main examples of descendable \(K(n)\)-local profinite Galois extensions, we obtain profinite descent spectral sequences between Picard groups of the homotopy fixed points \(E_{n}^{hG}\) in Corollary 3.3.14. This will be our main computational tool in Section 4. By analyzing this spectral sequence, we obtain an algebraicity result for \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) in Main Theorem C.

### Review of \(K(n)\)-local profinite Galois extensions

First, we recall the descent spectral sequences for _finite_ \(G\)-Galois extensions of \(\mathbb{E}_{\infty}\)-ring spectra.

**Definition 3.1.1** (Rognes, [10, Definition 4.1.3]).: Let \(G\) be a finite group. A map of ring spectra \(A\to B\) is called \(G\)-Galois if there is an \(A\)-linear \(G\)-action on \(B\) such that:

\[A\simeq B^{hG},\qquad B\wedge_{A}B\simeq\operatorname{Map}(G,B).\]

The extension is called faithful if the base change functor \(B\wedge_{A}-\colon\mathsf{Mod}(A)\to\mathsf{Mod}(B)\) is conservative, i.e. \(N\in\mathsf{Mod}(A)\) is zero iff \(B\wedge_{A}N\simeq*\).

When \(A\to B\) is a finite Galois extension, there is a **homotopy fixed point spectral sequence** (HFPSS) to compute \(\pi_{*}(A)\) from \(\pi_{*}(B)\) together with the \(G\)-action on it. From the definition of homotopy fixed points, we have

\[A\simeq B^{hG}:=\operatorname{Map}(EG,B)^{G}\simeq\operatorname{Map}(|EG_{\bullet}|,B)^{G}\simeq\operatorname{Tot}\left[\operatorname{Map}(G^{\times\bullet+1},B)^{G}\right]\simeq\operatorname{Tot}\left[\operatorname{Map}(G^{\times\bullet},B)\right]. \tag{3.1.2}\]

Associated to this totalization is a **Bousfield-Kan spectral sequence** (BKSS) whose \(E_{1}\)-page is

\[{}^{\mathrm{HFP}}E_{1}^{s,t}=\pi_{t}\operatorname{Map}(G^{\times s},B)\cong\operatorname{Map}(G^{\times s},\pi_{t}(B))\Longrightarrow\pi_{t-s}\left(B^{hG}\right)\cong\pi_{t-s}(A). \tag{3.1.3}\]

One can further check that the \(d_{1}\)-differentials in this spectral sequence are the same as the cobar differentials computing the group cohomology of \(G\). This identifies the \(E_{2}\)-page of the spectral sequence as:

\[{}^{\mathrm{HFP}}E_{2}^{s,t}=H^{s}(G;\pi_{t}(B))\Longrightarrow\pi_{t-s}(A). \tag{3.1.4}\]

In [10], Mathew-Stojanoska studied Picard groups of finite Galois extensions of ring spectra.

**Theorem 3.1.5** ([10, page 3153]).: _Let \(G\) be a finite group. If \(A\to B\) is a faithful \(G\)-Galois extension of \(\mathbb{E}_{\infty}\)-ring spectra, then there is a natural equivalence of symmetric monoidal \(\infty\)-categories:_

\[\mathsf{Mod}(A)\simeq\left(\mathsf{Mod}(B)\right)^{hG}:=\operatorname{Tot}\left[\operatorname{Map}(G^{\times\bullet},\mathsf{Mod}(B))\right].\]

_This induces an equivalence of Picard spaces:_

\[\mathfrak{pic}(A)\simeq\tau_{\geq 0}\operatorname{Tot}\left[\operatorname{Map}(G^{\times\bullet},\mathfrak{pic}(B))\right].\]

_Similar to (3.1.3) and (3.1.4), we obtain a homotopy fixed point spectral sequence:_

\[{}^{\mathsf{pic}}E_{1}^{s,t}=\operatorname{Map}(G^{\times s},\pi_{t}\mathfrak{pic}(B)),\qquad{}^{\mathsf{pic}}E_{2}^{s,t}=H^{s}(G;\pi_{t}\mathfrak{pic}(B))\Longrightarrow\pi_{t-s}\mathfrak{pic}(A). \tag{3.1.6}\]

By Corollary 1.1.7, the two spectral sequences (3.1.3) and (3.1.6) have very similar \(E_{1}\) and \(E_{2}\)-pages. It is a natural question to compare their differentials. Mathew-Stojanoska showed:

**Theorem 3.1.7** ([10, Comparison Tool 5.2.4]).: _Denote by \({}^{\mathrm{HFP}}d_{r}^{s,t}\) and \({}^{\mathsf{pic}}d_{r}^{s,t}\) the differentials in the two spectral sequences (3.1.3) and (3.1.6), respectively. When \(t-s>0\) and \(s>0\), or \(2\leq r\leq t-1\), we have \({}^{\mathsf{pic}}d_{r}^{s,t-1}={}^{\mathrm{HFP}}d_{r}^{s,t}\)._
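A familiar finite example to keep in mind (standard, and not needed later): the complexification map \(KO\to KU\) is a faithful \(C_{2}\)-Galois extension, and the spectral sequence (3.1.6) in this case takes the form

\[{}^{\mathsf{pic}}E_{2}^{s,t}=H^{s}\left(C_{2};\pi_{t}\,\mathfrak{pic}(KU)\right)\Longrightarrow\pi_{t-s}\,\mathfrak{pic}(KO),\]

which Mathew-Stojanoska used to recover \(\operatorname{Pic}(KU)\cong\mathbb{Z}/2\) and \(\operatorname{Pic}(KO)\cong\mathbb{Z}/8\), both generated by suspensions; the differential comparison of Theorem 3.1.7 is a key input in that computation.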
We make some observations on the cosimplicial spectra in (3.1.2). The second assumption \(B\wedge_{A}B\simeq\operatorname{Map}(G,B)\) in Definition 3.1.1 implies that for any \(s\geq 0\), we have equivalences of ring spectra:

\[\bigwedge_{A}^{s+1}B\simeq\bigwedge_{B}^{s}(B\wedge_{A}B)\simeq\bigwedge_{B}^{s}\operatorname{Map}(G,B)\simeq\operatorname{Map}(G^{\times s},B).\]

As a result, we have a level-wise equivalence of cosimplicial ring spectra \(\left[\operatorname{Map}(G^{\times\bullet},B)\right]\simeq\left[\bigwedge_{A}^{\bullet+1}B\right]\). The right hand side of this equivalence is the cobar complex for the Galois extension \(A\to B\). The first assumption \(A\simeq B^{hG}\) in Definition 3.1.1 then translates to an equivalence:

\[A\simeq\operatorname{Tot}\left[\bigwedge_{A}^{\bullet+1}B\right].\]

We are now ready to study _profinite_ Galois extensions of \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectra.

**Definition 3.1.8** (Rognes, [11, Definition 8.1.1]).: Let \(G=\lim_{k}G/U_{k}\) be a cofiltered limit of finite groups. Consider a directed system of maps of \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectra \(A\to B_{k}\). Suppose each \(A\to B_{k}\) is a faithful \(G/U_{k}\)-Galois extension in the sense of Definition 3.1.1, such that there are natural equivalences \(B_{k}\simeq B_{j}^{h(U_{k}/U_{j})}\) when \(U_{j}\) is an open normal subgroup of \(U_{k}\). Then \(A\to B:=L_{K(n)}\operatorname{colim}_{k}B_{k}\) is a \(K(n)\)-local _pro-\(G\)_-Galois extension of \(A\).

_Remark 3.1.9_.: A special case of profinite Galois extensions is when \(G\) is _countably based_ (second countable). By [10, Corollary 1.1.13], \(G\) is countably based iff there is a sequence of open normal subgroups \(U_{1}\supseteq U_{2}\supseteq\cdots\) of \(G\) such that \(G\cong\lim_{k}G/U_{k}\). For such a profinite group \(G\), a profinite \(G\)-Galois extension is a sequence of extensions \(A\to B_{1}\to B_{2}\to\cdots\to B\) such that \(A\to B_{k}\) is a \(G/U_{k}\)-Galois extension and \(B\simeq L_{K(n)}\operatorname{colim}_{k}B_{k}\). By [10, Example 1.10], any compact \(p\)-adic analytic group is countably based. This includes the Morava stabilizer group \(\mathbb{G}_{n}\) and its closed subgroups.

Recall that \(\hat{\wedge}\) is the monoidal product in \(\mathsf{Sp}_{K(n)}\). In [10], Devinatz-Hopkins constructed "continuous homotopy fixed points" \(E_{n}^{hG}\) for all closed subgroups \(G\leq\mathbb{G}_{n}\). This is our main example of \(K(n)\)-local profinite \(G\)-Galois extensions. 
**Construction 3.1.10** (Devinatz-Hopkins).: For an _open_ subgroup \(G\leq\mathbb{G}_{n}\), \(E_{n}^{hG}\) is defined as

\[E_{n}^{hG}:=\operatorname{Tot}\left[\operatorname{Map}\left(\mathbb{G}_{n}/G,E_{n}^{\hat{\wedge}\bullet+1}\right)\right].\]

In particular, \(E_{n}^{hG}\) satisfies

\[E_{n}\hat{\wedge}E_{n}^{hG}\simeq\operatorname{Map}(\mathbb{G}_{n}/G,E_{n}).\]

For a _closed_ subgroup \(G\) of \(\mathbb{G}_{n}\), the continuous homotopy fixed point spectrum is defined as [10, Definition 1.5]:

\[E_{n}^{hG}\coloneqq L_{K(n)}\operatorname*{colim}_{k}E_{n}^{h(U_{k}G)},\]

where \(\mathbb{G}_{n}=U_{0}\supseteq U_{1}\supseteq U_{2}\supseteq\cdots\) is a decreasing sequence of open normal subgroups of \(\mathbb{G}_{n}\) such that \(\bigcap_{k}U_{k}=\{e\}\).

**Theorem 3.1.11** (Devinatz-Hopkins, [10]).: _The spectra \(E_{n}^{hG}\) constructed above satisfy:_

1. \(E_{n}^{hG}\) _is a \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectrum. In particular, there is an equivalence \(E_{n}^{h\mathbb{G}_{n}}\simeq S_{K(n)}^{0}\)._
2. _When \(G\leq\mathbb{G}_{n}\) is a finite subgroup, there is an equivalence:_ \[E_{n}^{hG}\stackrel{{\sim}}{{\longrightarrow}}E_{n}^{h^{\prime}G}:=\operatorname{Map}\left(EG_{+},E_{n}\right)^{G},\] _where the right hand side is the \(G\)-homotopy fixed point spectrum for finite group actions._
3. _Suppose \(G\leq\mathbb{G}_{n}\) is a closed subgroup and \(U\trianglelefteq G\) is a closed normal subgroup such that \(G/U\) is finite. Then \(E_{n}^{hG}\simeq\left(E_{n}^{hU}\right)^{h(G/U)}\), where the outer homotopy fixed point is the categorical homotopy fixed point for finite group actions._

**Theorem 3.1.12** (Rognes, [11, Theorem 5.4.4]).: _For inclusions \(U\trianglelefteq G\leq\mathbb{G}_{n}\) of closed groups such that \(U\trianglelefteq G\) is open (finite index) and normal, the extension \(E_{n}^{hG}\simeq\left(E_{n}^{hU}\right)^{h(G/U)}\to E_{n}^{hU}\) is a faithful \(K(n)\)-local \(G/U\)-Galois extension. In particular:_

* _For each finite subgroup \(F\) of \(\mathbb{G}_{n}\), the map \(E_{n}^{hF}\to E_{n}\) is a faithful \(F\)-Galois extension._
* _For each open normal subgroup \(U\) of \(\mathbb{G}_{n}\), the map \(S_{K(n)}^{0}\simeq E_{n}^{h\mathbb{G}_{n}}\to E_{n}^{hU}\) is a faithful \((\mathbb{G}_{n}/U)\)-Galois extension._

**Corollary 3.1.13**.: _For any closed subgroup \(G\leq\mathbb{G}_{n}\), the extension \(E_{n}^{hG}\to E_{n}\) is a \(K(n)\)-local profinite \(G\)-Galois extension. When \(G\trianglelefteq\mathbb{G}_{n}\) is a closed normal subgroup, the extension \(S_{K(n)}^{0}\simeq E_{n}^{h\mathbb{G}_{n}}\to E_{n}^{hG}\) is a \(K(n)\)-local profinite \(\mathbb{G}_{n}/G\)-Galois extension._
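As a concrete instance at height \(1\) (standard facts, recorded here for orientation): \(\mathbb{G}_{1}\cong\mathbb{Z}_{p}^{\times}\) acts on \(E_{1}\simeq KU_{p}^{\wedge}\) by \(p\)-adic Adams operations, and for \(p\) odd the torsion subgroup \(\mu_{p-1}\leq\mathbb{Z}_{p}^{\times}\) recovers the \(p\)-complete Adams summand \(L\):

\[E_{1}^{h\mathbb{Z}_{p}^{\times}}\simeq S_{K(1)}^{0},\qquad E_{1}^{h\mu_{p-1}}\simeq L,\]

so by Theorem 3.1.12, \(L\to KU_{p}^{\wedge}\) is a faithful \(\mu_{p-1}\)-Galois extension, while \(S_{K(1)}^{0}\to KU_{p}^{\wedge}\) is a \(K(1)\)-local profinite \(\mathbb{Z}_{p}^{\times}\)-Galois extension.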
**Example 3.1.14**.: Another example of profinite Galois extensions of ring spectra is the \(K(1)\)-local algebraic \(K\)-theory of profinite Galois extensions of discrete rings. In [11], Thomason showed that \(K(1)\)-local algebraic \(K\)-theory satisfies étale descent. This means that if \(R_{1}\to R_{2}\) is a \(G\)-Galois extension of discrete rings for a finite group \(G\), then \(L_{K(1)}K(R_{1})\to L_{K(1)}K(R_{2})\) is a \(G\)-Galois extension of \(K(1)\)-local \(\mathbb{E}_{\infty}\)-ring spectra. Now suppose \(R_{1}\to R_{2}\) is a \(G\)-Galois extension for a profinite group \(G\simeq\lim_{k}G/U_{k}\) such that \(R_{2}\cong\operatorname*{colim}_{k}R_{2}^{U_{k}}\). By [10, Theorem 1.5], étale \(K\)-theory commutes with filtered colimits of rings. It follows that \(L_{K(1)}K(R_{1})\to L_{K(1)}K(R_{2})\) is a \(K(1)\)-local profinite \(G\)-Galois extension.

When a prime \(p\) is invertible in a ring \(R\), the extension \(R\to R[\zeta_{p^{\infty}}]\coloneqq R\otimes\mathbb{Z}[\zeta_{p^{\infty}}]\) is a \(\mathbb{Z}_{p}^{\times}\)-Galois extension of rings. By [1, Theorem 1.4], there is a \(\mathbb{Z}_{p}^{\times}\)-equivariant equivalence of \(K(1)\)-local \(\mathbb{E}_{\infty}\)-ring spectra \(L_{K(1)}K(R[\zeta_{p^{\infty}}])\simeq KU_{p}^{\wedge}\hat{\wedge}L_{K(1)}K(R)\) over \(L_{K(1)}K(R)\). As a result, the profinite Galois extension \(L_{K(1)}K(R)\to L_{K(1)}K(R[\zeta_{p^{\infty}}])\) is equivalent to the base change of the \(K(1)\)-local \(\mathbb{Z}_{p}^{\times}\)-Galois extension \(S^{0}_{K(1)}\to E_{1}\simeq KU_{p}^{\wedge}\) in Corollary 3.1.13 along \(L_{K(1)}K(R)\). The assumption \(p\in R^{\times}\) can be dropped since \(L_{K(1)}K(R)\simeq L_{K(1)}K(R[1/p])\) by [1, Theorem 1.1].

**Example 3.1.15**.: Let \(T(n)=v_{n}^{-1}F_{n}\) be the \(v_{n}\)-mapping telescope of a finite complex \(F_{n}\) of type \(n\). In [1, Theorem A], Carmeli-Schlank-Yanovski showed that every finite abelian Galois extension of \(S^{0}_{K(n)}\) in \(\mathsf{Sp}_{K(n)}\) lifts to the \(T(n)\)-local category \(\mathsf{Sp}_{T(n)}\). This gives a family of examples of \(T(n)\)-local profinite Galois extensions.

### Descent and Picard groups

The foundation of our construction of profinite descent spectral sequences is the descent theory for ring spectra in [16].

**Definition 3.2.1** ([16, Definitions 3.17, 3.18]).: Let \(\mathcal{C}\) be a stable symmetric monoidal \(\infty\)-category. A full subcategory \(\mathcal{D}\subseteq\mathcal{C}\) is thick if it is closed under finite limits and colimits, and under retracts. It is called an \(\otimes\)-ideal if for any \(d\in\mathcal{D}\) and \(c\in\mathcal{C}\), we have \(d\otimes c\in\mathcal{D}\). A commutative algebra \(A\in\mathsf{CAlg}(\mathcal{C})\) is said to be **descendable** (or to admit descent) if the thick \(\otimes\)-ideal generated by \(A\) is all of \(\mathcal{C}\). We say a ring map \(A\to B\) is descendable if \(B\) is descendable as an object in \(\mathsf{CAlg}_{A}(\mathcal{C})\).

**Proposition 3.2.2** ([16, Propositions 3.20, 3.22]).: _Let \(A\in\mathsf{CAlg}(\mathcal{C})\). If \(A\) admits descent, then there is an equivalence:_

\[1_{\mathcal{C}}\stackrel{{\sim}}{{\longrightarrow}}\operatorname{Tot}\left[A^{\otimes\bullet+1}\right].\]

_Moreover, the equivalence above lifts to module categories:_

\[\mathcal{C}=\mathsf{Mod}(1_{\mathcal{C}})\stackrel{{\sim}}{{\longrightarrow}}\operatorname{Tot}\left[\mathsf{Mod}\left(A^{\otimes\bullet+1}\right)\right].\]

_Remark 3.2.3_.: The equivalence \(1_{\mathcal{C}}\stackrel{{\sim}}{{\longrightarrow}}\operatorname{Tot}\left[A^{\otimes\bullet+1}\right]\) does _not_ imply that \(A\) admits descent. Instead, \(A\) is descendable iff the Tot-tower on the right hand side defines a constant object in \(\mathsf{Pro}(\mathcal{C})\). Equivalently, this happens iff \(1_{\mathcal{C}}\) is equivalent to a retract of a partial totalization \(\operatorname{Tot}_{m}\left[A^{\otimes\bullet+1}\right]\) for some natural number \(m\).
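As an immediate illustration of Definition 3.2.1 and Remark 3.2.3 (a trivial sketch): if the unit map \(\eta\colon A\to B\) admits an \(A\)-module retraction \(r\colon B\to A\), then

\[A\xrightarrow{\ \eta\ }B\xrightarrow{\ r\ }A,\qquad r\circ\eta\simeq\operatorname{id}_{A},\]

exhibits \(A=1_{\mathsf{Mod}(A)}\) as a retract of \(B=\operatorname{Tot}_{0}\left[B^{\otimes_{A}\bullet+1}\right]\), so the thick \(\otimes\)-ideal generated by \(B\) is all of \(\mathsf{Mod}(A)\) and \(A\to B\) is descendable, with \(m=0\) in the criterion of Remark 3.2.3.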
Galois extensions are related to descent theory by:

**Proposition 3.2.4** ([16, Proposition 3.21]).: _Let \(G\) be a finite group. A faithful \(G\)-Galois extension of ring spectra \(A\to B\) admits descent._

Following Devinatz-Hopkins' Construction 3.1.10 of \(E_{n}^{hG}\) in [1], we define:

**Construction 3.2.5**.: Let \(A\to B\) be a descendable \(K(n)\)-local profinite \(G\)-Galois extension, where \(G=\lim_{k}G/U_{k}\) is a cofiltered limit of its quotients by a directed system of open normal subgroups \(\{U_{k}\}\) of \(G\) such that \(\bigcap U_{k}=\{e\}\).

* For an open subgroup \(U\leq G\), set \[B^{hU}:=\operatorname{Tot}\left[\operatorname{Map}\left(G/U,\widehat{\bigwedge}_{A}^{\bullet+1}B\right)\right].\]
* For a closed subgroup \(H\leq G\), set \[B^{hH}\coloneqq L_{K(n)}\operatorname*{colim}_{k}B^{h(HU_{k})}.\]

**Proposition 3.2.6**.: _Let \(A\to B\) be a descendable \(K(n)\)-local profinite \(G\)-Galois extension. Then for any closed subgroup \(H\leq G\), the map \(B^{hH}\to B\) is a descendable \(K(n)\)-local profinite \(H\)-Galois extension. If \(H\) is a closed normal subgroup, then \(A\to B^{hH}\) is a descendable \(K(n)\)-local profinite \(G/H\)-Galois extension._

_Remark 3.2.7_.: If \(A\to B\to C\) admits descent, it is not always true that \(B\to C\) admits descent. One counterexample is \(R\to R[x]\twoheadrightarrow R[x]/(x)\cong R\) for any ring \(R\).

_Remark 3.2.8_.: In [11], Rognes also defined profinite Galois extensions in \(\mathsf{Sp}_{L}\), the Bousfield localization of \(\mathsf{Sp}\) at a spectrum \(L\). We note that all the definitions and constructions above go through in \(\mathsf{Sp}_{L}\). In particular, Proposition 3.2.6 holds for profinite Galois extensions in \(\mathsf{Sp}_{L}\).

Proof.: Similar to Corollary 3.1.13, we can show that:

* When \(H\leq G\) is closed, \(B^{hH}\to B\) is a \(K(n)\)-local profinite \(H\)-Galois extension.
* When \(H\trianglelefteq G\) is a closed normal subgroup, \(A\to B^{hH}\) is a \(K(n)\)-local profinite \(G/H\)-Galois extension.

It remains to prove that both maps are descendable. As the composition \(A\to B^{hH}\to B\) is descendable by assumption, the first map \(A\to B^{hH}\) is descendable by [10, Proposition 3.24]. To show that \(f\colon B^{hH}\to B\) is also descendable, consider the pushout square of ring spectra obtained by base changing \(f\) along \(A\to B\):

\[\begin{array}{ccc}B^{hH}&\xrightarrow{\ \ f\ \ }&B\\ \big\downarrow&&\big\downarrow\\ B^{hH}\hat{\wedge}_{A}B&\xrightarrow{\ f\hat{\wedge}1\ }&B\hat{\wedge}_{A}B\end{array}\]

The left vertical map \(B^{hH}\to B^{hH}\hat{\wedge}_{A}B\) is a base change of the descendable map \(A\to B\) and hence admits descent by [10, Corollary 3.21]. We now show that \(B^{hH}\hat{\wedge}_{A}B\) is a retract of \(B\hat{\wedge}_{A}B\), which implies that \(f\hat{\wedge}1\colon B^{hH}\hat{\wedge}_{A}B\to B\hat{\wedge}_{A}B\) admits descent by Definition 3.2.1. Then \(f\) itself admits descent, again by [10, Proposition 3.24]. Similar to the computation in (3.3.5), we can identify \(f\hat{\wedge}1\) with the natural map \(\operatorname{Map}_{c}(G/H,B)\to\operatorname{Map}_{c}(G,B)\) induced by the quotient \(q\colon G\to G/H\). By [11, Proposition 2.2.2, Exercise 2.2.3], there is a continuous section \(s\colon G/H\to G\) of \(q\) such that \(q\circ s=\operatorname{id}\). This implies that the composition

\[\operatorname{Map}_{c}(G/H,B)\xrightarrow{q^{*}}\operatorname{Map}_{c}(G,B)\xrightarrow{s^{*}}\operatorname{Map}_{c}(G/H,B)\]

is the identity. Hence \(B^{hH}\hat{\wedge}_{A}B\simeq\operatorname{Map}_{c}(G/H,B)\) is a retract of \(B\hat{\wedge}_{A}B\simeq\operatorname{Map}_{c}(G,B)\). 

Having set up the general theory, we apply it to study descent between continuous homotopy fixed points of \(E_{n}\). 
**Theorem 3.2.9** ([10, Chapter 8]).: _The \(E_{n}\)-local unit map \(L_{n}S^{0}\to E_{n}\) admits descent._

Since the localization functor \(L_{K(n)}\colon\mathsf{Sp}\to\mathsf{Sp}_{K(n)}\) is strong monoidal, \(L_{K(n)}S^{0}\to L_{K(n)}E_{n}\simeq E_{n}\) also admits descent by [10, Corollary 3.21]. Proposition 3.2.6 then implies:

**Corollary 3.2.10**.: _Let \(G_{1}\leq G_{2}\leq\mathbb{G}_{n}\) be two closed subgroups such that \(G_{1}\) is normal in \(G_{2}\). Then \(E_{n}^{hG_{2}}\to E_{n}^{hG_{1}}\) is a descendable \(K(n)\)-local profinite \(G_{2}/G_{1}\)-Galois extension. In particular,_

1. _For any closed subgroup \(G\leq\mathbb{G}_{n}\), \(E_{n}^{hG}\to E_{n}\) is a descendable \(K(n)\)-local profinite \(G\)-Galois extension._
2. _For any closed normal subgroup \(G\trianglelefteq\mathbb{G}_{n}\), \(S_{K(n)}^{0}\to E_{n}^{hG}\) is a descendable \(K(n)\)-local profinite \(\mathbb{G}_{n}/G\)-Galois extension._

### Profinite descent spectral sequences

Our goal in this subsection is to extend the two spectral sequences (3.1.3) and (3.1.6) to \(K(n)\)-local profinite Galois extensions. Before stating the spectral sequences, we need one final technical condition:

**Condition 3.3.1**.: Fix a tower of generalized Moore spectra \(\{M_{j}\}\) of type \(n\) as in Theorem 2.1.1 and Proposition 2.1.4. We say a \(K(n)\)-local spectrum \(X\) satisfies ML if the inverse system \(\{\pi_{t}(X\wedge M_{j})\}_{j\geq 1}\) satisfies the Mittag-Leffler condition for any \(t\in\mathbb{Z}\).

_Remark 3.3.2_.: For a \(K(n)\)-local spectrum \(X\), we have \(X\simeq\lim_{j}X\wedge M_{j}\) by (2.1.2). Then the Milnor sequence yields short exact sequences of homotopy groups:

\[0\to{\lim_{j}}^{1}\pi_{t+1}(X\wedge M_{j})\longrightarrow\pi_{t}(X)\longrightarrow\lim_{j}\pi_{t}(X\wedge M_{j})\to 0.\]

This means the homotopy groups of \(X\) are \(L\)-complete, but not necessarily complete. The ML Condition 3.3.1 implies the vanishing of the \(\lim^{1}\)-terms in the Milnor sequence. As a result, if \(X\in\mathsf{Sp}_{K(n)}\) satisfies the ML condition, then \(\pi_{t}(X)\cong\lim_{j}\pi_{t}(X\wedge M_{j})\) has a natural inverse limit topology as a pro-abelian group.

**Main Theorem B**.: _Let \(A\to B\) be a descendable \(K(n)\)-local profinite \(G\)-Galois extension of \(\mathbb{E}_{\infty}\)-ring spectra such that \(B\) satisfies the ML Condition 3.3.1. Then there are profinite descent spectral sequences:_

\[{}^{\mathrm{HFP}}E_{1}^{s,t}=\operatorname{Map}_{c}(G^{\times s},\pi_{t}(B)),\qquad\qquad{}^{\mathrm{HFP}}E_{2}^{s,t}=H_{c}^{s}(G;\pi_{t}(B))\Longrightarrow\pi_{t-s}(A);\]
\[{}^{\mathsf{pic}}E_{1}^{s,t}=\operatorname{Map}_{c}(G^{\times s},\pi_{t}\mathfrak{pic}_{K(n)}(B)),\qquad{}^{\mathsf{pic}}E_{2}^{s,t}=H_{c}^{s}(G;\pi_{t}\mathfrak{pic}_{K(n)}(B))\Longrightarrow\pi_{t-s}\mathfrak{pic}_{K(n)}(A),\qquad t-s\geq 0.\]

_In particular, the second spectral sequence abuts to \(\operatorname{Pic}_{K(n)}\left(A\right)\) when \(t=s\). The differentials in both spectral sequences are of the form \(d_{r}^{s,t}\colon E_{r}^{s,t}\to E_{r}^{s+r,t+r-1}\). When \(t-s>0\) and \(s>0\), or \(2\leq r\leq t-1\), we have \({}^{\mathsf{pic}}d_{r}^{s,t-1}={}^{\mathrm{HFP}}d_{r}^{s,t}\)._
Proof.: By Proposition 3.2.2, the descendability assumption on the Galois extension \(A\to B\) implies that there are equivalences:

\[A\xrightarrow{\ \sim\ }\operatorname{Tot}\left[\widehat{\bigwedge}_{A}^{\bullet+1}B\right], \tag{3.3.3}\]
\[\mathsf{Mod}_{K(n)}A\xrightarrow{\ \sim\ }\operatorname{Tot}\left[\mathsf{Mod}_{K(n)}\left(\widehat{\bigwedge}_{A}^{\bullet+1}B\right)\right]. \tag{3.3.4}\]

We first need to identify the ring spectra \(B^{\hat{\wedge}_{A}s+1}\) for \(s\geq 0\). Recall that \(B\simeq L_{K(n)}\operatorname{colim}_{k}B_{k}\), where each \(B_{k}\) is a finite \(G/U_{k}\)-Galois extension of \(A\). Similar to [10, Equation 8.1.2], we have equivalences:

\[\begin{split}\widehat{\bigwedge}_{A}^{s+1}B&\simeq L_{K(n)}\operatorname*{colim}_{k}B\hat{\wedge}_{A}\left(\widehat{\bigwedge}_{A}^{s}B_{k}\right)\\ &\simeq L_{K(n)}\operatorname*{colim}_{k}\widehat{\bigwedge}_{B}^{s}\left(B\hat{\wedge}_{A}B_{k}\right)\\ &\simeq L_{K(n)}\operatorname*{colim}_{k}\widehat{\bigwedge}_{B}^{s}\left(B\hat{\wedge}_{B_{k}}B_{k}\hat{\wedge}_{A}B_{k}\right)\\ &\simeq L_{K(n)}\operatorname*{colim}_{k}\widehat{\bigwedge}_{B}^{s}\left[B\hat{\wedge}_{B_{k}}\operatorname{Map}(G/U_{k},B_{k})\right]\qquad\text{by Definition 3.1.1}\\ &\simeq L_{K(n)}\operatorname*{colim}_{k}\widehat{\bigwedge}_{B}^{s}\left[\operatorname{Map}(G/U_{k},B)\right]\\ &\simeq L_{K(n)}\operatorname*{colim}_{k}\operatorname{Map}\left((G/U_{k})^{\times s},B\right)\\ &=\operatorname{Map}_{c}(G^{\times s},B)\qquad\text{by Definition 2.3.4}\\ &\simeq\lim_{j}\operatorname*{colim}_{k}\operatorname{Map}\left((G/U_{k})^{\times s},B\wedge M_{j}\right).\end{split} \tag{3.3.5}\]

It follows that the \(E_{1}\)-page of the BKSS associated to the cosimplicial spectrum (3.3.3) has the form:

\[{}^{\mathrm{HFP}}E_{1}^{s,t}=\pi_{t}\left(\widehat{\bigwedge}_{A}^{s+1}B\right)\cong\pi_{t}\left(\lim_{j}\operatorname*{colim}_{k}\operatorname{Map}\left((G/U_{k})^{\times s},B\wedge M_{j}\right)\right).\]

The Mittag-Leffler assumption on \(\{\pi_{t}(B\wedge M_{j})\}\) implies that the inverse system

\[\left\{\pi_{t}\left[\operatorname*{colim}_{k}\operatorname{Map}\left((G/U_{k})^{\times s},B\wedge M_{j}\right)\right]\right\}\cong\left\{\operatorname*{colim}_{k}\operatorname{Map}\left((G/U_{k})^{\times s},\pi_{t}(B\wedge M_{j})\right)\right\}\cong\left\{\left[\operatorname*{colim}_{k}\operatorname{Map}\left((G/U_{k})^{\times s},\mathbb{Z}\right)\right]\otimes_{\mathbb{Z}}\pi_{t}(B\wedge M_{j})\right\}\]

also satisfies Mittag-Leffler, since the condition is preserved under base change by [10]. It follows that

\[\begin{split}{}^{\mathrm{HFP}}E_{1}^{s,t}&\cong\pi_{t}\left[\lim_{j}\operatorname*{colim}_{k}\operatorname{Map}\left((G/U_{k})^{\times s},B\wedge M_{j}\right)\right]\\ &\cong\lim_{j}\pi_{t}\left[\operatorname*{colim}_{k}\operatorname{Map}\left((G/U_{k})^{\times s},B\wedge M_{j}\right)\right]\\ &\cong\lim_{j}\operatorname*{colim}_{k}\operatorname{Map}\left((G/U_{k})^{\times s},\pi_{t}(B\wedge M_{j})\right)\\ &=\operatorname{Map}_{c}\left(G^{\times s},\pi_{t}(B)\right).\end{split} \tag{3.3.6}\]

This identifies the \(E_{1}\)-page of the first spectral sequence. 
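For reference, under the identification (3.3.6) the \(d_{1}\)-differential (the alternating sum of the cosimplicial structure maps) becomes the standard inhomogeneous cochain differential on continuous cochains, which computes \(H_{c}^{*}(G;\pi_{t}(B))\):

\[(d\varphi)(g_{1},\ldots,g_{s+1})=g_{1}\cdot\varphi(g_{2},\ldots,g_{s+1})+\sum_{i=1}^{s}(-1)^{i}\varphi(g_{1},\ldots,g_{i}g_{i+1},\ldots,g_{s+1})+(-1)^{s+1}\varphi(g_{1},\ldots,g_{s}).\]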
For \(K(n)\)-local Picard groups, applying the functor \(\mathfrak{pic}\) to the equivalence (3.3.4), we obtain:

\[\begin{split}\mathfrak{pic}_{K(n)}(A)&\simeq\tau_{\geq 0}\operatorname{Tot}\left[\mathfrak{pic}_{K(n)}\left(\widehat{\bigwedge}_{A}^{\bullet+1}B\right)\right]\qquad&&\text{by Proposition 2.2.9 ([16, Proposition 2.2.3])}\\ &\simeq\tau_{\geq 0}\operatorname{Tot}\left[\mathfrak{pic}_{K(n)}\operatorname{Map}_{c}\left(G^{\times\bullet},B\right)\right]&&\text{by (3.3.5)}\\ &\simeq\tau_{\geq 0}\operatorname{Tot}\left[\operatorname{Map}_{c}\left(G^{\times\bullet},\mathfrak{pic}_{K(n)}B\right)\right]&&\text{by Proposition 2.3.6}.\end{split}\]

The \(E_{1}\)-page of this BKSS is:

\[{}^{\mathsf{pic}}E_{1}^{s,t}=\pi_{t}\,\mathfrak{pic}_{K(n)}\operatorname{Map}_{c}\left(G^{\times s},B\right)=\begin{cases}\operatorname{Pic}_{K(n)}\operatorname{Map}_{c}\left(G^{\times s},B\right),&t=0;\\ \left[\pi_{0}\operatorname{Map}_{c}\left(G^{\times s},B\right)\right]^{\times},&t=1;\\ \pi_{t-1}\operatorname{Map}_{c}\left(G^{\times s},B\right),&t\geq 2.\end{cases}\]

When \(t\geq 2\), this is the same as in (3.3.6):

\[{}^{\mathsf{pic}}E_{1}^{s,t}=\pi_{t-1}\operatorname{Map}_{c}\left(G^{\times s},B\right)\cong\operatorname{Map}_{c}\left(G^{\times s},\pi_{t-1}(B)\right).\]

When \(t=1\), again by (3.3.6), we have an isomorphism of topological rings:

\[\pi_{0}\operatorname{Map}_{c}\left(G^{\times s},B\right)\cong\operatorname{Map}_{c}\left(G^{\times s},\pi_{0}(B)\right).\]

The unit group of a topological commutative ring \(T\) sits in a pullback diagram of topological spaces, exhibiting \(T^{\times}\) as the pullback of the multiplication map \(T\times T\to T\) along the unit \(\{1\}\hookrightarrow T\). Notice that \((-)^{\times}\cong\operatorname{Hom}_{\mathsf{CAlg}_{\mathbb{Z}}}(\mathbb{Z}[x^{\pm 1}],-)\) is a right adjoint. This implies that taking units commutes with arbitrary products of commutative rings. Furthermore, the set of units in a topological commutative ring is a topological abelian group with the subspace topology. This implies that a unit in \(\operatorname{Map}_{c}\left(G^{\times s},\pi_{0}(B)\right)\) is a continuous map \(G^{\times s}\to\pi_{0}(B)\) whose image lies in \(\pi_{0}(B)^{\times}\). As a result, the \(t=1\) line on the \(E_{1}\)-page of the BKSS is:

\[{}^{\mathsf{pic}}E_{1}^{s,1}=\operatorname{Map}_{c}\left(G^{\times s},\pi_{0}(B)\right)^{\times}\cong\operatorname{Map}_{c}\left(G^{\times s},\pi_{0}(B)^{\times}\right).\]

When \(t=0\), the computation follows from Proposition 2.3.6 and (3.3.5):

\[{}^{\mathsf{pic}}E_{1}^{s,0}=\operatorname{Pic}_{K(n)}\operatorname{Map}_{c}\left(G^{\times s},B\right)\cong\operatorname{Map}_{c}\left(G^{\times s},\operatorname{Pic}_{K(n)}(B)\right).\]

So far, we have computed the \(E_{1}\)-pages of the BKSS's. They are isomorphic to the cobar resolutions for \(\pi_{t}(B)\) and \(\pi_{t}\left(\mathfrak{pic}_{K(n)}(B)\right)\) as continuous \(G\)-modules, respectively. We can further prove that the differentials also agree with the cobar differentials computing continuous group cohomology. This implies that the \(E_{2}\)-pages of the two spectral sequences are indeed as claimed:

\[{}^{\mathrm{HFP}}E_{2}^{s,t}=H_{c}^{s}(G;\pi_{t}(B))\Longrightarrow\pi_{t-s}(A);\]
\[{}^{\mathsf{pic}}E_{2}^{s,t}=H_{c}^{s}\left(G;\pi_{t}\mathfrak{pic}_{K(n)}(B)\right)\Longrightarrow\pi_{t-s}\mathfrak{pic}_{K(n)}(A),\qquad t-s\geq 0.\]

Finally, we need to compare the differentials in the two BKSS's. 
By Proposition 1.2.3, we have equivalences

\[\tau_{\geq 1}\mathfrak{pic}_{K(n)}\left(\widehat{\bigwedge}_{A}^{s+1}B\right)\simeq\tau_{\geq 1}\mathfrak{pic}\left(\widehat{\bigwedge}_{A}^{s+1}B\right)\simeq\Sigma\mathfrak{gl}_{1}\left(\widehat{\bigwedge}_{A}^{s+1}B\right).\]

The comparison of differentials then follows in exactly the same way as in Theorem 3.1.7 ([13, Comparison Tool 5.2.4]). 

_Remark 3.3.7_.: The purpose of the ML Condition 3.3.1 is to identify the \(E_{1}\) and \(E_{2}\)-pages of the spectral sequences. In the proof above, it is _not_ enough to just assume \(\pi_{t}(B)\cong\lim_{j}\pi_{t}(B\wedge M_{j})\), or equivalently \(\lim_{j}^{1}\pi_{t+1}(B\wedge M_{j})=0\). This is because vanishing of \(\lim^{1}\) is _not_ preserved under base change. As is explained in [10], "the Mittag-Leffler condition is equivalent to the universal vanishing of \(\varprojlim^{1}\)" under base change.

Following Corollary 3.2.10, we now apply Main Theorem B to study descent spectral sequences between continuous homotopy fixed points of \(E_{n}\). From the construction in Proposition 2.1.4, we have \(\pi_{*}(E_{n}\wedge M_{j})\cong\pi_{*}(E_{n})/J_{j}\) for some decreasing sequence of invariant open ideals \(J_{1}\supsetneq J_{2}\supsetneq J_{3}\supsetneq\cdots\) of \(\pi_{0}(E_{n})\). Hence the \(\pi_{t}(E_{n})/J_{j}\) are all finite groups and \(E_{n}\) satisfies the ML Condition 3.3.1. At this point, we have verified that the extension \(E_{n}^{hG}\to E_{n}\) satisfies the two assumptions in Main Theorem B. From the associated cobar complex:

\[\operatorname{Tot}\left[\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\right]\simeq\operatorname{Tot}\left[\operatorname{Map}_{c}(G^{\times\bullet},E_{n})\right], \tag{3.3.8}\]

we recover the continuous homotopy fixed point spectral sequence in [10]:

\[{}^{\mathrm{HFP}}E_{2}^{s,t}\left(E_{n}^{hG}\right)=H_{c}^{s}(G;\pi_{t}(E_{n}))\Longrightarrow\pi_{t-s}\left(E_{n}^{hG}\right). \tag{3.3.9}\]

We note that the HFPSS in [10] is obtained from the cobar complex of the descendable extension \(E_{n}^{hG}\to E_{n}^{hG}\hat{\wedge}E_{n}\):

\[\operatorname{Tot}\left[\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}\left(E_{n}^{hG}\hat{\wedge}E_{n}\right)\right]\simeq\operatorname{Tot}\left[E_{n}^{hG}\hat{\wedge}E_{n}^{\hat{\wedge}\bullet+1}\right]\simeq\operatorname{Tot}\left[\operatorname{Map}_{c}\left((\mathbb{G}_{n}/G)\times\mathbb{G}_{n}^{\times\bullet},E_{n}\right)\right]. \tag{3.3.10}\]
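At height \(1\), for instance, the spectral sequence (3.3.9) with \(G=\mathbb{G}_{1}\cong\mathbb{Z}_{p}^{\times}\) specializes to the classical descent spectral sequence for the \(K(1)\)-local sphere:

\[H_{c}^{s}\left(\mathbb{Z}_{p}^{\times};\pi_{t}(KU_{p}^{\wedge})\right)\Longrightarrow\pi_{t-s}\left(S_{K(1)}^{0}\right).\]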
**Proposition 3.3.11**.: _The two BKSS's associated to the cosimplicial spectra in (3.3.8) and (3.3.10) are isomorphic starting from their \(E_{2}\)-pages._

Proof.: Denote the entries in the BKSS for (3.3.10) by \({}^{\mathrm{DH}}E_{r}^{s,t}\left(E_{n}^{hG}\right)\). The \(E_{n}^{hG}\)-algebra map \(E_{n}^{hG}\hat{\wedge}E_{n}\to E_{n}\hat{\wedge}E_{n}\to E_{n}\) induces a map between the cobar complexes

\[f\colon\operatorname{Tot}\left[\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}\left(E_{n}^{hG}\hat{\wedge}E_{n}\right)\right]\longrightarrow\operatorname{Tot}\left[\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\right].\]

It follows that \(f\) induces a map between the two spectral sequences \(f_{\star}\colon{}^{\mathrm{DH}}E_{\star}^{\star,\star}\left(E_{n}^{hG}\right)\to{}^{\mathrm{HFP}}E_{\star}^{\star,\star}\left(E_{n}^{hG}\right)\). On \(E_{1}\)-pages, the map

\[f_{\star}\colon{}^{\mathrm{DH}}E_{1}^{s,t}\left(E_{n}^{hG}\right)=\operatorname{Map}_{c}\left(\left(\mathbb{G}_{n}/G\right)\times\mathbb{G}_{n}^{\times s},\pi_{t}\left(E_{n}\right)\right)\longrightarrow{}^{\mathrm{HFP}}E_{1}^{s,t}\left(E_{n}^{hG}\right)=\operatorname{Map}_{c}\left(G^{\times s},\pi_{t}\left(E_{n}\right)\right)\]

is induced by the inclusion \(G^{\times s}\subseteq\left(\mathbb{G}_{n}/G\right)\times\mathbb{G}_{n}^{\times s}\) of subspaces. By Shapiro's Lemma, \(f_{\star}\) is an isomorphism on \(E_{2}\)-pages:

\[f_{\star}\colon{}^{\mathrm{DH}}E_{2}^{s,t}\left(E_{n}^{hG}\right)=H_{c}^{s}\left(\mathbb{G}_{n};\operatorname{Map}_{c}\left(\mathbb{G}_{n}/G,\pi_{t}\left(E_{n}\right)\right)\right)\stackrel{{\sim}}{{\longrightarrow}}{}^{\mathrm{HFP}}E_{2}^{s,t}\left(E_{n}^{hG}\right)=H_{c}^{s}\left(G;\pi_{t}\left(E_{n}\right)\right).\]

Then \(f_{\star}\) is an isomorphism on \(E_{r}\)-pages for all \(r\geq 2\) by [1, Theorem 5.3]. 

On \(K(n)\)-local Picard groups, we obtain a descent spectral sequence:

\[{}^{\mathsf{pic}}E_{2}^{s,t}\left(E_{n}^{hG}\right)=H_{c}^{s}(G;\pi_{t}\mathfrak{pic}_{K(n)}(E_{n}))\Longrightarrow\pi_{t-s}\mathfrak{pic}_{K(n)}\left(E_{n}^{hG}\right),\qquad t-s\geq 0. \tag{3.3.12}\]

_Remark 3.3.13_.: When \(G=\mathbb{G}_{n}\), \(s=t=0\) or \(t\geq 1\), Heard has identified the \(E_{2}\)-page of (3.3.12) in [1, Example 6.18]. More recently, Mor uses the pro-étale site to construct a model for the continuous action of \(\mathbb{G}_{n}\) on Morava \(E\)-theory, and identifies the entire \(E_{2}\)-page of the descent spectral sequence computing Picard groups for \(S_{K(n)}^{0}\simeq E_{n}^{h\mathbb{G}_{n}}\) in [13].

For computational purposes, we might need to compute \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) by descending from \(E_{n}^{hH}\) for some closed subgroup \(H\trianglelefteq G\) normal in \(G\).

**Corollary 3.3.14**.: _Let \(G_{1}\leq G_{2}\leq\mathbb{G}_{n}\) be two closed subgroups such that \(G_{1}\) is normal in \(G_{2}\). There are descent spectral sequences:_

\[\begin{split}{}^{\mathrm{HFP}}E_{2}^{s,t}&=H_{c}^{s}\left(G_{2}/G_{1};\pi_{t}\left(E_{n}^{hG_{1}}\right)\right)\Longrightarrow\pi_{t-s}\left(E_{n}^{hG_{2}}\right),\\ {}^{\mathsf{pic}}E_{2}^{s,t}&=H_{c}^{s}\left(G_{2}/G_{1};\pi_{t}\mathfrak{pic}_{K(n)}\left(E_{n}^{hG_{1}}\right)\right)\Longrightarrow\pi_{t-s}\mathfrak{pic}_{K(n)}\left(E_{n}^{hG_{2}}\right),\qquad t-s\geq 0.\end{split}\]

_Furthermore, when \(t-s>0\) and \(s>0\), or \(2\leq r\leq t-1\), the two spectral sequences above are related by \({}^{\mathsf{pic}}d_{r}^{s,t-1}={}^{\mathrm{HFP}}d_{r}^{s,t}\)._

Proof.: By [1, Lemma 3.5], \(\pi_{t}\left(E_{n}^{hG}\wedge M\right)\) is a finite abelian group for any closed subgroup \(G\) of \(\mathbb{G}_{n}\) and any generalized Moore spectrum \(M\) of type \(n\). This means the system \(\left\{\pi_{t}\left(E_{n}^{hG}\wedge M_{j}\right)\right\}\) satisfies the ML Condition 3.3.1. By Corollary 3.2.10, the extension \(E_{n}^{hG_{2}}\to E_{n}^{hG_{1}}\) is descendable. The claims then follow from Main Theorem B. 

### The descent filtration on \(K(n)\)-local Picard groups

The inverse limit topology on \(K(n)\)-local Picard groups in Main Theorem A gives a filtration on \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\). Another important filtration on \(K(n)\)-local Picard groups is the descent filtration [1, SS3.3][1, SS1.3], defined through the Devinatz-Hopkins homotopy fixed point spectral sequence (3.3.9) computing \(\pi_{\star}\left(S_{K(n)}^{0}\right)\). 
We now use the descent spectral sequence for \(K(n)\)-local Picard groups in Main Theorem B to study this descent filtration on \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\). Similar to the HFPSS in Main Theorem B, we have, more generally:

**Proposition 3.4.1**.: _Let \(A\to B\) be a descendable \(K(n)\)-local profinite \(G\)-Galois extension. For a \(K(n)\)-local \(A\)-module \(X\), if \(B\hat{\wedge}_{A}X\) satisfies the ML Condition 3.3.1, then there is a homotopy fixed point spectral sequence:_

\[{}^{\mathrm{HFP}}E_{1}^{s,t}(X)=\operatorname{Map}_{c}\left(G^{\times s},\pi_{t}\left(B\hat{\wedge}_{A}X\right)\right),\qquad{}^{\mathrm{HFP}}E_{2}^{s,t}(X)=H_{c}^{s}\left(G;\pi_{t}\left(B\hat{\wedge}_{A}X\right)\right)\Longrightarrow\pi_{t-s}(X).\]

Proof.: By (3.3.4), there is an equivalence of \(A\)-modules:

\[X\stackrel{{\sim}}{{\longrightarrow}}\operatorname{Tot}\left[\left(\widehat{\bigwedge}_{A}^{\bullet+1}B\right)\hat{\wedge}_{A}X\right]\simeq\operatorname{Tot}\left[\operatorname{Map}_{c}\left(G^{\times\bullet},B\hat{\wedge}_{A}X\right)\right].\]

The Mittag-Leffler condition on \(\{\pi_{t}(B\hat{\wedge}_{A}X\wedge M_{j})\}\) implies that the \(E_{1}\)-page of the BKSS associated to this cosimplicial spectrum is:

\[{}^{\mathrm{HFP}}E_{1}^{s,t}(X)=\pi_{t}\left[\operatorname{Map}_{c}\left(G^{\times s},B\hat{\wedge}_{A}X\right)\right]\cong\operatorname{Map}_{c}\left(G^{\times s},\pi_{t}(B\hat{\wedge}_{A}X)\right).\]

This identifies the \(E_{1}\)-page of the spectral sequence. Similar to Main Theorem B, the \(d_{1}\)-differentials are the same as those in the cobar resolution of \(\pi_{t}(B\hat{\wedge}_{A}X)\) as a continuous \(G\)-module. It follows that the terms on the \(E_{2}\)-page are continuous group cohomology of \(G\). 

_Remark 3.4.2_.: Let \(\mathfrak{m}=(p,u_{1},\cdots,u_{n-1})\) be the maximal ideal of \(\pi_{0}(E_{n})\). In [1, Theorem 4.3], Barthel-Heard showed that if \(\pi_{t}(E_{n}\hat{\wedge}X)\) is either pro-free and finitely generated as a \(\pi_{0}(E_{n})\)-module or has bounded \(\mathfrak{m}\)-torsion, then the \(E_{2}\)-terms of the HFPSS for \(X\) are also continuous group cohomology of \(G\). Note that both assumptions imply that \(E_{n}\hat{\wedge}X\) satisfies the ML Condition 3.3.1.
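Note that taking \(X=A\) in Proposition 3.4.1 recovers the first spectral sequence of Main Theorem B, since \(B\hat{\wedge}_{A}A\simeq B\):

\[{}^{\mathrm{HFP}}E_{2}^{s,t}(A)=H_{c}^{s}\left(G;\pi_{t}(B)\right)\Longrightarrow\pi_{t-s}(A).\]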
**Construction 3.4.3**.: A descent filtration on \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) can be constructed following [1, Definition 3.27] and [1, Equation 1.21] as below:

\[\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\supseteq\operatorname{Pic}_{K(n)}^{0}\left(E_{n}^{hG}\right)\supseteq\kappa\left(E_{n}^{hG}\right)\supseteq\kappa^{(2)}\left(E_{n}^{hG}\right)\supseteq\kappa^{(3)}\left(E_{n}^{hG}\right)\supseteq\cdots. \tag{3.4.4}\]

* By Proposition 2.3.1, the map of \(K(n)\)-local \(\mathbb{E}_{\infty}\)-ring spectra \(E_{n}^{hG}\to E_{n}\) induces a continuous group homomorphism \(\phi_{0}\colon\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\to\operatorname{Pic}_{K(n)}\left(E_{n}\right)\). By (1.2.6), we have \(\operatorname{Pic}_{K(n)}\left(E_{n}\right)\cong\operatorname{Pic}\left(E_{n}\right)\cong\mathbb{Z}/2\), generated by \(\Sigma E_{n}\). As a result, \(\phi_{0}\) is surjective. Set \[\operatorname{Pic}_{K(n)}^{0}\left(E_{n}^{hG}\right):=\ker\phi_{0}=\left\{X\in\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\,\middle|\,E_{n}\hat{\wedge}_{E_{n}^{hG}}X\simeq E_{n}\right\}.\]
* For any \(X\in\operatorname{Pic}_{K(n)}^{0}\left(E_{n}^{hG}\right)\), we note that \(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\simeq E_{n}\) satisfies the ML Condition 3.3.1. (This is also true for any \(X\in\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\), since \(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\simeq\Sigma E_{n}\) if \(X\notin\operatorname{Pic}_{K(n)}^{0}\left(E_{n}^{hG}\right)\).) Then Proposition 3.4.1 yields a homotopy fixed point spectral sequence with \(E_{2}\)-page \[E_{2}^{s,t}(X)=H_{c}^{s}\left(G;\pi_{t}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\right)\Longrightarrow\pi_{t-s}(X). \tag{3.4.5}\] The homotopy groups \(\pi_{t}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\) have a natural continuous \(G\)-action, which is adjoint to \[\pi_{t}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\to\pi_{t}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\cong\operatorname{Map}_{c}\left(G,\pi_{t}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\right).\] Notice we have a \(G\)-equivariant isomorphism \(\pi_{t}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\cong\pi_{0}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\widehat{\otimes}_{\pi_{0}(E_{n})}\pi_{t}(E_{n})\), where \(G\) acts on \(\pi_{t}(E_{n})\) as a subgroup of \(\mathbb{G}_{n}\). This implies that the \(G\)-action on \(\pi_{t}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\) is determined by that on \(\pi_{0}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\), which is non-equivariantly isomorphic to \(\pi_{0}(E_{n})\). All such \(G\)-actions on \(\pi_{0}(E_{n})\) are then classified by the (even) **algebraic \(K(n)\)-local Picard group** of \(E_{n}^{hG}\): \[\operatorname{Pic}_{K(n)}^{alg,0}\left(E_{n}^{hG}\right):=H_{c}^{1}\left(G;\pi_{0}(E_{n})^{\times}\right).\] The assignment sending \(X\) to \(\pi_{0}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\) as a continuous \(G\)-module then defines a map \(\phi_{1}\colon\operatorname{Pic}_{K(n)}^{0}\left(E_{n}^{hG}\right)\to\operatorname{Pic}_{K(n)}^{alg,0}\left(E_{n}^{hG}\right)\). Set the **exotic \(K(n)\)-local Picard group** of \(E_{n}^{hG}\) to be \[\kappa\left(E_{n}^{hG}\right):=\ker\phi_{1}=\left\{X\in\operatorname{Pic}_{K(n)}^{0}\left(E_{n}^{hG}\right)\,\middle|\,\pi_{0}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\cong\pi_{0}(E_{n})\text{ as }G\text{-modules}\right\}.\] It is conventional to write \(\kappa_{n}\) for \(\kappa\left(E_{n}^{h\mathbb{G}_{n}}\right)\).
* When \(X\in\kappa\left(E_{n}^{hG}\right)\), we have isomorphisms \(E_{2}^{s,t}\left(E_{n}^{hG}\right)\cong E_{2}^{s,t}(X)\) for all \(s,t\). Let \(\iota_{X}\in E_{2}^{0,0}(X)\) be the image of the unit \(1\in H_{c}^{0}(G;\pi_{0}(E_{n}))=E_{2}^{0,0}\left(E_{n}^{hG}\right)\) under this isomorphism. Define \[\kappa^{(r)}\left(E_{n}^{hG}\right):=\left\{X\in\kappa\left(E_{n}^{hG}\right)\,\middle|\,\iota_{X}\text{ is an }r\text{-cycle in (3.4.5)}\right\}.\] If \(X\in\kappa^{(r)}\left(E_{n}^{hG}\right)\), then \(\iota_{X}\) defines an isomorphism \(E_{m}^{s,t}\left(E_{n}^{hG}\right)\stackrel{{\sim}}{{\longrightarrow}}E_{m}^{s,t}(X)\) for \(2\leq m\leq r+1\) and all \(s,t\). For each \(r\), the assignment \(X\mapsto d_{r+1}(\iota_{X})\) defines a map \(\phi_{r+1}\colon\kappa^{(r)}\left(E_{n}^{hG}\right)\to E_{r+1}^{r+1,r}(X)\cong E_{r+1}^{r+1,r}\left(E_{n}^{hG}\right)\). 
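Schematically, and with the convention \(\kappa^{(1)}=\kappa\left(E_{n}^{hG}\right)\), the maps \(\phi_{r}\) embed the associated graded of the filtration (3.4.4) into algebraic data (see Lemma 3.4.8 below for \(\ker\phi_{r}=\kappa^{(r)}\)):

\[\operatorname{Pic}_{K(n)}\big/\operatorname{Pic}_{K(n)}^{0}\cong\mathbb{Z}/2,\qquad\operatorname{Pic}_{K(n)}^{0}\big/\kappa\hookrightarrow H_{c}^{1}\left(G;\pi_{0}(E_{n})^{\times}\right),\qquad\kappa^{(r)}\big/\kappa^{(r+1)}\hookrightarrow E_{r+1}^{r+1,r}\left(E_{n}^{hG}\right),\]

where all Picard groups are taken for \(E_{n}^{hG}\).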
_Remark 3.4.6_.: For any inclusion of closed subgroups \(G_{1}\leq G_{2}\) of \(\mathbb{G}_{n}\), the filtration above is preserved under the base change map \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG_{2}}\right)\to\operatorname{Pic}_{K(n)}\left(E_{n}^{hG_{1}}\right)\). When \(G=\mathbb{G}_{n}\), we have \(\operatorname{Pic}_{K(n)}\left(E_{n}^{h\mathbb{G}_{n}}\right)=\operatorname{Pic}\left(\mathsf{Sp}_{K(n)}\right)\). This Picard group has been studied extensively using the filtration above. At height \(1\) and all primes, \(\operatorname{Pic}_{K(1)}\) was computed in [10, 11]:

\[\operatorname{Pic}_{K(1)}=\begin{cases}\mathbb{Z}_{p}\oplus\mathbb{Z}/2(p-1),&p>2;\\ \mathbb{Z}_{2}\oplus\mathbb{Z}/4\oplus\mathbb{Z}/2,&p=2.\end{cases}\]

At height \(2\), the algebraic \(K(2)\)-local Picard groups were computed by Hopkins when \(p\geq 5\) (see [1, 2, 10]) and by Karamanov [12] when \(p=3\). In both cases, we have

\[\operatorname{Pic}_{K(2)}^{alg,0}=H_{c}^{1}\left(\mathbb{G}_{2};\pi_{0}(E_{2})^{\times}\right)\cong H_{c}^{1}\left(\mathbb{G}_{2};\mathbb{W}(\mathbb{F}_{p^{2}})^{\times}\right)\cong\mathbb{Z}_{p}\oplus\mathbb{Z}_{p}\oplus\mathbb{Z}/(p^{2}-1),\qquad p\geq 3.\]

The \(K(2)\)-local exotic Picard group \(\kappa_{2}\) vanishes when \(p\geq 5\) [10, 11]. It was computed at \(p=3\) by Goerss-Henn-Mahowald-Rezk in [1], and more recently at \(p=2\) by Beaudry-Bobkova-Goerss-Henn-Pham-Stojanoska in [1]:

\[\kappa_{2}=\begin{cases}(\mathbb{Z}/8)^{2}\oplus(\mathbb{Z}/2)^{3},&p=2;\\ \mathbb{Z}/3\oplus\mathbb{Z}/3,&p=3;\\ 0,&p\geq 5.\end{cases}\]

_Remark 3.4.7_.: In [1, Definition 3.3], the authors used \(\kappa(G)\) to denote

\[\kappa(G)=\left\{X\in\kappa_{n}\;\middle|\;\begin{array}{l}\text{there exists a }z\in\pi_{0}\left(E_{n}^{hG}\hat{\wedge}X\right)\text{ such that its image}\\ \text{in }\pi_{0}\left(E_{n}\hat{\wedge}X\right)\text{ is a }\mathbb{G}_{n}\text{-equivariant generator}\end{array}\right\}.\]

By [1, Proposition 3.7], \(\kappa(G)\) is contained in \(\ker\left(\kappa_{n}\to\kappa\left(E_{n}^{hG}\right)\right)\). [1, Lemma 3.9] says the inclusion in the other direction also holds if the edge homomorphism \(\pi_{0}\left(E_{n}^{hG}\right)\to H_{c}^{0}(G;\pi_{0}(E_{n}))\) in the HFPSS (3.3.9) is surjective. This happens in all cases with complete computations ([1, Example 3.10]).

**Lemma 3.4.8**.:

1. _The descending filtration (3.4.4) is Hausdorff in the sense that \(\bigcap_{r}\kappa^{(r)}\left(E_{n}^{hG}\right)=\left\{E_{n}^{hG}\right\}\)._
2. _The maps \(\phi_{0},\phi_{1},\phi_{2},\cdots\) in Construction 3.4.3 are all group homomorphisms such that \(\kappa^{(r)}\left(E_{n}^{hG}\right)=\ker\phi_{r}\) for all \(r\geq 2\)._
3. _For each fixed prime \(p\), height \(n\), and closed subgroup \(G\leq\mathbb{G}_{n}\), the filtration (3.4.4) is finite in the sense that \(\kappa^{(r)}\left(E_{n}^{hG}\right)=\left\{E_{n}^{hG}\right\}\) when \(r\gg 0\)._

Proof.:

1. As an invertible module over itself, \(E_{n}^{hG}\in\kappa\left(E_{n}^{hG}\right)\) by construction. The unit \(1\in E_{2}^{0,0}\left(E_{n}^{hG}\right)\) is a permanent cycle, since it converges to the unit map \(S^{0}\to E_{n}^{hG}\) of \(E_{n}^{hG}\) as a ring spectrum. This implies \(E_{n}^{hG}\in\bigcap_{r}\kappa^{(r)}\left(E_{n}^{hG}\right)\). Conversely, suppose \(X\in\bigcap_{r}\kappa^{(r)}\left(E_{n}^{hG}\right)\). Then \(\iota_{X}\) is a permanent cycle, representing a map of \(K(n)\)-local spectra \(f\colon S_{K(n)}^{0}\to X\). As \(X\) is an \(E_{n}^{hG}\)-module, the map \(f\) is adjoint to an \(E_{n}^{hG}\)-module map \(\widetilde{f}\colon E_{n}^{hG}\to X\). 
One can then check that \(\widetilde{f}\) induces an isomorphism on the \(E_{2}\)-pages of the HFPSS's \(E_{2}^{*,*}\left(E_{n}^{hG}\right)\to E_{2}^{*,*}\left(X\right)\) sending \(1\) to \(\iota_{X}\). By [1, Theorem 5.3], \(\widetilde{f}\) is a weak equivalence.

2. We need to show all the \(\phi_{r}\)'s are additive. The map \(\phi_{0}\colon\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\to\operatorname{Pic}_{K(n)}(E_{n})\) is a group homomorphism by Proposition 2.3.1. For \(\phi_{1}\), notice that for any \(X,Y\in\operatorname{Pic}_{K(n)}^{0}\left(E_{n}^{hG}\right)\), we have a Künneth isomorphism

\[\pi_{0}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\hat{\wedge}_{E_{n}^{hG}}Y\right)\cong\pi_{0}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\widehat{\otimes}_{\pi_{0}(E_{n})}\pi_{0}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}Y\right).\]

This implies \(\phi_{1}\colon\operatorname{Pic}_{K(n)}^{0}\left(E_{n}^{hG}\right)\to\operatorname{Pic}_{K(n)}^{alg,0}\left(E_{n}^{hG}\right)\) is a group homomorphism. When \(r\geq 2\), the Leibniz rule for differentials implies \(\phi_{r}\) is a group homomorphism.

3. If \(X\in\kappa^{(r)}\), then \(E_{r+1}^{s,t}(X)\cong E_{r+1}^{s,t}\left(E_{n}^{hG}\right)\) for all \(s,t\). By [1, Section 2.3][1, Theorem 5.3][1, Corollary 15], there is a horizontal vanishing line at \(s=N\) on \(E_{r}^{*,*}\left(E_{n}^{hG}\right)\) when \(r\gg 0\). This means that if \(X\in\kappa^{(N)}\left(E_{n}^{hG}\right)\), then \(\iota_{X}\) is automatically an \(m\)-cycle for all \(m\geq N\). Similar to the second half of the proof of part (1), it follows that \(X\simeq E_{n}^{hG}\). Hence \(\kappa^{(m)}\left(E_{n}^{hG}\right)=\left\{E_{n}^{hG}\right\}\) when \(m\geq N\). 

**Proposition 3.4.9**.: _The descent filtration (3.4.4) on \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) is the same as the filtration associated to the \(t-s=0\) stem of the descent spectral sequence (3.3.12) for \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\)._

Proof.: Recall that the descent spectral sequence (3.3.12) is the Bousfield-Kan spectral sequence of the cosimplicial diagram:

\[\mathfrak{pic}_{K(n)}\left(E_{n}^{hG}\right)\simeq\operatorname{Tot}\left[\mathfrak{pic}_{K(n)}\left(\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\right)\right].\]

The filtration on \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) from this spectral sequence is defined to be:

\[\begin{split}\operatorname{Fil}^{\geq m}\pi_{0}\mathfrak{pic}_{K(n)}\left(E_{n}^{hG}\right)&\coloneqq\ker\left(\pi_{0}\mathfrak{pic}_{K(n)}\left(E_{n}^{hG}\right)\longrightarrow\pi_{0}\operatorname{Tot}_{m-1}\left[\mathfrak{pic}_{K(n)}\left(\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\right)\right]\right)\\ &\cong\ker\left(\operatorname{Pic}\left(\mathsf{Mod}_{K(n)}E_{n}^{hG}\right)\longrightarrow\operatorname{Pic}\left(\operatorname{Tot}_{m-1}\left[\mathsf{Mod}_{K(n)}\left(\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\right)\right]\right)\right),\end{split}\]

where

* \(\operatorname{Tot}_{m-1}\) is the partial totalization of the cosimplicial diagram for \(0\leq\bullet\leq m-1\);
* the functor \(\mathsf{Mod}_{K(n)}E_{n}^{hG}\to\operatorname{Tot}_{m-1}\left[\mathsf{Mod}_{K(n)}\left(\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\right)\right]\) sends a \(K(n)\)-local \(E_{n}^{hG}\)-module spectrum \(X\) to its \((m-1)\)-truncated \(K(n)\)-local \(E_{n}\)-Adams tower \(\operatorname{Tot}_{m-1}\left[\left(\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\right)\hat{\wedge}_{E_{n}^{hG}}X\right]\). 
As a result, \(X\in\operatorname{Fil}^{\geq m}\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) iff its image in \(\operatorname{Tot}_{m-1}\left[\mathsf{Mod}_{K(n)}\left(\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\right)\right]\) is equivalent to that of \(E_{n}^{hG}\), i.e. there is an equivalence of \((m-1)\)-partial totalizations:

\[\operatorname{Tot}_{m-1}\left[\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\right]\simeq\operatorname{Tot}_{m-1}\left[\left(\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\right)\hat{\wedge}_{E_{n}^{hG}}X\right]. \tag{3.4.10}\]

By [14, Proposition 2.25], this partial totalization is equivalent to the \((m-1)\)-truncated \(K(n)\)-local \(E_{n}\)-Adams tower of \(E_{n}^{hG}\)-modules.

When \(m=1\), (3.4.10) is an equivalence of spectra \(E_{n}\simeq E_{n}\hat{\wedge}_{E_{n}^{hG}}X\). It follows that \(\operatorname{Fil}^{\geq 1}\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)=\operatorname{Pic}_{K(n)}^{0}\left(E_{n}^{hG}\right)\).

When \(m=2\), (3.4.10) is an equivalence of \(1\)-truncated Adams towers. Taking \(\pi_{0}\), it translates to a \(G\)-equivariant isomorphism \(\iota_{X}\colon\pi_{0}(E_{n})\stackrel{{\sim}}{{\longrightarrow}}\pi_{0}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\). From this, we obtain \(\operatorname{Fil}^{\geq 2}\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)=\kappa\left(E_{n}^{hG}\right)\).

When \(m\geq 3\), set \([\iota_{X}]:=\iota_{X}(1)\in E_{2}^{0,0}\left(X\right)\). Notice that \(d_{r}([\iota_{X}])\) is defined using only the first \(r\) terms in the \(K(n)\)-local \(E_{n}\)-Adams tower of \(X\) as an \(E_{n}^{hG}\)-module. Then the equivalence (3.4.10) implies that the \(d_{r}\)-differentials supported on the \(s=0\) lines of the HFPSS's for \(X\) and \(E_{n}^{hG}\) are the same for \(r\leq m-1\). In particular, we have \(d_{2}([\iota_{X}])=\cdots=d_{m-1}([\iota_{X}])=0\). This yields the inclusion \(\operatorname{Fil}^{\geq m}\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\subseteq\kappa^{(m-1)}\left(E_{n}^{hG}\right)\).

Conversely, if \([\iota_{X}]\) is an \((m-1)\)-cycle in the BKSS for \(\operatorname{Tot}\left[\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right]\), then it is also an \((m-1)\)-cycle for the BKSS associated to the partial totalization \(\operatorname{Tot}_{m-1}\left[\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right]\). Since the latter BKSS collapses at the \(E_{m}\)-page by construction, \([\iota_{X}]\) is a permanent cycle there and converges to an element \(\bar{\iota}_{X}\in\pi_{0}\operatorname{Tot}_{m-1}\left[\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right]\). Similar to the proof of part (1) of Lemma 3.4.8, we can show that \(\bar{\iota}_{X}\) is adjoint to a weak equivalence

\[f_{X}\colon\operatorname{Tot}_{m-1}\left[\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\right]\longrightarrow\operatorname{Tot}_{m-1}\left[\left(\widehat{\bigwedge}_{E_{n}^{hG}}^{\bullet+1}E_{n}\right)\hat{\wedge}_{E_{n}^{hG}}X\right].\]

This proves the inclusion in the other direction \(\operatorname{Fil}^{\geq m}\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\supseteq\kappa^{(m-1)}\left(E_{n}^{hG}\right)\) when \(m\geq 3\). 
_When \(r\geq 2\), one can further wonder whether the detection maps \(\phi_{r}\) in Construction 3.4.3 are identified with the maps \(\mathrm{Fil}^{\geq r}\mathrm{Pic}_{K(n)}\left(E_{n}^{hG}\right)\rightarrow{}^{\mathfrak{pic}}E_{r}^{r,r}\left(E_{n}^{hG}\right)\) in the descent spectral sequence (3.3.12) for \(\mathfrak{pic}_{K(n)}\left(E_{n}^{hG}\right)\). (Proposition 3.4.9 only implies their kernels are the same.)_

_More precisely, for any \(X\in\mathrm{Fil}^{\geq r}\mathrm{Pic}_{K(n)}\left(E_{n}^{hG}\right),r\geq 1\), pick an equivalence \(f_{X}\colon E_{n}\stackrel{{\sim}}{{\longrightarrow}}E_{n}\hat{\wedge}_{E_{n}^{hG}}X\) and let \(\iota_{X}\in\pi_{0}\left(E_{n}\hat{\wedge}_{E_{n}^{hG}}X\right)\) be the image of \(1\in\pi_{0}(E_{n})\) under \((f_{X})_{\star}\). Does the associated graded map of the filtration (its defining diagram is omitted here) send \(X\) to \(\left(f_{X}^{-1}\right)_{\star}\left(d_{r}(\iota_{X})\right)\)?_

By analyzing the HFPSS for the \(K(n)\)-local sphere, Hopkins-Mahowald-Sadofsky proved in [11, Proposition 7.5] that the exotic \(K(n)\)-local Picard group \(\kappa_{n}=\kappa\left(S_{K(n)}^{0}\right)\) vanishes when \((p-1)\nmid n\) and \(2p-1>n^{2}\). According to Proposition 3.4.9, we can study the descent filtration on \(\mathrm{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) via the descent spectral sequence (3.3.12) for Picard groups. This leads to the following algebraicity result:

**Main Theorem C**.: _Fix a prime \(p>2\). Let \(G\leq\mathbb{G}_{n}\) be a closed subgroup such that \(G\cap\left(\mathbb{Z}/p\right)^{\times}\) is cyclic of order \(m\), where \(\left(\mathbb{Z}/p\right)^{\times}\leq\mathbb{Z}_{p}^{\times}=Z(\mathbb{G}_{n})\) is the torsion subgroup of the center \(\mathbb{Z}_{p}^{\times}\) of \(\mathbb{G}_{n}\). Denote the \(p\)-adic cohomological dimension of \(G\) by \(\operatorname{cd}_{p}G\)._ 1. _When_ \(2m+1>\operatorname{cd}_{p}G\)_, the exotic Picard group_ \(\kappa\left(E_{n}^{hG}\right)\) _vanishes and the descent filtration on the_ \(K(n)\)_-local Picard group_ \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) _is:_ \[0\xrightarrow{}H_{c}^{1}(G;\pi_{0}(E_{n})^{\times})\xrightarrow{}\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\xrightarrow{\phi_{0}}\mathbb{Z}/2\xrightarrow{}0.\] 2. _When_ \(2m+1=\operatorname{cd}_{p}G\)_, the map_ \(\phi_{1}\colon\operatorname{Pic}_{K(n)}^{0}\left(E_{n}^{hG}\right)\to\operatorname{Pic}_{K(n)}^{alg,0}\left(E_{n}^{hG}\right)=H_{c}^{1}(G;\pi_{0}(E_{n})^{\times})\) _is surjective._

_Remark 3.4.12_.: There is no guarantee in general that the extension above splits. We will see examples of both scenarios in Main Theorem D and Main Theorem E.

_Remark 3.4.13_.: When \(G=\mathbb{G}_{n}\) and \((p-1)\nmid n\), the bound in (1) becomes \(2p-1>\operatorname{cd}_{p}\mathbb{G}_{n}=n^{2}\). Pstrągowski proved a slightly weaker bound \(2p-2>n^{2}+n\) in [11, Theorem 1.1] using the algebraicity of the category of \(E_{n}\)-local spectra \(\operatorname{Sp}_{E_{n}}\) in [11]. This case has also been discussed in [11, Remark 2.6] and [10, Proposition 1.25] as a consequence of Heard's partial identification of the \(E_{2}\)-page of the descent spectral sequence (3.3.12) in [10, Example 6.18].

Proof.: Consider the Lyndon-Hochschild-Serre spectral sequence to compute the \(E_{2}\)-page of the descent spectral sequence (3.3.12) of \(\mathfrak{pic}_{K(n)}\left(E_{n}^{hG}\right)\).
\[E_{2}^{r,s}=H_{c}^{r}\left(G/(G\cap\left(\mathbb{Z}/p\right)^{\times});H^{s}\left(G\cap\left(\mathbb{Z}/p\right)^{\times};\pi_{2k}(E_{n})\right)\right)\Longrightarrow H_{c}^{r+s}\left(G;\pi_{2k}(E_{n})\right).\] The finite group \(G\cap\left(\mathbb{Z}/p\right)^{\times}\) has order \(m\) coprime to \(p\) since \(p>2\). This means \(E_{2}^{r,s}=0\) unless \(s=0\). The center \(Z(\mathbb{G}_{n})=\mathbb{Z}_{p}^{\times}\) of the Morava stabilizer group \(\mathbb{G}_{n}\) acts on \(\pi_{2k}(E_{n})\) by multiplication by the \(k\)-th power. Since \(G\cap\left(\mathbb{Z}/p\right)^{\times}\) is cyclic of order \(m\), its action on \(\pi_{2k}(E_{n})\) has no nonzero fixed points unless \(m\) divides \(k\), in which case the action is trivial. As a result, the Lyndon-Hochschild-Serre spectral sequence collapses and \(H_{c}^{s}\left(G;\pi_{2k}(E_{n})\right)=0\) unless \(m\mid k\). The \(p\)-adic cohomological dimension of \(G\) then gives a horizontal vanishing line on the \(E_{2}\)-page of the descent spectral sequence (3.3.12) for \(\mathfrak{pic}_{K(n)}\left(E_{n}^{hG}\right)\). The claim follows by analyzing the \(t-s=-1,0,1\) stems on the \(E_{2}\)-page of the descent spectral sequence in Figure 1. * When \(2m+1\geq\operatorname{cd}_{p}(G)\), the first two nonzero terms on the \(0\)-stem are \(E_{2}^{0,0}=\mathbb{Z}/2\) and \(E_{2}^{1,1}=\operatorname{Pic}_{K(n)}^{alg,0}\left(E_{n}^{hG}\right)=H_{c}^{1}(G;\pi_{0}(E_{n})^{\times})\). They are both permanent cycles, since the targets of all potential non-zero differentials supported by them are above the horizontal vanishing line at \(s=\operatorname{cd}_{p}(G)\). This implies \(\phi_{1}\colon\operatorname{Pic}_{K(n)}^{0}\left(E_{n}^{hG}\right)=\operatorname{Fil}^{\geq 1}\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\twoheadrightarrow{}^{\mathfrak{pic}}E_{\infty}^{1,1}=E_{2}^{1,1}=\operatorname{Pic}_{K(n)}^{alg,0}\left(E_{n}^{hG}\right)=H_{c}^{1}(G;\pi_{0}(E_{n})^{\times})\) is a surjection. * When \(2m+1>\operatorname{cd}_{p}(G)\), the two terms \(E_{2}^{0,0}\) and \(E_{2}^{1,1}\) are the _only_ non-zero permanent cycles in the \(0\)-stem, since the next term \(E_{2}^{2m+1,2m+1}\) is above the horizontal vanishing line at \(s=\operatorname{cd}_{p}(G)\). As a result, \(\phi_{1}\) is also injective in this case and we obtain the descent filtration.

_Remark 3.4.14_.: Similar to Main Theorem C, Culver and the second author proved in [10] that when \((p-1)\nmid n\), the map \[\phi_{2p-1}\colon\kappa_{n}=\kappa_{n}^{(2p-2)}=\operatorname{Fil}^{\geq 2p-1}\operatorname{Pic}_{K(n)}\twoheadrightarrow{}^{\mathfrak{pic}}E_{2p-1}^{2p-1,2p-1}\left(E_{n}^{h\mathbb{G}_{n}}\right)={}^{\mathfrak{pic}}E_{2}^{2p-1,2p-1}\left(E_{n}^{h\mathbb{G}_{n}}\right)=H_{c}^{2p-1}(\mathbb{G}_{n};\pi_{2p-2}(E_{n}))\] is an isomorphism when \(4p-3>n^{2}\), and is a surjection when \(4p-3=n^{2}\). This is because the term \({}^{\mathfrak{pic}}E_{2}^{0,1}=H_{c}^{0}(\mathbb{G}_{n};\pi_{0}(E_{n})^{\times})=\mathbb{Z}_{p}^{\times}\) does not support any differential in the DSS (3.3.12), as any \(\alpha\in\mathbb{Z}_{p}\cong H_{c}^{0}(\mathbb{G}_{n};\pi_{0}(E_{n}))\) is a permanent cycle in the HFPSS for the ring spectrum \(E_{n}^{h\mathbb{G}_{n}}\simeq S_{K(n)}^{0}\). When \((p-1)\nmid n\), we also have \(\operatorname{cd}_{p}(\mathbb{G}_{n})=n^{2}\). This claim could be generalized to the exotic Picard group \(\kappa\left(E_{n}^{hG}\right)\) for closed subgroups \(G\) of \(\mathbb{G}_{n}\) if the term \({}^{\mathfrak{pic}}E_{2}^{0,1}=H_{c}^{0}(G;\pi_{0}(E_{n})^{\times})\), or equivalently \({}^{\mathrm{HFP}}E_{2}^{0,0}=H_{c}^{0}(G;\pi_{0}(E_{n}))\), does not support any differentials.
As mentioned in Remark 3.4.7 and [2, Example 3.10], this happens in all cases where complete computations are available.

## 4. Computations of \(K(1)\)-local Picard Mackey functors

As an application of the profinite descent spectral sequences for Picard spaces in Corollary 3.3.14, we compute the \(K(1)\)-local Picard groups of \(E_{1}^{hG}\) for all closed subgroups of \(\mathbb{G}_{1}=\mathbb{Z}_{p}^{\times}\) at all primes in the remainder of the paper. When \(G\leq\mathbb{Z}_{p}^{\times}\) is a pro-cyclic open subgroup, the homotopy fixed point spectrum \(E_{1}^{hG}\) is equivalent to the \(K(1)\)-local algebraic \(K\)-theory spectrum of a finite field (see Proposition 4.2.2 and Remark 4.3.2).

### \(K(n)\)-local Picard groups and Mackey functors

The Picard groups \(\operatorname{Pic}_{K(n)}\left(E_{n}^{hG}\right)\) are not just a collection of (pro-)abelian groups. When we vary the closed subgroups \(G\), those individual Picard groups form a _Mackey functor_, i.e. there are restriction and transfer maps between them.

**Examples 4.1.1**.: Let \(G\) be a finite group and \(A\) be an abelian group. 1. The **constant \(G\)-Mackey functor** with value \(A\) is defined by \(\underline{A}(G/H):=A\) for all subgroups \(H\leq G\). The restriction maps are all identities and the transfer maps are determined by the double coset formula. 2. The constant Mackey functor \(\underline{A}\) has an _opposite_ \(\underline{A^{\mathrm{op}}}\) whose underlying groups are still \(A\), but the restriction and transfer maps are switched. 3. When \(A\) has a \(G\)-action, the **cohomological \(G\)-Mackey functor** of \(A\) is defined by \[\underline{H^{s}(-;A)}(G/H):=H^{s}(H;A),\] where a subgroup \(H\leq G\) acts on \(A\) by restricting the \(G\)-action. Given subgroups \(H_{1}\leq H_{2}\) of \(G\), the restriction and transfer maps of the Mackey functor are given by the corresponding maps in group cohomology; see the definitions in [1]. 4. When \(G\) acts trivially on \(A\), we have natural isomorphisms \(H^{1}(G;A)\cong\mathrm{Hom}(G,A)\). This implies \(\underline{\mathrm{Hom}(-;A)}\) is a cohomological \(G\)-Mackey functor. This is an example of a _globally-defined_ Mackey functor in [11, page 829]. 5. When a group \(G\) acts trivially on \(A=\mathbb{Q}/\mathbb{Z}\), we have that \(H^{1}(G;\mathbb{Q}/\mathbb{Z})\cong\mathrm{Hom}(G,\mathbb{Q}/\mathbb{Z})\) is the Pontryagin dual of \(G\), which is non-canonically isomorphic to \(G\) if \(G\) is finite abelian. We will denote the cohomological Mackey functor \(\underline{\mathrm{Hom}(-;\mathbb{Q}/\mathbb{Z})}\) by \((-)^{\vee}\). The examples above (except for \(\underline{A^{\mathrm{op}}}\)) can be generalized to profinite groups if we only consider transfer maps between closed subgroups \(H_{1}\leq H_{2}\) such that \([H_{2}:H_{1}]<\infty\).

**Proposition 4.1.2** ([1, Proposition 3.1 and Corollary 3.12]).: _Let \(G\) be a finite group and \(X\) be an \(\mathbb{E}_{\infty}\)-ring spectrum with a \(G\)-action. If \(X^{hG}\to X\) is a faithful \(G\)-Galois extension, then the assignment \(G/H\mapsto\mathrm{Pic}\left(X^{hH}\right)\) is a \(G\)-Mackey functor.
More precisely, let \(H_{1}\leq H_{2}\leq G\) be subgroups._ * _The restriction map_ \(\mathrm{Res}\colon\mathrm{Pic}\left(X^{hH_{2}}\right)\to\mathrm{Pic}\left(X^{hH_{1}}\right)\) _is induced by the base change of invertible_ \(X^{hH_{2}}\)_-modules_ \(A\mapsto A\wedge_{X^{hH_{2}}}X^{hH_{1}}\)_._ * _The transfer map_ \(\mathrm{Tr}\colon\mathrm{Pic}\left(X^{hH_{1}}\right)\to\mathrm{Pic}_{H_{2}}\left(X^{hH_{1}}\right)\cong\mathrm{Pic}\left(X^{hH_{2}}\right)\) _is induced by the multiplicative norm_ \(A\mapsto\mathrm{Norm}_{H_{1}}^{H_{2}}A\)_._

We note that the restriction maps can be defined for any pair of subgroups \(H_{1}\leq H_{2}\), whereas this definition of transfer maps only applies when \([H_{2}:H_{1}]\) is finite. As \(\mathbb{G}_{1}=\mathbb{Z}_{p}^{\times}\) is abelian, all of its (closed) subgroups are normal. This means that for any \(H_{1}\leq H_{2}\leq\mathbb{Z}_{p}^{\times}\) with \([H_{2}:H_{1}]<\infty\), the map \(E_{1}^{hH_{2}}\to E_{1}^{hH_{1}}\) is a faithful \(K(1)\)-local \((H_{2}/H_{1})\)-Galois extension. Hence there are restriction and transfer maps between their Picard groups.

### Computations at height \(1\) and odd primes

Let \(p\) be an odd prime. Next we compute the \(K(1)\)-local Picard group at \(p\) as a Mackey functor for the profinite group \(\mathbb{Z}_{p}^{\times}\). Let us first recall some basic facts about the structure of this group. The first step in the computation of the \(\mathbb{Z}_{p}^{\times}\)-Picard Mackey functor is to determine all of its _closed_ subgroups. The group \(\mathbb{Z}_{p}^{\times}\) is pro-cyclic, since it is the limit of the cyclic groups \((\mathbb{Z}/p^{v})^{\times}\cong C_{(p-1)p^{v-1}}\). For any element \(\alpha\in\mathbb{Z}_{p}^{\times}\), let \(\langle\alpha\rangle\) be the closed subgroup of \(\mathbb{Z}_{p}^{\times}\) generated by \(\alpha\).

**Lemma 4.2.1**.: _When \(p\) is an odd prime, the profinite group \(\mathbb{Z}_{p}^{\times}\) has the following properties:_ 1. _The group_ \(\mathbb{Z}_{p}^{\times}\) _is pro-cyclic._ 2. _For_ \(1\leq k\leq\infty\)_, the subset_ \(1+p^{k}\mathbb{Z}_{p}=\{a\in\mathbb{Z}_{p}^{\times}\mid a\equiv 1\mod p^{k}\}\) _is a closed subgroup of_ \(\mathbb{Z}_{p}^{\times}\)_, where_ \(1+p^{\infty}\mathbb{Z}_{p}:=\{1\}\)_. Moreover, for any element_ \(a\equiv 1\mod p\)_, the closed subgroup of_ \(\mathbb{Z}_{p}^{\times}\) _generated by_ \(a\) _is_ \(1+p^{k}\mathbb{Z}_{p}\)_, where_ \(k=v_{p}(a-1)\)_._ 3. _The maximal finite subgroup of_ \(\mathbb{Z}_{p}^{\times}\) _is the group of_ \((p-1)\)_-st roots of unity in_ \(\mathbb{Z}_{p}\)_._ 4. _The profinite group_ \(\mathbb{Z}_{p}^{\times}\) _decomposes as a direct product of its closed subgroups:_ \(\mathbb{Z}_{p}^{\times}\cong(\mathbb{Z}/p)^{\times}\times(1+p\mathbb{Z}_{p})\)_._ 5. _Closed subgroups_ \(G\) _of_ \(\mathbb{Z}_{p}^{\times}\) _are of the form_ \(G=G_{\mathrm{fin}}\times G_{\mathrm{pro}}\)_, where_ \(G_{\mathrm{fin}}\) _is a subgroup of_ \(\left(\mathbb{Z}/p\right)^{\times}\) _and_ \(G_{\mathrm{pro}}\cong 1+p^{k}\mathbb{Z}_{p}\) _for_ \(1\leqslant k\leqslant\infty\) _is a pro-_\(p\)_-group._

Proof.: 1. By definition, \(\mathbb{Z}_{p}^{\times}=\lim_{k}\left(\mathbb{Z}/p^{k}\right)^{\times}\). Each of the finite groups \(\left(\mathbb{Z}/p^{k}\right)^{\times}\) in the inverse system is cyclic and we can pick a compatible system of generators \(\alpha_{k}\) of \(\left(\mathbb{Z}/p^{k}\right)^{\times}\) such that \(\alpha_{m}\equiv\alpha_{k}\mod p^{k}\) for any \(m\geq k\).
The sequence \(\{\alpha_{k}\}\) converges to an element \(\alpha\in\mathbb{Z}_{p}^{\times}\). It is a pro-generator, since the smallest closed subgroup containing \(\alpha\) is \(\mathbb{Z}_{p}^{\times}\) itself. 2. When \(k<\infty\), the subset \(1+p^{k}\mathbb{Z}_{p}\) is the kernel of the projection map \(\mathbb{Z}_{p}^{\times}\to\left(\mathbb{Z}/p^{k}\right)^{\times}\). This implies it is a closed (and also open) subgroup of \(\mathbb{Z}_{p}^{\times}\). When \(v_{p}(a-1)=k>0\) and \(v\geq k\), the residue class of \(a\) in \(\left(\mathbb{Z}/p^{v}\right)^{\times}\) generates the subgroup \(\left\{b\in\left(\mathbb{Z}/p^{v}\right)^{\times}\mid b\equiv 1\mod p^{k}\right\}\). It follows that the smallest closed subgroup containing \(a\) is \(1+p^{k}\mathbb{Z}_{p}\). 3. This follows from Hensel's Lemma. 4. Consider the Teichmüller character \(\omega\colon\mathbb{Z}_{p}^{\times}\to\left(\mathbb{Z}/p\right)^{\times}\to\mathbb{Z}_{p}^{\times}\), which sends \(a\in\mathbb{Z}_{p}^{\times}\) to the unique element \(\omega(a)\in\mathbb{Z}_{p}^{\times}\) such that \(\omega(a)\equiv a\mod p\) and \(\omega(a)^{p-1}=1\). Explicitly, the character is given by the formula \(\omega(a)=\lim_{n\to\infty}a^{p^{n}}\). The direct product decomposition \(f\colon\mathbb{Z}_{p}^{\times}\to\left(\mathbb{Z}/p\right)^{\times}\times\left(1+p\mathbb{Z}_{p}\right)\) is then given by \(f(a)=(\omega(a),a\cdot\omega(a)^{-1})\). One can check that this map is a continuous isomorphism of profinite groups. 5. A closed subgroup \(G\leq\mathbb{Z}_{p}^{\times}\) is necessarily (pro-)cyclic since \(\mathbb{Z}_{p}^{\times}\) is. Pick a (pro-)generator \(\alpha\in G\). The explicit formula above implies that \(\omega(\alpha)\in G\), which generates a finite subgroup \(\langle\omega(\alpha)\rangle=G\cap\left(\mathbb{Z}/p\right)^{\times}\). The element \(\alpha\cdot\omega(\alpha)^{-1}\) is then congruent to \(1\) modulo \(p\). Set \(k=v_{p}(\alpha\cdot\omega(\alpha)^{-1}-1)\). We then have \(G=\langle\omega(\alpha)\rangle\times(1+p^{k}\mathbb{Z}_{p})\leq\mathbb{Z}_{p}^{\times}\) as claimed.

Let \(\mathbb{F}_{q}\) be a finite field with \(q\) elements. Quillen's computation [10] of the algebraic \(K\)-theory of finite fields implies that when \(p\nmid q\), we have an equivalence of \(K(1)\)-local \(\mathbb{E}_{\infty}\)-ring spectra \(L_{K(1)}K(\mathbb{F}_{q})\simeq\left(KU_{p}^{\wedge}\right)^{h\langle q\rangle}\). The converse is also true.

**Proposition 4.2.2**.: _Any finite Galois extension of \(S^{0}_{K(1)}\) is equivalent to the \(K(1)\)-local algebraic \(K\)-theory spectrum of some finite field \(\mathbb{F}_{q}\) when \(p>2\)._

Proof.: By Lemma 4.2.1, any open subgroup \(G\) of \(\mathbb{Z}_{p}^{\times}\) is pro-cyclic. It suffices to find a topological generator \(q\in G\) that is (a power of) a prime number. Pick a topological generator \(\alpha\in G\) and write \(G=\langle\omega(\alpha)\rangle\times(1+p^{k}\mathbb{Z}_{p})\) as in the proof of Lemma 4.2.1. Then any integer \(q\) satisfying \(q\equiv\alpha\mod p^{k+1}\) is a topological generator of \(G\). This \(q\) can be chosen to be a prime number by Dirichlet's theorem on arithmetic progressions.

For a closed subgroup \(G\leq\mathbb{Z}_{p}^{\times}\), we compute \(\mathrm{Pic}\left(E_{1}^{hG}\right)\) in two steps. First, we compute \(\mathrm{Pic}_{K(1)}\left(E_{1}^{hG_{\mathrm{fin}}}\right)\) by the descent spectral sequence [11]: \[E_{2}^{s,t}=H^{s}(G_{\mathrm{fin}};\pi_{t}(\mathfrak{pic}_{K(1)}(E_{1})))\Longrightarrow\pi_{t-s}\mathfrak{pic}_{K(1)}\left(E_{1}^{hG_{\mathrm{fin}}}\right).
\tag{4.2.3}\] For our purposes, we only need to compute \(\pi_{0}\mathfrak{pic}_{K(1)}\left(E_{1}^{hG_{\mathrm{fin}}}\right)=\mathrm{Pic}_{K(1)}\left(E_{1}^{hG_{\mathrm{fin}}}\right)\). At height \(1\), the Morava stabilizer group \(\mathbb{Z}_{p}^{\times}\) acts trivially on \(\pi_{0}\left(\mathfrak{pic}_{K(1)}(E_{1})\right)\cong\mathrm{Pic}_{K(1)}(E_{1})\cong\mathbb{Z}/2\) and \(\pi_{1}\left(\mathfrak{pic}_{K(1)}(E_{1})\right)\cong\pi_{0}(E_{1})^{\times}\cong\mathbb{Z}_{p}^{\times}\).

**Proposition 4.2.4**.: _There is a non-split extension of \(\left(\mathbb{Z}/p\right)^{\times}\)-Mackey functors:_ \[0\longrightarrow(-)^{\vee}\longrightarrow\underline{\mathrm{Pic}_{K(1)}\left(E_{1}^{h-}\right)}\longrightarrow\underline{\mathbb{Z}/2}\longrightarrow 0.\] _In particular, the group \(\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\) is cyclic of order \(2|G|\) for each subgroup \(G\leq\left(\mathbb{Z}/p\right)^{\times}\)._

Proof.: Let \(G\leq\mathbb{Z}_{p}^{\times}\) be a finite subgroup. It is contained in \(\left(\mathbb{Z}/p\right)^{\times}\) by Lemma 4.2.1. The \(E_{2}\)-page of the descent spectral sequence (4.2.3) is illustrated in Figure 2. On this page of the spectral sequence: * All elements with bigrading \((s,t)\) satisfying \(s\geq 1\) and \(t\geq 2\) are trivial. This is because \(\pi_{t}(\mathsf{pic}_{K(1)}(E_{1}))\cong\pi_{t-1}(E_{1})\) is \(p\)-complete when \(t\geq 2\), and \(G\) is a subgroup of \(\left(\mathbb{Z}/p\right)^{\times}\) whose order \(p-1\) is coprime to \(p\). * \(E_{2}^{0,0}=\operatorname{Pic}_{K(1)}((E_{1})_{*})\cong\mathbb{Z}/2\). The generator of this group is a permanent cycle since it detects \(\Sigma E_{1}^{hG}\in\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\). * As the order of \(G\) is coprime to \(p\) and the group acts trivially on \(\pi_{1}\left(\mathsf{pic}_{K(1)}(E_{1})\right)\cong\mathbb{Z}_{p}^{\times}\), we have isomorphisms: \[E_{2}^{1,1}=H^{1}(G,\mathbb{Z}_{p}^{\times})\cong\operatorname{Hom}_{c}(G,\mathbb{Z}_{p}^{\times})\cong\operatorname{Hom}(G,\left(\mathbb{Z}/p\right)^{\times})\cong G^{\vee}.\] The last term is non-canonically isomorphic to \(G\) itself. We write \(G^{\vee}\) to stress the Mackey functor structure. Elements in \(E_{2}^{1,1}\) are permanent cycles in the spectral sequence for degree reasons. As a result, elements on the \(0\)-stem of the \(E_{2}\)-page are all permanent cycles. Passing to the \(E_{\infty}\)-page, we need to solve an extension problem: \[0\longrightarrow G^{\vee}\cong G\longrightarrow\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\longrightarrow\mathbb{Z}/2\longrightarrow 0.\] Consider the homotopy fixed point spectral sequence: \[E_{2}^{s,t}=H^{s}(G;\pi_{t}(E_{1}))\Longrightarrow\pi_{t-s}\left(E_{1}^{hG}\right).\] The group \(G\leq\left(\mathbb{Z}/p\right)^{\times}\) acts on \(\pi_{2k}(E_{1})\cong\mathbb{Z}_{p}\) by the character \(G\hookrightarrow\left(\mathbb{Z}/p\right)^{\times}\xrightarrow{\omega}\mathbb{Z}_{p}^{\times}\xrightarrow{\left(-\right)^{k}}\mathbb{Z}_{p}^{\times}\), where \(\omega\) is the Teichmüller character. Then we can compute \[H^{s}(G;\pi_{t}(E_{1}))=\left\{\begin{array}{ll}\pi_{t}(E_{1}),&s=0\text{ and }2|G|\text{ divides }t;\\ 0,&\text{else.}\end{array}\right.\] As a result, the spectrum \(E_{1}^{hG}\) has minimal periodicity \(2|G|\), which implies that \(\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\) has a cyclic subgroup of order \(2|G|\). Together with the extension above, we conclude that \(\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\) is indeed cyclic of order \(2|G|\).
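As a concrete illustration (our worked instance, using the periodicity just observed together with Remark 4.2.5 below): take \(p=5\) and \(G=(\mathbb{Z}/5)^{\times}\cong\mathbb{Z}/4\). Then \(\pi_{*}\left(E_{1}^{hG}\right)\cong\mathbb{Z}_{5}[u^{\pm 4}]\) with \(|u^{4}|=-8\), so \(E_{1}^{hG}\) is \(8\)-periodic and \[\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\cong\mathbb{Z}/8,\] cyclic of order \(2|G|=8\) and generated by the suspension \(\Sigma E_{1}^{hG}\).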
_Remark 4.2.5_.: For a finite subgroup \(G=\mathbb{Z}/m\leq\mathbb{Z}_{p}^{\times}\), the computation of \(\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\) also follows from [13, Corollary 2.4.7], since \(\pi_{*}\left(E_{1}^{hG}\right)\cong\mathbb{Z}_{p}[u^{\pm m}]\) where \(|u^{m}|=-2m\).

Write \(G=G_{\operatorname{fin}}\times\left(1+p^{k}\mathbb{Z}_{p}\right)\) as in Lemma 4.2.1 and assume \(1\leq k<\infty\). By Corollary 3.3.14, we compute \(\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\) via the profinite descent spectral sequence: \[E_{2}^{s,t}=H_{c}^{s}\left(1+p^{k}\mathbb{Z}_{p};\pi_{t}\mathsf{pic}_{K(1)}\left(E_{1}^{hG_{\operatorname{fin}}}\right)\right)\Longrightarrow\pi_{t-s}\left(\mathsf{pic}_{K(1)}\left(E_{1}^{hG}\right)\right).\] Note that \(1+p^{k}\mathbb{Z}_{p}\) is a pro-\(p\) group and the order of \(G_{\operatorname{fin}}\) is coprime to \(p\). The spectral sequence vanishes in filtrations \(s\geq 2\) and collapses on the \(E_{2}\)-page. At stem \(0\), we have \(\operatorname{Pic}_{K(1)}\left(E_{1}^{hG_{\operatorname{fin}}}\right)\) at filtration \(0\) and \(\operatorname{Hom}_{c}(G_{\operatorname{pro}},1+p\mathbb{Z}_{p})\cong\operatorname{Hom}_{c}(G,1+p\mathbb{Z}_{p})\) at filtration \(1\).

**Main Theorem D**.: _Let \(p>2\) be an odd prime number, and let \(k\geq 1\) and \(m\mid(p-1)\) be positive integers. The Picard groups of \(E_{1}^{hG}\) for all closed subgroups \(G\leq\mathbb{Z}_{p}^{\times}\) are listed below:_ \[\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)=\begin{cases}\mathbb{Z}/(2m)\oplus\mathbb{Z}_{p},&G=\mathbb{Z}/m\times(1+p^{k}\mathbb{Z}_{p})\quad\text{([HMS] for }G=\mathbb{Z}_{p}^{\times}\text{)};\\ \mathbb{Z}/(2m),&G=\mathbb{Z}/m\quad\text{([BR05, MS])}.\end{cases}\] _We have isomorphisms of \(\mathbb{Z}_{p}^{\times}\)-Mackey functors:_ \[\operatorname{Pic}_{K(1)}\left(E_{1}^{h(-)}\right)\cong\operatorname{Hom}_{c}(-,1+p\mathbb{Z}_{p})\times\operatorname{Pic}_{K(1)}\left(E_{1}^{h(-)_{\operatorname{fin}}}\right).\] _More precisely, let \(m_{1}\mid m_{2}\mid(p-1)\) and \(k_{2}\leq k_{1}\) be positive integers. The restriction and transfer maps between subgroups of finite index relate_ \[\operatorname{Pic}_{K(1)}\left(E_{1}^{h(\mathbb{Z}/m_{2}\times(1+p^{k_{2}}\mathbb{Z}_{p}))}\right)\cong\mathbb{Z}_{p}\oplus\mathbb{Z}/2m_{2}\quad\text{and}\quad\operatorname{Pic}_{K(1)}\left(E_{1}^{h(\mathbb{Z}/m_{1}\times(1+p^{k_{1}}\mathbb{Z}_{p}))}\right)\cong\mathbb{Z}_{p}\oplus\mathbb{Z}/2m_{1}\] _(the labelled diagram of these maps is omitted here), where the numbers \(d\) in that diagram denote multiplication by \(d\) on the corresponding summands._

_Remark 4.2.6_.: When \(G=\mathbb{Z}_{p}^{\times}\), the group \(\operatorname{Pic}_{K(1)}\left(S_{K(1)}^{0}\right)\cong\mathbb{Z}_{p}\oplus\mathbb{Z}/(2p-2)\) is topologically generated by \(S_{K(1)}^{1}\), which corresponds to \((1,1)\) under the identification. The computation in Main Theorem D implies that for \(G=\mathbb{Z}/m\times(1+p^{k}\mathbb{Z}_{p})\) with \(k<\infty\), the image of \(S_{K(1)}^{1}\) in \(\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)=\mathbb{Z}_{p}\oplus\mathbb{Z}/(2m)\) is \((p^{k-1},1)\). As this element is uniquely divisible by \(p^{k-1}\) in the Picard group, we obtain a unique \(K(1)\)-local spectrum \(X=\Sigma^{1/p^{k-1}}E_{1}^{hG}\), whose \(p^{k-1}\)-st smash power over \(E_{1}^{hG}\) is \(\Sigma E_{1}^{hG}\). This element is a topological generator of \(\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\).
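As a small numerical check of the divisibility claim (our illustration): take \(p=3\), \(m=2\), \(k=2\), so \(G=\mathbb{Z}/2\times(1+9\mathbb{Z}_{3})\) and \(\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\cong\mathbb{Z}_{3}\oplus\mathbb{Z}/4\). The image of \(S_{K(1)}^{1}\) is \((p^{k-1},1)=(3,1)\), and the unique element whose \(3\)-rd smash power over \(E_{1}^{hG}\) is \(\Sigma E_{1}^{hG}\), namely \(\Sigma^{1/3}E_{1}^{hG}\), corresponds to \((1,3)\), since \[3\cdot(1,3)=(3,9)=(3,1)\in\mathbb{Z}_{3}\oplus\mathbb{Z}/4,\] and no other element solves this equation (\(\mathbb{Z}_{3}\) is torsion-free and \(3\) is a unit in \(\mathbb{Z}/4\)).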
The computations in Main Theorem D allow us to verify Proposition 2.3.7 in the example below:

**Corollary 4.2.7**.: _Let \(\mathbb{Z}/m\leq\mathbb{Z}_{p}^{\times}\) be a finite subgroup of order \(m\). Then we have colimits in the category of pro-abelian groups:_ \[\operatorname*{colim}_{\operatorname{Res}}\operatorname{Pic}_{K(1)}\left(E_{1}^{h(\mathbb{Z}/m\times(1+p^{k}\mathbb{Z}_{p}))}\right)\cong\mathbb{Z}/(2m)\cong\operatorname{Pic}_{K(1)}\left(E_{1}^{h\mathbb{Z}/m}\right).\]

_Remark 4.2.8_.: Note that the claim fails if we take the colimit in \(\mathsf{Ab}\), as the colimit would then have an extra \(\mathbb{Q}_{p}\) summand. This is an example of Remark 2.3.8, where the discrete Picard functor \(\operatorname{Pic}_{K(1)}\colon\mathsf{CAlg}\left(\mathsf{Sp}_{K(1)}\right)\to\mathsf{Ab}\) does not preserve filtered colimits unless we lift it to \(\mathsf{Pro}(\mathsf{Ab})\) as in Main Theorem A.

### Computations at height \(1\) and prime \(2\)

The \(p=2\) case is more complicated than the odd-prime case, largely due to the extension problems. It is necessary to use the Mackey functor perspective (restriction maps) to resolve this issue.

**Lemma 4.3.1**.: _For \(\alpha\in\mathbb{Z}_{2}^{\times}\), let \(\langle\alpha\rangle\) be the closed subgroup generated by \(\alpha\). The closed subgroup lattice of \(\mathbb{Z}_{2}^{\times}=\{\pm 1\}\times(1+4\mathbb{Z}_{2})\) is as follows (the lattice diagram is omitted here). In the diagram,_ * \(\langle\alpha\rangle\leq\mathbb{Z}_{2}^{\times}\) _is the closed subgroup generated by_ \(\alpha\)_. When_ \(\alpha\equiv 1\mod 4\)_, we have_ \(\langle\alpha\rangle=1+2^{v_{2}(\alpha-1)}\mathbb{Z}_{2}\)_._ * \(\{\pm 1\}\times\langle 2^{k}+1\rangle=\{\pm 1\}\times\langle 2^{k}-1\rangle\) _as subgroups of_ \(\mathbb{Z}_{2}^{\times}\)_._ * _Each dash, except for the ones from_ \(\{1\}\) _and_ \(\{\pm 1\}\) _to the profinite subgroups, indicates a subgroup of index_ \(2\)_._ _Subgroups in the \(m\)-th column have index \(2^{m}\) in \(\mathbb{Z}_{2}^{\times}\)._

Proof.: The direct product decomposition \(\mathbb{Z}_{2}^{\times}=\{\pm 1\}\times(1+4\mathbb{Z}_{2})\) preserves the profinite topology on both sides. As such, a subgroup \(H\leq\mathbb{Z}_{2}^{\times}\) is closed iff \(H\cap(1+4\mathbb{Z}_{2})\) is a closed subgroup of \(1+4\mathbb{Z}_{2}\), which is pro-cyclic. This implies \(H\cap(1+4\mathbb{Z}_{2})=1+2^{k}\mathbb{Z}_{2}\) for some \(2\leq k\leq\infty\). When \(k=2\), the closed subgroup \(H\) contains \(1+4\mathbb{Z}_{2}\). Such subgroups are in one-to-one correspondence with those of the quotient \(\mathbb{Z}_{2}^{\times}/(1+4\mathbb{Z}_{2})\cong C_{2}\). As a result, \(H=\mathbb{Z}_{2}^{\times}\) or \(1+4\mathbb{Z}_{2}\). When \(2<k<\infty\), we claim that \(H\) is a subgroup of \(\{\pm 1\}\times(1+2^{k-1}\mathbb{Z}_{2})\). Otherwise, \(H\) would contain an element \(\alpha\) such that \(v_{2}(\pm\alpha-1)<k-1\), which implies \(v_{2}(\alpha^{2}-1)\leq k-1\). The closed subgroup generated by \(\alpha^{2}\) would then contain \(1+2^{k-1}\mathbb{Z}_{2}\), which contradicts the assumption that \(H\cap(1+4\mathbb{Z}_{2})=1+2^{k}\mathbb{Z}_{2}\). It follows that \(H\) corresponds to a subgroup of the subquotient \(\left[\{\pm 1\}\times(1+2^{k-1}\mathbb{Z}_{2})\right]/(1+2^{k}\mathbb{Z}_{2})\) of \(\mathbb{Z}_{2}^{\times}\). This subquotient is isomorphic to the Klein group \(C_{2}\times C_{2}\). From its subgroup lattice, we conclude that \(H\) is either \(\langle 2^{k}+1\rangle=1+2^{k}\mathbb{Z}_{2}\), \(\langle 2^{k-1}-1\rangle\), or \(\{\pm 1\}\times(1+2^{k}\mathbb{Z}_{2})=\{\pm 1\}\times\langle 2^{k}+1\rangle\).
When \(k=\infty\), we have \(1+2^{\infty}\mathbb{Z}_{2}=\{1\}\) and \(H=\{1\}\) or \(\{\pm 1\}\). Otherwise, pick an element \(\alpha\in H\) not equal to \(\pm 1\). Let \(m=v_{2}(\alpha^{2}-1)<\infty\). Then we have \(H\supseteq\langle\alpha^{2}\rangle=1+2^{m}\mathbb{Z}_{2}\), contradicting the assumption that \(H\cap(1+4\mathbb{Z}_{2})=\{1\}\).

_Remark 4.3.2_.: Similar to Proposition 4.2.2, if \(G\leq\mathbb{Z}_{2}^{\times}\) is pro-cyclic and open, then the homotopy fixed point spectrum \((KU_{2}^{\wedge})^{hG}\) is equivalent to \(L_{K(1)}K(\mathbb{F}_{q})\) for some finite field \(\mathbb{F}_{q}\). By Lemma 4.3.1, \(G=\langle\alpha\rangle\) where \(\alpha=2^{k}\pm 1\). Any integer \(q\) satisfying \(q\equiv\alpha\mod 2^{k+1}\) is a topological generator of \(G\). We can choose \(q\) to be a prime number by Dirichlet's theorem on arithmetic progressions.

**Main Theorem E**.: _Let \(G\leq\mathbb{Z}_{2}^{\times}\) be a closed subgroup. Then_ \[\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)=\begin{cases}\mathbb{Z}_{2}\oplus\mathbb{Z}/4\oplus\mathbb{Z}/2,&G=\mathbb{Z}_{2}^{\times}\quad\text{[HMS94, Theorem 3.3]};\\ \mathbb{Z}_{2}\oplus\mathbb{Z}/2,&G=\langle 5\rangle\text{ or }\langle 3\rangle;\\ \mathbb{Z}_{2}\oplus\mathbb{Z}/8\oplus\mathbb{Z}/2,&G=\{\pm 1\}\times\langle 2^{k}+1\rangle,\ k\geq 3;\\ \mathbb{Z}_{2}\oplus\mathbb{Z}/2\oplus\mathbb{Z}/2,&G=\langle 2^{k}+1\rangle\text{ or }\langle 2^{k}-1\rangle,\ k\geq 3;\\ \mathbb{Z}/8,&G=\{\pm 1\}\quad\text{[MS16, Theorem 7.1.2; GL21, Proposition 7.15]};\\ \mathbb{Z}/2,&G=\{1\}\quad\text{[BR05, Theorem 43 ff.]}.\end{cases}\] _The restriction and transfer maps between any two closed subgroups \(G_{1}\leq G_{2}\leq\mathbb{Z}_{2}^{\times}\) with \([G_{2}:G_{1}]=2\) are given by the following seven cases (\(k\geq 3\) in Cases IV, V, VI)._

[The seven labelled case diagrams (Cases I-VII) are omitted here: apart from the leading identification \(\operatorname{Pic}_{K(1)}\left[E_{1}^{hG_{2}}\right]\cong\mathbb{Z}_{2}\langle 1,\sigma\rangle/(4(\sigma-1))\oplus\mathbb{Z}/2\) in Case I, they could not be recovered from this extraction.]
Proof.: We start with the known Mackey functor \(\underline{\mathrm{Pic}}(K\mathbb{R})\) in Case VII and use the descent spectral sequence in Corollary 3.3.14. Set \(G=\langle 2^{k}+1\rangle\) or \(\langle 2^{k}-1\rangle\) for \(k\geq 2\). Recalling that \(\{\pm 1\}\times\langle 2^{k}-1\rangle=\{\pm 1\}\times\langle 2^{k}+1\rangle\) as subgroups of \(\mathbb{Z}_{2}^{\times}\), we have \[\left(KO_{2}^{\wedge}\right)^{h\langle 2^{k}-1\rangle}\simeq\left(KU_{2}^{\wedge}\right)^{h(\{\pm 1\}\times\langle 2^{k}-1\rangle)}\simeq\left(KU_{2}^{\wedge}\right)^{h(\{\pm 1\}\times\langle 2^{k}+1\rangle)}\simeq\left(KO_{2}^{\wedge}\right)^{h\langle 2^{k}+1\rangle}.\] Notice that \(\mathrm{cd}_{2}(G)=\mathrm{cd}_{2}(\mathbb{Z}_{2})=1\). This implies the descent spectral sequences \[H_{c}^{s}\left(G;\pi_{t}\mathfrak{pic}_{K(1)}\left(KO_{2}^{\wedge}\right)\right)\Longrightarrow\pi_{t-s}\mathfrak{pic}_{K(1)}\left(\left(KO_{2}^{\wedge}\right)^{hG}\right)\cong\pi_{t-s}\mathfrak{pic}_{K(1)}\left(E_{1}^{h(\{\pm 1\}\times G)}\right)\] \[H_{c}^{s}\left(G;\pi_{t}\mathfrak{pic}_{K(1)}\left(KU_{2}^{\wedge}\right)\right)\Longrightarrow\pi_{t-s}\mathfrak{pic}_{K(1)}\left(\left(KU_{2}^{\wedge}\right)^{hG}\right)\cong\pi_{t-s}\mathfrak{pic}_{K(1)}\left(E_{1}^{hG}\right)\] collapse on the \(E_{2}\)-pages, yielding extension problems: \[0\xrightarrow{}H_{c}^{1}\left(G;\pi_{0}\left(KO_{2}^{\wedge}\right)^{\times}\right)\xrightarrow{}\mathrm{Pic}_{K(1)}\left[\left(KO_{2}^{\wedge}\right)^{hG}\right]\xrightarrow{}\left[\mathrm{Pic}_{K(1)}\left(KO_{2}^{\wedge}\right)\right]^{G}\xrightarrow{}0,\] \[0\xrightarrow{}H_{c}^{1}\left(G;\pi_{0}\left(KU_{2}^{\wedge}\right)^{\times}\right)\xrightarrow{}\mathrm{Pic}_{K(1)}\left[\left(KU_{2}^{\wedge}\right)^{hG}\right]\xrightarrow{}\left[\mathrm{Pic}_{K(1)}\left(KU_{2}^{\wedge}\right)\right]^{G}\xrightarrow{}0.\] The group \(G\) acts trivially on \(\pi_{0}\) and \(\pi_{1}\) of the Picard spectra \(\mathfrak{pic}_{K(1)}\left(KO_{2}^{\wedge}\right)\) and \(\mathfrak{pic}_{K(1)}\left(KU_{2}^{\wedge}\right)\). It follows that the \(H_{c}^{1}\) in the short exact sequences above is isomorphic to \(\mathrm{Hom}_{c}\), and every element in the Picard groups is fixed.
As the group \(G\cong\mathbb{Z}_{2}\) is a \(2\)-complete pro-cyclic group and \(\pi_{0}\left(KO_{2}^{\wedge}\right)^{\times}\cong\pi_{0}\left(KU_{2}^{\wedge}\right)^{\times}\cong\mathbb{Z}_{2}^{\times}\) is also \(2\)-complete, we have isomorphisms: \[H_{c}^{1}\left(G;\pi_{0}\left(KO_{2}^{\wedge}\right)^{\times}\right)\cong\mathrm{Hom}_{c}\left(G,\pi_{0}\left(KO_{2}^{\wedge}\right)^{\times}\right)\cong\mathbb{Z}_{2}^{\times},\] \[H_{c}^{1}\left(G;\pi_{0}\left(KU_{2}^{\wedge}\right)^{\times}\right)\cong\mathrm{Hom}_{c}\left(G,\pi_{0}\left(KU_{2}^{\wedge}\right)^{\times}\right)\cong\mathbb{Z}_{2}^{\times}.\] In the end, the extension problems are: \[0\xrightarrow{}\mathbb{Z}_{2}^{\times}\xrightarrow{}\mathrm{Pic}_{K(1)}\left[\left(KO_{2}^{\wedge}\right)^{hG}\right]\xrightarrow{}\mathbb{Z}/8\xrightarrow{}0,\] \[0\xrightarrow{}\mathbb{Z}_{2}^{\times}\xrightarrow{}\mathrm{Pic}_{K(1)}\left[\left(KU_{2}^{\wedge}\right)^{hG}\right]\xrightarrow{}\mathbb{Z}/2\xrightarrow{}0.\] To solve them, our starting point is the base case computed in [10, Theorem 3.3]: \[\mathrm{Pic}_{K(1)}\left(E_{1}^{h\mathbb{Z}_{2}^{\times}}\right)\cong\mathbb{Z}_{2}\langle 1,\sigma\rangle/(4(\sigma-1))\oplus\mathbb{Z}/2\cong\mathbb{Z}_{2}\oplus\mathbb{Z}/4\oplus\mathbb{Z}/2.\] We will bootstrap from there using restriction maps in group cohomology. Fix \(G=\langle 3\rangle\) or \(\langle 5\rangle\). To compute \(\mathrm{Pic}_{K(1)}\left(E_{1}^{hG}\right)\), consider the diagram (4.3.3) between extensions (the diagram is omitted here). The left column in this diagram is obtained by applying the functor \(\operatorname{Hom}_{c}(G,(-)^{\times})\) to the Tambara functor: \[\begin{array}{rcl}\pi_{0}(KO_{2}^{\wedge})&\cong&\mathbb{Z}_{2}\\ \pi_{0}(KU_{2}^{\wedge})&\cong&\mathbb{Z}_{2}.\end{array}\] The right column is obtained by applying the \(G\)-fixed point functor to the Mackey functor \(\underline{\operatorname{Pic}}(K\mathbb{R})\) in Case VII. An Ext-group computation shows \(\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\) is either \(\mathbb{Z}_{2}\oplus\mathbb{Z}/2\) or \(\mathbb{Z}_{2}\oplus\mathbb{Z}/2\oplus\mathbb{Z}/2\). Consider the image of the generator \(g\in\mathbb{Z}_{2}^{\times}\) under the map \(i\colon\mathbb{Z}_{2}^{\times}\to\operatorname{Pic}_{K(1)}\left[\left(KU_{2}^{\wedge}\right)^{hG}\right]\). For the left square in (4.3.3) to commute under restriction maps, the projection of \(i(g)\) onto the free summand must be a multiple of \(2\). This forces an isomorphism: \[\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\cong\mathbb{Z}_{2}\oplus\mathbb{Z}/2.\] The restriction and transfer maps can then be computed via a diagram chase. This finishes the computation of Case I. All other cases can be proved in similar ways. 1. \(G=\langle 3\rangle\) or \(\langle 5\rangle\leq\mathbb{Z}_{2}^{\times}\). 2. [Case missing: a page is absent from this extraction.] 3. \(\langle 9\rangle\leq G=\langle 3\rangle\) or \(\langle 5\rangle\). 4. \(G=\langle 2^{k}-1\rangle\) or \(\langle 2^{k}+1\rangle\leq\{\pm 1\}\times(1+2^{k}\mathbb{Z}_{2})\), where \(k\geq 3\).
\[0\longrightarrow\mathrm{Hom}_{c}\left(G,\pi_{0}\left(KO_{2}^{\wedge}\right)^{\times}\right)\longrightarrow\mathrm{Pic}_{K(1)}\left[\left(KO_{2}^{\wedge}\right)^{hG}\right]\longrightarrow\mathrm{Pic}_{K(1)}\left(KO_{2}^{\wedge}\right)^{G}\longrightarrow 0\]

[The comparison diagrams of extensions for these cases are garbled in this extraction; beyond the short exact sequence above, they are omitted.]

_Remark 4.3.4_.: When \(p=2\), the group
\(\operatorname{Pic}_{K(1)}\left(S_{K(1)}^{0}\right)\cong\operatorname{RO}(C_{2})_{2}^{\wedge}/(4(1-\sigma))\oplus\mathbb{Z}/2\) is not topologically cyclic. The element \(S^{1}_{K(1)}\) corresponds to \((1,0)\) under the identification. From the formulas in Main Theorem E, we observe that: \[\left(\Sigma E_{1}^{hG}\in\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\right)=\begin{cases}(1,0)\in\operatorname{RO}(C_{2})_{2}^{\wedge}/(4(1-\sigma))\oplus\mathbb{Z}/2,&G=\mathbb{Z}_{2}^{\times};\\ (1,0)\in\mathbb{Z}_{2}\oplus\mathbb{Z}/2,&G=\langle 3\rangle\text{ or }\langle 5\rangle;\\ (2^{k-3},1,0)\in\mathbb{Z}_{2}\oplus\mathbb{Z}/8\oplus\mathbb{Z}/2,&G=\{\pm 1\}\times\langle 2^{k}+1\rangle,\ k\geq 3;\\ (2^{k-3},1,0)\in\mathbb{Z}_{2}\oplus\mathbb{Z}/2\oplus\mathbb{Z}/2,&G=\langle 2^{k}-1\rangle\text{ or }\langle 2^{k}+1\rangle,\ k\geq 3;\\ 1\in\mathbb{Z}/8,&G=\{\pm 1\};\\ 1\in\mathbb{Z}/2,&G=\{1\}.\end{cases}\] As a result, the element \(\Sigma E_{1}^{hG}\) is _not_ divisible by \(2\) (or its powers) in the Picard group \(\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\) at the prime \(2\) for any closed subgroup \(G\). This is very different from the odd prime case in Remark 4.2.6.

From the computations in Main Theorem E, we verify Proposition 2.3.7 in an example below:

**Corollary 4.3.5**.: _The colimits of \(\operatorname{Pic}_{K(1)}\left(E_{1}^{h-}\right)\) as pro-abelian groups under transfinite compositions of restriction maps are:_ \[\operatorname*{colim}_{\operatorname{Res}}\operatorname{Pic}_{K(1)}\left[\left(KO_{2}^{\wedge}\right)^{h(1+2^{k}\mathbb{Z}_{2})}\right]\cong\mathbb{Z}/8\cong\operatorname{Pic}_{K(1)}\left(KO_{2}^{\wedge}\right),\] \[\operatorname*{colim}_{\operatorname{Res}}\operatorname{Pic}_{K(1)}\left[\left(KU_{2}^{\wedge}\right)^{h(1+2^{k}\mathbb{Z}_{2})}\right]\cong\mathbb{Z}/2\cong\operatorname{Pic}_{K(1)}\left(KU_{2}^{\wedge}\right).\]

Proof.: By Main Theorem E, the restriction maps \[\operatorname{Pic}_{K(1)}\left[\left(KO_{2}^{\wedge}\right)^{h(1+2^{k}\mathbb{Z}_{2})}\right]\longrightarrow\operatorname{Pic}_{K(1)}\left(KO_{2}^{\wedge}\right)\quad\text{and}\quad\operatorname{Pic}_{K(1)}\left[\left(KU_{2}^{\wedge}\right)^{h(1+2^{k}\mathbb{Z}_{2})}\right]\longrightarrow\operatorname{Pic}_{K(1)}\left(KU_{2}^{\wedge}\right)\] are projections onto summands when \(k\geq 3\). The claim now follows by reading off the restriction maps in Cases II, V and Cases I, III, respectively.

_Remark 4.3.6_.: Similar to Remark 4.2.8, the claim above fails when we take colimits in \(\mathsf{Ab}\); the colimit would then contain an extra \(\mathbb{Q}_{2}\) summand. This gives another example of Remark 2.3.8.

One important class in \(\operatorname{Pic}_{K(1)}\left(S^{0}_{K(1)}\right)\) at \(p=2\) is the exotic element \(\mathcal{E}_{K(1)}\). Under the isomorphism \(\operatorname{Pic}_{K(1)}\left(S^{0}_{K(1)}\right)\cong\operatorname{RO}(C_{2})_{2}^{\wedge}/(4(1-\sigma))\oplus\mathbb{Z}/2\), the exotic element corresponds to \(2-2\sigma\). Our computation above implies:

**Corollary 4.3.7**.: _The exotic element in \(\operatorname{Pic}_{K(1)}\left(S^{0}_{K(1)}\right)\) is detected by a closed subgroup \(G\leq\mathbb{Z}_{2}^{\times}\) iff \(\{\pm 1\}\leq G\). Equivalently, \(E_{1}^{hG}\hat{\wedge}\mathcal{E}_{K(1)}\simeq E_{1}^{hG}\) iff \(-1\notin G\)._

Proof.: We will compute the images of \(\mathcal{E}_{K(1)}\) under the restriction maps in the \(K(1)\)-local Picard Mackey functor. By Lemma 4.3.1, the index \(2\) subgroups of \(\mathbb{Z}_{2}^{\times}\) are \(\{\pm 1\}\times(1+8\mathbb{Z}_{2})\), \(\langle 3\rangle\), and \(\langle 5\rangle\).
Reading off the restriction maps in Main Theorem E, we can see: * Case I implies that the images of \(\mathcal{E}_{K(1)}\) in \(\operatorname{Pic}_{K(1)}\left(E_{1}^{h\langle 3\rangle}\right)\cong\operatorname{Pic}_{K(1)}\left(E_{1}^{h\langle 5\rangle}\right)\cong\mathbb{Z}_{2}\oplus\mathbb{Z}/2\) are both zero. * Case II implies that its image in \(\operatorname{Pic}_{K(1)}\left(E_{1}^{h(\{\pm 1\}\times(1+8\mathbb{Z}_{2}))}\right)\cong\mathbb{Z}_{2}\oplus\mathbb{Z}/8\oplus\mathbb{Z}/2\) is \((0,4,0)\). * Subgroups of \(\mathbb{Z}_{2}^{\times}\) not contained in \(\langle 3\rangle\) or \(\langle 5\rangle\) are of the form \(\{\pm 1\}\times(1+2^{k}\mathbb{Z}_{2})\). Case V states that the restriction map \(\operatorname{Pic}_{K(1)}\left(E_{1}^{h(\{\pm 1\}\times(1+2^{k}\mathbb{Z}_{2}))}\right)\to\operatorname{Pic}_{K(1)}\left(E_{1}^{h(\{\pm 1\}\times(1+2^{k+1}\mathbb{Z}_{2}))}\right)\) restricts to the identity on the \(\mathbb{Z}/8\)-summands. Hence the images of \(\mathcal{E}_{K(1)}\) in them are all nontrivial. * It is well known that \(KO_{2}^{\wedge}\hat{\wedge}\mathcal{E}_{K(1)}\simeq\Sigma^{4}KO_{2}^{\wedge}\) is nontrivial. This can also be obtained from the transfinite restriction computation in Corollary 4.3.5.
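In formula form, the proof can be recapped as follows (our summary of the bullet points above, writing \(\operatorname{Res}_{G}\) for the restriction of \(\mathcal{E}_{K(1)}\) to \(\operatorname{Pic}_{K(1)}\left(E_{1}^{hG}\right)\)): \[\operatorname{Res}_{\langle 3\rangle}\left(\mathcal{E}_{K(1)}\right)=\operatorname{Res}_{\langle 5\rangle}\left(\mathcal{E}_{K(1)}\right)=0,\qquad\operatorname{Res}_{\{\pm 1\}\times(1+8\mathbb{Z}_{2})}\left(\mathcal{E}_{K(1)}\right)=(0,4,0)\neq 0,\] and the nonvanishing persists along the chain \(\{\pm 1\}\times(1+2^{k}\mathbb{Z}_{2})\) by Case V, which yields exactly the dichotomy of Corollary 4.3.7.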
2309.12545
Provably Robust and Plausible Counterfactual Explanations for Neural Networks via Robust Optimisation
Counterfactual Explanations (CEs) have received increasing interest as a major methodology for explaining neural network classifiers. Usually, CEs for an input-output pair are defined as data points with minimum distance to the input that are classified with a different label than the output. To tackle the established problem that CEs are easily invalidated when model parameters are updated (e.g. retrained), studies have proposed ways to certify the robustness of CEs under model parameter changes bounded by a norm ball. However, existing methods targeting this form of robustness are not sound or complete, and they may generate implausible CEs, i.e., outliers wrt the training dataset. In fact, no existing method simultaneously optimises for closeness and plausibility while preserving robustness guarantees. In this work, we propose Provably RObust and PLAusible Counterfactual Explanations (PROPLACE), a method leveraging on robust optimisation techniques to address the aforementioned limitations in the literature. We formulate an iterative algorithm to compute provably robust CEs and prove its convergence, soundness and completeness. Through a comparative experiment involving six baselines, five of which target robustness, we show that PROPLACE achieves state-of-the-art performances against metrics on three evaluation aspects.
Junqi Jiang, Jianglin Lan, Francesco Leofante, Antonio Rago, Francesca Toni
2023-09-22T00:12:09Z
http://arxiv.org/abs/2309.12545v2
# Provably Robust and Plausible Counterfactual Explanations for Neural Networks via Robust Optimisation ###### Abstract Counterfactual Explanations (CEs) have received increasing interest as a major methodology for explaining neural network classifiers. Usually, CEs for an input-output pair are defined as data points with minimum distance to the input that are classified with a different label than the output. To tackle the established problem that CEs are easily invalidated when model parameters are updated (e.g. retrained), studies have proposed ways to certify the robustness of CEs under model parameter changes bounded by a norm ball. However, existing methods targeting this form of robustness are not sound or complete, and they may generate implausible CEs, i.e., outliers wrt the training dataset. In fact, no existing method simultaneously optimises for proximity and plausibility while preserving robustness guarantees. In this work, we propose Provably RObust and PLAusible Counterfactual Explanations (PROPLACE)1, a method leveraging on robust optimisation techniques to address the aforementioned limitations in the literature. We formulate an iterative algorithm to compute provably robust CEs and prove its convergence, soundness and completeness. Through a comparative experiment involving six baselines, five of which target robustness, we show that PROPLACE achieves state-of-the-art performances against metrics on three evaluation aspects. Footnote 1: The implementation is available at [https://github.com/junqijiang/proplace](https://github.com/junqijiang/proplace) ## 1 Introduction Counterfactual Explanations (CEs) have become a major methodology to explain NNs due to their simplicity, compliance with the regulations (Wachter _et al._, 2017), and alignment with human thinking (Celar and Byrne, 2023). Given an input point to a classifier, a CE is a modified input classified with another, often more desirable, label. Consider a customer that is denied a loan by the machine learning system of a bank. A CE the bank provided for this customer could be: _the loan application would have been approved, had you raised your annual salary by £6000_. Several desired properties of CEs have been identified in the literature, the most fundamental of which is _validity_, requiring that the CE needs to be correctly classified with a specified label (Tolomei _et al._, 2017). _Proximity_ refers to the closeness between the CE and the input measured by some distance metric, which translates to a measure of the effort the end user has to make to achieve the prescribed changes (Wachter _et al._, 2017). The CEs should also lie on the data manifold of the training dataset and not be outliers, which is assessed via _plausibility_ (Poyiadzi _et al._, 2020). Most recently, the _robustness_ of CEs, amounting to their validity under various types of uncertainty, has drawn increasing attention due to its real-world importance. In this work, we consider robustness to the model parameter changes occurring in the classifier on which the CE was generated. Continuing the loan example, assume the bank's machine learning model is retrained with new data, while, in the meantime, the customer has achieved a raise in salary (as prescribed by the CE). The customer may then return to the bank only to find that the previously specified CE is now invalidated by the new model. In this case, the bank could be seen as responsible by the user and could potentially be legally liable, risking financial and reputational damage to the organisation.
The quality of such unreliable CEs is also questionable: (Rawal _et al._, 2020; Dutta _et al._, 2022) have shown that CEs found by existing non-robust methods are prone to such invalidation due to their closeness to the decision boundary. Various methods have been proposed to tackle this issue. (Nguyen _et al._, 2022; Dutta _et al._, 2022; Hamman _et al._, 2023) focus on building heuristic methods using model confidence, Lipschitz continuity, and quantities related to the data distribution. (Upadhyay _et al._, 2021; Black _et al._, 2022; Jiang _et al._, 2023) consider optimising the validity of CEs under bounded model parameter changes, which have also been shown empirically to be robust in scenarios with unbounded parameter changes. Among the existing methods, only (Jiang _et al._, 2023) provides robustness guarantees via a formal approach; such guarantees are known to be lacking in the explainable AI (XAI) literature in general, aside from some notable examples, e.g. as introduced in (Marques-Silva and Ignatiev, 2022). Their method generates such provably robust CEs by iteratively tuning the hyperparameters of an arbitrary non-robust CE method and testing for robustness. However, this method cannot always guarantee soundness and is not complete, which is also the case for the method in [20]. Another limitation in the current literature is that the methods targeting this form of robustness guarantee do not find plausible CEs, limiting their practical applicability. Such limitations have motivated this work. After discussing relevant studies in Section 2, we introduce the robust optimisation problem for computing CEs with the proximity property as the objective, and the robustness and plausibility properties as constraints (Section 3). In Section 4, we then present Provably RObust and PLAusible CEs (PROPLACE), a method leveraging on robust optimisation techniques to address the limitation in the literature that no method optimises for proximity and plausibility while providing formal robustness guarantees. We show the (conditional) soundness and completeness of our method, and give a bi-level optimisation procedure that will converge and terminate. Finally, in our experiments, we compare PROPLACE with six existing CE methods, five of which target robustness, on four benchmark datasets. The results show that our method achieves the best robustness and plausibility, while demonstrating superior proximity among the most robust baselines. ## 2 Related Work As increasing interest has been focused on XAI, a plethora of CE generation methods have been proposed (see [14] for a recent overview). Given our focus on neural networks, we cover those explaining the outputs of these models. [21] proposed a gradient-based optimisation method targeting the validity and proximity of CEs. Similarly, using the mixed integer linear programming (MILP) representation of neural networks, [15] formulated the CE search as a constrained optimisation problem such that the resulting CEs are guaranteed to be valid. [16] advocated generating a diverse set of CEs for each input to enrich the information provided to the explainee. Several works also addressed _actionability_ constraints [20, 21, 22], only allowing changes in the actionable features of real users. [22] proposed a graph-based method to find a path of CEs that all lie within the data manifold. Several other works have proposed to use (variational) auto-encoders or nearest neighbours to induce plausibility [13, 23, 24].
Among these properties, actionability and plausibility are two orthogonal considerations which make the CEs realistic in practice, and trade-offs have been identified between plausibility and proximity [17]. In this work, our focus is on the property of robustness to changes in the model parameters, i.e. the weights and biases of the underlying classifier. Several studies looked at CEs under bounded model parameter changes of a neural network: [20] formulated a novel loss function and solved it using gradient-based methods. [18] proposed a heuristic based on the classifier's Lipschitz constant and the model confidence to search for robust CEs. [19] used interval abstractions [20] to certify the robustness against bounded parameter changes, and embedded the certification process into existing CE methods. Differently from our approach, these methods do not generate plausible CEs or guarantee that provably robust CEs are found.

Other relevant works place their focus on the robustness of CEs against unbounded model changes. [19] took the approach of augmenting the training data with previously generated CEs. [21] focused on the data distribution and formulated the problem as posterior probability ratio minimisation to generate robust and plausible CEs. By using first- and second-moment information, [20] proposed lower and upper bounds on the CEs' validity under random parameter updates and generated robust CEs using gradient descent. [14] defined a novel robustness measure based on the model confidences over the neighbourhood of the CE, and used dataset points that satisfy some robustness test to find close and plausible CEs. Their notion is then further re-calibrated for neural networks with probabilistic robustness guarantees in [13]. Trade-offs between robustness and proximity were discussed by [17] and [20]. Other forms of CEs' robustness have also been investigated, for example, robustness against: input perturbations [15, 16, 17, 18, 19, 21, 22, 23]; noise in the execution of CEs [17, 18, 19, 20]; and model multiplicity [17, 18].

## 3 Preliminaries and Problem Statement

**Notation.** Given an integer \(k\), we use \([k]\) to denote the set \(\{1,\ldots,k\}\). We use \(|S|\) to denote the cardinality of a set \(S\).

**Neural Network (NN).** We denote a NN as \(\mathcal{M}_{\Theta}:\mathcal{X}\subseteq\mathbb{R}^{d}\to\mathcal{Y}\subseteq \mathbb{N}\), where the inputs are \(d\)-dimensional vectors and the outputs are discrete class labels. \(\Theta\) represents the collection of parameters that characterise the NN. Throughout the paper, we will illustrate our method using the binary classification case (i.e. \(\mathcal{Y}=\{0,1\}\)), though the method is readily applicable to multi-class classification. Let \(\mathcal{M}_{\Theta}(x)\) also (with an abuse of notation) refer to the pre-sigmoid (logit) value in the NN. Then, for an input \(x\in\mathcal{X}\), we say \(\mathcal{M}_{\Theta}\) classifies \(x\) as class 1 if \(\mathcal{M}_{\Theta}(x)\geq 0\), otherwise \(\mathcal{M}_{\Theta}\) classifies \(x\) as class 0.

**Counterfactual Explanation (CE).** For an input \(x\in\mathcal{X}\) that is classified to the unwanted class 0 (assumed throughout the paper), a CE \(x^{\prime}\in\mathcal{X}\) is some other data point "similar" to the input, e.g. by some distance measure, but classified to the desired class 1.
**Definition 1**.: (CE) _Given a NN \(\mathcal{M}_{\Theta}\), an input \(x\in\mathcal{X}\) such that \(\mathcal{M}_{\Theta}(x)<0\), and a distance metric \(dist:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{+}\), a_ CE \(x^{\prime}\in\mathcal{X}\) _is such that:_

\[\operatorname*{arg\,min}_{x^{\prime}} \quad dist(x,x^{\prime})\]
\[\text{subject to} \quad\mathcal{M}_{\Theta}(x^{\prime})\geq 0\]

The minimum distance objective targets the minimum effort by the end user to achieve a change, which corresponds to the basic requirement of proximity mentioned in Section 1. In the literature, the normalised \(L_{1}\) distance is often adopted as the distance metric because it induces changes in fewer features of the CE [2]. However, methods that find such plain CEs usually produce unrealistic combinations of features, or outliers wrt the underlying data distribution of the training dataset. A plausible CE avoids these issues and is formally defined as follows:

**Definition 2**.: (Plausible CE) _Given a NN \(\mathcal{M}_{\Theta}\) and an input \(x\in\mathcal{X}\) such that \(\mathcal{M}_{\Theta}(x)<0\), a distance metric \(dist:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{+}\) and some plausible region \(\mathcal{X}_{plaus}\subseteq\mathbb{R}^{d}\), a_ plausible CE _is an \(x^{\prime}\) such that:_

\[\operatorname*{arg\,min}_{x^{\prime}} \quad dist(x,x^{\prime})\]
\[\text{subject to} \quad\mathcal{M}_{\Theta}(x^{\prime})\geq 0,\quad x^{\prime}\in\mathcal{X}_{plaus}\]

The plausible region \(\mathcal{X}_{plaus}\) may be used to eliminate any unrealistic feature values (e.g. a value of 0.95 for a discrete feature), or to indicate a densely populated region that is close to the data manifold of the training dataset. Additionally, it may also encode some actionability considerations, such as restricting immutable attributes (e.g. avoiding suggesting changes in gender) or specifying relations between input features (e.g. obtaining a doctoral degree should also take the user at least 4 years).

**Robustness of Counterfactual Explanations.** Studies have shown that CEs found by the above formulations are readily invalidated when small changes occur in the model parameters of the NNs. We formalise this in the following and begin by introducing a distance measure between two NNs and a definition of model shift. Note that Definitions 3 to 7 are adapted from [10].

**Definition 3**.: (Distance between two NNs) _Consider two NNs \(\mathcal{M}_{\Theta}\), \(\mathcal{M}_{\Theta^{\prime}}\) of the same architecture characterised by parameters \(\Theta\) and \(\Theta^{\prime}\). For \(0\leq p\leq\infty\), the_ p-distance _between \(\mathcal{M}_{\Theta}\) and \(\mathcal{M}_{\Theta^{\prime}}\) is \(d_{p}(\mathcal{M}_{\Theta},\mathcal{M}_{\Theta^{\prime}})=\|\Theta-\Theta^{ \prime}\|_{p}\)._

**Definition 4**.: (Bounded model shifts) _Given a NN \(\mathcal{M}_{\Theta}\), \(\delta\in\mathbb{R}_{>0}\) and \(0\leq p\leq\infty\), the set of bounded model shifts is defined as \(\Delta=\{\mathcal{M}_{\Theta^{\prime}}\mid d_{p}(\mathcal{M}_{\Theta}, \mathcal{M}_{\Theta^{\prime}})\leq\delta\}\)._
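To make these definitions concrete, the following minimal Python sketch (ours, not part of the paper) implements the validity check of Definition 1 and the \(p\)-distance and bounded-shift tests of Definitions 3 and 4; it assumes the NN is given as lists of numpy weight matrices and bias vectors with a single pre-sigmoid output node, and all function names are illustrative:

```python
import numpy as np

def logit(weights, biases, x):
    """Pre-sigmoid value M_Theta(x) of a feed-forward ReLU NN whose
    parameters are given as lists of numpy arrays (one pair per layer)."""
    v = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        v = np.maximum(W @ v + b, 0.0)                 # hidden layers: ReLU
    return (weights[-1] @ v + biases[-1]).item()       # single output node

def is_valid_ce(weights, biases, x_prime):
    """Validity: the CE must be classified to class 1, i.e. M_Theta(x') >= 0."""
    return logit(weights, biases, x_prime) >= 0.0

def p_distance(theta, theta_prime, p=np.inf):
    """d_p(M_Theta, M_Theta') of Definition 3, on flattened parameter vectors."""
    return np.linalg.norm(np.asarray(theta) - np.asarray(theta_prime), ord=p)

def in_bounded_shifts(theta, theta_prime, delta, p=np.inf):
    """Membership in the set Delta of bounded model shifts (Definition 4)."""
    return p_distance(theta, theta_prime, p) <= delta
```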
**Certifying Robustness.** Having presented the definitions required to formalise the optimisation problem for finding provably robust and plausible CEs, we now introduce another relevant technique that uses interval abstractions to certify the robustness of CEs. We refer to the certification process as the \(\Delta\)-robustness test; this will be used for parts of our method and also as an evaluation metric in the experiments. We assume \(p=\infty\) for bounded model shifts \(\Delta\) throughout the paper.

**Definition 5**.: (Interval abstraction of NN) _Consider a NN \(\mathcal{M}_{\Theta}\) with \(\Theta=[\theta_{0},\ldots,\theta_{d}]\). Given a set of bounded model shifts \(\Delta\), we define the interval abstraction of \(\mathcal{M}_{\Theta}\) under \(\Delta\) as the model \(\mathcal{I}_{(\Theta,\Delta)}:\mathcal{X}\rightarrow\mathcal{P}\mathbb{R}\) (for \(\mathcal{P}\mathbb{R}\) the set of all closed intervals over \(\mathbb{R}\)) such that:_

* \(\mathcal{M}_{\Theta}\) _and_ \(\mathcal{I}_{(\Theta,\Delta)}\) _have the same architecture;_
* \(\mathcal{I}_{(\Theta,\Delta)}\) _is parameterised by an interval-valued vector_ \(\boldsymbol{\Theta}=[\boldsymbol{\theta}_{0},\ldots,\boldsymbol{\theta}_{d}]\) _such that, for_ \(i\in\{0,\ldots,d\}\)_,_ \(\boldsymbol{\theta}_{i}=[\theta_{i}-\delta,\theta_{i}+\delta]\)_, where_ \(\delta\) _is the bound in_ \(\Delta\)_._

When \(p=\infty\), \(\boldsymbol{\theta}_{i}\) encodes the range of possible model parameter changes induced by applying \(\Delta\) to \(\mathcal{M}_{\Theta}\). Given a fixed input, by propagating the weight and bias intervals, the output range of \(\mathcal{I}_{(\Theta,\Delta)}\) exactly represents the possible output range for the input under the application of \(\Delta\) to \(\mathcal{M}_{\Theta}\) [10].

**Definition 6**.: (Interval abstraction of NN classification) _Let \(\mathcal{I}_{(\Theta,\Delta)}\) be the interval abstraction of a NN \(\mathcal{M}_{\Theta}\) under \(\Delta\). Given an input \(x\in\mathcal{X}\), let \(\mathcal{I}_{(\Theta,\Delta)}(x)=[l,u]\). Then, we say that \(\mathcal{I}_{(\Theta,\Delta)}\) classifies \(x\) as class \(1\) if \(l\geq 0\) (denoted, with an abuse of notation, \(\mathcal{I}_{(\Theta,\Delta)}(x)\geq 0\)), and as class \(0\) if \(u<0\) (denoted, with an abuse of notation, \(\mathcal{I}_{(\Theta,\Delta)}(x)<0\))._

Indeed, for an input, if the lower bound \(l\) of the pre-sigmoid output node interval \([l,u]\) of \(\mathcal{I}_{(\Theta,\Delta)}\) satisfies \(l\geq 0\), then all shifted models in \(\Delta\) would predict the input with a pre-sigmoid value that is greater than or equal to \(0\), all resulting in predicted label \(1\). We apply this intuition to the CE context:

**Definition 7**.: (\(\Delta\)-robust CE) _Consider an input \(x\in\mathcal{X}\) and a model \(\mathcal{M}_{\Theta}\) such that \(\mathcal{M}_{\Theta}(x)<0\). Let \(\mathcal{I}_{(\Theta,\Delta)}\) be the interval abstraction of \(\mathcal{M}_{\Theta}\) under \(\Delta\). We say that a_ CE \(x^{\prime}\) is \(\Delta\)-robust _iff \(\mathcal{I}_{(\Theta,\Delta)}(x^{\prime})\geq 0\)._

Checking whether a CE \(x^{\prime}\) is \(\Delta\)-robust requires the calculation of the lower bound \(l\) of the pre-sigmoid output node interval \([l,u]\) of \(\mathcal{I}_{(\Theta,\Delta)}\). This process can be encoded as a MILP program (see Appendix B in [10]).
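The interval propagation behind this test can be sketched in a few lines of numpy (our illustrative code, not the authors' implementation; the exact test used in the paper is the MILP encoding of [10] mentioned above). With the same list-of-arrays model representation as before and \(p=\infty\):

```python
import numpy as np

def interval_output(weights, biases, x, delta):
    """Propagate input x through the interval abstraction I_(Theta,Delta)
    of a ReLU network (Definition 5): every weight/bias lies in
    [theta_i - delta, theta_i + delta].  Returns the reachable
    pre-sigmoid output interval [l, u] (single output node assumed)."""
    lo = hi = np.asarray(x, dtype=float)
    n_layers = len(weights)
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wl, Wu = W - delta, W + delta
        # Interval product: extremes are attained at corners of the boxes.
        corners = np.stack([Wl * lo, Wl * hi, Wu * lo, Wu * hi])
        new_lo = corners.min(axis=0).sum(axis=1) + (b - delta)
        new_hi = corners.max(axis=0).sum(axis=1) + (b + delta)
        if i < n_layers - 1:   # hidden layers use ReLU
            lo, hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        else:                  # output layer stays pre-sigmoid
            lo, hi = new_lo, new_hi
    return lo.item(), hi.item()

def is_delta_robust(weights, biases, x_ce, delta):
    """Delta-robustness test (Definition 7): the CE is robust iff l >= 0."""
    l, _ = interval_output(weights, biases, x_ce, delta)
    return l >= 0.0
```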
**Optimising for Robustness and Plausibility.** Now we introduce the targeted provably robust and plausible optimisation problem based on Definitions 2 and 7, taking inspiration from robust optimisation techniques [13].

**Definition 8**.: (Provably robust and plausible CE) _Given a NN \(\mathcal{M}_{\Theta}\), an input \(x\in\mathcal{X}\) such that \(\mathcal{M}_{\Theta}(x)<0\), a distance metric \(dist:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{+}\) and some plausible region \(\mathcal{X}_{plaus}\subseteq\mathbb{R}^{d}\), let \(\mathcal{I}_{(\Theta,\Delta)}\) be the interval abstraction of \(\mathcal{M}_{\Theta}\) under the bounded model shifts \(\Delta\). Then, a_ provably robust and plausible CE \(x^{\prime}\in\mathcal{X}\) _is such that:_

\[\operatorname*{arg\,min}_{x^{\prime}} \quad dist(x,x^{\prime})\tag{1a}\]
\[\text{subject to} \quad\mathcal{I}_{(\Theta,\Delta)}(x^{\prime})\geq 0,\tag{1b}\]
\[\quad x^{\prime}\in\mathcal{X}_{plaus}\tag{1c}\]

The optimisation problem (1) can be equivalently rewritten as follows:

\[\operatorname*{arg\,min}_{x^{\prime}} \quad dist(x,x^{\prime})\tag{2a}\]
\[\text{subject to} \quad\max_{\mathcal{M}_{\Theta^{\prime}}\in\Delta}[-\mathcal{M}_{\Theta^{\prime}}(x^{\prime})]\leq 0,\tag{2b}\]
\[\quad x^{\prime}\in\mathcal{X}_{plaus}\tag{2c}\]

We next present a novel approach for solving this robust optimisation problem (2).

## 4 PROPLACE

The procedure for computing robust and plausible CEs, solving the optimisation problem (2), is summarised in Algorithm 1. We will first introduce how the plausible region \(\mathcal{X}_{plaus}\) is constructed in Section 4.1 (corresponding to Line 3, Algorithm 1). Then, in Section 4.2 we will present the bi-level optimisation method (corresponding to Lines 4-5, Algorithm 1) to solve the robust optimisation problem (2). In Section 4.2 we will also instantiate the complete bi-level optimisation formulations (in MILP form) of our method for NNs with ReLU activation functions. Finally, in Section 4.3 we discuss the soundness and completeness of Algorithm 1 and prove its convergence.

### 4.1 Identifying Search Space \(\mathcal{X}_{plaus}\)

As mentioned in Section 2, points from the training dataset (especially \(k\)-nearest-neighbours) are frequently utilised in the literature to induce plausibility. In this work, we propose to use a more specialised kind of dataset point, the \(k\) \(\Delta\)_-robust nearest-neighbours_, to construct a search space for CEs that is both plausible and robust.

**Definition 9**.: (\(k\) \(\Delta\)-robust nearest-neighbours) _Given a NN \(\mathcal{M}_{\Theta}\) and an input \(x\in\mathcal{X}\) such that \(\mathcal{M}_{\Theta}(x)<0\), a distance metric \(dist:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{+}\), a dataset \(\mathcal{D}\subseteq\mathbb{R}^{d}\) on which \(\mathcal{M}_{\Theta}\) is trained, and a set of bounded model shifts of interest \(\Delta\), let \(\mathcal{I}_{(\Theta,\Delta)}\) be the interval abstraction of \(\mathcal{M}_{\Theta}\) under \(\Delta\).
Then, the \(k\) \(\Delta\)-robust nearest-neighbours of \(x\) is a set \(S_{k,\Delta}\subseteq\mathcal{D}\) with cardinality \(|S_{k,\Delta}|=k\) such that:_

* \(\forall x^{\prime}\in S_{k,\Delta}\)_,_ \(x^{\prime}\) _is_ \(\Delta\)_-robust, i.e._ \(\mathcal{I}_{(\Theta,\Delta)}(x^{\prime})\geq 0\)_;_
* \(\forall x^{\prime\prime}\in\mathcal{D}\smallsetminus S_{k,\Delta}\)_, if_ \(x^{\prime\prime}\) _is_ \(\Delta\)_-robust, then_ \(dist(x,x^{\prime\prime})\geq\max_{x^{\prime}\in S_{k,\Delta}}\,dist(x,x^{\prime})\)_._

```
0: input \(x\), model \(\mathcal{M}_{\Theta}\),
1: training dataset \(\mathcal{D}=\{(x_{1},y_{1}),\ldots,(x_{n},y_{n})\}\),
2: set of bounded model shifts \(\Delta\),
3: plausible region to be used as CE search space \(\mathcal{X}_{plaus}\).
4: Init: \(x^{\prime}\leftarrow\varnothing\); \(\Delta^{\prime}\leftarrow\{\mathcal{M}_{\Theta}\}\)
5: Repeat until \((-\mathcal{M}_{\Theta^{\prime}}(x^{\prime}))\leq 0\):
      \(x^{\prime}\leftarrow\texttt{Outer\_minimisation}(\mathcal{M}_{\Theta},x,\mathcal{X}_{plaus},\Delta^{\prime})\)
      \(\mathcal{M}_{\Theta^{\prime}}\leftarrow\texttt{Inner\_maximisation}(x^{\prime},\Delta^{\prime},\Delta)\)
      \(\Delta^{\prime}\leftarrow\Delta^{\prime}\cup\{\mathcal{M}_{\Theta^{\prime}}\}\)
6: return \(x^{\prime}\)
```
**Algorithm 1** PROPLACE

The first constraint enforces the \(\Delta\)-robustness, and the second states that the points contained in the set are the \(k\) nearest points to the input \(x\) amongst all the \(\Delta\)-robust dataset points. In practice, in order to compute the \(k\) \(\Delta\)-robust nearest-neighbours, we fit a k-d tree on the dataset points that are classified to the desired class, then iteratively query the k-d tree for the nearest neighbour of an input, until the result satisfies the \(\Delta\)-robustness test (Definition 7); a sketch of this search is given after Definition 10 below.

Restricting the CE search space to the convex hull of these robust neighbours will likely induce high plausibility (and robustness). However, because these points are deep within the parts of the training dataset that are classified to the desired class, they may be far from the model's decision boundary, therefore resulting in large distances to the inputs. In fact, (Dutta et al., 2022; Hamman et al., 2023) adopted similar robust nearest neighbours (using other notions of robustness tests) as the final CEs, and poor proximity was observed in their experimental results. They have also shown that finding CEs using line search between proximal CEs and these robust neighbours can slightly improve proximity. In our case, since the validity of the CEs can be guaranteed by the optimisation procedures (Section 4.2), we expand the plausible search space across the decision boundary by taking into consideration the input, which is assumed to also be inside the data distribution.

**Definition 10**.: (Plausible region) _Given an input \(x\in\mathbb{R}^{d}\) and its \(k\) \(\Delta\)-robust nearest neighbours \(S_{k,\Delta}\), the plausible region \(\mathcal{X}_{plaus}\) is the convex hull of \(S_{k,\Delta}\cup\{x\}\)._

By restricting the CE search space to such a convex hull, the method has the flexibility to find close CEs (with \(x\) as a vertex), or robust and plausible CEs (with the robust neighbours as other vertices). This \(\mathcal{X}_{plaus}\) ensures the soundness and completeness of our method (Section 4.3).
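The neighbour search described above can be sketched as follows (our illustration only; it assumes scipy is available and takes the \(\Delta\)-robustness test, e.g. the earlier interval sketch, as the predicate `is_robust`):

```python
import numpy as np
from scipy.spatial import cKDTree

def k_robust_nearest_neighbours(X_class1, x, k, is_robust):
    """k Delta-robust nearest neighbours (Definition 9): fit a k-d tree on
    the dataset points classified to the desired class, then query ever
    larger neighbourhoods of x until k points pass the robustness test."""
    tree = cKDTree(X_class1)
    m = min(max(2 * k, 2), len(X_class1))
    while True:
        _, idx = tree.query(x, k=m)            # indices sorted by distance
        robust = [i for i in np.atleast_1d(idx) if is_robust(X_class1[i])]
        if len(robust) >= k or m == len(X_class1):
            return X_class1[robust[:k]]
        m = min(2 * m, len(X_class1))          # enlarge the queried set
```

The plausible region of Definition 10 is then the convex hull of the returned points together with \(x\); it enters the MILP of Section 4.2 through the \(\lambda\) variables of constraint (5g).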
### 4.2 Bi-level Optimisation Method with MILP

#### 4.2.1 Outer and Inner Optimisation Problems

We separate the robust optimisation problem (2) into an outer minimisation and an inner maximisation problem, as specified in Definitions 11 and 12.

**Definition 11**.: _Given a NN \(\mathcal{M}_{\Theta}\) and an input \(x\in\mathcal{X}\) such that \(\mathcal{M}_{\Theta}(x)<0\), a distance metric \(dist:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{+}\) and some plausible region \(\mathcal{X}_{plaus}\subseteq\mathbb{R}^{d}\), let \(\Delta^{\prime}\) be a set of shifted models. Then, the outer minimisation problem finds a CE \(x^{\prime}\) such that:_

\[\operatorname*{arg\,min}_{x^{\prime}} \quad dist(x,x^{\prime})\tag{3a}\]
\[\text{subject to} \quad-\mathcal{M}_{\Theta^{\prime}}(x^{\prime})\leq 0,\text{ for each }\mathcal{M}_{\Theta^{\prime}}\in\Delta^{\prime},\tag{3b}\]
\[\quad x^{\prime}\in\mathcal{X}_{plaus}\tag{3c}\]

**Definition 12**.: _Given a CE \(x^{\prime}\in\mathcal{X}\) found by the outer minimisation problem and the set of bounded model shifts \(\Delta\), the inner maximisation problem finds a shifted model \(\mathcal{M}_{\Theta^{\prime}}\) such that:_

\[\operatorname*{arg\,max}_{\mathcal{M}_{\Theta^{\prime}}} \quad-\mathcal{M}_{\Theta^{\prime}}(x^{\prime})\tag{4a}\]
\[\text{subject to} \quad\mathcal{M}_{\Theta^{\prime}}\in\Delta\tag{4b}\]

The outer minimisation problem relaxes the constraint that a CE should be robust to all possible model shifts in the set \(\Delta\); instead, it requires robustness wrt a subset of the model changes \(\Delta^{\prime}\subset\Delta\). \(\Delta^{\prime}\) is initialised with the original classification model \(\mathcal{M}_{\Theta}\). At the first execution, the outer minimisation finds the closest CE \(x^{\prime}\) valid for that model. Then, \(x^{\prime}\) is passed to the inner maximisation problem to compute the shifted model \(\mathcal{M}_{\Theta^{\prime}}\) that produces the lowest model output score on \(x^{\prime}\). This model shift is considered to be the worst-case perturbation on the model parameters in the set \(\Delta\), and is added to \(\Delta^{\prime}\). In the next iterations, \(x^{\prime}\) is updated to the closest CE valid for all the models in \(\Delta^{\prime}\) (outer), which is being expanded (inner), until convergence.

#### 4.2.2 MILP Formulations

The proposed bi-level optimisation method of Section 4.2.1 is independent of the specific NN structure. In this section, we take NNs with ReLU activation functions as an example to further elaborate the method. We denote the total number of hidden layers in an NN \(\mathcal{M}_{\Theta}\) as \(h\). We call \(N_{0}\), \(N_{h+1}\), and \(N_{i}\) the sets of input, output, and hidden layer nodes for \(i\in[h]\), and their node values are \(V_{0}\), \(V_{h+1}\), and \(V_{i}\). For hidden layer nodes, \(V_{i}=\text{ReLU}(W_{i}V_{i-1}+B_{i})\), and for output layer nodes, \(V_{h+1}=W_{h+1}V_{h}+B_{h+1}\), where \(W_{i}\) is the weight matrix connecting nodes at layers \(i-1\) and \(i\), and \(B_{i}\) is the bias vector of nodes \(N_{i}\). We instantiate the formulations using the normalised \(L_{1}\) distance, while our method PROPLACE can accommodate arbitrary distance metrics.

The outer minimisation problem is equivalent to the following MILP program, where \(M\) is a sufficiently large positive (big-M) constant, the binary variables \(Y_{i}^{j}\) encode the ReLU activation states, and the superscripts \(j\) on weight matrices and bias vectors indicate that they are model parameters of the \(j\)-th model \(\mathcal{M}_{\Theta}^{j}\in\Delta^{\prime}\):

\[\min_{x^{\prime},Y,\lambda} \quad\left\lVert x-x^{\prime}\right\rVert_{1}\tag{5a}\]
\[\text{s.t.}\quad V_{0}^{j}=x^{\prime},\ j\in[|\Delta^{\prime}|]\tag{5b}\]
\[\quad Y_{i}^{j}\in\{0,1\}^{|N_{i}|},\ i\in[h],\ j\in[|\Delta^{\prime}|]\tag{5c}\]
\[\quad 0\leq V_{i}^{j}\leq MY_{i}^{j},\ i\in[h],\ j\in[|\Delta^{\prime}|]\tag{5d}\]
\[\quad W_{i}^{j}V_{i-1}^{j}+B_{i}^{j}\leq V_{i}^{j}\leq\left(W_{i}^{j}V_{i-1}^{j}+B_{i}^{j}\right)+M(1-Y_{i}^{j}),\ i\in[h],\ j\in[|\Delta^{\prime}|]\tag{5e}\]
\[\quad W_{h+1}^{j}V_{h}^{j}+B_{h+1}^{j}\geq 0,\ j\in[|\Delta^{\prime}|]\tag{5f}\]
\[\quad\lambda_{l}\in[0,1],\ l\in[|S_{k,\Delta}\cup\{x\}|],\quad\sum_{l=1}^{|S_{k,\Delta}\cup\{x\}|}\lambda_{l}=1,\quad x^{\prime}=\sum_{l=1}^{|S_{k,\Delta}\cup\{x\}|}\lambda_{l}x_{l}^{\prime},\ x_{l}^{\prime}\in S_{k,\Delta}\cup\{x\}\tag{5g}\]
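The big-M encoding in (5c)-(5e) maps directly onto a MILP solver. The following condensed gurobipy sketch is our illustration (not the authors' implementation; their full encoding is in Appendix B of [10]); it assumes each model in \(\Delta^{\prime}\) is given as `(weights, biases)` lists of numpy arrays with a single output node, and `M` is a placeholder big-M constant that must upper-bound all hidden node values:

```python
import gurobipy as gp
from gurobipy import GRB
import numpy as np

M = 1e4  # placeholder big-M; must upper-bound all hidden node values

def add_relu_net(mdl, Ws, Bs, x_vars, tag):
    """Big-M encoding of V_i = ReLU(W_i V_{i-1} + B_i), i.e. constraints
    (5c)-(5e), for one model; returns its pre-sigmoid output expression."""
    prev = x_vars
    for i, (W, b) in enumerate(zip(Ws[:-1], Bs[:-1])):
        n = W.shape[0]
        v = mdl.addVars(n, lb=0.0, ub=M, name=f"v_{tag}_{i}")
        y = mdl.addVars(n, vtype=GRB.BINARY, name=f"y_{tag}_{i}")
        for j in range(n):
            pre = gp.quicksum(W[j, l] * prev[l] for l in range(len(prev))) + b[j]
            mdl.addConstr(v[j] >= pre)                   # lower part of (5e)
            mdl.addConstr(v[j] <= pre + M * (1 - y[j]))  # upper part of (5e)
            mdl.addConstr(v[j] <= M * y[j])              # (5d); v >= 0 via lb
        prev = v
    W, b = Ws[-1], Bs[-1]
    return gp.quicksum(W[0, l] * prev[l] for l in range(len(prev))) + b[0]

def outer_minimisation(models, x, hull_pts):
    """Program (5): the L1-closest point of the convex hull of
    S_{k,Delta} U {x} (constraint (5g)) valid for every model in Delta'."""
    d, n_pts = len(x), len(hull_pts)
    mdl = gp.Model("outer")
    xp = mdl.addVars(d, lb=-GRB.INFINITY, name="x_prime")
    t = mdl.addVars(d, lb=0.0, name="t")          # t_i >= |x_i - x'_i|
    lam = mdl.addVars(n_pts, lb=0.0, ub=1.0, name="lambda")
    mdl.addConstr(gp.quicksum(lam[l] for l in range(n_pts)) == 1)
    for i in range(d):
        mdl.addConstr(xp[i] == gp.quicksum(lam[l] * hull_pts[l][i]
                                           for l in range(n_pts)))
        mdl.addConstr(t[i] >= x[i] - xp[i])
        mdl.addConstr(t[i] >= xp[i] - x[i])
    for j, (Ws, Bs) in enumerate(models):         # (5f) for each model in Delta'
        mdl.addConstr(add_relu_net(mdl, Ws, Bs, xp, tag=j) >= 0)
    mdl.setObjective(gp.quicksum(t[i] for i in range(d)), GRB.MINIMIZE)
    mdl.optimize()
    return np.array([xp[i].X for i in range(d)])
```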
Constraints (5b) - (5f) and constraint (5g) correspond respectively to the robustness and plausibility requirements in (3b) - (3c). The inner maximisation can be formulated as the following MILP program, where the superscripts \(0\) on weight matrices and biases indicate that they are the model parameters of the original model \(\mathcal{M}_{\Theta}\), and \(\delta\) is the bound on the magnitude of model change specified in \(\Delta\):

\[\max_{W,B,Y} \quad-V_{h+1}\tag{6a}\]
\[\text{s.t.}\quad V_{0}=x^{\prime},\tag{6b}\]
\[\quad Y_{i}\in\{0,1\}^{|N_{i}|},\ i\in[h]\tag{6c}\]
\[\quad 0\leq V_{i}\leq MY_{i},\ i\in[h]\tag{6d}\]
\[\quad W_{i}V_{i-1}+B_{i}\leq V_{i}\leq\left(W_{i}V_{i-1}+B_{i}\right)+M(1-Y_{i}),\ i\in[h]\tag{6e}\]
\[\quad V_{h+1}=W_{h+1}V_{h}+B_{h+1}\tag{6f}\]
\[\quad W_{i}^{0}-\delta\leq W_{i}\leq W_{i}^{0}+\delta,\ i\in[h+1]\tag{6g}\]
\[\quad B_{i}^{0}-\delta\leq B_{i}\leq B_{i}^{0}+\delta,\ i\in[h+1]\tag{6h}\]

Due to the flexibility of such MILP programs, the framework accommodates continuous, ordinal, and categorical features (Mohammadi et al., 2021). Specific requirements like feature immutability or associations between features can also be encoded (Ustun et al., 2019). These MILP problems can be directly solved using off-the-shelf solvers such as Gurobi (Gurobi Optimization, LLC, 2023).

### 4.3 Soundness, Completeness and Convergence of Algorithm 1

We now discuss the soundness and completeness of our method, which restricts the search space for the CE to the plausible region \(\mathcal{X}_{plaus}\). By its definition, the vertices (except the input \(x\)) of \(\mathcal{X}_{plaus}\) are \(\Delta\)-robust, and thus satisfy the robustness requirement of our target problem (Definition 8). This means that there exist at least \(k\) points in the search space satisfying constraint (2c) that also satisfy constraint (2b), making these points feasible solutions for the target problem. We may thus make the following remark:

**Proposition 1**.: _Algorithm 1 is sound and complete if \(\exists\ x^{\prime}\in\mathcal{D}\) such that \(x^{\prime}\) is \(\Delta\)-robust._

Next, we adapt the method in (Mutapcic and Boyd, 2009, Section 5.2) to provide an upper bound on the maximum number of iterations of Algorithm 1.

**Proposition 2**.: _Given the requirements of Algorithm 1, assume the classifier \(\mathcal{M}_{\Theta}\) is Lipschitz continuous in \(x^{\prime}\). Then, the maximum number of iterations before Algorithm 1 terminates is bounded._

Proof.: Firstly, we introduce two small tolerance variables \(\sigma>t>0\) and modify the robustness constraint (2b) of Definition 8 to: \(\max_{\mathcal{M}_{\Theta^{\prime}}\in\Delta}[-\mathcal{M}_{\Theta^{\prime}}(x^{\prime})+\sigma]\leq t\), such that the correctness of the robustness guarantee is not affected. The termination condition for Algorithm 1 therefore becomes \(-\mathcal{M}_{\Theta^{\prime}}(x^{\prime})+\sigma\leq t\).
Consider the plausible CE problem (Definition 2 with the validity constraint modified to \(-\mathcal{M}_{\Theta}(x^{\prime})+\sigma\leq t\)), which is the problem solved by the first execution (iteration 1) of the outer minimisation problem in Algorithm 1. We denote its feasible region by \(\mathcal{F}\). Since \(\mathcal{M}_{\Theta}\) is a ReLU NN considered without the final (softmax or sigmoid) activation layer, \(\mathcal{M}_{\Theta}\) is Lipschitz continuous. Let \(f(x^{\prime},\mathcal{M}_{\Theta}):=-\mathcal{M}_{\Theta}(x^{\prime})+\sigma\); then \(f\) is Lipschitz continuous in \(x^{\prime}\) over \(\mathcal{F}\) with some Lipschitz constant \(L\). For a distance metric \(dist:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{+}\) and any \(x_{1},x_{2}\in\mathcal{F}\), we have:

\[|f(x_{1},\mathcal{M}_{\Theta})-f(x_{2},\mathcal{M}_{\Theta})|\leq L\times dist(x_{1},x_{2}) \tag{7}\]

At iteration \(m\), we denote the CE found by the outer minimisation by \(x^{\prime(m)}\), and the shifted model found by the inner maximisation by \(\mathcal{M}_{\Theta}^{(m)}\). Then, \(f(x^{\prime(m)},\mathcal{M}_{\Theta}^{(m)}):=-\mathcal{M}_{\Theta}^{(m)}(x^{\prime(m)})+\sigma\). Assume that at step \(m\) the algorithm has not terminated; then

\[f(x^{\prime(m)},\mathcal{M}_{\Theta}^{(m)})>t \tag{8}\]

For the iteration steps \(n>m\), \(x^{\prime(n)}\) is required to be valid on \(\mathcal{M}_{\Theta}^{(m)}\) as specified in the outer minimisation problem; we therefore have:

\[f(x^{\prime(n)},\mathcal{M}_{\Theta}^{(m)})\leq 0 \tag{9}\]

Combining (8) and (9) yields

\[f(x^{\prime(m)},\mathcal{M}_{\Theta}^{(m)})-f(x^{\prime(n)},\mathcal{M}_{\Theta}^{(m)})>t \tag{10}\]

Further combining (10) with (7), for the iteration steps \(n>m\),

\[dist(x^{\prime(m)},x^{\prime(n)})>\frac{t}{L} \tag{11}\]

Consider the balls \(B_{i}\), \(i=1,\ldots,m\), of diameter \(\frac{t}{L}\) centred at each intermediate result of the outer minimisation problem, \(x^{\prime(i)}\). From (11), it can be concluded that for any two intermediate results \(x^{\prime(i)}\), \(x^{\prime(j)}\), \(1\leq i<j\leq m\), \(dist(x^{\prime(i)},x^{\prime(j)})>\frac{t}{L}\), where \(x^{\prime(i)}\) and \(x^{\prime(j)}\) are the centres of the balls \(B_{i}\) and \(B_{j}\). Therefore, no two balls intersect, and the total volume of these balls is \(m\times U\times\left(\frac{t}{L}\right)^{d}\), where \(U\) is the volume of the unit-diameter ball in \(\mathbb{R}^{d}\). Consider a ball that encompasses the feasible region \(\mathcal{F}\) and let \(R\) be its radius. We know that the \(x^{\prime(i)}\), \(i=1,\ldots,m\), are all within the feasible region \(\mathcal{F}\); therefore, the ball \(B\) with radius \(R+\frac{t}{2L}\) covers all the small balls \(B_{i}\), \(i=1,\ldots,m\). Also, the volume of \(B\) is \(U\times\left(2R+\frac{t}{L}\right)^{d}\) and is greater than the total volume of the small balls, which means:

\[U\times\left(2R+\frac{t}{L}\right)^{d}>m\times U\times\left(\frac{t}{L}\right)^{d}\quad\Longrightarrow\quad m<\left(\frac{2RL}{t}+1\right)^{d}\]

It can be concluded that the number of steps for which Algorithm 1 has not terminated is bounded above by \(\left(\frac{2RL}{t}+1\right)^{d}\).

## 5 Experiments

In this section, we demonstrate that our proposed method achieves state-of-the-art performances compared with existing robust CE generation methods.

**Datasets and Classifiers.** Our experiments use four benchmark datasets in financial and legal contexts: the Adult Income (ADULT), COMPAS, Give Me Some Credit (GMC), and HELOC datasets.
We adopt the pre-processed versions available in the CARLA library (Pawelczyk et al., 2021), where each dataset contains binarised categorical features and min-max scaled continuous features. Labels 0 and 1 are the unwanted and the desired class, respectively. We split each dataset into two halves. We use the first half for training the NNs with which the robust CEs are generated, and the second half for model retraining and evaluating the robustness of the CEs. For making predictions and generating CEs, the NNs contain two hidden layers with ReLU activation functions. They are trained using the Adam optimiser with a batch size of 32, under the standard \(80\%/20\%\) train-test split setting. The classifiers achieved \(84\%\), \(85\%\), \(94\%\), and \(76\%\) accuracies on the test sets of the ADULT, COMPAS, GMC, and HELOC datasets, respectively. The retrained models have the same hyperparameters and training procedures as the original classifiers. Following the experimental setup in previous works (Dutta et al., 2022; Ferrario and Loi, 2022; Nguyen et al., 2022; Black et al., 2022; Upadhyay et al., 2021), for each dataset, we train 10 new models using both halves of the dataset to simulate the possible retrained models after new data are collected. We also train 10 new models using 99% of the first half of the dataset (a different 1% of the data is discarded for each training) to simulate leave-one-out retraining procedures. The random seed is perturbed for retraining. These 20 retrained models are used for evaluating the robustness of CEs.

**Evaluation Metrics.** The CEs are evaluated by the following metrics for their proximity, plausibility, and robustness.

* \(\ell_{1}\) measures the average \(L_{1}\) distance between a CE and its corresponding input.
* \(lof\) is the average 10-Local Outlier Factor (Breunig et al., 2000) of the generated CEs, which indicates to what extent a data point is an outlier wrt its \(k\) nearest neighbours in a specified dataset. \(lof\) values close to 1 indicate inliers; larger values (especially if greater than 1.5) indicate outliers.
* \(vr\), the validity of CEs on the retrained models, is defined as the average percentage of CEs that remain valid (classified to class 1) under the retrained models.
* \(v\Delta\) is the percentage of CEs that are \(\Delta\)-robust. The bound of model parameter changes \(\delta\) is specified to be the same as the value used in our algorithm.

**Baselines.** We compare our method with six state-of-the-art methods for generating CEs, including five which target robustness. WCE (Wachter et al., 2017) is the first method to generate CEs for NNs, which minimises the \(\ell_{1}\) distance between the CEs and the inputs. Robust Bayesian Recourse (RBR) (Nguyen et al., 2022) addresses the proximity, robustness, and plausibility of CEs. RobXNN (Dutta et al., 2022) is a nearest-neighbour-based method that focuses on a different notion of robustness to model changes. Robust Algorithmic Recourse (ROAR) (Upadhyay et al., 2021) optimises for proximity and the same \(\Delta\) notion of robustness. Proto-R and MILP-R are the methods proposed by (Jiang et al., 2023), which embed the \(\Delta\)-robustness test into the base methods of (Van Looveren and Klaise, 2021) and (Mohammadi et al., 2021), respectively. For all methods including ours, we tune their hyperparameters to maximise the validity after retraining, \(vr\).
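For instance, the proximity and plausibility metrics above can be computed along the following lines (an illustrative sketch assuming scikit-learn; its `score_samples` returns the negative LOF, hence the sign flip):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def l1_metric(X_inputs, X_ces):
    """Average L1 distance between each CE and its corresponding input."""
    return float(np.abs(np.asarray(X_inputs) - np.asarray(X_ces)).sum(axis=1).mean())

def lof_metric(X_train, X_ces, k=10):
    """Average 10-Local Outlier Factor of the CEs wrt the training data;
    values near 1 indicate inliers, values above 1.5 indicate outliers."""
    lof = LocalOutlierFactor(n_neighbors=k, novelty=True).fit(X_train)
    return float(np.mean(-lof.score_samples(X_ces)))
```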
**Results.** We randomly select 50 test points from each dataset that are classified to the unwanted class, then apply our method and each baseline to generate CEs for these test points. Results are shown in Table 1. As a non-robust baseline, the WCE method is the least robust, while producing high \(\ell_{1}\) costs and poor plausibility. Though RBR shows the lowest \(\ell_{1}\) results on three datasets, it has only moderate robustness against the naturally retrained models and is not \(\Delta\)-robust on any dataset. The rest of the baselines all show strong robustness on at least three datasets, with our method having slightly better \(\mathit{vr}\) and \(\mathit{v\Delta}\) results, evaluated at 100% in every experiment. This indicates that our method PROPLACE can not only guarantee robustness under bounded model parameter changes but also induce reliable robustness against unbounded model changes. In terms of plausibility, our method shows the best \(lof\) score in most experiments. Therefore, our method has addressed the limitation in the literature that no method optimises for guaranteed \(\Delta\)-robustness and plausibility. Though the two properties have established trade-offs with proximity (Pawelczyk et al., 2020; Pawelczyk et al., 2022; Upadhyay et al., 2021), our method still shows \(\ell_{1}\) costs lower than all methods except RBR, which is significantly less robust, and MILP-R, which finds outliers. For the COMPAS dataset, our method has the best proximity result among all baselines. Note that the Proto-R baseline, from the work which proposed the certification for \(\Delta\)-robustness, failed to find \(\Delta\)-robust CEs on the ADULT dataset, as was the case in their results (see Table 1, (Jiang et al., 2023)). This is because their method relies heavily on a base method to find CEs, and it is not always straightforward to direct the hyperparameter search towards optimising \(\Delta\)-robustness. With improved soundness and completeness (Proposition 1), PROPLACE always finds provably robust results.

## 6 Conclusions

We proposed a robust optimisation framework, PROPLACE, to generate provably robust and plausible CEs for neural networks. The method addresses the limitation in the literature that existing methods lack formal robustness guarantees to bounded model parameter changes and do not generate plausible CEs. We proved the soundness, completeness, and convergence of PROPLACE. Through a comparative study, we showed the efficacy of our method, which demonstrates the best robustness and plausibility results with better proximity than the most robust baselines. Despite the specific form of robustness we target, PROPLACE is also empirically robust to model retraining with unbounded parameter changes. Future work could include investigating the properties of actionability and diversity, evaluations with user studies, and investigating connections between \(\Delta\)-robustness and different notions of robustness measures.

## Acknowledgement

Jiang, Rago and Toni were partially funded by J.P. Morgan and by the Royal Academy of Engineering under the Research Chairs and Senior Research Fellowships scheme. Jianglin Lan is supported by a Leverhulme Trust Early Career Fellowship under Award CCF-2021-517. Leofante is supported by an Imperial College Research Fellowship grant. Rago and Toni were partially funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101020934).
Any views or opinions expressed herein are solely those of the authors listed.
2309.09603
Maximum cliques in a graph without disjoint given subgraph
The generalized Tur\'an number $\ex(n,K_s,F)$ denotes the maximum number of copies of $K_s$ in an $n$-vertex $F$-free graph. Let $kF$ denote $k$ disjoint copies of $F$. Gerbner, Methuku and Vizer [DM, 2019, 3130-3141] gave a lower bound for $\ex(n,K_3,2C_5)$ and obtained the magnitude of $\ex(n, K_s, kK_r)$. In this paper, we determine the exact value of $\ex(n,K_3,2C_5)$ and describe the unique extremal graph for large $n$. Moreover, we also determine the exact value of $\ex(n,K_r,(k+1)K_r)$, which generalizes some known results.
Fangfang Zhang, Yaojun Chen, Ervin Gyori, Xiutao Zhu
2023-09-18T09:18:49Z
http://arxiv.org/abs/2309.09603v1
# Maximum cliques in a graph without disjoint given subgraph

###### Abstract

The generalized Turan number \(\mathrm{ex}(n,K_{s},F)\) denotes the maximum number of copies of \(K_{s}\) in an \(n\)-vertex \(F\)-free graph. Let \(kF\) denote \(k\) disjoint copies of \(F\). Gerbner, Methuku and Vizer [DM, 2019, 3130-3141] gave a lower bound for \(\mathrm{ex}(n,K_{3},2C_{5})\) and obtained the magnitude of \(\mathrm{ex}(n,K_{s},kK_{r})\). In this paper, we determine the exact value of \(\mathrm{ex}(n,K_{3},2C_{5})\) and describe the unique extremal graph for large \(n\). Moreover, we also determine the exact value of \(\mathrm{ex}(n,K_{r},(k+1)K_{r})\), which generalizes some known results.

**Keywords**: Generalized Turan number, disjoint union, extremal graph.

## 1 Introduction

Let \(G\) be a graph with the set of vertices \(V(G)\). For two graphs \(G\) and \(H\), let \(G\cup H\) denote the disjoint union of \(G\) and \(H\), and \(kG\) denote \(k\) disjoint copies of \(G\). We write \(G+H\) for the join of \(G\) and \(H\), the graph obtained from \(G\cup H\) by adding all edges between \(V(G)\) and \(V(H)\). We use \(K_{n}\), \(C_{n}\), \(P_{n}\) to denote the complete graph, cycle, and path on \(n\) vertices, respectively. Let \(K_{s}(G)\) denote the number of copies of \(K_{s}\) in \(G\).

For a graph \(F\), the Turan number of \(F\), denoted by \(\mathrm{ex}(n,F)\), is the maximum number of edges in an \(F\)-free graph \(G\) on \(n\) vertices. In 1941, Turan [19] proved that the balanced complete \(r\)-partite graph on \(n\) vertices, called the Turan graph \(T_{r}(n)\), is the unique extremal graph of \(\mathrm{ex}(n,K_{r+1})\). Starting from this, the Turan problem has attracted a lot of attention. The study of disjoint copies of a given graph in the context of Turan numbers is very rich. The first result is due to Erdos and Gallai [5], who determined \(\mathrm{ex}(n,kK_{2})\) for all \(n\). Later Simonovits [18] and independently Moon [17] determined the Turan number of disjoint copies of cliques. In [10], Gorgol initiated the systematic investigation of Turan numbers of disjoint copies of graphs and proved the following.

**Theorem 1**: _(Gorgol [10]) For every graph \(F\) and \(k\geq 1\),_

\[\mathrm{ex}(n,kF)=\mathrm{ex}(n,F)+O(n).\]

In this paper we study the generalized Turan number of disjoint copies of graphs. The generalized Turan number \(\mathrm{ex}(n,T,F)\) is the maximum number of copies of \(T\) in any \(F\)-free graph on \(n\) vertices. Obviously, \(\mathrm{ex}(n,K_{2},F)=\mathrm{ex}(n,F)\). The earliest result in this topic is due to Zykov [23], who proved that \(\mathrm{ex}(n,K_{s},K_{r})=K_{s}(T_{r-1}(n))\).

**Theorem 2**: _(Zykov [23]) For all \(n\),_

\[\mathrm{ex}(n,K_{s},K_{r})=K_{s}(T_{r-1}(n)),\]

_and \(T_{r-1}(n)\) is the unique extremal graph._

In recent years, the problem of estimating generalized Turan numbers has received a lot of attention. Many classical results have been extended to the generalized Turan problem, see [1, 4, 11, 12, 15, 16, 20, 22]. Theorem 1 implies that the classical Turan numbers \(\mathrm{ex}(n,kF)\) and \(\mathrm{ex}(n,F)\) always have the same order of magnitude. However, this is not true for the generalized Turan number.
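Before moving on, Zykov's theorem is easy to check numerically on small instances (an illustrative sketch assuming networkx is available, whose built-in `turan_graph(n, r)` constructs the balanced complete \(r\)-partite graph \(T_{r}(n)\)):

```python
import networkx as nx
from itertools import combinations

def count_cliques(G, s):
    """Number of copies of K_s in G (brute force; fine for small n)."""
    return sum(1 for S in combinations(G.nodes, s)
               if all(G.has_edge(u, v) for u, v in combinations(S, 2)))

# Among K_4-free graphs on 12 vertices, T_3(12) maximises the number of K_3's.
T = nx.turan_graph(12, 3)        # three parts of size 4
print(count_cliques(T, 3))       # 4 * 4 * 4 = 64
```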
The function \(\mathrm{ex}(n,K_{3},C_{5})\) has attracted a lot of attention, see [2, 6, 7]; the best known upper bound is due to Lv and Lu.

**Theorem 3**: _(Lv and Lu [14]) \(\mathrm{ex}(n,K_{3},C_{5})\leq\frac{1}{2\sqrt{6}}n^{\frac{3}{2}}+o(n^{\frac{3}{2}})\)._

Gerbner, Methuku and Vizer [8] proved that \(\mathrm{ex}(n,K_{3},2C_{5})=\Theta(n^{2})\). This implies that the orders of magnitude of \(\mathrm{ex}(n,H,F)\) and \(\mathrm{ex}(n,H,kF)\) may differ. They also obtained a lower bound for \(\mathrm{ex}(n,K_{3},2C_{5})\), which is attained by joining a vertex to a copy of \(T_{2}(n-1)\). In this paper, we show that the graph \(K_{1}+T_{2}(n-1)\) is indeed the unique extremal graph for \(\mathrm{ex}(n,K_{3},2C_{5})\).

**Theorem 4**: _For sufficiently large \(n\),_

\[\mathrm{ex}(n,K_{3},2C_{5})=\left\lfloor\frac{(n-1)^{2}}{4}\right\rfloor,\]

_and \(K_{1}+T_{2}(n-1)\) is the unique extremal graph._

We also focus on the generalized Turan number of disjoint copies of cliques. Since \(\mathrm{ex}(n,K_{s},K_{r})\) is known [23], it is natural to study the function \(\mathrm{ex}(n,K_{s},kK_{r})\). Gerbner, Methuku and Vizer [8] obtained the asymptotic value of \(\mathrm{ex}(n,K_{s},kK_{r})\).

**Theorem 5**: _(Gerbner, Methuku and Vizer [8]) If \(s<r\), then_

\[\operatorname{ex}(n,K_{s},kK_{r})=(1+o(1)){r-1\choose s}\left(\frac{n}{r-1}\right)^{s}.\]

_If \(s\geq r\geq 2\) and \(k\geq 2\), then_

\[\operatorname{ex}(n,K_{s},kK_{r})=\Theta(n^{x}),\]

_where \(x=\left\lceil\frac{kr-s}{k-1}\right\rceil-1\)._

Liu and Wang [13] determined the exact value of \(\operatorname{ex}(n,K_{r},2K_{r})\) for \(r\geq 3\) and \(n\) sufficiently large. A new proof of \(\operatorname{ex}(n,K_{r},2K_{r})\) can be found in [21] by Yuan and Yang. Gerbner and Patkos [9] determined \(\operatorname{ex}(n,K_{s},2K_{r})\) for all \(s\geq r\geq 3\) and \(n\) sufficiently large. In this paper, we determine the value of \(\operatorname{ex}(n,K_{r},(k+1)K_{r})\) for all \(r\geq 2\), \(k\geq 1\) and \(n\) sufficiently large.

**Theorem 6**: _There exists a constant \(n_{0}(k,r)\) depending on \(k\) and \(r\geq 2\) such that when \(n\geq n_{0}(k,r)\),_

\[\operatorname{ex}(n,K_{r},(k+1)K_{r})=K_{r}(K_{k}+T_{r-1}(n-k)),\]

_and \(K_{k}+T_{r-1}(n-k)\) is the unique extremal graph._

The detailed proofs of Theorems 4 and 6 will be presented in Sections 2 and 3, respectively.

## 2 Proof of Theorem 4

Suppose \(n\) is large enough and let \(G\) be an \(n\)-vertex \(2C_{5}\)-free graph with \(\operatorname{ex}(n,K_{3},2C_{5})\) copies of triangles. Since \(K_{1}+T_{2}(n-1)\) contains no \(2C_{5}\), we have \(K_{3}(G)\geq\lfloor(n-1)^{2}/4\rfloor\). Next we will show that \(G=K_{1}+T_{2}(n-1)\).

Since \(n\) is sufficiently large, by Theorem 3, \(G\) must contain a copy of \(C_{5}\), say \(C=v_{1}v_{2}v_{3}v_{4}v_{5}v_{1}\). Then \(G\setminus C\) contains no \(C_{5}\). By Theorem 3 again, we have

\[K_{3}(G\setminus C)\leq\frac{1}{2\sqrt{2}}(n-5)^{\frac{3}{2}}+o((n-5)^{\frac{3}{2}}).\]

We claim that there is at least one vertex in \(V(C)\) whose neighborhood contains a copy of \(6P_{4}\). To prove this, we need a theorem obtained by Bushaw and Kettle [3].
**Theorem 7**: _(Bushaw and Kettle [3]) For \(k\geq 2\), \(\ell\geq 4\) and \(n\geq 2\ell+2k\ell(\lceil\ell/2\rceil+1){\ell\choose\lfloor\ell/2\rfloor}\),_

\[\operatorname{ex}(n,kP_{\ell})={k\lfloor\ell/2\rfloor-1\choose 2}+(k\lfloor\ell/2\rfloor-1)(n-k\lfloor\ell/2\rfloor+1)+\lambda,\]

_where \(\lambda=1\) if \(\ell\) is odd, and \(\lambda=0\) if \(\ell\) is even._

By Theorem 7, we know \(\mathrm{ex}(n,6P_{4})\leq\max\left\{\binom{872}{2},11(n-6)\right\}\). Now suppose no vertex in \(V(C)\) contains a \(6P_{4}\) in its neighborhood. Then the number of triangles containing \(v_{i}\) is at most

\[e(G[N(v_{i})])\leq\mathrm{ex}(n,6P_{4})=11n+o(n).\]

Therefore, the total number of triangles satisfies

\[K_{3}(G) \leq\frac{1}{2\sqrt{2}}n^{\frac{3}{2}}+o(n^{\frac{3}{2}})+55n+o(n)\]
\[=\frac{1}{2\sqrt{2}}n^{\frac{3}{2}}+o(n^{\frac{3}{2}})\]
\[<\frac{(n-1)^{2}}{4}.\]

The last inequality holds when \(n\) is large, a contradiction. Therefore, we may assume that \(v_{1}\) is a vertex in \(V(C)\) such that \(G[N(v_{1})]\) contains a copy of \(6P_{4}\). If \(G\setminus v_{1}\) contains a copy of \(C_{5}\), then at least one copy of \(P_{4}\) in \(G[N(v_{1})]\) does not intersect this \(C_{5}\), and hence we find two disjoint copies of \(C_{5}\), a contradiction. Thus \(G\setminus v_{1}\) is \(C_{5}\)-free. So we have

\[K_{3}(G)\leq e(G\setminus v_{1})+K_{3}(G\setminus v_{1}). \tag{2.1}\]

So if we show that \(e(G\setminus v_{1})+K_{3}(G\setminus v_{1})\leq\left\lfloor\frac{(n-1)^{2}}{4}\right\rfloor\), then the proof is completed. To prove this, we need the following lemma.

**Lemma 1**: _Let \(n\geq 2\binom{68}{3}\). If \(G\) is a \(C_{5}\)-free graph on \(n\) vertices, then_

\[e(G)+K_{3}(G)\leq\left\lfloor\frac{n^{2}}{4}\right\rfloor,\]

_and equality holds if and only if \(G=T_{2}(n)\)._

**Proof**. For each integer \(n\), let \(G_{n}\) be a \(C_{5}\)-free graph on \(n\) vertices such that \(e(G_{n})+K_{3}(G_{n})\) is maximum. For every \(n\), if \(G_{n}\) is also triangle-free, then by Turan's theorem [19], \(e(G_{n})\leq\left\lfloor\frac{n^{2}}{4}\right\rfloor\). Hence, \(e(G_{n})+K_{3}(G_{n})\leq\left\lfloor\frac{n^{2}}{4}\right\rfloor\) and equality holds if and only if \(G_{n}=T_{2}(n)\), and we are done. Next we shall prove that for \(n\geq 2\binom{68}{3}\), each \(G_{n}\) is triangle-free. To do this, let us define a function

\[\phi(n):=e(G_{n})+K_{3}(G_{n})-\left\lfloor\frac{n^{2}}{4}\right\rfloor.\]

Since \(T_{2}(n)\) is \(C_{5}\)-free and \(e(T_{2}(n))+K_{3}(T_{2}(n))=\left\lfloor\frac{n^{2}}{4}\right\rfloor\), we have \(\phi(n)\geq 0\). We claim that for \(n\geq 68\), if \(G_{n}\) contains a triangle, then

\[\phi(n)<\phi(n-1)-1. \tag{2.2}\]

First suppose that \(\delta(G_{n})\geq\frac{n}{4}-1\). Let \(xy\) be the edge of \(G_{n}\) which is contained in the most triangles. Set \(W=N(x)\cap N(y)=\{z_{1},\ldots,z_{w}\}\). Since \(G_{n}\) is \(C_{5}\)-free, \(G_{n}[W]\) contains no edge unless \(w\leq 2\). Let \(D_{0}=N(x)\setminus(W\cup\{y\})\), \(D_{i}=N(z_{i})\setminus(W\cup\{x,y\})\) for \(1\leq i\leq w\), and \(D_{w+1}=N(y)\setminus(W\cup\{x\})\). We next show that the sets \(D_{i}\) satisfy the following properties for \(0\leq i\leq w+1\).

* **(P1)** \(|D_{i}|\geq\frac{n}{4}-w-2\) for \(i=0,w+1\) and \(|D_{j}|\geq\frac{n}{4}-4\) for \(1\leq j\leq w\);
* **(P2)** \(D_{i}\cap D_{j}=\emptyset\) for \(0\leq i\neq j\leq w+1\);
* **(P3)** There are no edges between \(D_{i}\) and \(D_{j}\).

Since \(\delta(G_{n})\geq\frac{n}{4}-1\), **(P1)** is clearly true.
Since \(G_{n}\) is \(C_{5}\)-free, it is easy to see that \(D_{i}\cap D_{j}=\emptyset\) for \(1\leq i\neq j\leq w\). Suppose \(D_{0}\cap D_{i}\neq\emptyset\) or \(D_{w+1}\cap D_{i}\neq\emptyset\) for some \(1\leq i\leq w\); by symmetry, let \(v\in D_{0}\cap D_{i}\). Then by the choice of \(xy\), we have \(w\geq 2\). For \(1\leq j\leq w\) and \(j\neq i\), \(vz_{i}yz_{j}xv\) is a copy of \(C_{5}\), a contradiction. Thus **(P2)** holds. Suppose \(uv\) is an edge with \(u\in D_{i},v\in D_{j}\). Then \(uz_{i}yz_{j}vu\) is a copy of \(C_{5}\) if \(i,j\in[1,w]\); \(uz_{i}yxvu\) or \(uz_{i}xyvu\) is a copy of \(C_{5}\) if \(i\in[1,w]\) and \(j\in\{0,w+1\}\); and \(uxz_{1}yvu\) is a copy of \(C_{5}\) if \(i=0,j=w+1\), a contradiction. This implies that **(P3)** holds.

Let \(N=V(G_{n})\setminus(W\cup\{x,y\}\cup\bigcup_{i=0}^{w+1}D_{i})\). By **(P1)** and **(P2)**, we have

\[n=|N|+\sum_{i=0}^{w+1}|D_{i}|+w+2\geq|N|+2(\frac{n}{4}-w-2)+w(\frac{n}{4}-4)+w+2,\]

which implies \(w\leq 2\), \(|N|\leq\frac{n}{4}+7\) and \(D_{i}\neq\emptyset\) when \(n\geq 61\). By the choice of \(xy\), each vertex of \(D_{i}\) has at most two neighbors in \(G_{n}[D_{i}]\) for \(0\leq i\leq w+1\), since no edge of \(G_{n}\) lies in three triangles. By **(P3)** and \(\delta(G_{n})\geq\frac{n}{4}-1\), each vertex in \(D_{i}\) has at least \(\frac{n}{4}-4\) neighbors in \(N\). Let \(v_{0}\in D_{0}\) and \(v_{1}\in D_{w+1}\). Because \(n\geq 68\), we can deduce that \(2(\frac{n}{4}-4)>\frac{n}{4}+7\geq|N|\), and hence \(N(v_{0})\cap N(v_{1})\cap N\neq\emptyset\). Then \(uv_{0}xyv_{1}u\) is a copy of \(C_{5}\), where \(u\in N(v_{0})\cap N(v_{1})\cap N\), a contradiction. We are done if the minimum degree is at least \(\frac{n}{4}-1\).

Therefore, there is a vertex \(v\) in \(G_{n}\) such that \(d(v)<\frac{n}{4}-1\) when \(n\geq 68\). Because \(G_{n}\) is \(C_{5}\)-free, \(G_{n}[N(v)]\) is a disjoint union of stars and triangles, which implies \(e(G_{n}[N(v)])\leq d(v)\). If we delete \(v\) from \(G_{n}\), it will destroy at most \(d(v)\) triangles and delete \(d(v)\) edges. Hence,

\[\phi(n-1)-\phi(n)\]
\[= \left\lfloor\frac{n^{2}}{4}\right\rfloor-\left\lfloor\frac{(n-1)^{2}}{4}\right\rfloor-\{(e(G_{n})+K_{3}(G_{n}))-(e(G_{n-1})+K_{3}(G_{n-1}))\}\]
\[\geq \frac{2n-2}{4}-\{(e(G_{n})+K_{3}(G_{n}))-(e(G_{n}-v)+K_{3}(G_{n}-v))\}\]
\[\geq \frac{2n-2}{4}-2d(v)>\frac{2n-2}{4}-2(\frac{n}{4}-1)>1.\]

Hence our claim (inequality (2.2)) holds for \(n\geq 68\). Note that for \(n_{0}\geq 68\), if \(G_{n_{0}}\) contains no triangle, then \(\phi(n_{0})=0\). Moreover, for every \(n\geq n_{0}\), \(G_{n}\) contains no triangle either. Otherwise, we could find an integer \(n\) such that \(G_{n}\) contains a triangle but \(G_{n-1}\) is triangle-free. But then \(\phi(n)\leq\phi(n-1)-1<0\) by inequality (2.2), which is contrary to \(\phi(n)\geq 0\).

Now let \(n_{0}\) be the first integer after 68 such that \(G_{n_{0}}\) is triangle-free. Then

\[0\leq\phi(n_{0})\leq\phi(n_{0}-1)-1<\phi(68)-(n_{0}-68)\leq\binom{68}{2}+\binom{68}{3}+68-n_{0}.\]

This implies \(n_{0}\leq 2\binom{68}{3}\). Thus \(G_{n}\) must be triangle-free for \(n\geq 2\binom{68}{3}\geq n_{0}\). So \(e(G_{n})+K_{3}(G_{n})=e(G_{n})=\lfloor n^{2}/4\rfloor\) and \(G_{n}=T_{2}(n)\) by Turan's theorem [19]. The proof of Lemma 1 is completed.

Combining equation (2.1) and Lemma 1, we can see that when \(n\) is large, \(K_{3}(G)\leq\left\lfloor\frac{(n-1)^{2}}{4}\right\rfloor\) and equality holds if and only if \(G=K_{1}+T_{2}(n-1)\). The proof of Theorem 4 is completed. \(\blacksquare\)
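As a quick numerical sanity check of this count (illustrative only, with the same networkx assumption as the earlier sketch), the extremal graph \(K_{1}+T_{2}(n-1)\) indeed has exactly \(\lfloor(n-1)^{2}/4\rfloor\) triangles:

```python
import networkx as nx

n = 21
G = nx.turan_graph(n - 1, 2)                    # T_2(n-1): K_{10,10} here
apex = n - 1                                    # join one new vertex to all others
G.add_edges_from((apex, v) for v in range(n - 1))
triangles = sum(nx.triangles(G).values()) // 3  # each triangle counted at 3 nodes
print(triangles, (n - 1) ** 2 // 4)             # both print 100
```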
## 3 Proof of Theorem 6

We prove it by induction on \(r\), and in each case we always assume \(n\geq n_{0}(k,r)\). The base case \(r=2\) is the celebrated Erdos-Gallai theorem [5], which says that

\[\operatorname{ex}(n,K_{2},(k+1)K_{2})=\max\left\{\binom{2k+1}{2},(n-k)k+\binom{k}{2}\right\}.\]

As \(n\geq n_{0}(k,2)\), we know \(\operatorname{ex}(n,K_{2},(k+1)K_{2})=K_{2}(K_{k}+T_{1}(n-k))\). Let \(r\geq 3\) and suppose that the result holds for all \(r^{\prime}<r\). Next we consider \(\operatorname{ex}(n,K_{r},(k+1)K_{r})\).

Let \(G\) be a \((k+1)K_{r}\)-free graph on \(n\) vertices with \(\operatorname{ex}(n,K_{r},(k+1)K_{r})\) copies of \(K_{r}\). We may assume that \(G\) contains \(k\) disjoint copies of \(K_{r}\). Otherwise, we can add edges to \(G\) until the resulting graph contains \(k\) disjoint copies of \(K_{r}\); but then at least one \(K_{r}\) among these \(k\) disjoint copies is new, which implies that the number of copies of \(K_{r}\) has increased, a contradiction. Let

\[I=\{X_{1},\ldots,X_{k}\}\]

be a set of \(k\) disjoint \(r\)-cliques in \(G\), where \(X_{i}\) is a copy of \(K_{r}\). Let \(V(I)=\cup_{i=1}^{k}V(X_{i})\) and \(N=G\setminus V(I)\). Clearly, \(N\) contains no \(K_{r}\). We say a vertex \(v\) in \(I\) is joined to an \((r-1)\)-clique in \(N\) if \(v\) is adjacent to all vertices of this \((r-1)\)-clique. For each \(X_{i}\), \(i\in[k]\), we have the following property.

**Claim 1**: _Each \(X_{i}\) contains at most one vertex which is joined to at least \(kr+1\) disjoint \((r-1)\)-cliques in \(N\)._

**Proof**. If not, suppose \(u_{1},u_{1}^{\prime}\in V(X_{1})\) are both joined to at least \(kr+1\) disjoint \((r-1)\)-cliques. First we can find an \((r-1)\)-clique joined to \(u_{1}\) in \(N\). Since \(u_{1}^{\prime}\) is also joined to at least \(kr+1\) disjoint \((r-1)\)-cliques in \(N\), we can find another \((r-1)\)-clique joined to \(u_{1}^{\prime}\) which does not intersect the \((r-1)\)-clique joined to \(u_{1}\). Together with \(\{X_{2},\ldots,X_{k}\}\), we find a copy of \((k+1)K_{r}\), a contradiction.

By Claim 1, let \(A=\{X_{1},\ldots,X_{a}\}\) be the subset of \(I\) such that for each \(i\in[a]\) there exists a vertex in \(X_{i}\), say \(u_{i}\), that is joined to at least \(kr+1\) disjoint \((r-1)\)-cliques in \(N\). Let \(U=\{u_{1},\ldots,u_{a}\}\). Since \(N\) is \(K_{r}\)-free, each \(K_{r}\) in \(G\) must intersect some vertices in \(V(I)\). Then all \(r\)-cliques can be divided into two classes: the set of cliques in which all vertices are contained in \(V(N)\cup U\), and the set of cliques containing at least one vertex in \(V(I)\setminus U\). We simply use \(K_{r}(U)\) and \(K_{r}(\overline{U})\) to denote the numbers of copies of \(K_{r}\) in these two classes, respectively.

If a \(K_{r}\) in the first class contains \(s\) vertices in \(U\) and \(r-s\) vertices in \(N\), then the number of \(K_{r}\)'s of this type is at most \(\binom{a}{s}K_{r-s}(N)\). Since \(N\) is \(K_{r}\)-free, by Theorem 2, which says \(\operatorname{ex}(n,K_{s},K_{r})=K_{s}(T_{r-1}(n))\), we have \(K_{r-s}(N)\leq K_{r-s}\left(T_{r-1}(n-kr)\right)\leq\binom{r-1}{r-s}\left(\frac{n-kr}{r-1}\right)^{r-s}\). Then

\[K_{r}(U) \leq\sum_{s=1}^{r}\binom{a}{s}K_{r-s}(N)\]
\[\leq a\left(\frac{n-kr}{r-1}\right)^{r-1}+\binom{a}{2}\binom{r-1}{r-2}\left(\frac{n-kr}{r-1}\right)^{r-2}+O(n^{r-3}). \tag{3.1}\]

Next we calculate the size of \(K_{r}(\overline{U})\). Each vertex \(v\in V(I)\setminus U\) is joined to at most \(kr\) independent \((r-1)\)-cliques in \(N\).
Hence the number of copies of \(K_{r}\) containing \(v\) and \(r-1\) vertices of \(N\) is at most

\[K_{r-1}(G[N(v)\cap V(N)]) \leq\mathrm{ex}(n-kr,K_{r-1},(kr+1)K_{r-1})\]
\[=K_{r-1}\left(K_{kr}+T_{r-2}(n-2kr)\right)\]
\[\leq(kr)\left(\frac{n-2kr}{r-2}\right)^{r-2},\]

where the second equality comes from the induction hypothesis. Any other copy of \(K_{r}\) in \(K_{r}(\overline{U})\) contains at most \(r-2\) vertices in \(N\) and at least one vertex in \(V(I)\setminus U\). So the number of such \(r\)-cliques is at most

\[\sum_{s=2}^{r}\left(\binom{kr}{s}-\binom{a}{s}\right)K_{r-s}(N)\leq\left(\binom{kr}{2}-\binom{a}{2}\right)\binom{r-1}{r-2}\left(\frac{n-kr}{r-1}\right)^{r-2}+O(n^{r-3}).\]

Hence,

\[K_{r}(\overline{U})\leq\left(kr+\left(\binom{kr}{2}-\binom{a}{2}\right)\binom{r-1}{r-2}\right)\left(\frac{n-kr}{r-1}\right)^{r-2}+O(n^{r-3}). \tag{3.2}\]

Therefore, by inequalities (3.1) and (3.2), we have

\[K_{r}(G)\leq a\left(\frac{n-kr}{r-1}\right)^{r-1}+\left(kr+\binom{kr}{2}\binom{r-1}{r-2}\right)\left(\frac{n-kr}{r-1}\right)^{r-2}+O(n^{r-3}). \tag{3.3}\]

On the other hand, since \(K_{k}+T_{r-1}(n-k)\) is \((k+1)K_{r}\)-free, we know that

\[K_{r}(G)\geq k\left(\frac{n-k}{r-1}\right)^{r-1}+O(n^{r-2}). \tag{3.4}\]

When \(n\) is greater than some constant \(n_{0}(k,r)\), inequalities (3.3) and (3.4) together imply \(a=k\), and then \(U=\{u_{1},\ldots,u_{k}\}\).

Let \(G^{\prime}=G\setminus U\). We claim that \(G^{\prime}\) is also \(K_{r}\)-free. Suppose not; then \(G^{\prime}\) contains an \(r\)-clique, denoted by \(X_{0}^{\prime}\). Since each \(u_{i}\) is joined to at least \(kr+1\) independent \((r-1)\)-cliques in \(N\), at least \((k-1)r+1\) of them are disjoint from \(X_{0}^{\prime}\) for each \(i\in[k]\). Then we can find an \(r\)-clique \(X_{1}^{\prime}\) such that \(u_{1}\in X_{1}^{\prime}\) and \(V(X_{1}^{\prime})\cap V(X_{0}^{\prime})=\emptyset\). Next, we claim that we can find \(k\) pairwise disjoint \(r\)-cliques, each disjoint from \(X_{0}^{\prime}\). Suppose we have found pairwise disjoint \(r\)-cliques \(X_{1}^{\prime},\ldots,X_{i-1}^{\prime}\) such that \(u_{j}\in X_{j}^{\prime}\) for \(j\in[i-1]\) and \(i\leq k\). Then, in \(G^{\prime}[N(u_{i})]\), there are at least \((k-1)r+1-(i-1)(r-1)\geq 1\) independent \((r-1)\)-cliques which are disjoint from \(X_{0}^{\prime},X_{1}^{\prime},\ldots,X_{i-1}^{\prime}\). That is, we can choose an \((r-1)\)-clique, and thus an \(r\)-clique \(X_{i}^{\prime}\), such that \(u_{i}\in X_{i}^{\prime}\) and \(X_{0}^{\prime},X_{1}^{\prime},\ldots,X_{i}^{\prime}\) are pairwise disjoint. This procedure can be continued until we find \(k\) independent \(r\)-cliques \(X_{1}^{\prime},\ldots,X_{k}^{\prime}\). Then \(X_{0}^{\prime},X_{1}^{\prime},\ldots,X_{k}^{\prime}\) forms a \((k+1)K_{r}\), a contradiction.

Since \(G^{\prime}\) is \(K_{r}\)-free, by Zykov's theorem, \(K_{r-i}(G^{\prime})\leq K_{r-i}(T_{r-1}(n-k))\), and equality holds if and only if \(G^{\prime}=T_{r-1}(n-k)\). Thus

\[K_{r}(K_{k}+T_{r-1}(n-k))\leq K_{r}(G)\leq\sum_{i=0}^{r}{k\choose i}K_{r-i}(G^{\prime})=K_{r}(K_{k}+T_{r-1}(n-k)).\]

Equality throughout forces \(G=K_{k}+T_{r-1}(n-k)\). The proof of Theorem 6 is completed. \(\blacksquare\)

## 4 Acknowledgements

The research of Gyori is partially supported by the National Research, Development and Innovation Office NKFIH, grants K132696, SNN-135643 and K126853; Chen is supported by NSFC under grant numbers 12161141003 and 11931006; Zhang is supported by NSFC under grant number 12101298.
2309.00141
Causal Inference under Network Interference Using a Mixture of Randomized Experiments
In randomized experiments, the classic stable unit treatment value assumption (SUTVA) states that the outcome for one experimental unit does not depend on the treatment assigned to other units. However, the SUTVA assumption is often violated in applications such as online marketplaces and social networks where units interfere with each other. We consider the estimation of the average treatment effect in a network interference model using a mixed randomization design that combines two commonly used experimental methods: Bernoulli randomized design, where treatment is independently assigned for each individual unit, and cluster-based design, where treatment is assigned at an aggregate level. Essentially, a mixed randomization experiment runs these two designs simultaneously, allowing it to better measure the effect of network interference. We propose an unbiased estimator for the average treatment effect under the mixed design and show the variance of the estimator is bounded by $O({d^2}n^{-1}p^{-1})$ where $d$ is the maximum degree of the network, $n$ is the network size, and $p$ is the probability of treatment. We also establish a lower bound of $\Omega(d^{1.5}n^{-1}p^{-1})$ for the variance of any mixed design. For a family of sparse networks characterized by a growth constant $\kappa \leq d$, we improve the upper bound to $O({\kappa^7 d}n^{-1}p^{-1})$. Furthermore, when interference weights on the edges of the network are unknown, we propose a weight-invariant design that achieves a variance bound of $O({d^3}n^{-1}p^{-1})$.
Yiming Jiang, He Wang
2023-08-31T21:26:36Z
http://arxiv.org/abs/2309.00141v1
# Causal Inference under Network Interference Using a Mixture of Randomized Experiments

###### Abstract

In randomized experiments, the classic _stable unit treatment value assumption_ (SUTVA) states that the outcome for one experimental unit does not depend on the treatment assigned to other units. However, the SUTVA assumption is often violated in applications such as online marketplaces and social networks where units interfere with each other. We consider the estimation of the average treatment effect in a network interference model using a mixed randomization design that combines two commonly used experimental methods: Bernoulli randomized design, where treatment is independently assigned for each individual unit, and cluster-based design, where treatment is assigned at an aggregate level. Essentially, a mixed randomization experiment runs these two designs simultaneously, allowing it to better measure the effect of network interference. We propose an unbiased estimator for the average treatment effect under the mixed design and show the variance of the estimator is bounded by \(O(d^{2}n^{-1}p^{-1})\) where \(d\) is the maximum degree of the network, \(n\) is the network size, and \(p\) is the probability of treatment. We also establish a lower bound of \(\Omega(d^{1.5}n^{-1}p^{-1})\) for the variance of any mixed design. For a family of sparse networks characterized by a growth constant \(\kappa\leq d\), we improve the upper bound to \(O(\kappa^{7}dn^{-1}p^{-1})\). Furthermore, when interference weights on the edges of the network are unknown, we propose a weight-invariant design that achieves a variance bound of \(O(d^{3}n^{-1}p^{-1})\).

**Keywords**: experimental design, causal inference, network interference, clustering, SUTVA

## 1 Introduction

Randomized experiments are a powerful tool for understanding the causal impact of changes. For example, online marketplaces use randomized experiments to test the effectiveness of new features (Johari et al., 2022); tech companies build large-scale experimentation platforms to improve their products and systems (Paluck et al., 2016; Saveski et al., 2017); economists and social science researchers rely heavily on randomized experiments to understand the effects of economic and social changes (Leung, 2022). The overarching goal of randomized experiments is to impute the difference between two universes that cannot be observed simultaneously: a factual universe where the treatments assigned to all units remain unchanged, and a counterfactual universe where all units receive a new treatment. The difference in the average outcome between these two universes is commonly known as the average treatment effect (ATE).

Standard estimation approaches for the ATE such as A/B testing rely heavily on a fundamental independence assumption: the _stable unit treatment value assumption_ (SUTVA) (Imbens and Rubin, 2015), which states that the outcome for each experimental unit depends only on the treatment it received, and not on the treatment assigned to any other units. However, SUTVA is often violated in settings where experimental units interact with each other. In such cases, ignoring the interference between treatment and control groups may lead to significant estimation bias of the ATE. For example, suppose an e-commerce retailer wants to determine the causal effect on its sales of applying a price promotion to all products.
One simple approach is to run an experiment that applies the promotion randomly to a subset of products, and then compare the average sales of promoted products with non-promoted products. However, simply comparing the difference in the two groups will likely overestimate the average treatment effect of the promotion, because customers may alter their shopping behavior in the experiment by substituting non-promoted products for similar promoted products. As such, if the retailer decides to implement the price promotion for all products on its platform based on the result of this experiment, the realized sales lift may be smaller than expected. One common approach for reducing the estimation bias of the ATE in the presence of interference is _cluster-based randomization_, which assigns treatment at an aggregate level rather than at the individual level. In such a design, the population of the experiment is often represented by an underlying network \(G(V,E)\), where each vertex in the set \(V\) represents an individual experimental unit, and a directed edge \((i,j)\in E\) indicates that the outcome of unit \(i\) may be affected by the treatment of another unit \(j\). The weights on the edges represent the magnitude of interference between two units. Note that the weights on \((i,j)\) and \((j,i)\) can be different, as the interference between two units may be asymmetric. A cluster-based randomization design partitions the network into disjoint subsets of vertices (i.e., clusters) and applies the same treatment to all units within the same cluster. Clearly, using larger clusters will reduce the number of edges across clusters and will better capture the network interference effect between units, which leads to a smaller bias. However, a downside of using larger clusters is that there will be fewer _randomization units_ in the experiment, resulting in higher variance in the ATE estimation. Considering this bias-variance trade-off, a substantial body of literature delves into optimal cluster design under network interference (Ugander et al., 2013; Candogan et al., 2023; Leung, 2022b; Brennan et al., 2022). In this paper, we propose a modification of cluster-based randomization by mixing it with another commonly used experiment design in which each unit is individually and independently randomized, i.e., _Bernoulli randomization_. As one experimental unit cannot receive two different treatments at the same time, the mixed design determines which randomization method applies to a given unit by a coin flip. Essentially, the mixed design _simultaneously_ runs a cluster-based randomized experiment on one half of the network and an individually randomized experiment on the other half. The idea of mixing cluster-based and Bernoulli designs was proposed by Saveski et al. (2017) and Pouget-Abadie et al. (2019b), who use the mixed design to detect the existence of a network interference effect (i.e., hypothesis testing for SUTVA). However, to the best of our knowledge, the effect of using a mixed randomization design in estimating average treatment effects has not been previously studied. We summarize the main results of the paper as follows. In Section 3, we define an unbiased estimator for the ATE under mixed randomization designs and establish bounds on the variance of the estimator given any cluster design (which is used in the cluster-based part of the experiment). Due to computational challenges, these variance bounds cannot be directly applied to optimize cluster design.
We propose several heuristics for clustering and analyze the variance of the estimator under these heuristics. For general networks, we prove a bound on the ATE estimation variance in the order of \(O(d^{2}/(np))\) where \(d\) is the maximum degree of the network, \(n\) is the network size, and \(p\) is the fraction of units that receive treatment. Our result improves the state-of-the-art \(O(d^{6}/(np))\) upper bound in Ugander and Yin (2023), which uses a cluster-based randomization design. We also establish a lower bound of \(\Omega(d^{1.5}/(np))\) for any mixed design that matches the upper bound up to a factor of \(\sqrt{d}\). For a family of sparse networks characterized by a growth constant \(\kappa\ll d\), we improve the upper bound to \(O(\kappa^{7}d/(np))\). In Section 4, we extend the analysis to the case where the degrees of interference between units, namely, the edge weights of the network, are unknown _a priori_. We propose a weight-invariant mixed randomization design that is agnostic to edge weights, and show the algorithm has an upper bound on the estimation variance in the order of \(O(d^{3}/(np))\). Furthermore, we show that our proposed estimators are consistent and asymptotically normal provided that the graph is sufficiently sparse (Section 5). Finally, we use numerical experiments to illustrate the efficacy of the proposed mixed randomization algorithms in Section 6 and compare them with (pure) cluster-based randomization.

## 2 Related Works

There is extensive literature that explores experiment design where experimental units interfere with each other. Halloran and Struchiner (1995) considered designing and analyzing experiments with interference motivated by infectious diseases. Hudgens and Halloran (2008) extended their work by estimating the causal effects of treatments when interference exists within disjoint groups. Manski (2013) and Aronow and Samii (2017) further generalized the approach to problems where interference is represented by a treatment vector known as exposure mapping. The exposure mapping framework forms a foundation for subsequent studies of various types of experiments, such as graph experiments (Ugander et al., 2013; Ugander and Yin, 2023), bipartite experiments (Pouget-Abadie et al., 2019; Brennan et al., 2022; Harshaw et al., 2023), switchback experiments (Bojinov et al., 2023), and micro-randomized trials (Li and Wager, 2022), among others. When the SUTVA assumption is violated by interference between units, several experiment design approaches have been proposed to mitigate the estimation bias. One common approach is cluster-based randomization, which partitions units into clusters and then assigns random treatments at the cluster level. Previous studies have shown that cluster-based design can reduce bias when interference exists (Eckles et al., 2016; Leung, 2022b) as well as estimation variance under neighborhood interference (using an unbiased Horvitz-Thompson (HT) estimator) (Ugander et al., 2013; Ugander and Yin, 2023). Another approach is using a multi-level randomization design where treatments are applied with different proportions to groups, assuming that there is no interference between groups (Hudgens and Halloran, 2008; Tchetgen and VanderWeele, 2012). A third approach is using regression adjustment in experiments with covariates to improve estimation precision (Chin, 2019). Our work studies causal inference with network interference.
Specifically, we consider a setting with linear exposure mapping, which is a generalization of the partial interference assumption and the stratified interference assumption in Hudgens and Halloran (2008). The linear exposure mapping assumption is also common in studies of bipartite experiments (Brennan et al., 2022; Harshaw et al., 2023) and social network experiments (Bramoulle et al., 2009; Toulis and Kao, 2013; Pouget-Abadie et al., 2019). Another common assumption on the exposure mapping in the literature is that interference comes solely from a unit's neighbors (Ugander et al., 2013; Ugander and Yin, 2023), otherwise allowing for inference under arbitrary exposure mappings. More recently, problems under more general exposure mapping assumptions have been studied. For example, Leung (2022b) studied the case where interference may be present between any pair of units but the extent of interference diminishes with spatial distance. A similar analysis was also presented in Leung (2022a). Sävje (2023) considered the setting where the exposure mapping may be misspecified and discussed the conditions under which unbiased causal effects can be estimated. Our main goal in this paper is to correctly estimate the average treatment effect (ATE) under network interference. In addition to this goal, there are other goals that have attracted the attention of researchers. Saveski et al. (2017) and Pouget-Abadie et al. (2019) considered a hypothesis testing problem of detecting the existence of network interference. Our approach is partially motivated by the randomization over randomized experiments approach proposed in those papers. Sävje et al. (2021) considered the inferential target of estimating the direct treatment effect under unknown interference. Candogan et al. (2023) developed a family of experiments called independent block randomization, which aims at correlating the treatment probabilities of clusters to minimize the worst-case variance.

## 3 Analysis of Mixed Randomization Design

### Notation

Let \([n]:=\{1,...,n\}\) for any \(n\in\mathbb{Z}_{+}\), and let \([a:b]\) be the set of integers \(\{a,a+1,...,b-1,b\}\) for any \(a,b\in\mathbb{Z}\) with \(a\leq b\). Throughout the paper, we use boldface symbols (e.g., \(\mathbf{x}\)) to denote a vector and \(x_{i}\) to denote its \(i^{\text{th}}\) element.

### Network Causal Inference Model

Below, we define a causal inference problem with network interference using the exposure mapping framework (Aronow and Samii, 2017). Consider an experiment with \(n\) individual units. Each unit \(i\in[n]\) is associated with a potential outcome function \(Y_{i}(\cdot):\{0,1\}^{n}\rightarrow\mathbb{R}\), which maps a treatment vector \(\mathbf{z}\in\{0,1\}^{n}\) to a potential outcome value. Let the average expected outcome of all units given the assignment \(\mathbf{z}\) be \[\mu(\mathbf{z})=\frac{1}{n}\sum_{i=1}^{n}Y_{i}(\mathbf{z}).\] Let \(\mathbf{1}\in\mathbb{R}^{n}\) be a vector of 1's and \(\mathbf{0}\in\mathbb{R}^{n}\) be a vector of 0's. Our goal is to measure the (global) average treatment effect (ATE): \(\mathbb{E}[\mu(\mathbf{1})-\mu(\mathbf{0})]\). Because the number of possible treatment assignments is exponential in \(n\) (i.e., \(2^{n}\)), making causal inferences on the ATE is impractical unless further assumptions are imposed on the structure of the outcome function \(Y_{i}\).
Throughout this paper, we assume that \(Y_{i}(\mathbf{z})\) depends only on the treatments of units from a subset \(\mathcal{N}_{i}\subset V\), which is referred to as the _neighborhood set_ of unit \(i\). More formally, for any two assignment vectors \(\mathbf{z}\) and \(\mathbf{z}^{\prime}\) such that \(z_{i}=z_{i}^{\prime}\) and \(z_{j}=z_{j}^{\prime}\) for all \(j\in\mathcal{N}_{i}\), we have \(Y_{i}(\mathbf{z})=Y_{i}(\mathbf{z}^{\prime})\). We assume that the neighborhood set is known and correctly specified for each unit. By connecting units with their neighbors, we get a directed graph \(G(V,E)\) where \(V\) is the set of all units and \(E\) is the set of all edges. Without loss of generality, we assume the graph is connected; otherwise, the problem can be decomposed into separate subgraphs. We focus on a linear exposure model where the magnitude of interference on unit \(i\) from unit \(j\) is measured by a constant factor \(v_{ij}\), \(\forall(i,j)\in E\). Without loss of generality, we assume the weights are normalized such that \(\sum_{j\in\mathcal{N}_{i}}|v_{ij}|\leq 1,\ \forall i\in V\) and \(\sum_{i}\sum_{j\in\mathcal{N}_{i}}v_{ij}\geq 0\). The outcome function \(Y_{i}(\mathbf{z})\) of unit \(i\) is given by \[Y_{i}(\mathbf{z})=\alpha_{i}+z_{i}\beta_{i}+\gamma\sum_{j\in\mathcal{N}_{i}}v_{ij}z_{j},\quad\forall i\in V. \tag{1}\] In Eq (1), \(\alpha_{i}\in\mathbb{R}\) is the potential outcome of unit \(i\) without any treatment, \(\beta_{i}\in\mathbb{R}\) is the direct treatment effect on unit \(i\), and \(\gamma v_{ij}z_{j}\) is the spillover effect from an adjacent unit \(j\in\mathcal{N}_{i}\). Because \(\sum_{j\in\mathcal{N}_{i}}|v_{ij}|\leq 1\), the value \(|\gamma|\) can be interpreted as the maximum absolute interference effect from neighbors. We treat the coefficients \(\alpha_{i}\), \(\beta_{i}\), and \(\gamma\) as fixed but unknown, so that the only randomness in the observation model is due to the choice of treatment assignment \(\mathbf{z}\). In this section, we assume the weights \(v_{ij}\) are known. For example, Saveski et al. (2017) consider a social network experiment where each unit represents a user and \(v_{ij}\) is the same for all \((i,j)\in E\) (up to a normalization factor). As another example, for an experiment on a transportation network, we may set \(v_{ij}\) as the traffic flow from node \(j\) to node \(i\). The setting with unknown \(v_{ij}\)'s will be studied in Section 4. The linear exposure mapping model in Eq (1) can be viewed as a first-order approximation of more general network interference patterns. Similar linear exposure assumptions are common in the causal inference literature (Toulis and Kao, 2013; Brennan et al., 2022; Harshaw et al., 2023) and our model generalizes some previous studies by allowing weighted interference from neighboring units. Throughout the paper, we assume the outcomes are bounded, that is, there exist constants \(Y_{L}\) and \(Y_{M}\) such that \(Y_{L}\leq Y_{i}(\mathbf{z})\leq Y_{M}\) for all \(i\in V\) and all \(\mathbf{z}\in\{0,1\}^{n}\). We also assume \(Y_{L}>0\) without loss of generality (e.g., by shifting the outcomes of all units by a fixed constant). One common approach used in the literature to reduce the bias of the ATE under network interference is _cluster-based_ randomization, where the network is partitioned into disjoint clusters, and units within the same cluster are assigned the same treatment.
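For concreteness, the following minimal Python sketch (ours, not from the paper; the toy graph and constants are illustrative) evaluates the linear exposure model in Eq (1) and the resulting true ATE. It also sets up a running example reused in the sketches below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: n units, neighbor lists N[i], and weights v[i][k] for the
# k-th neighbor of unit i, normalized so that sum_j |v_ij| <= 1.
n = 6
N = [[1, 2], [0], [0, 3], [2, 4], [3, 5], [4]]
v = [np.full(len(Ni), 1.0 / len(Ni)) for Ni in N]

alpha = rng.uniform(4.0, 6.0, size=n)   # baseline outcomes alpha_i > 0
beta = rng.uniform(0.0, 1.0, size=n)    # direct treatment effects beta_i
gamma = 0.5                             # interference strength

def outcomes(z):
    """Potential outcomes Y_i(z) under the linear exposure model of Eq (1)."""
    y = np.empty(n)
    for i in range(n):
        spill = sum(v[i][k] * z[j] for k, j in enumerate(N[i]))
        y[i] = alpha[i] + z[i] * beta[i] + gamma * spill
    return y

# True ATE under Eq (1): mean(beta) + gamma * sum_i sum_j v_ij / n.
ate = beta.mean() + gamma * sum(v[i].sum() for i in range(n)) / n
print(outcomes(np.ones(n)).mean() - outcomes(np.zeros(n)).mean())  # equals ate
print(ate)
```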
Specifically, let \(m\in\mathbb{Z}_{+}\) and let the clusters \(\{C_{1},C_{2},...,C_{m}\}\) be a partition of the vertex set \(V\). We define \(c(i)\) as a function that maps the index of unit \(i\) to its associated cluster, i.e., \(c(i)=t\) if and only if \(i\in C_{t}\). The standard estimator for the ATE in cluster-based design is the _Horvitz-Thompson_ estimator, which is defined below. **Definition 1** (Cluster-based Randomization).: _Let \(p\in(0,1)\) be the treatment probability. Under a cluster-based randomized design, each unit's treatment is a Bernoulli random variable, i.e. \(z_{i}\sim\) Bernoulli(p) for all \(i\in V\). Moreover, for \(i\neq j\), the correlation coefficient of \(z_{i}\) and \(z_{j}\) is \(\text{corr}(z_{i},z_{j})=\mathbf{1}\{c(i)=c(j)\}\). The Horvitz-Thompson (HT) estimator for the ATE in cluster-based design (denoted by the subscript cb) is_ \[\tau_{cb}=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{z_{i}}{p}-\frac{1-z_{i}}{1-p}\right)Y_{i}(\mathbf{z}). \tag{2}\] We note that there exist other definitions of HT estimators in cluster-based randomization design (e.g. Ugander et al., 2013) that make it unbiased, but the HT estimator defined in Eq (2) is the one commonly used in practice. It is easily verified that for the linear exposure mapping model given by Eq (1), the true ATE is equal to \(\bar{\beta}+\gamma\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij}/n\), where \(\bar{\beta}=\sum_{i=1}^{n}\beta_{i}/n\). However, the expectation of the HT estimator is not always equal to the ATE. **Lemma 1**.: _Let \(\tau_{cb}\) be the HT estimator of cluster-based design in Definition 1. Its expectation is given by_ \[\mathbb{E}[\tau_{cb}]=\bar{\beta}+\frac{\gamma}{n}\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij}\mathbf{1}\{c(i)=c(j)\}. \tag{3}\] Proof.: Let \(t_{i}=z_{i}/p-(1-z_{i})/(1-p)\). Since \(\mathbb{E}[t_{i}z_{j}]=\mathbf{1}\{c(i)=c(j)\}\), the result follows by substituting Eq (1) into Eq (2) and taking the expectation. By Lemma 1, the HT estimator \(\tau_{cb}\) of a cluster-based design is unbiased if and only if all clusters are independent, i.e., there do not exist \(i,j\in V\) such that \(j\in\mathcal{N}_{i}\) but \(c(i)\neq c(j)\). Nevertheless, in most randomized experiments, the underlying network cannot be perfectly partitioned into independent clusters. As a motivating example, let us consider a \(d\)-regular network (i.e. \(|\mathcal{N}_{i}|=d\) for every \(i\in V\)) with \(v_{ij}=1/d\) for all \((i,j)\in E\). The ATE for this network is \(\bar{\beta}+\gamma\). However, assuming between-cluster connections account for half of all the connections, the HT estimator gives \(\mathbb{E}[\tau_{cb}]=\bar{\beta}+\gamma/2\), which may cause significant bias in the ATE estimation. To reduce the bias, we may additionally use a _Bernoulli randomization design_, where each individual unit receives an i.i.d. treatment with probability \(p\). The Bernoulli design can be viewed as a special case of cluster-based design where each cluster contains a single unit, i.e. \(m=n\) and \(|C_{1}|=|C_{2}|=...=|C_{n}|=1\). By Lemma 1, the mean of the HT estimator \(\tau_{b}\) for the Bernoulli design is \(\bar{\beta}\). If we were able to obtain estimates from _both_ \(\tau_{cb}\) and \(\tau_{b}\), we could use a new estimator \(\tau^{\prime}:=2\tau_{cb}-\tau_{b}\), which is an unbiased estimator for the ATE in this example, with \(\mathbb{E}[\tau^{\prime}]=\bar{\beta}+\gamma\).
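In simulation, of course, \(\tau_{cb}\) can be evaluated directly. A quick Monte Carlo check of Lemma 1 on the toy instance above (our own illustration; the two clusters are arbitrary):

```python
def ht_cluster_estimate(clusters, p, rng):
    """One draw of the HT estimator tau_cb in Eq (2) under cluster-based design."""
    z = np.zeros(n)
    for C in clusters:
        z[list(C)] = float(rng.random() < p)   # one coin per cluster
    t = z / p - (1 - z) / (1 - p)
    return np.mean(t * outcomes(z))

clusters = [{0, 1, 2}, {3, 4, 5}]
p = 0.5
rng = np.random.default_rng(1)
mc_mean = np.mean([ht_cluster_estimate(clusters, p, rng) for _ in range(20000)])
# Lemma 1: only within-cluster spillover terms survive in the expectation.
within = sum(v[i][k] for i in range(n) for k, j in enumerate(N[i])
             if any(i in C and j in C for C in clusters))
print(mc_mean, beta.mean() + gamma * within / n)  # the two should be close
```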
However, estimating \(\tau^{\prime}\) directly is infeasible, as each unit in the network can only receive one treatment at a time, so we cannot obtain both \(\tau_{cb}\) and \(\tau_{b}\) from a single experiment. The challenge above motivates us to consider a mixture of the two randomization designs, which we call the _mixed design_. In the mixed design, we first draw random numbers to determine whether a unit will receive treatment according to a cluster-based design or a Bernoulli design, and then assign treatments according to each design separately. The key idea of the mixed design is to simultaneously obtain estimates from both the cluster-based and Bernoulli designs. The formal definition of the mixed design will be presented in the next subsection. Finally, we provide lower and upper bounds on the variance of the HT estimator \(\tau_{cb}\) for the cluster-based design, which will be used in subsequent sections. The proof is given in Appendix A. **Proposition 2**.: _The variance of the HT estimator under cluster-based design is bounded by_ \[\left(\frac{Y_{L}^{2}}{p(1-p)}-\frac{1}{2}Y_{M}^{2}\right)\eta+\gamma^{2}\delta\leq\mathrm{Var}(\tau_{cb})\leq\left((\frac{1}{p(1-p)}+2)Y_{M}^{2}-Y_{M}Y_{L}\right)\eta+\gamma^{2}\delta, \tag{4}\] _where_ \[\eta:=\frac{1}{n^{2}}\sum_{k=1}^{m}|C_{k}|^{2},\quad\delta:=\frac{1}{n^{2}}\sum_{1\leq k\neq l\leq m}\left(\sum_{i\in C_{k}}\sum_{i^{\prime}\in\mathcal{N}_{i}\cap C_{l}}v_{ii^{\prime}}\right)\left(\sum_{j\in C_{l}}\sum_{j^{\prime}\in\mathcal{N}_{j}\cap C_{k}}v_{jj^{\prime}}\right).\]

### Mixed Randomization Design

Our proposed mixed design assigns treatment using a two-stage randomization. In the first stage, all nodes in the network are randomly partitioned into two subsets. In the second stage, one subset receives random treatments according to a cluster-based design, and the other subset receives random treatments according to a Bernoulli design. (Random numbers used in the two stages are independent.) We note that similar mixed designs were proposed by Saveski et al. (2017); Pouget-Abadie et al. (2019b). However, their papers focus on hypothesis testing for the existence of network interference, whereas our work aims to estimate the ATE. The complete steps of creating a mixed randomization design are described as follows:

1. Let \(\{C_{1},C_{2},\ldots,C_{m}\}\) be a set of clusters that forms a partition of the network \(G(V,E)\). Let \(\mathbf{W}\in\{0,1\}^{m}\) be a random vector indicating the assignment of each cluster to either cluster-based or Bernoulli design. That is, \(W_{j}=1\) implies that the cluster \(C_{j}\) uses cluster-based design, and \(W_{j}=0\) implies the cluster \(C_{j}\) uses Bernoulli design. We require the \(W_{j}\) to be i.i.d. Bernoulli random variables with mean \(1/2\) (\(\forall j\in[m]\)). Let \(\tilde{\mathbf{w}}\in\{0,1\}^{n}\) be the corresponding unit-level assignment vector, namely, \(\tilde{w}_{i}=W_{c(i)}\) for all \(i\in V\).
2. For cluster \(C_{j}\) (\(j=1,\ldots,m\)): if \(W_{j}=1\), assign all the units in this cluster treatment \(z_{i}=1\) with probability \(p\), and assign all the units treatment \(z_{i}=0\) with probability \(1-p\). If \(W_{j}=0\), assign treatment to the units in cluster \(C_{j}\) by i.i.d. Bernoulli variables with mean \(p\).

Note that the above procedure does not specify how the clusters \(\{C_{1},C_{2},\ldots,C_{m}\}\) should be chosen. We will study how to design clustering algorithms to optimize the efficiency of the mixed randomization design in the next subsection.
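The two-stage procedure above is straightforward to implement; here is a hedged sketch (helper names are ours) continuing the running example:

```python
def mixed_assignment(clusters, p, rng):
    """Two-stage mixed randomization: stage 1 flips a fair coin W_j per cluster
    (W_j = 1: cluster-based, W_j = 0: Bernoulli); stage 2 assigns treatments."""
    z = np.zeros(n)
    w_tilde = np.zeros(n)                   # unit-level design indicator
    for C in clusters:
        idx = list(C)
        W = rng.random() < 0.5              # stage 1: pick the design
        w_tilde[idx] = float(W)
        if W:                               # cluster-based: one coin per cluster
            z[idx] = float(rng.random() < p)
        else:                               # Bernoulli: one coin per unit
            z[idx] = (rng.random(len(idx)) < p).astype(float)
    return z, w_tilde
```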
Define the following estimators \(\tau_{c}\) for those units using cluster-based design and \(\tau_{b}\) for those units using Bernoulli design: \[\tau_{c} =\frac{2}{n}\sum_{i=1}^{n}\tilde{w}_{i}(\frac{z_{i}}{p}-\frac{1-z_{i}}{1-p})Y_{i}(\mathbf{z}), \tag{5a}\] \[\tau_{b} =\frac{2}{n}\sum_{i=1}^{n}(1-\tilde{w}_{i})(\frac{z_{i}}{p}-\frac{1-z_{i}}{1-p})Y_{i}(\mathbf{z}). \tag{5b}\] It is easily verified that \(\mathbb{E}[\tau_{c}]=\bar{\beta}+\gamma\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij}\mathbf{1}\{c(i)=c(j)\}/n\) and \(\mathbb{E}[\tau_{b}]=\bar{\beta}\) (the proof is similar to Lemma 1 and is thus omitted). Using this fact, we define the estimator \(\tau\) for the mixed randomization design. **Definition 2** (ATE Estimator in Mixed Design).: _Let_ \[\tau:=\rho\tau_{c}-(\rho-1)\tau_{b},\] _where \(\tau_{c}\) and \(\tau_{b}\) are defined in (5a) and (5b), respectively, and_ \[\rho:=\frac{\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij}}{\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij}\mathbf{1}\{c(i)=c(j)\}}. \tag{6}\] The following lemma shows that \(\tau\) is indeed an unbiased estimator for the ATE. **Lemma 3**.: _If \(\rho\) is given by Eq (6), we have_ \[\mathbb{E}[\tau]=\mathbb{E}[\mu(\mathbf{1})-\mu(\mathbf{0})]=\bar{\beta}+\frac{\gamma}{n}\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij}.\] Proof.: The equality follows immediately from Definition 2 and the expressions for \(\mathbb{E}[\tau_{c}]\) and \(\mathbb{E}[\tau_{b}]\) above. Note that computing the coefficient \(\rho\) by Eq (6) requires full knowledge of the weight coefficients \(v_{ij}\) on all edges \((i,j)\in E\). Analogous to Proposition 2, we have the following lower and upper bounds on the variance of \(\tau\). **Proposition 4**.: _Suppose there exist \(Y_{L}>0\) and \(Y_{M}>0\) such that \(Y_{L}\leq Y_{i}(\mathbf{z})\leq Y_{M}\) for all \(i\in V\) and all \(\mathbf{z}\in\{0,1\}^{n}\). Then, we have_ \[\mathrm{Var}(\tau)\leq \left((\frac{2}{p(1-p)}+1)Y_{M}^{2}-Y_{M}Y_{L}-Y_{L}^{2}\right)\rho^{2}\eta+\gamma^{2}\rho^{2}\delta+O\left(\frac{\rho^{2}}{np(1-p)}\right), \tag{7}\] \[\mathrm{Var}(\tau)\geq \left(\frac{2}{p(1-p)}Y_{L}^{2}-2Y_{M}^{2}+Y_{M}Y_{L}\right)\rho^{2}\eta+\gamma^{2}\rho^{2}\delta+O\left(\frac{\rho^{2}}{np(1-p)}\right), \tag{8}\] _where_ \[\eta:=\frac{1}{n^{2}}\sum_{k=1}^{m}|C_{k}|^{2},\quad\delta:=\frac{1}{n^{2}}\sum_{1\leq k\neq l\leq m}\left(\sum_{i\in C_{k}}\sum_{i^{\prime}\in\mathcal{N}_{i}\cap C_{l}}v_{ii^{\prime}}\right)\left(\sum_{j\in C_{l}}\sum_{j^{\prime}\in\mathcal{N}_{j}\cap C_{k}}v_{jj^{\prime}}\right).\] The proof of Proposition 4 can be found in Appendix A. Note that the variance bounds in Proposition 4 for the estimator \(\tau\) under mixed design have a form similar to the variance bounds under cluster-based design (see Prop. 2). In particular, the bounds depend on the number and sizes of the clusters (\(\eta\)), and on the weights of between-cluster connections (\(\delta\)). Proposition 4 also yields a necessary condition on the maximum cluster size for \(\tau\) to be consistent. When \(v_{ij}\geq 0\) for all \((i,j)\in E\), we have \(\rho\geq 1\). If \(\mathrm{Var}(\tau)\to 0\) as the population size \(n\) goes to infinity, we must have \(\eta\to 0\) by the first term of the bound, which implies \(\max_{k}|C_{k}|=o(n)\). Using Proposition 4, we propose an efficient clustering algorithm to upper bound the variance of \(\tau\) in Section 3.4. We then provide a lower bound on the variance of \(\tau\) for any clustering algorithm in Section 3.5.
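Combining the pieces, a sketch of the full mixed-design estimator (Eqs (5a), (5b), (6) and Definition 2) on the running example; by Lemma 3, a Monte Carlo average of its draws should recover the ATE:

```python
def rho_coefficient(clusters):
    """The coefficient rho of Eq (6) for a given (fixed) clustering."""
    total = sum(v[i].sum() for i in range(n))
    within = sum(v[i][k] for i in range(n) for k, j in enumerate(N[i])
                 if any(i in C and j in C for C in clusters))
    return total / within

def mixed_estimate(clusters, p, rng):
    """One draw of tau = rho * tau_c - (rho - 1) * tau_b (Definition 2)."""
    z, w_tilde = mixed_assignment(clusters, p, rng)
    t = z / p - (1 - z) / (1 - p)
    y = outcomes(z)
    tau_c = 2 * np.mean(w_tilde * t * y)          # Eq (5a)
    tau_b = 2 * np.mean((1 - w_tilde) * t * y)    # Eq (5b)
    rho = rho_coefficient(clusters)
    return rho * tau_c - (rho - 1) * tau_b

rng = np.random.default_rng(2)
draws = [mixed_estimate(clusters, p, rng) for _ in range(20000)]
print(np.mean(draws), ate)   # the two values should be close (Lemma 3)
```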
### Upper Bound on the Estimator Variance

Recall that Eq (7) in Proposition 4 provides an expression for an upper bound on the variance of the mixed design estimator \(\tau\). However, finding an exact algorithm to minimize the expression in Eq (7) is computationally challenging, as it requires us to optimize the factor \(\rho^{2}\eta\) in the first term. Andreev and Räcke (2004) showed that no polynomial time approximation algorithm can guarantee a finite approximation ratio in graph partitioning (in our case, a finite bound on \(\rho\)) unless \(\mathcal{P}=\mathcal{NP}\). Therefore, instead of searching for an optimal algorithm to minimize the bound in Eq (7), we resort to finding an efficient clustering algorithm that leads to an asymptotic bound on \(\mathrm{Var}(\tau)\). We first consider arbitrary network structures and propose a clustering algorithm based on a greedy heuristic (Algorithm 1). The key idea is to use maximum weight matching to generate clusters with sizes of 1 and 2, and then merge these small clusters together to create the desired clustering. The merging step follows a greedy rule, where we iteratively look for two clusters to merge to minimize an approximation of the variance upper bound: \[A(\{C_{1},...,C_{m}\})=\left((\frac{2}{p(1-p)}+1)Y_{M}^{2}-Y_{M}Y_{L}-Y_{L}^{2}\right)\rho^{2}\eta+\left(\frac{Y_{M}-Y_{L}}{a}\right)^{2}\rho^{2}|\delta|, \tag{9}\] where \(\eta\), \(\delta\), \(Y_{L}\) and \(Y_{M}\) are defined in Proposition 4 and \(a=\max_{i}(\sum_{j\in\mathcal{N}_{i}}\max\{v_{ij},0\})\). The main differences between the approximate variance upper bound \(A(\cdot)\) and the variance upper bound in Proposition 4 are: (1) we replace the unknown parameter \(\gamma^{2}\) by a known value \((Y_{M}-Y_{L})^{2}/a^{2}\) and take the absolute value of \(\delta\); (2) the \(O(\cdot)\) term is omitted. After merging two clusters \(C_{k},C_{l}\in\mathbf{C}\), we obtain a new cluster set \(\mathbf{C}^{\prime}=\mathbf{C}/\{C_{k},C_{l}\}\cup\{C_{k}\cup C_{l}\}\), and the corresponding approximate variance upper bound becomes \(A(\mathbf{C}^{\prime})\). We use \(\Delta A_{\mathbf{C}}(C_{k},C_{l})\) to denote the value of \(A(\mathbf{C}^{\prime})-A(\mathbf{C})\): \[\Delta A_{\mathbf{C}}(C_{k},C_{l})=A(\mathbf{C}/\{C_{k},C_{l}\}\cup\{C_{k}\cup C_{l}\})-A(\mathbf{C}). \tag{10}\] Within each iteration, the algorithm looks for a pair of clusters \((C_{k},C_{l})\) minimizing the value of \(\Delta A_{\mathbf{C}}(C_{k},C_{l})\). The stopping criterion is to check whether this minimal \(\Delta A_{\mathbf{C}}(C_{k},C_{l})\) is positive, which indicates whether the approximate variance upper bound can be further improved by merging clusters. Note that the computational complexity of Algorithm 1 is \(O(n^{3})\). Firstly, finding a maximum matching in a general graph with \(n\) vertices is \(O(n^{3})\) (Galil, 1986), so lines 1 to 4 of Algorithm 1 take \(O(n^{3})\) time. Secondly, the while loop from Line 5 to Line 7 can iterate at most \(n\) times since there are \(n\) vertices. Finally, Line 6 can be implemented in \(O(|E|)\) time if we enumerate the edge set \(E\) to find all adjacent clusters and calculate \(\sum_{i\in C_{k}}\sum_{i^{\prime}\in\mathcal{N}_{i}\cap C_{l}}v_{ii^{\prime}}\) for every adjacent pair \((C_{k},C_{l})\). Thus, Lines 5 to 7 of Algorithm 1 take \(O(n|E|)\) time, where \(E\) is the edge set and \(|E|\leq n(n-1)\). Throughout the paper, we use \(d:=\max_{i\in V}|\mathcal{N}_{i}|\) to denote the maximum degree of the network.
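Since Algorithm 1 is stated here only in outline, the following skeleton shows one possible reading (our simplifications: a greedy matching stands in for the exact maximum weight matching, and the score function implementing Eq (9) is passed in as a callable rather than spelled out):

```python
def greedy_matching(n_units, weighted_edges):
    """Stand-in for the matching step of Algorithm 1: pair vertices greedily by
    edge weight (an exact O(n^3) maximum weight matching, e.g. Galil 1986,
    would be used in practice). weighted_edges is a list of (i, j, w) tuples."""
    matched, clusters = set(), []
    for i, j, w in sorted(weighted_edges, key=lambda e: -e[2]):
        if i not in matched and j not in matched:
            clusters.append({i, j})
            matched |= {i, j}
    clusters += [{u} for u in range(n_units) if u not in matched]
    return clusters

def greedy_merge(clusters, score):
    """Merging step: repeatedly merge the pair of clusters with the most
    negative Delta A (Eq (10)); stop once no merge decreases the score."""
    while True:
        best, best_delta = None, 0.0
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                trial = [C for k, C in enumerate(clusters) if k not in (a, b)]
                trial.append(clusters[a] | clusters[b])
                delta = score(trial) - score(clusters)
                if delta < best_delta:
                    best, best_delta = trial, delta
        if best is None:
            return clusters
        clusters = best
```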
The variance of \(\tau\) using the output of Algorithm 1 is bounded as follows. **Theorem 5**.: _The variance of the estimator \(\tau\) in mixed randomization design using the clustering from Algorithm 1 is upper bounded by \(O\left(\frac{d^{2}}{np(1-p)}\right)\)._ The complete proof is included in Appendix A. Theorem 5 guarantees an \(O(d^{2})\) upper bound on the variance of \(\tau\) for general networks. For networks with \(d=o(\sqrt{n})\), the estimator \(\tau\) is consistent, as the variance converges to 0 as the network size \(n\rightarrow\infty\). The dependence on \(d\) can be further improved by making additional assumptions on the sparsity of the network. Specifically, we consider a family of graphs satisfying the _restricted-growth_ condition (Ugander et al., 2013). **Definition 3** (restricted-growth graph).: _Let \(B_{r}(v)\) be the set of vertices within \(r\) hops of a vertex \(v\) (i.e., all vertices connected to \(v\) by a path with length no more than \(r\)). A graph \(G=(V,E)\) is a restricted-growth graph if there exists a constant \(\kappa>0\), such that for all vertices \(v\in V\) and all \(r\in\mathbb{Z}_{+}\), it holds that \(|B_{r+1}(v)|\leq\kappa|B_{r}(v)|\)._ By definition, we have \(\kappa\leq d\). Empirical evidence shows that the growth constant \(\kappa\) for most networks in practice can be significantly less than \(d\). For example, the empirical analysis by Ugander and Yin (2023) on Facebook social networks shows that \(\kappa\) is typically on the order of \(25\%-50\%\) of \(d\). Assuming the restricted-growth condition, when the spillover effects from neighbors are either always positive or always negative, we propose another clustering algorithm (Algorithm 2). The algorithm has two steps. In the first step, it finds a vertex \(v_{0}\) whose 2-hop neighborhood does not overlap with any existing cluster, and adds the 2-hop neighborhood of \(v_{0}\) as a new cluster. Note that by Definition 3, the size of each 2-hop neighborhood is no more than \(\kappa(d+1)\). In the second step, it forms clusters among the remaining vertices to ensure that the maximum cluster size does not exceed \(\kappa(d+1)\). We show that Algorithm 2 gives a tighter upper bound on the estimation variance \(\operatorname{Var}(\tau)\), which is linear in the maximum degree \(d\) of the network. **Theorem 6**.: _If \(v_{ij}\geq 0\) for all \((i,j)\in E\) and there exists a positive constant \(\epsilon>0\) such that \(\sum_{j\in\mathcal{N}_{i}}v_{ij}\geq\epsilon\) for all \(i\in V\), the variance of the estimator \(\tau\) using the clustering from Algorithm 2 is upper bounded by \(O\left(\frac{\kappa^{7}d}{np(1-p)}\right).\)_ The bounds in Theorems 5 and 6 compare favorably to the state-of-the-art variance bounds for ATE estimators under cluster-based randomization design. Ugander and Yin (2023) achieved an \(O(d^{2}\kappa^{4}n^{-1}p^{-1})\) variance bound for restricted-growth graphs, which implies an \(O(d^{6}n^{-1}p^{-1})\) bound for general graphs (since \(\kappa\leq d\)). Our bounds for mixed randomization design have better dependence on the maximum degree \(d\) for both general networks and restricted-growth graphs.

### Lower Bound on the Estimator Variance

To put the upper bounds in Section 3.4 into perspective, we present a lower bound on the variance of \(\tau\) for any clustering algorithm. To establish the lower bound, we consider a specific family of networks and measure the dependence on the maximum degree \(d\) and the growth constant \(\kappa\) by applying the lower bound in Proposition 4.
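As an aside before constructing these networks: the restricted-growth condition of Definition 3 is easy to probe numerically. The sketch below (our own helper, not part of the paper's algorithms) computes \(|B_{r}(v)|\) by breadth-first search and returns the empirical growth factor of a given graph:

```python
from collections import deque

def ball_sizes(adj, v, r_max):
    """|B_r(v)| for r = 0, ..., r_max via BFS; adj[i] lists the neighbors of i."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == r_max:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    counts = [0] * (r_max + 1)
    for r in dist.values():
        counts[r] += 1
    sizes, total = [], 0
    for c in counts:                 # cumulative counts give |B_r(v)|
        total += c
        sizes.append(total)
    return sizes

def growth_factor(adj, r_max=4):
    """Empirical kappa: the largest observed ratio |B_{r+1}(v)| / |B_r(v)|."""
    kappa = 1.0
    for v in range(len(adj)):
        b = ball_sizes(adj, v, r_max)
        kappa = max(kappa, max(b[r + 1] / b[r] for r in range(r_max)))
    return kappa
```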
We define a family of cycle networks whose structure is controlled by two parameters, \(d\) and \(\kappa\). **Definition 4** (Cycle network).: _We call a network a \((d,\kappa)\)-cycle with \(1\leq\kappa\leq d\) if each unit \(i\in[n]\) is connected to the units indexed by \(\{(i\pm 1)\mod n,(i\pm 2)\mod n,...,(i\pm(\kappa-1))\mod n\}\) and \(\{(i\pm\kappa)\mod n,(i\pm 2\kappa)\mod n,...,(i\pm d\kappa)\mod n\}\)._ It is easily verified that a cycle network in Definition 4 has a maximum degree of \(2(d+\kappa)\leq 4d\) and a maximum growth factor of \(2\kappa\) (see Definition 3). Therefore, the parameter \(d\) controls the magnitude of the maximum degree and the parameter \(\kappa\) controls the growth rate. The following theorem establishes a lower bound on \(\mathrm{Var}(\tau)\). **Theorem 7**.: _Consider a \((d,\kappa)\)-cycle network whose outcome function is given by_ \[Y_{i}(\mathbf{z})=\alpha_{i}+z_{i}\beta_{i}+\frac{\gamma}{|\mathcal{N}_{i}|}\sum_{j\in\mathcal{N}_{i}}z_{j},\quad\forall i\in V.\] _For any clustering algorithm, the estimator \(\tau\) in mixed design (Definition 2) satisfies_ \[\mathrm{Var}(\tau)=\Omega\left(\frac{\min\{\kappa d,d^{2}/\kappa\}}{np(1-p)}\right).\] The proof is given in Appendix A. Because \(1\leq\kappa\leq d\), the above result implies that the variance of the estimator \(\tau\) for restricted-growth networks in terms of \(d\) is at least \(\Omega(d)\), which matches the upper bound in Theorem 6. For general networks, by setting \(\kappa=\sqrt{d}\), the above theorem implies an \(\Omega(d^{1.5}/(np(1-p)))\) lower bound for networks with a maximum degree \(d\). This lower bound differs from the \(O(d^{2}/(np(1-p)))\) upper bound in Theorem 5 by only a factor of \(O(\sqrt{d})\). Notice that the lower bound in Theorem 7 holds even when we use _randomized_ clusters, which will be studied in the next section.

## 4 Weight-Invariant Mixed Design for Unknown Edge Weights

The mixed experiment design proposed in Section 3 requires full knowledge of the weight coefficients \(v_{ij}\) for all edges \((i,j)\in E\). However, in many applications, the weights are unknown a priori. In this section, we extend the mixed experiment design by constructing a clustering algorithm with a corresponding ATE estimator that is agnostic to the weight coefficients. Recall that the mixed randomization design is based on a clustering of the network. In the previous section, we assumed the clusters are fixed. The key to extending the design to unknown weights is to use _randomized_ clusters. Our approach is motivated by Ugander and Yin (2023), who showed that randomized clustering algorithms can provide better variance upper bounds for cluster-based design. Consider a randomized clustering algorithm that produces \(k\) different clusterings \(\mathbf{C}_{1},\mathbf{C}_{2},\cdots,\mathbf{C}_{k}\) for a graph \(G(V,E)\). Each clustering \(\mathbf{C}_{l}\) (\(\forall l\in[k]\)) is a set of clusters that forms a partition of the network, i.e. \(\mathbf{C}_{l}=\{C_{1}^{l},C_{2}^{l},...,C_{m_{l}}^{l}\}\). Suppose \(\mathbf{C}_{l}\) is generated by the clustering algorithm with probability \(p(l)\). (We require \(\sum_{l=1}^{k}p(l)=1\).) Let \(c_{l}(i):V\rightarrow[m_{l}]\) be the function that maps each unit \(i\in V\) to its cluster index in the \(l^{\text{th}}\) clustering.
To simplify the notation for the analysis below, we rewrite the interference model in Eq (1) as \[Y_{i}(\mathbf{z})=\alpha_{i}+z_{i}\beta_{i}+\gamma_{i}\sum_{j\in\mathcal{N}_{i}}w_{ij}z_{j},\quad\forall i\in V,\] where \(\gamma_{i}:=\gamma\sum_{j\in\mathcal{N}_{i}}v_{ij}\) and \(w_{ij}:=v_{ij}/(\sum_{j\in\mathcal{N}_{i}}v_{ij})\). Then for any parameter \(\rho\) in the mixed design (see Definition 2), the expectation of the estimator \(\tau\) is \[\mathbb{E}[\tau]=\bar{\beta}+\frac{\rho}{n}\sum_{i=1}^{n}\gamma_{i}\sum_{j\in\mathcal{N}_{i}}w_{ij}\sum_{l=1}^{k}p(l)\mathbf{1}\{c_{l}(i)=c_{l}(j)\}. \tag{11}\] We say the estimator \(\tau\) is _weight-invariant_ if the values of \(\mathbb{E}[\tau]\) and \(\rho\) are independent of the weights \(w_{ij}\). Below is a formal definition. **Definition 5** (weight-invariant).: _The mixed design estimator \(\tau\) in Definition 2 is weight-invariant if \(\rho\) does not depend on \(w_{ij}\) and the following condition holds. For any unit \(i\in V\) and any two sets of weights \(\{w_{ij}\in\mathbb{R}|\sum_{j\in\mathcal{N}_{i}}w_{ij}=1,\ \forall i\}\) and \(\{w^{\prime}_{ij}\in\mathbb{R}|\sum_{j\in\mathcal{N}_{i}}w^{\prime}_{ij}=1,\ \forall i\}\), it holds that_ \[\sum_{j\in\mathcal{N}_{i}}w_{ij}\sum_{l=1}^{k}p(l)\mathbf{1}\{c_{l}(i)=c_{l}(j)\}=\sum_{j\in\mathcal{N}_{i}}w^{\prime}_{ij}\sum_{l=1}^{k}p(l)\mathbf{1}\{c_{l}(i)=c_{l}(j)\}.\] The following lemma provides a necessary and sufficient condition under which the estimator \(\tau\) is weight-invariant and unbiased. **Lemma 8**.: _The estimator \(\tau\) is weight-invariant and unbiased if and only if_ \[\rho=\left(\sum_{l=1}^{k}p(l)\mathbf{1}\{c_{l}(i)=c_{l}(j)\}\right)^{-1}=\mathbb{E}[\mathbf{1}\{c(i)=c(j)\}]^{-1}\quad\forall(i,j)\in E. \tag{12}\] Lemma 8 follows immediately from Definition 5, so we omit the proof. For a weight-invariant estimator \(\tau\), the parameter \(\rho\) does not depend on the weight coefficients \(w_{ij}\), but only on the clustering algorithm itself. The main challenge for creating a weight-invariant design is to find a clustering algorithm that satisfies Eq (12). Denote the \(|E|\)-by-\(|E|\) edge incidence matrix of the graph \(G(V,E)\) by \(M\), where \(M_{ij}\in\{0,1\}\) and 1 means the \(i^{\text{th}}\) edge and the \(j^{\text{th}}\) edge share a common vertex. We propose a weight-invariant clustering method in Algorithm 3.

```
Input: A graph \(G(V,E)\)
1  Initialize the clustering \(\mathbf{C}\leftarrow\{\}\)
2  Initialize the edge incidence matrix \(M\in\{0,1\}^{|E|\times|E|}\)
3  Find the maximum eigenvalue \(\lambda^{*}\) of \(M\) and a corresponding eigenvector \(\boldsymbol{\omega}\)
4  for \((i,j)\in E\) do \(X_{(i,j)}\leftarrow Beta(\omega_{(i,j)},1)\)   /* sample from Beta distribution */
5  for \((i,j)\in E\) do
6      if \(X_{(i,j)}=\max\{X_{(u,v)}|(u,v)\in E,u=i\textbf{ or }u=j\}\) then \(\mathbf{C}\leftarrow\mathbf{C}\cup\{\{i,j\}\}\)
7  for \(v\in V\) do
8      if \(v\notin C_{k}\ \forall k\in[|\mathbf{C}|]\) then \(\mathbf{C}\leftarrow\mathbf{C}\cup\{\{v\}\}\)
Output: Clustering \(\mathbf{C}\)
```

**Algorithm 3** Weight-invariant Clustering

Note that Algorithm 3 generates random clusters of maximum size \(2\). By lines 7-8, the output of the algorithm is a valid clustering, because each vertex occurs in exactly one cluster in \(\mathbf{C}\).
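For readers who want to experiment, here is a NumPy sketch of Algorithm 3 (our translation of the pseudocode; edge indexing, the eigensolver, and the sign handling of the eigenvector are our own illustrative choices, and we assume the graph is connected so that the Perron eigenvector is strictly positive):

```python
import numpy as np

def weight_invariant_clustering(edges, n_units, rng):
    """Algorithm 3: random clusters of size <= 2 built from the leading
    eigenvector of the edge incidence matrix M (M[e, f] = 1 iff edges e and f
    share a vertex, including e = f). Returns the clustering and lambda* = rho."""
    m = len(edges)
    M = np.zeros((m, m))
    for a, (i, j) in enumerate(edges):
        for b, (u, w) in enumerate(edges):
            if {i, j} & {u, w}:
                M[a, b] = 1.0
    eigvals, eigvecs = np.linalg.eigh(M)          # M is symmetric
    lam = eigvals[-1]                             # largest eigenvalue lambda*
    omega = np.abs(eigvecs[:, -1])                # Perron eigenvector (positive)
    X = rng.beta(omega, 1.0)                      # X_e ~ Beta(omega_e, 1)
    clusters, used = [], set()
    for a, (i, j) in enumerate(edges):
        incident = [b for b, (u, w) in enumerate(edges) if {i, j} & {u, w}]
        if X[a] == max(X[b] for b in incident):   # lines 5-6 of Algorithm 3
            clusters.append({i, j})
            used |= {i, j}
    clusters += [{u} for u in range(n_units) if u not in used]  # lines 7-8
    return clusters, lam
```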
The following proposition shows the efficacy of Algorithm 3: **Proposition 9**.: _Using the clustering output generated by Algorithm 3, the mixed design ATE estimator \(\tau\) is weight-invariant and unbiased with \(\rho=\lambda^{*}\)._ Proof.: Suppose \(X_{1}\), \(X_{2}\),..., \(X_{k}\) are independent Beta distributed random variables with parameters \((w_{1},1)\), \((w_{2},1)\),..., \((w_{k},1)\). Then \[P(X_{1}=\max\{X_{1},X_{2},...,X_{k}\})=\frac{w_{1}}{\sum_{j=1}^{k}w_{j}}.\] For any edge \((i,j)\in E\), let \(I_{ij}=\{(u,v)\in E|u=i\textbf{ or }u=j\}\) denote the set of edges incident to \((i,j)\). Since \(\boldsymbol{\omega}\) is an eigenvector of the edge incidence matrix \(M\) associated with the largest eigenvalue \(\lambda^{*}\), by the Perron-Frobenius theorem, we have \(\lambda^{*}\geq 0\) and the probability that \(i\) and \(j\) belong to the same cluster is \[\frac{\omega_{(i,j)}}{\sum_{(u,v)\in I_{ij}}\omega_{(u,v)}}=\frac{1}{\lambda^{*}}.\] By Lemma 8, we have \[\rho=\frac{1}{\mathbb{E}[\mathbf{1}\{c(i)=c(j)\}]}=\frac{\sum_{(u,v)\in I_{ij}}\omega_{(u,v)}}{\omega_{(i,j)}}=\lambda^{*}.\] Because \(\lambda^{*}\) is determined by the edge incidence matrix \(M\), the parameter \(\rho\) does not depend on the weights \(w_{ij}\) of the network. As a result, the estimator \(\tau\) using Algorithm 3 is weight-invariant and unbiased. Finally, we give an upper bound on the variance of \(\tau\) under Algorithm 3. This upper bound differs from the bound in Theorem 5 by a factor of \(O(d)\), since Algorithm 3 does not require knowledge of the weight coefficients \(w_{ij}\) (or \(v_{ij}\)), \(\forall(i,j)\in E\). **Theorem 10**.: _The variance of \(\tau\) using Algorithm 3 is bounded by \(O\left(\frac{d^{3}}{np(1-p)}\right)\)._ Proof.: Let \(x_{ij}(\mathbf{C}_{k}):=\mathbf{1}\{c_{k}(i)=c_{k}(j)\}\), then \[\operatorname{Var}(\mathbb{E}[\tau|\mathbf{C}])=\operatorname{Var}(\bar{\beta}+\rho\gamma/n\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij}x_{ij}(\mathbf{C}))=\frac{\rho^{2}\gamma^{2}}{n^{2}}\operatorname{Var}(\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij}x_{ij}(\mathbf{C})).\] To simplify the notation, we do not distinguish \((i,j)\) and \((j,i)\) in the following due to the symmetry of \(x_{ij}(\cdot)\) and \(x_{ji}(\cdot)\). Let \(E^{\prime}\) be the undirected version of the edge set \(E\), i.e. if \((i,j)\in E\) then there exists \(e\in E^{\prime}\) such that \(e:=(i,j)\) and \(e:=(j,i)\). The weight of edge \(e\in E^{\prime}\) is defined to be \(v_{e}=\mathbf{1}\{(i,j)\in E\}v_{ij}+\mathbf{1}\{(j,i)\in E\}v_{ji}\) if \(e:=(i,j):=(j,i)\). Also, \(x_{e}(\mathbf{C}_{k})=\mathbf{1}\{c_{k}(i)=c_{k}(j)\}\) if \(e:=(i,j):=(j,i)\). Then we can rewrite \[\operatorname{Var}(\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij}x_{ij}(\mathbf{C}))=\operatorname{Var}\left(\sum_{e\in E^{\prime}}v_{e}x_{e}(\mathbf{C})\right).\] For every edge \(e\in E^{\prime}\), let \(I_{e}\) be the set of edges incident to \(e\). Moreover, let \(A_{1}(e)=I_{e}\), \(A_{2}(e)=\{f\in E^{\prime}:f\notin I_{e}\cup\{e\}\text{ and }I_{f}\cap I_{e}\neq\emptyset\}\) and \(A_{3}(e)=E^{\prime}/(A_{1}(e)\cup A_{2}(e))\). Note that if \(f\in A_{1}(e)\), then \(x_{e}(\mathbf{C})x_{f}(\mathbf{C})=0\). If \(f\in A_{3}(e)\), \(x_{e}(\mathbf{C})\) and \(x_{f}(\mathbf{C})\) are independent.
Then \[\begin{split}\operatorname{Var}\left(\sum_{e\in E^{\prime}}v_{e}x_{e}(\mathbf{C})\right)=&\ \mathbb{E}\Big{[}\big{(}\sum_{e\in E^{\prime}}v_{e}x_{e}(\mathbf{C})\big{)}^{2}\Big{]}-(\sum_{e\in E^{\prime}}v_{e})^{2}/\rho^{2}\\ =&\ \sum_{e\in E^{\prime}}v_{e}\Big{(}v_{e}\operatorname{\mathbb{E}}[x_{e}(\mathbf{C})]+\sum_{f\in A_{1}(e)}v_{f}\operatorname{\mathbb{E}}[x_{e}(\mathbf{C})x_{f}(\mathbf{C})]+\sum_{f\in A_{2}(e)}v_{f}\operatorname{\mathbb{E}}[x_{e}(\mathbf{C})x_{f}(\mathbf{C})]\\ &\ +\sum_{f\in A_{3}(e)}v_{f}\operatorname{\mathbb{E}}[x_{e}(\mathbf{C})x_{f}(\mathbf{C})]\Big{)}-(\sum_{e\in E^{\prime}}v_{e})^{2}/\rho^{2}\\ =&\ \sum_{e\in E^{\prime}}v_{e}\Big{(}\frac{1}{\rho}v_{e}+\sum_{f\in A_{2}(e)}v_{f}(\frac{1}{\rho^{2}}+\text{Cov}(x_{e}(\mathbf{C}),x_{f}(\mathbf{C})))+\sum_{f\in A_{3}(e)}v_{f}\frac{1}{\rho^{2}}\Big{)}-(\sum_{e\in E^{\prime}}v_{e})^{2}/\rho^{2}\\ =&\ \frac{1}{\rho}\left(1-\frac{1}{\rho}\right)\sum_{e\in E^{\prime}}v_{e}^{2}-\frac{1}{\rho^{2}}\sum_{e\in E^{\prime}}\sum_{f\in A_{1}(e)}v_{e}v_{f}+\sum_{e\in E^{\prime}}\sum_{f\in A_{2}(e)}v_{e}v_{f}\,\text{Cov}(x_{e}(\mathbf{C}),x_{f}(\mathbf{C})).\end{split}\] Note that \(-2\leq v_{e}\leq 2\) for all \(e\in E^{\prime}\) and \(\sum_{e\in E^{\prime}}|v_{e}|\leq\sum_{(i,j)\in E}|v_{ij}|\leq n\) by the assumption \(\sum_{j\in\mathcal{N}_{i}}|v_{ij}|\leq 1\) for all \(i\in[n]\). Also, \(|A_{1}(e)|\leq 2d-2\) and \(|A_{2}(e)|\leq 2(d-1)^{2}\) since the maximum degree is \(d\). Then \[\sum_{e\in E^{\prime}}v_{e}^{2}\leq\sum_{e\in E^{\prime}}2|v_{e}|\leq 2n,\] \[\sum_{e\in E^{\prime}}\sum_{f\in A_{1}(e)}v_{e}v_{f}\geq-\sum_{e\in E^{\prime}}\sum_{f\in A_{1}(e)}2|v_{e}|\geq-2\sum_{e\in E^{\prime}}|v_{e}||A_{1}(e)|\geq-4(d-1)n,\] \[\sum_{e\in E^{\prime}}\sum_{f\in A_{2}(e)}|v_{e}v_{f}|\leq\sum_{e\in E^{\prime}}\sum_{f\in A_{2}(e)}2|v_{e}|=2\sum_{e\in E^{\prime}}|v_{e}||A_{2}(e)|\leq 4(d-1)^{2}n.\] Since \(-\rho^{-2}\leq\text{Cov}(x_{e}(\mathbf{C}),x_{f}(\mathbf{C}))=\mathbb{E}[x_{e}(\mathbf{C})x_{f}(\mathbf{C})]-\rho^{-2}\leq\rho^{-1}-\rho^{-2}\), we have \[\text{Var}\left(\sum_{e\in E^{\prime}}v_{e}x_{e}(\mathbf{C})\right)\leq\frac{2n}{\rho}+\frac{4(d-1)n}{\rho^{2}}+\frac{4(d-1)^{2}n}{\rho}.\] To bound \(\text{Var}(\tau|\mathbf{C})\), we find upper bounds for \(\eta\) and \(\delta\) (defined in Proposition 4), respectively. Since Algorithm 3 produces clusters with maximum size 2, we have \(\eta\leq\sum_{k=1}^{m}2|C_{k}|/n^{2}=2/n\) and \(\delta\leq 2/n\) by Lemma 13. Then we apply Proposition 4 and obtain \(\text{Var}(\tau|\mathbf{C})=O(d^{2}/(np(1-p)))\). By the Perron-Frobenius theorem, \(\rho=\lambda^{*}\leq\max_{i\in E}\sum_{j\in E}M_{ij}\leq 2d\). By the variance decomposition formula, \(\text{Var}(\tau)=\text{Var}(\mathbb{E}[\tau|\mathbf{C}])+\mathbb{E}[\text{Var}(\tau|\mathbf{C})]\). Combining everything, we get \(\text{Var}(\tau)=O(d^{3}/(np(1-p)))\).

## 5 Statistical Inference

In this section, we prove asymptotic normality results for the estimator \(\tau\) under the mixed randomization design. This result will be useful for performing statistical inference on the ATE, e.g., by hypothesis testing or constructing confidence intervals. Typically, proofs for asymptotic normality rely on certain versions of the Central Limit Theorem.
To this end, we rewrite the estimator \(\tau\) in the following form: \[\tau=\frac{1}{n}\sum_{i=1}^{n}2(2\rho\tilde{w}_{i}-\rho-\tilde{w}_{i}+1)\left(\frac{z_{i}}{p}-\frac{1-z_{i}}{1-p}\right)Y_{i}(\mathbf{z}),\] where \(\tilde{w}_{i}\) indicates whether unit \(i\) received a cluster-based treatment assignment or a Bernoulli assignment (see Eq (5a) and (5b)). Let \(L_{i}\) be the term within the above summation. Then \(\tau=\frac{1}{n}\sum_{i=1}^{n}L_{i}\) is the average of \(n\) dependent random variables. We say the random variable \(L_{i}\) has dependency neighborhood \(N_{i}\subseteq[n]\) if \(i\in N_{i}\) and \(L_{i}\) is independent of \(\{L_{j}\}_{j\notin N_{i}}\). The following result is an extension of Stein's method for bounding the distance between probability distributions in the Wasserstein metric. **Lemma 11** (Ross, 2011, Theorem 3.6).: _Let \(X_{1},\dots,X_{n}\) be random variables such that \(\mathbb{E}[X_{i}^{4}]<\infty\), \(\mathbb{E}[X_{i}]=0\), \(\sigma^{2}=\text{Var}(\sum_{i=1}^{n}X_{i})\), and define \(W=\sum_{i=1}^{n}X_{i}/\sigma\). Let the collection \((X_{1},\dots,X_{n})\) have dependency neighborhoods \(N_{i}\), \(\forall i\in[n]\), and also define \(D=\max_{i\in[n]}|N_{i}|\). Then for a standard normal random variable \(Z\), we have_ \[d_{W}(W,Z)\leq\frac{D^{2}}{\sigma^{3}}\sum_{i=1}^{n}\mathbb{E}[|X_{i}|^{3}]+\frac{\sqrt{28}D^{3/2}}{\sqrt{\pi}\sigma^{2}}\sqrt{\sum_{i=1}^{n}\mathbb{E}[|X_{i}|^{4}]},\] _where \(d_{W}(\cdot,\cdot)\) is the Wasserstein metric:_ \[d_{W}(\mu,\upsilon)\ :=\ \sup_{\{h:\mathbb{R}\rightarrow\mathbb{R}\,:\,|h(x)-h(y)|\leq|x-y|\}}\left|\int h(x)d\mu(x)-\int h(x)d\upsilon(x)\right|.\] By Lemma 11, we establish sufficient conditions for asymptotic normality of the estimator \(\tau\). **Theorem 12**.: _Define \(\sigma_{n}^{2}=\mathrm{Var}(\sqrt{n}\tau/\rho)\). If \(\liminf_{n\rightarrow\infty}\sigma_{n}^{2}>0\), then we have_ \[\frac{\sqrt{n}(\tau-\mathbb{E}[\tau])}{\sigma_{n}\rho}\overset{d}{\to}N(0,1)\] _under one of the following conditions:_ _(1) The design applies the fixed clustering from Algorithm 1, and \(d^{8}/n\to 0\);_ _(2) The design applies the weight-invariant clustering from Algorithm 3, and \(d^{12}/n\to 0\)._

## 6 Numerical Experiments

### Test Instances

We generate networks in the numerical experiments from the random geometric graph (RGG) model (Ugander et al., 2013). The RGG model is a spatial graph where \(n\) units are scattered according to a uniform distribution in the region \([0,\sqrt{n}]\times[0,\sqrt{n}]\). Two units \(i\) and \(j\) are connected if \(\mathrm{dist}(i,j)\leq\sqrt{r_{0}/\pi}\), where \(r_{0}\) is the limiting expected degree of the RGG model. To simulate long-range dependency, we allow a unit to connect with \(r_{1}\) units outside the radius \(\sqrt{r_{0}/\pi}\), which are selected uniformly at random. Figure 1 shows two randomly generated RGG models with different parameters. Furthermore, if \(i\) and \(j\) are connected, we assign the weight coefficient \(v_{ij}\) independently from a uniform distribution \(U(-1/r,2/r)\), where \(r=r_{0}+r_{1}\). Note that the magnitude of the interference is inversely proportional to the average degree. To choose the parameters in the exposure mapping model Eq (1), we first generate \(\hat{\alpha}_{i}\) and \(\hat{\beta}_{i}\) independently from \(U(-1,1)\) for each \(i\in[n]\).
We then re-scale the parameters by \(\alpha_{i}=\hat{\alpha}_{i}+5-\frac{1}{n}\sum_{j=1}^{n}\hat{\alpha}_{j}\), \(\beta_{i}=\hat{\beta}_{i}+0.5-\frac{1}{n}\sum_{j=1}^{n}\hat{\beta}_{j}\) and \(\gamma=0.5\left(\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij}\right)^{-1}\). The re-scaling step ensures that the true baseline parameter, the direct treatment effect parameter, and the interference effect parameter are \(\bar{\alpha}=5\), \(\bar{\beta}=0.5\), and \(\gamma\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij}=0.5\), respectively. The true value of the ATE is equal to \(1\) in all randomly generated instances.

Figure 1: Examples of RGG networks: the left figure is randomly generated from \((n,r_{0},r_{1})=(100,10,0)\) and the right figure from \((n,r_{0},r_{1})=(100,5,5)\).

### Comparison of Different Designs

The first design in our test is the fixed-cluster mixed randomization design in Section 3 with known interference weights \(v_{ij}\). In the experiment, we generate the clusters using Algorithm 1 and implement the treatment assignment steps in Section 3.3. The second design is the weight-invariant design described in Section 4, which is agnostic to the weight coefficients \(v_{ij}\). Finally, we test the randomized graph cluster randomization (RGCR) design (Ugander and Yin, 2023), which is the state-of-the-art method for cluster-based design. In all designs, we fix the probability of treatment to be \(p=0.5\). We denote the estimators of the above three designs by \(\tau_{F}\), \(\tau_{W}\), and \(\tau_{R}\), respectively. To compare the performance under different network sizes and structures, we generate \(3\times 6=18\) test instances, parameterized by \((n,r_{0},r_{1})\). The network size \(n\) is chosen from \(\{1000,2000,4000\}\). The value of \((r_{0},r_{1})\) is chosen from the set \(\{(4,0),(2,2),(0,4),(16,0),(8,8),(0,16)\}\). Note that \(r_{0}+r_{1}\) is either 4 or 16, which represents the expected average degree in the RGG model. Recall that when \(r_{0}\) is positive and \(r_{1}\) is zero, units are only connected to those at short distances. When \(r_{1}\) is positive, units may be subject to long-range interference. We consider three performance metrics for the ATE estimators \(\tau\): mean (\(\mathbb{E}[\tau]\)), variance (\(\mathrm{Var}(\tau)\)), and theoretical variance upper bounds (\(\widetilde{\mathrm{Var}(\tau)}\)) given by Proposition 4. For each instance, we run 10,000 independent simulations for each design. The (exact) variances for \(\tau_{F}\) and \(\tau_{W}\) are approximated by sample variances, while the variance of the RGCR is calculated according to formulas (2.1) to (2.3) in Ugander and Yin (2023). To calculate theoretical upper bounds on the variance, we use \(Y_{M}=6\). The variance upper bound for the RGCR design is given by \(\widetilde{\mathrm{Var}(\tau_{R})}=2Y_{M}^{2}\sum_{i=1}^{n}|B_{2}(i)||B_{4}(i)|/n\) (this bound is in fact tighter than the bound used in Ugander and Yin (2023)). The simulation results are summarized in Table 1. We have several observations from the results in Table 1. First, let us examine the metrics for \(\tau_{F}\) (fixed clustering) and \(\tau_{W}\) (weight-invariant random clustering) under the mixed randomization design. As both estimators are unbiased, we expect that the means will be closer to the ATE when the sample variances are small. This relationship is confirmed by the simulation results.
\begin{table} \begin{tabular}{c|r r r|r r r|r r r} \hline \hline \((n,r_{0},r_{1})\) & \(\mathbb{E}[\tau_{F}]\) & \(\mathbb{E}[\tau_{W}]\) & \(\mathbb{E}[\tau_{R}]\) & \(\mathrm{Var}(\tau_{F})\) & \(\mathrm{Var}(\tau_{W})\) & \(\mathrm{Var}(\tau_{R})\) & \(\widetilde{\mathrm{Var}(\tau_{F})}\) & \(\widetilde{\mathrm{Var}(\tau_{W})}\) & \(\widetilde{\mathrm{Var}(\tau_{R})}\) \\ \hline (1000,4,0) & 1.01 & 0.98 & 1.05 & **1.18** & 98.48 & 1.60 & **2.31** & 217.93 & 58.87 \\ (1000,2,2) & 1.04 & 1.01 & 1.00 & **1.58** & 61.18 & 3.10 & **2.76** & 108.44 & 562.53 \\ (1000,0,4) & 1.03 & 1.00 & 1.03 & **1.81** & 20.84 & 3.47 & **2.98** & 37.15 & 728.85 \\ (1000,16,0) & 1.06 & 1.98 & 0.99 & **6.32** & 1225.79 & 11.36 & **12.69** & 3465.31 & 1910.17 \\ (1000,8,8) & 1.04 & 1.26 & 0.93 & **13.22** & 639.16 & 32.89 & **25.44** & 2267.65 & 53433.22 \\ (1000,0,16) & 1.07 & 0.57 & 0.96 & **15.37** & 507.15 & 45.77 & **30.13** & 2077.55 & 66028.03 \\ \hline (2000,4,0) & 1.01 & 1.53 & 1.01 & **0.64** & 76.97 & 0.85 & **1.20** & 133.91 & 29.18 \\ (2000,2,2) & 1.01 & 1.02 & 1.01 & **0.83** & 42.80 & 1.67 & **1.49** & 80.48 & 330.58 \\ (2000,0,4) & 0.99 & 0.92 & 1.00 & **0.79** & 9.87 & 1.66 & **1.52** & 18.57 & 378.55 \\ (2000,16,0) & 0.97 & 1.29 & 1.03 & **3.53** & 485.21 & 5.71 & **6.79** & 2167.94 & 1066.67 \\ (2000,8,8) & 1.04 & 1.44 & 1.00 & **6.56** & 355.62 & 18.59 & **12.35** & 1419.96 & 57776.42 \\ (2000,0,16) & 1.02 & 0.99 & 1.05 & **7.53** & 254.43 & 26.25 & **15.09** & 1038.87 & 69962.69 \\ \hline (4000,4,0) & 1.01 & 1.36 & 0.97 & **0.34** & 63.24 & 0.41 & **0.64** & 105.77 & 15.44 \\ (4000,2,2) & 1.01 & 0.82 & 0.99 & **0.42** & 33.58 & 0.88 & **0.76** & 59.63 & 193.27 \\ (4000,0,4) & 1.00 & 1.04 & 1.02 & **0.44** & 5.43 & 0.93 & **0.75** & 9.29 & 193.31 \\ (4000,16,0) & 1.02 & 0.97 & 0.98 & **1.85** & 289.58 & 3.27 & **3.63** & 1203.06 & 575.97 \\ (4000,8,8) & 1.00 & 1.10 & 0.97 & **3.25** & 215.23 & 9.68 & **6.42** & 998.78 & 59398.23 \\ (4000,0,16) & 1.00 & 1.12 & 1.05 & **3.91** & 130.46 & 12.15 & **7.63** & 519.46 & 71930.99 \\ \hline \hline \end{tabular} \end{table} Table 1: Simulation Results for Different Experimental Designs.

For sample variances \(\mathrm{Var}(\tau_{F})\), we observe from Table 1 that, given \((r_{0},r_{1})\), if the network size \(n\) doubles, the sample variance decreases approximately by half. Additionally, given \(n\), as the limiting expected degree \((r_{0}+r_{1})\) increases, the variance of \(\tau_{F}\) grows approximately as \((r_{0}+r_{1})^{2}\). These relationships suggest that the \(O(d^{2}/n)\) bound in Theorem 5 is likely tight. We also observe that the theoretical variance upper bound of \(\tau_{F}\) from Proposition 4, \(\widetilde{\mathrm{Var}(\cdot)}\), is often less than twice the simulated sample variance, \(\mathrm{Var}(\cdot)\), suggesting that the bound in Proposition 4 serves as a good approximation of the true variance. Next, let us compare the variances of the estimators across all three designs, including \(\tau_{R}\) for the RGCR cluster-based design (Ugander and Yin, 2023). The estimator \(\tau_{F}\) under fixed clustering achieves the smallest sample variance and estimation errors among all three methods. The weight-invariant design (\(\tau_{W}\)) has significantly larger sample variance and estimation errors of the ATE compared to either the fixed-cluster mixed design (\(\tau_{F}\)) or the cluster-based design (\(\tau_{R}\)). This is not surprising, as the weight-invariant design assumes unknown edge weights. The fixed-cluster mixed design
has a significantly smaller variance upper bound (\(\widetilde{\operatorname{Var}(\tau_{F})}\)) than the other designs, whereas the upper bound of the RGCR design (\(\widetilde{\operatorname{Var}(\tau_{R})}\)) is the largest of the three. However, empirically, the sample variances of RGCR are much smaller than the theoretical upper bounds, which indicates there might be a gap in the theoretical analysis of RGCR. Finally, we examine the impact of the network structure parameters \(r_{0}\) and \(r_{1}\) on the performance of these ATE estimators. For each setting \(r_{0}+r_{1}=4\) or \(r_{0}+r_{1}=16\), we consider three values of \(r_{0}\). Recall that when the average degree (\(r_{0}+r_{1}\)) is fixed, a larger \(r_{0}\) indicates that more neighbors are located at short distances, whereas a larger \(r_{1}\) implies more influence from long-range neighbors. Interestingly, we find that the fixed-cluster mixed design \(\tau_{F}\) is robust across all three cases and its variance barely changes with \(r_{0}\). The weight-invariant design performs better under long-range interference (i.e., larger \(r_{1}\)) but performs poorly under local network interference. In contrast, the RGCR performs better with strong local network interference (i.e., larger \(r_{0}\)).

## 7 Conclusions

We consider the problem of estimating the average treatment effect when units in an experiment may interfere with other units. The interference pattern is represented by an edge-weighted network, where the weights denote the magnitude of interference between a pair of units. Based on previous literature (Saveski et al., 2017; Pouget-Abadie et al., 2019), we propose a mixed experiment design that combines two commonly used methods: cluster-based randomization and Bernoulli randomization. We propose an unbiased estimator for the average treatment effect and establish upper and lower bounds on the variance of this estimator. Both the known edge weights case and the unknown edge weights case are studied. Moreover, we show the consistency and asymptotic normality of the proposed estimator provided the network satisfies certain sparsity conditions.
## Appendix A Proof of Main Results

### Proof of Proposition 2

Proof.: We introduce the following notations: \[x_{ij} :=\mathbf{1}\{c(i)=c(j)\},\quad\forall i,j,\] \[t_{i} :=(\frac{z_{i}}{p}-\frac{1-z_{i}}{1-p})=\frac{(-1)^{1-z_{i}}}{p^{z_{i}}(1-p)^{1-z_{i}}},\quad\forall i,\] \[r_{i} :=\sum_{j\in\mathcal{N}_{i}}v_{ij}z_{j},\quad\forall i.\] Rewriting \(\tau_{cb}=(\sum_{i=1}^{n}t_{i}Y_{i}(\mathbf{z}))/n\) and \(Y_{i}(\mathbf{z})=\alpha_{i}+z_{i}\beta_{i}+\gamma r_{i}\), we have \[\begin{split}\mathbb{E}[\tau_{cb}^{2}]=&\frac{1}{n^{2}}E\left(\sum_{1\leq i,j\leq n}t_{i}t_{j}Y_{i}(\mathbf{z})Y_{j}(\mathbf{z})\right)\\ =&\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}(\alpha_{i}\alpha_{j}\,\mathbb{E}[t_{i}t_{j}]+\beta_{i}\beta_{j}\,\mathbb{E}[t_{i}t_{j}z_{i}z_{j}]+\gamma^{2}\,\mathbb{E}[t_{i}t_{j}r_{i}r_{j}]\\ &+2\alpha_{i}\beta_{j}\,\mathbb{E}[t_{i}t_{j}z_{j}]+2\alpha_{i}\gamma\,\mathbb{E}[t_{i}t_{j}r_{j}]+2\beta_{i}\gamma\,\mathbb{E}[t_{i}t_{j}z_{i}r_{j}]).\end{split} \tag{13}\] Notice that \(\mathbb{E}[t_{i}t_{j}]=x_{ij}\,\mathbb{E}[t_{i}^{2}]\), \(\mathbb{E}[t_{i}t_{j}z_{i}z_{j}]=x_{ij}\,\mathbb{E}[t_{i}^{2}z_{i}^{2}]+(1-x_{ij})\), \(\mathbb{E}[t_{i}t_{j}z_{j}]=x_{ij}\,\mathbb{E}[t_{i}^{2}z_{i}]\), \(\mathbb{E}[t_{i}t_{j}r_{j}]=x_{ij}\,\mathbb{E}[t_{i}^{2}r_{j}]\), \(\mathbb{E}[t_{i}t_{j}z_{i}r_{j}]=x_{ij}\,\mathbb{E}[t_{i}^{2}z_{i}r_{j}]+(1-x_{ij})\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{jj^{\prime}}\), and \(\mathbb{E}[t_{i}t_{j}r_{i}r_{j}]=x_{ij}\,\mathbb{E}[t_{i}^{2}r_{i}r_{j}]+(1-x_{ij})\big{[}(\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{ii^{\prime}})(\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{jj^{\prime}})+(\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{ji^{\prime}})(\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{ij^{\prime}})\big{]}\). Substituting these identities into Eq (13) yields \[\begin{split}\mathbb{E}[\tau_{cb}^{2}]=&\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}x_{ij}E\left(t_{i}^{2}Y_{i}(\mathbf{z})(\alpha_{j}+\beta_{j}z_{i}+\gamma r_{j})\right)\\ &+\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}\left(\beta_{i}\beta_{j}(1-x_{ij})+2\beta_{i}\gamma(1-x_{ij})\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{jj^{\prime}}\right)\\ &+\frac{\gamma^{2}}{n^{2}}\sum_{1\leq i,j\leq n}\left((1-x_{ij})\left[(\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{ii^{\prime}})(\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{jj^{\prime}})+(\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{ji^{\prime}})(\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{ij^{\prime}})\right]\right).\end{split} \tag{14}\] By Lemma 1 and \(\mathrm{Var}(\tau_{cb})=\mathbb{E}[\tau_{cb}^{2}]-\mathbb{E}[\tau_{cb}]^{2}\), we have \[\begin{split}\mathrm{Var}(\tau_{cb})=&\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}x_{ij}E\left(t_{i}^{2}Y_{i}(\mathbf{z})(\alpha_{j}+\beta_{j}z_{i}+\gamma r_{j})\right)\\ &-\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}x_{ij}\left(\beta_{i}\beta_{j}+2\beta_{i}\gamma\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{jj^{\prime}}+\gamma^{2}(\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{ii^{\prime}})(\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{jj^{\prime}})\right)\\ &+\frac{\gamma^{2}}{n^{2}}\sum_{1\leq i,j\leq n}\left((1-x_{ij})(\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{ji^{\prime}})(\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{ij^{\prime}})\right).\end{split} \tag{15}\] Recall that \(0<Y_{L}\leq Y_{i}(\mathbf{z})\leq Y_{M}\) for all \(i\in V\) and \(\mathbf{z}\in\{0,1\}^{n}\), which implies that \(0<Y_{L}\leq\alpha_{j}+\beta_{j}z_{i}+\gamma r_{j}\leq Y_{M}\) for all \(i,j\in V\).
Thus, \[\frac{Y_{L}^{2}}{p(1-p)}=\mathbb{E}[t_{i}^{2}Y_{L}^{2}]\leq\mathbb{E}\left[t_{i}^{2}Y_{i}(\mathbf{z})(\alpha_{j}+\beta_{j}z_{i}+\gamma r_{j})\right]\leq\mathbb{E}[t_{i}^{2}Y_{M}^{2}]=\frac{Y_{M}^{2}}{p(1-p)},\quad\forall i,j\in V.\] Furthermore, \(Y_{i}(\mathbf{0})>0\) implies \(\alpha_{i}>0\) for all \(i\). Let \(s_{i}:=\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{ii^{\prime}}\); then we have \[\begin{split}& 2Y_{M}(Y_{L}-Y_{M})\\ \leq&\beta_{i}\beta_{j}+\beta_{i}\gamma s_{j}+\beta_{j}\gamma s_{i}+\gamma^{2}s_{i}s_{j}\\ =&(\alpha_{i}+\beta_{i}+\gamma s_{i})(\alpha_{j}+\beta_{j}+\gamma s_{j})-\alpha_{i}(\alpha_{j}+\beta_{j}+\gamma s_{j})-\alpha_{j}(\alpha_{i}+\beta_{i}+\gamma s_{i})+\alpha_{i}\alpha_{j}\\ \leq&\frac{1}{2}Y_{M}^{2}.\end{split}\] Finally, let \(x_{C_{k},i}=x_{i,C_{k}}:=\mathbf{1}\{c(i)=C_{k}\}\). By Eq (1), we have \[\sum_{1\leq i,j\leq n}\left((1-x_{ij})(\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{ji^{\prime}})(\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{ij^{\prime}})\right)\] \[= \sum_{1\leq k\neq l\leq m}\sum_{i\in C_{k}}\sum_{j\in C_{l}}(\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{ji^{\prime}})(\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{ij^{\prime}})\] \[= \sum_{1\leq k\neq l\leq m}\sum_{i\in C_{k}}(\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{C_{l},i^{\prime}})\sum_{j\in C_{l}}(\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{C_{k},j^{\prime}})\] \[= \sum_{1\leq k\neq l\leq m}(\sum_{i\in C_{k}}\sum_{i^{\prime}\in\mathcal{N}_{i}\cap C_{l}}v_{ii^{\prime}})(\sum_{j\in C_{l}}\sum_{j^{\prime}\in\mathcal{N}_{j}\cap C_{k}}v_{jj^{\prime}}). \tag{16}\] Combining all of the above and using the fact that \(\sum_{1\leq i,j\leq n}x_{ij}=\sum_{k=1}^{m}|C_{k}|^{2}\), the proof is complete.

### Proof of Proposition 4

Proof.: Note that \[\operatorname{Var}(\tau)=\rho^{2}\operatorname{\mathbb{E}}[\tau_{c}^{2}]+(\rho-1)^{2}\operatorname{\mathbb{E}}[\tau_{b}^{2}]-2\rho(\rho-1)\operatorname{\mathbb{E}}[\tau_{c}\tau_{b}]-(\bar{\beta}+\frac{\gamma}{n}\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij})^{2}.\] Below, we inherit the notation defined in the proof of Proposition 2.
Analogous to the proof of Proposition 2, we have \[\operatorname{\mathbb{E}}[\tau_{c}^{2}]= \frac{4}{n^{2}}\sum_{1\leq i,j\leq n}x_{ij}E\left(\tilde{w}_{i}t_{i}^{2}Y_{i}(\mathbf{z})(\alpha_{j}+\beta_{j}z_{i}+\gamma r_{j})\right)\] \[+\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}(1-x_{ij})\left(\beta_{i}\beta_{j}+2\beta_{i}\gamma\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{jj^{\prime}}\right)\] \[+\frac{\gamma^{2}}{n^{2}}\sum_{1\leq i,j\leq n}\left((1-x_{ij})\left[(\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{ii^{\prime}})(\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{jj^{\prime}})+(\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{ji^{\prime}})(\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{ij^{\prime}})\right]\right),\] \[\operatorname{\mathbb{E}}[\tau_{b}^{2}]= \frac{4}{n^{2}}\sum_{i=1}^{n}E\left((1-\tilde{w}_{i})t_{i}^{2}Y_{i}(\mathbf{z})(\alpha_{i}+\beta_{i}z_{i}+\gamma r_{i})\right)+\bar{\beta}^{2}-\frac{1}{n^{2}}\sum_{i=1}^{n}\beta_{i}^{2}\] \[+\frac{\gamma^{2}}{n^{2}}\sum_{i=1}^{n}\sum_{\{j\in\mathcal{N}_{i}:i\in\mathcal{N}_{j}\}}v_{ij}v_{ji},\] \[\operatorname{\mathbb{E}}[\tau_{c}\tau_{b}]= \frac{1}{n^{2}}\sum_{1\leq i,j\leq n}(1-x_{ij})\left(\beta_{i}\beta_{j}+\beta_{i}\gamma\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{jj^{\prime}}\right).\] Combining the above equations, we have \[\operatorname{Var}(\tau)\leq \frac{2}{p(1-p)}Y_{M}^{2}(\rho^{2}\eta+\frac{1}{n}(\rho-1)^{2})-(\rho-1)^{2}\frac{1}{n^{2}}\sum_{i=1}^{n}\beta_{i}^{2}\] \[-\frac{1}{n^{2}}\sum_{1\leq i,j\leq n}[(\rho-1)^{2}\beta_{i}\beta_{j}+(\beta_{i}+\rho\gamma s_{i})(\beta_{j}+\rho\gamma s_{j})]x_{ij}\] \[+(\rho-1)^{2}\frac{\gamma^{2}}{n^{2}}\sum_{i=1}^{n}\sum_{\{j\in\mathcal{N}_{i}:i\in\mathcal{N}_{j}\}}v_{ij}v_{ji}+\rho^{2}\gamma^{2}\delta.\] Recall that \(0<Y_{L}\leq Y_{i}(\mathbf{z})\leq Y_{M}\) for all \(i\in V\) and \(\mathbf{z}\in\{0,1\}^{n}\), which implies that \(Y_{L}-Y_{M}\leq\beta_{i}+\gamma s_{i}=Y_{i}(1,\mathbf{z}_{-i})-\alpha_{i}\leq Y_{M}-Y_{L}\) for all \(i\in V\) (here \(\mathbf{z}_{-i}\) denotes the treatments of all units other than \(i\)). So \[\rho^{2}(Y_{L}^{2}+Y_{M}Y_{L}-Y_{M}^{2})\] \[\leq (\rho-1)^{2}\beta_{i}\beta_{j}+(\beta_{i}+\rho\gamma s_{i})(\beta_{j}+\rho\gamma s_{j})\] \[= \rho^{2}(\beta_{i}+\gamma s_{i})(\beta_{j}+\gamma s_{j})-\rho(\rho-1)(\beta_{i}\gamma s_{j}+\beta_{j}\gamma s_{i})\] \[\leq \rho^{2}Y_{M}(2Y_{M}-Y_{L}).\] Next, consider another clustering \(\mathbf{C}^{\prime}\) of the network \(G(V,E)\) where each vertex forms an individual cluster (i.e., \(\mathbf{C}^{\prime}=\{C_{1}^{\prime},C_{2}^{\prime},...,C_{n}^{\prime}\}=\{\{1\},\{2\},...,\{n\}\}\)). By inequality (17) in Lemma 13, \[\sum_{i=1}^{n}\sum_{\{j\in\mathcal{N}_{i}:i\in\mathcal{N}_{j}\}}v_{ij}v_{ji}=\sum_{1\leq k\neq l\leq n}\left(\sum_{i\in C_{k}^{\prime}}\sum_{i^{\prime}\in\mathcal{N}_{i}\cap C_{l}^{\prime}}v_{ii^{\prime}}\right)\left(\sum_{j\in C_{l}^{\prime}}\sum_{j^{\prime}\in\mathcal{N}_{j}\cap C_{k}^{\prime}}v_{jj^{\prime}}\right)\leq n.\] Then the upper bound in Eq (7) follows. The proof for the lower bound in Eq (8) is almost identical and is thus omitted.

### Proof of Theorem 5

Proof.: The proof consists of two steps. First, we show that under the cluster set \(\mathbf{C}\) generated by maximum weight matching, the approximate variance upper bound \(A(\mathbf{C})\) defined in (9) is \(O(d^{2}/(np(1-p)))\).
Since the merging step in Algorithm 1 strictly decreases the value of \(A(\mathbf{C})\), we then show that \(A(\mathbf{C})=O(d^{2}/(np(1-p)))\) implies that the variance of the estimator \(\tau\) is also \(O(d^{2}/(np(1-p)))\), which completes the proof. To bound \(A(\mathbf{C})\) under the cluster set \(\mathbf{C}\) generated by maximum weight matching, we bound \(\rho\), \(\eta\) and \(\delta\), respectively. By Lemma 15 in Appendix B, a maximum weight matching guarantees \(\sum_{i\in V}\sum_{j\in\mathcal{N}_{i}}v_{ij}\mathbf{1}\{c(i)=c(j)\}\geq(\sum_{i\in V}\sum_{j\in\mathcal{N}_{i}}v_{ij})/2d\) in a graph with maximum degree \(d\); thus \[\rho=\frac{\sum_{i\in V}\sum_{j\in\mathcal{N}_{i}}v_{ij}}{\sum_{i\in V}\sum_{j\in\mathcal{N}_{i}}v_{ij}\mathbf{1}\{c(i)=c(j)\}}\leq 2d.\] To bound \(\eta\), because \(|C_{k}|\leq 2\) under the maximum weight matching and \(\sum_{k=1}^{m}|C_{k}|=n\), we have \[\eta=\sum_{k=1}^{m}\frac{|C_{k}|^{2}}{n^{2}}\leq\sum_{k=1}^{m}\frac{2|C_{k}|}{n^{2}}=\frac{2}{n}.\] By Lemma 13, \(\delta\leq 2/n\) since the maximum cluster size is 2. Combining the upper bounds for \(\rho\), \(\eta\) and \(\delta\), \(A(\mathbf{C})\) defined in (9) is \(O(d^{2}/(np(1-p)))\). Next, we show that \(A(\mathbf{C})=O(d^{2}/(np(1-p)))\) implies \(\operatorname{Var}(\tau)=O(d^{2}/(np(1-p)))\). We first prove that \((Y_{M}-Y_{L})^{2}/a^{2}\) is an upper bound for \(\gamma^{2}\). By the assumption that \(Y_{L}\leq Y_{i}(\mathbf{z})\leq Y_{M}\) for all \(i\in[n]\) and \(\mathbf{z}\in\{0,1\}^{n}\), we have \(Y_{L}\leq\alpha_{i}\leq Y_{M}\) and \(Y_{L}\leq\alpha_{i}+\gamma\sum_{j\in\mathcal{N}_{i}}v_{ij}z_{j}\leq Y_{M}\) for all \(i\in[n]\) and \(\mathbf{z}\in\{0,1\}^{n}\). Thus \[Y_{L}-Y_{M}\leq Y_{L}-\alpha_{i}\leq\gamma\sum_{j\in\mathcal{N}_{i}}v_{ij}z_{j}\leq Y_{M}-\alpha_{i}\leq Y_{M}-Y_{L},\quad\forall i\in[n],\mathbf{z}\in\{0,1\}^{n},\] which implies \[\gamma^{2}\leq\frac{(Y_{M}-Y_{L})^{2}}{(\sum_{j\in\mathcal{N}_{i}}v_{ij}z_{j})^{2}},\quad\forall i\in[n],\mathbf{z}\in\{0,1\}^{n},\] and therefore, \[\gamma^{2}\leq\frac{(Y_{M}-Y_{L})^{2}}{(\max_{i\in[n]}\sum_{j\in\mathcal{N}_{i}}\max\{v_{ij},0\})^{2}}=\frac{(Y_{M}-Y_{L})^{2}}{a^{2}}.\] Then by the variance upper bound in Proposition 4, there exists a positive constant \(C_{0}\) such that \[\operatorname{Var}(\tau)\leq \left((\frac{2}{p(1-p)}+1)Y_{M}^{2}-Y_{M}Y_{L}-Y_{L}^{2}\right)\rho^{2}\eta+\gamma^{2}\rho^{2}\delta+C_{0}\frac{\rho^{2}}{np(1-p)}\] \[\leq \left((\frac{2+C_{0}}{p(1-p)}+1)Y_{M}^{2}-Y_{M}Y_{L}-Y_{L}^{2}+C_{0}\right)\rho^{2}\eta+\frac{(Y_{M}-Y_{L})^{2}}{a^{2}}\rho^{2}|\delta|\] \[\leq C_{1}A(\mathbf{C})=O\left(\frac{d^{2}}{np(1-p)}\right),\] where the second inequality follows from \(\eta=\sum_{k=1}^{m}|C_{k}|^{2}/n^{2}\geq\sum_{k=1}^{m}|C_{k}|/n^{2}=1/n\) and \(\gamma^{2}\leq(Y_{M}-Y_{L})^{2}/a^{2}\), and \(C_{1}\) in the third inequality is a constant.

### Proof of Theorem 6

Proof.: Below, we use the upper bound Eq (7) in Proposition 4 and find bounds for \(\rho\), \(\eta\), and \(\delta\), respectively. Recall that by the definition of Algorithm 2, the size of any cluster is no more than \(\kappa(d+1)\). Let \(v_{0}\) denote the center vertex of a 2-hop cluster in the first step of the algorithm, and \(V_{0}\) the set of all such vertices. Suppose there exists \(u\in V\) such that \(u\notin B_{4}(v_{0})\) for any \(v_{0}\in V_{0}\); then \(B_{2}(u)\) does not intersect \(B_{2}(v_{0})\) for any \(v_{0}\in V_{0}\).
Otherwise, there would exist some \(v_{0}^{\prime}\in V_{0}\) and a path from \(v_{0}^{\prime}\) to \(u\) with at most 4 hops, contradicting \(u\notin B_{4}(v_{0}^{\prime})\). But if \(B_{2}(u)\) were disjoint from all \(B_{2}(v_{0})\), the first step of the algorithm would have selected \(u\) as an additional center, again a contradiction. Thus for any \(u\in V\), \(u\in B_{4}(v)\) for some \(v\) chosen in the first step of the algorithm. Suppose \(|V_{0}|=m\) and \(V_{0}\) consists of vertices \(\{v_{1},v_{2},...,v_{m}\}\) generated from the first step of the algorithm. Since \(V\subset\cup_{k=1}^{m}B_{4}(v_{k})\), we have \[n=|V|\leq\sum_{k=1}^{m}|B_{4}(v_{k})|\leq\kappa^{3}\sum_{k=1}^{m}|B_{1}(v_{k})|.\] Thus, we have \[\rho\leq\frac{n}{\sum_{i=1}^{n}\sum_{j\in\mathcal{N}_{i}}v_{ij}\mathbf{1}\{c(i)=c(j)\}}\leq\frac{n}{\sum_{k=1}^{m}\sum_{u\in B_{1}(v_{k})}\epsilon}=\frac{n}{\epsilon\sum_{k=1}^{m}|B_{1}(v_{k})|}\leq\frac{\kappa^{3}}{\epsilon},\] where the first inequality follows from the definition of \(\rho\) in Eq (6), the second inequality holds since we assume the weights satisfy \(v_{ij}\geq 0\) and \(\sum_{j\in\mathcal{N}_{i}}v_{ij}\geq\epsilon\), and the final inequality follows from the previous equation. This gives an upper bound for \(\rho\). To bound \(\eta\), we have \(\eta=\sum_{k=1}^{m}|C_{k}|^{2}/n^{2}\leq\sum_{k=1}^{m}\kappa(d+1)|C_{k}|/n^{2}=\kappa(d+1)/n\) since \(\max_{k\in[m]}|C_{k}|\leq\kappa(d+1)\). To bound \(\delta\), we apply Lemma 13 and obtain \(\delta\leq\kappa(d+1)/n\). Combining the upper bounds for \(\rho\), \(\eta\) and \(\delta\) and applying Proposition 4, the variance of the estimator \(\tau\) is \(O(\kappa^{7}d/(np(1-p)))\).

### Proof of Theorem 7

Proof.: In a \((d,\kappa)\)-cycle network, we call an edge \((i,j)\in E\) a Type-1 edge if \(j\in\{(i\pm 1)\mod n,(i\pm 2)\mod n,...,(i\pm(\kappa-1))\mod n\}\), and a Type-2 edge if \(j\in\{(i\pm\kappa)\mod n,(i\pm 2\kappa)\mod n,...,(i\pm d\kappa)\mod n\}\). We say a cluster \(\{i_{1},i_{2},...,i_{l}\}\) is continuous if and only if it includes at least \(l-1\) edges from the set \(\{(j,(j+1)\mod n):1\leq j\leq n\}\). Let \(E^{1}\) and \(E^{2}\) be the sets of Type-1 and Type-2 edges, respectively. Consider a cluster \(C\) of size \(t\). Let \(n_{k}^{C}\) denote the number of Type-\(k\) edges (\(k=1,2\)) in cluster \(C\), i.e., \(n_{k}^{C}=|\{(i,j)\in E:i\in C,j\in C;\ (i,j)\in E^{k}\}|\), for \(k\in\{1,2\}\). First, we bound \(n_{1}^{C}\) as follows:

* (1-a) When \(t\leq\kappa\), since the maximum number of edges in \(C\) is \(t(t-1)\) (when \(C\) is a clique), we have \(n_{1}^{C}\leq t(t-1)\).
* (1-b) When \(\kappa+1\leq t\leq 2\kappa-1\), \[n_{1}^{C}\leq(\kappa-1)+\kappa+...+\underbrace{(t-1)+...+(t-1)}_{2\kappa-t}+(t-2)+...+(\kappa-1)=(\kappa-1)(2t-\kappa).\]
* (1-c) When \(t\geq 2\kappa\), \[n_{1}^{C}\leq(\kappa-1)+\kappa+...+\underbrace{2(\kappa-1)+...+2(\kappa-1)}_{t-2\kappa+2}+2\kappa-3+...+(\kappa-1)=(\kappa-1)(2t-\kappa).\]

Similarly, for \(n_{2}^{C}\), we have the following bounds:

* (2-a) When \(t\leq(d+1)\kappa\), we have \(n_{2}^{C}\leq t\lfloor\frac{t-1}{\kappa}\rfloor\).
* (2-b) When \((d+1)\kappa+1\leq t\leq 2d\kappa+1\), \[n_{2}^{C}\leq d\kappa+(d+1)\kappa+...+\lfloor\frac{t-1}{\kappa}\rfloor\left(t-2\kappa(\lfloor\frac{t-1}{\kappa}\rfloor-d)\right)+(\lfloor\frac{t-1}{\kappa}\rfloor-1)\kappa+...+d\kappa\] \[= t\lfloor\frac{t-1}{\kappa}\rfloor-(\lfloor\frac{t-1}{\kappa}\rfloor-d)(\lfloor\frac{t-1}{\kappa}\rfloor-d+1)\kappa.\]
* (2-c) When \(t\geq 2d\kappa+2\), \[n_{2}^{C}\leq d\kappa+(d+1)\kappa+...+2d\left(t-2d\kappa\right)+(2d-1)\kappa+...+d\kappa=d(2t-\kappa(d+1)).\]

Note that when \((d+1)\kappa+1\leq t\leq 2d\kappa+1\) (i.e., Case (2-b)), \[t\lfloor\frac{t-1}{\kappa}\rfloor-(\lfloor\frac{t-1}{\kappa}\rfloor-d)(\lfloor\frac{t-1}{\kappa}\rfloor-d+1)\kappa\] \[\leq \kappa(\lfloor\frac{t-1}{\kappa}\rfloor+1)\lfloor\frac{t-1}{\kappa}\rfloor-(\lfloor\frac{t-1}{\kappa}\rfloor-d)(\lfloor\frac{t-1}{\kappa}\rfloor-d+1)\kappa\] \[= \kappa d(2\lfloor\frac{t-1}{\kappa}\rfloor-d+1)\] \[\leq d(2t-\kappa(d+1)),\] so the bound in (2-c) also applies to (2-b). To summarize, \[n_{1}^{C}+n_{2}^{C}\leq f(t) := \left\{\begin{array}{cl}t(t-1),&\text{if }1\leq t\leq\kappa\\ (\kappa-1)(2t-\kappa)+t\lfloor\frac{t-1}{\kappa}\rfloor,&\text{if }\kappa+1\leq t\leq\kappa(d+1)\\ d(2t-\kappa(d+1))+(\kappa-1)(2t-\kappa),&\text{if }t\geq\kappa(d+1)+1.\end{array}\right.\] It is easy to check that \(f(t)\) is nonnegative and increasing for \(t\geq 1\). Suppose we have \(m\) clusters, and \(t_{j}\) is the size of the \(j\)th cluster. Then by \(n_{1}^{C}+n_{2}^{C}\leq f(t)\) and the Cauchy-Schwarz inequality, \[\rho^{2}\eta\geq\frac{\sum_{j=1}^{m}t_{j}^{2}}{n^{2}}\left(\frac{2n(d+\kappa-1)}{\sum_{j=1}^{m}f(t_{j})}\right)^{2}\geq 4(d+\kappa-1)^{2}\left(\sum_{j=1}^{m}\frac{f(t_{j})^{2}}{t_{j}^{2}}\right)^{-1}.\] Furthermore, we relax \(t_{j}\) to continuous variables. Note that when \(1\leq t\leq\kappa\), \(f(t)/t^{1.5}\leq t^{0.5}\leq\sqrt{\kappa}\). When \(\kappa+1\leq t\leq\kappa(d+1)\), we have \[f(t)/t^{1.5}\leq 2\kappa/\sqrt{t}+\sqrt{t}/\kappa\leq 2\kappa/\sqrt{\kappa+1}+\sqrt{(d+1)/\kappa}.\] When \(\kappa(d+1)+1\leq t\leq n\), \(f(t)/t^{1.5}\leq 2(d+\kappa-1)/\sqrt{t}<2(d+\kappa-1)/\sqrt{\kappa(d+1)}\). Thus, \(f(t)^{2}/t^{3}=O(\max\{\kappa,d/\kappa\})\). Then by Lemma 14, \(\max\sum_{j=1}^{m}f(t_{j})^{2}/t_{j}^{2}=O(\max\{n\kappa,nd/\kappa\})\), and the proof is complete.

### Proof of Theorem 12

Proof.: Let \(X_{i}=\frac{1}{\sqrt{nd}}(L_{i}-\mathbb{E}[L_{i}])\); then \(\mathbb{E}[X_{i}]=0\) and \(\mathbb{E}[X_{i}^{4}]\leq\left(\frac{2}{\sqrt{np}(1-p)}Y_{M}\right)^{4}<\infty\). Recall that \(B_{r}(v)\) is defined in Definition 3 as the set of vertices within \(r\) hops of a vertex \(v\). For two units \(i\) and \(j\), if no cluster intersects both \(B_{2}(i)\) and \(B_{2}(j)\), then \(X_{i}\) and \(X_{j}\) are independent. Using the notation \(D\) in Lemma 11 and under condition (1), a unit is connected to at most \(d\) clusters and each cluster is connected to at most \(2d+1\) nodes; thus \(D\leq d(2d+1)\). Under condition (2), for unit \(i\) and any unit \(j\notin B_{3}(i)\), \(X_{i}\) and \(X_{j}\) are independent; thus \(D\leq|B_{3}(i)|<d^{3}\). Applying Lemma 11, the result follows.

## Appendix B Auxiliary Results

**Lemma 13**.: _We have the following bound for \(\delta\) (defined in Proposition 4):_ \[\delta\leq\frac{\max_{k\in[m]}|C_{k}|}{n}.\] Proof.: First, we prove the inequality below: \[\sum_{1\leq k\neq l\leq m}\left(\sum_{i\in C_{k}}\sum_{i^{\prime}\in\mathcal{N}_{i}\cap C_{l}}v_{ii^{\prime}}\right)\left(\sum_{j\in C_{l}}\sum_{j^{\prime}\in\mathcal{N}_{j}\cap C_{k}}v_{jj^{\prime}}\right)\leq n\max_{k\in[m]}|C_{k}|.
\tag{17}\] By Eq (16) in the proof of Proposition 2 and Eq (1), we have \[\sum_{k\neq l}(\sum_{i\in C_{k}}\sum_{i^{\prime}\in\mathcal{N}_{i}\cap C_{l}}v_{ii^{\prime}})(\sum_{j\in C_{l}}\sum_{j^{\prime}\in\mathcal{N}_{j}\cap C_{k}}v_{jj^{\prime}})\] \[= \sum_{i,j}\left((1-x_{ij})(\sum_{i^{\prime}\in\mathcal{N}_{i}}v_{ii^{\prime}}x_{ji^{\prime}})(\sum_{j^{\prime}\in\mathcal{N}_{j}}v_{jj^{\prime}}x_{ij^{\prime}})\right)\] \[\leq \sum_{i,j}(\sum_{i^{\prime}\in\mathcal{N}_{i}}|v_{ii^{\prime}}|x_{ji^{\prime}})(\sum_{j^{\prime}\in\mathcal{N}_{j}}|v_{jj^{\prime}}|x_{ij^{\prime}})\quad\text{(by taking absolute values)}\] \[\leq \sum_{i,j}(\sum_{i^{\prime}\in\mathcal{N}_{i}}|v_{ii^{\prime}}|x_{ji^{\prime}})\quad\text{(because }\sum_{j^{\prime}\in\mathcal{N}_{j}}|v_{jj^{\prime}}|\leq 1)\] \[= \sum_{i=1}^{n}\sum_{i^{\prime}\in\mathcal{N}_{i}}|v_{ii^{\prime}}||C_{c(i^{\prime})}|\] \[\leq \sum_{i=1}^{n}(\max_{k\in[m]}|C_{k}|)\sum_{i^{\prime}\in\mathcal{N}_{i}}|v_{ii^{\prime}}|\] \[\leq n\max_{k\in[m]}|C_{k}|\quad\text{(because }\sum_{i^{\prime}\in\mathcal{N}_{i}}|v_{ii^{\prime}}|\leq 1).\] Using Eq (17) and the definition of \(\delta\), we have \[\delta=\frac{1}{n^{2}}\sum_{1\leq k\neq l\leq m}\left(\sum_{i\in C_{k}}\sum_{i^{\prime}\in\mathcal{N}_{i}\cap C_{l}}v_{ii^{\prime}}\right)\left(\sum_{j\in C_{l}}\sum_{j^{\prime}\in\mathcal{N}_{j}\cap C_{k}}v_{jj^{\prime}}\right)\leq\frac{\max_{k\in[m]}|C_{k}|}{n}.\]

**Lemma 14**.: _For any function \(f(x):[1,n]\rightarrow\mathbb{R}\), if \(a:=\max_{x\in[1,n]}f(x)/x\) exists, then the optimal value of the following optimization problem is bounded by_ \[\max\left\{\sum_{j=1}^{m}f(t_{j}),\text{subject to }m\in\mathbb{Z}^{+},\mathbf{t}\in[1,n]^{m},\sum_{j=1}^{m}t_{j}=n\right\}\leq na.\] Proof.: The result follows from the following inequality: \[\sum_{j=1}^{m}f(t_{j})=\sum_{j=1}^{m}\frac{f(t_{j})}{t_{j}}t_{j}\leq a\sum_{j=1}^{m}t_{j}=na.\]

**Lemma 15**.: _For a graph \(G(V,E)\) with edge weights \(\{v_{ij}\}_{(i,j)\in E}\) and maximum degree \(d\), the sum of weights in a maximum weight matching is at least \((\sum_{i\in V}\sum_{j\in\mathcal{N}_{i}}v_{ij})/2d\)._

Proof.: Let \(G^{\prime}(V^{\prime})\) be the subgraph of \(G(V,E)\) induced by a subset of vertices \(V^{\prime}\subset V\), and \(G^{\prime}(V^{\prime},E^{\prime})\) the subgraph induced by \(V^{\prime}\subset V\) and \(E^{\prime}\subset E\). For brevity, we let \(V(E)\) denote the subset of \(V\) whose vertices are incident to at least one edge in \(E\). We provide Algorithm 4 based on maximum matching. The function edge_set_to_clustering(\(E\)) turns an edge set \(E\) into a clustering design: if \((u,v)\in E\), then \(u\) and \(v\) are in the same cluster; if \(u\) is not covered by \(E\), then \(u\) itself forms a cluster. The idea of this algorithm is to decompose the graph \(G(V,E)\) into a set of clusterings \(\mathbb{C}=\{\mathbf{C}_{1},\mathbf{C}_{2},...\}\), such that each edge is covered by exactly one cluster \(C_{j}^{k}\) from a specific clustering \(\mathbf{C}_{k}\). The set \(A\) represents the uncovered edges. In line 4, we find the set \(U\) of vertices with the largest degree. In line 5, we find a maximum matching \(M_{1}\) in the subgraph \(G^{\prime}(U)\); by maximality, an unmatched vertex in \(U\) cannot be adjacent to another unmatched vertex. In lines 6 to 9, we find an edge set \(M_{2}\) such that each vertex in \(V(M_{1}\cup M_{2})\cap U\) has exactly one edge from \(M_{1}\cup M_{2}\) incident to it.
Also, the edges covered by the clustering edge_set_to_clustering(\(M_{1}\cup M_{2}\)) are exactly \(M_{1}\cup M_{2}\). After constructing a clustering from \(M_{1}\) and \(M_{2}\) in line 10, we find the set \(U^{\prime}\) of vertices in \(U\) that are not incident to \(M_{1}\cup M_{2}\). Since the graph \(G^{\prime}(U,\{(u^{\prime},v^{\prime})\in A|u^{\prime}\in U^{\prime},v^{\prime}\in U\})\) is bipartite and every \(u^{\prime}\in U^{\prime}\) has the same degree, the maximum matching \(M_{3}\) found in line 14 is always a perfect matching by Hall's marriage theorem. Thus, removing \(M_{1}\), \(M_{2}\) and \(M_{3}\) from \(A\) strictly decreases \(\max_{v\in V}\texttt{degree}(v,A)\). Since the maximum degree of the graph \(G(V,E)\) is \(d\), the algorithm terminates after at most \(d\) while loops. Also, once an edge is covered by a clustering, we directly remove it from the uncovered edge set \(A\), so each edge is covered exactly once. In summary, Algorithm 4 decomposes a graph with maximum degree \(d\) into at most \(2d\) graph matchings such that each edge in \(E\) is covered exactly once. This implies that a maximum weight matching under the weights \(\{v_{i,j}\}_{i,j\in[n],i\neq j}\) has weight at least \((\sum_{i\in V}\sum_{j\in\mathcal{N}_{i}}v_{ij})/2d\).

```
Input: A graph \(G(V,E)\)
1 Initialize the clustering set \(\mathbb{C}\leftarrow\{\}\)
2 Initialize set \(A\gets E\)
3 while \(|A|>0\) do
4   \(U\leftarrow\{u\in V|\texttt{degree}(u,A)=\max_{v\in V}\texttt{degree}(v,A)\}\)
5   Find a maximum matching \(M_{1}=\{(v_{1},u_{1}),(v_{2},u_{2}),...\}\) from the subgraph \(G^{\prime}(U)\)
6   \(M_{2}\leftarrow\{\}\)
7   for \(u\in U\) do
8     if \(u\in U/V(M_{1})\) and \(\mathcal{N}_{u}\not\subset U\) then
9       Choose one \(u^{\prime}\in\mathcal{N}_{u}/U\) and let \(M_{2}\gets M_{2}\cup\{(u,u^{\prime})\}\)
10  \(\mathbb{C}\leftarrow\mathbb{C}\cup\{\texttt{edge\_set\_to\_clustering}(M_{1}\cup M_{2})\}\)
11  \(A\gets A/(M_{1}\cup M_{2})\)
12  \(U^{\prime}=U/V(M_{1}\cup M_{2})\)
13  if \(|U^{\prime}|>0\) then
14    Find a maximum matching \(M_{3}\) from the subgraph \(G^{\prime}(U,\{(u^{\prime},v^{\prime})\in A|u^{\prime}\in U^{\prime},v^{\prime}\in U\})\)
15    \(\mathbb{C}\leftarrow\mathbb{C}\cup\{\texttt{edge\_set\_to\_clustering}(M_{3})\}\)
16    \(A\gets A/M_{3}\)
```
**Algorithm 4** Graph decomposition into matchings
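As a quick numerical sanity check of Lemma 15, the sketch below computes a maximum weight matching with networkx on an arbitrary random regular graph and compares it with the \((\sum_{i}\sum_{j\in\mathcal{N}_{i}}v_{ij})/2d\) bound. This is an illustration under our own conventions, not the authors' code; in particular, we adopt the interpretation that a matched edge contributes \(v_{ij}+v_{ji}\), consistent with the ordered double sum in the lemma.

```python
# Empirical check of Lemma 15 on a random regular graph (illustrative only).
import random
import networkx as nx

random.seed(0)
G = nx.random_regular_graph(d=4, n=30, seed=0)
for u, v in G.edges:
    G[u][v]["weight"] = random.random()   # symmetric weights v_ij = v_ji

d = max(deg for _, deg in G.degree)
total = 2 * sum(w for _, _, w in G.edges(data="weight"))   # sum_i sum_{j in N_i} v_ij

matching = nx.max_weight_matching(G, weight="weight")
matched = 2 * sum(G[u][v]["weight"] for u, v in matching)  # count both directions

print(f"matching weight {matched:.3f} >= bound {total / (2 * d):.3f}")
assert matched >= total / (2 * d) - 1e-9
```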
2309.06256
Mitigating the Alignment Tax of RLHF
LLMs acquire a wide range of abilities during pre-training, but aligning LLMs under Reinforcement Learning with Human Feedback (RLHF) can lead to forgetting, which is also known as the alignment tax. To empirically verify this hypothesis, we conducted experiments with existing RLHF algorithms using OpenLLaMA-3B, which revealed a pronounced alignment tax in NLP tasks. On the other hand, despite various techniques to mitigate forgetting, they are often at odds with the RLHF performance, leading to a trade-off between reward maximization and forgetting mitigation. In light of the above pressing issue in aligning LLMs, in this paper we explore model averaging, which interpolates between pre and post RLHF model weights, to achieve a more efficient reward-tax Pareto front. To understand its effectiveness, We offer theoretical insights into model averaging, revealing that it enhances performance Pareto front by increasing feature diversity on the layers where tasks share overlapped feature spaces. Empirical evidence corroborates our analysis by showing the benefits of averaging low-level transformer layers. Building on the analysis and the observation that averaging different layers of the transformer leads to significantly different reward-tax trade-offs, we propose Adaptive Model Averaging (AMA) to adaptively find various combination ratios of model layers. AMA seeks to maximize the alignment reward while incurring minimal alignment tax. Moreover, we validate AMA's performance across a range of RLHF algorithms over OpenLLaMA-3B and further extend our findings to Mistral-7B.
Yong Lin, Hangyu Lin, Wei Xiong, Shizhe Diao, Jianmeng Liu, Jipeng Zhang, Rui Pan, Haoxiang Wang, Wenbin Hu, Hanning Zhang, Hanze Dong, Renjie Pi, Han Zhao, Nan Jiang, Heng Ji, Yuan Yao, Tong Zhang
2023-09-12T14:16:54Z
http://arxiv.org/abs/2309.06256v3
Speciality vs Generality: An Empirical Study on Catastrophic Forgetting in Fine-tuning Foundation Models

###### Abstract

Foundation models, including Vision Language Models (VLMs) and Large Language Models (LLMs), possess the \(generality\) to handle diverse distributions and tasks, which stems from their extensive pre-training datasets. Fine-tuning a foundation model is a common practice to enhance task performance or align the model's behavior with human expectations, allowing it to gain \(speciality\). However, the small datasets used for fine-tuning may not adequately cover the diverse distributions and tasks encountered during pre-training. Consequently, the pursuit of speciality during fine-tuning can lead to a loss of generality in the model, which is related to catastrophic forgetting (CF) in deep learning. In this study, we demonstrate this phenomenon in both VLMs and LLMs. For instance, fine-tuning VLMs like CLIP on ImageNet results in a loss of generality in handling diverse distributions, and fine-tuning LLMs like Galactica in the medical domain leads to a loss in instruction following and common sense. To address the trade-off between speciality and generality, we investigate multiple regularization methods from continual learning, the weight averaging method (Wise-FT) from out-of-distribution (OOD) generalization, which interpolates parameters between the pre-trained and fine-tuned models, and parameter-efficient fine-tuning methods like Low-Rank Adaptation (LoRA). Our findings show that both the continual learning and Wise-FT methods effectively mitigate the loss of generality, with Wise-FT exhibiting the strongest performance in balancing speciality and generality.

## 1 Introduction

Foundation models, such as CLIP [59] for vision-language models (VLMs) and GPT-3 [7] for large language models (LLMs), have garnered widespread attention due to their remarkable achievements. These models are pre-trained on vast datasets, which endows them with an impressive level of **generality** [6]. They exhibit the ability to effectively handle diverse distributions and tasks, as illustrated by CLIP's exceptional performance on ImageNet and its variants with distributional shifts. Similarly, GPT-3 showcases its prowess in various tasks such as translation, common-sense question answering, and cloze tasks. The generality of foundation models can be categorized into two aspects. Firstly, _task generality_ highlights the ability of foundation models to handle diverse tasks. For example, an LLM is proficient in instruction following as well as question answering (QA) on common sense. Secondly, _distribution generality_ emphasizes the capability of foundation models to accommodate different data distributions within a given task. For instance, VLMs like CLIP demonstrate their proficiency in classifying both ImageNet [18], containing natural photos, and ImageNet-Sketch [77], containing sketches. Another illustration of distribution generality is an LLM's competence in performing medical question-answering tasks on distinct datasets such as MedQA-USMLE [33], containing medical subject questions, and MedMCQA [52], containing real-world medical consultation questions. It is a common practice to fine-tune foundation models on specific tasks to enhance task performance or align the model's behavior with human expectations. During the fine-tuning stage, the foundation models gain **speciality** to achieve exceptional performance on the fine-tuning task.
However, since the small fine-tuning dataset does not have sufficient coverage of the distributions as well as the tasks, the fine-tuned model can potentially lose its generality. This phenomenon is closely associated with the concept of catastrophic forgetting (CF) observed in deep neural networks (DNNs). Previous studies have revealed that when learning new tasks, DNNs have the potential to forget their proficiency in previously learned tasks. In this work, we aim to answer the following questions:

* _Does the foundation model forget generality when being fine-tuned to gain speciality for a specific task?_
* _If so, what methods can mitigate the speciality-generality trade-off?_

To address the aforementioned questions comprehensively, we perform experiments utilizing CLIP for VLMs and Galactica [72] for LLMs. For CLIP, we investigate distribution generality forgetting by conducting two experiments. Firstly, we fine-tune CLIP on the ImageNet dataset and evaluate its distribution generality on the ImageNet variants. Secondly, we fine-tune CLIP on the 'real' domain of DomainNet and assess its distribution generality on the other domains within DomainNet. In the case of Galactica, we fine-tune it on a specific dataset within the medical question-answering (QA) task. Subsequently, we measure its distribution generality across other medical QA datasets and evaluate its task generality on common-sense QA as well as instruction-following tasks. See Section 3 for more details on the experimental settings. In Section 4, our findings provide a positive response to the first question, showing a trade-off between speciality and generality during fine-tuning. More specifically, the fine-tuned models exhibit notable speciality by achieving exceptional performance on the fine-tuning dataset. However, they demonstrate inferior performance compared to the pre-trained model in terms of generality, including both distribution and task generality. For instance, the performance of CLIP on the ImageNet variants and of Galactica on the instruction-following task experiences a significant decline. To address the second question, we conduct a systematic investigation of various methods developed across different communities. Let's denote the model as \(f_{\theta}\) with parameter \(\theta\), and use \(\theta_{0}\) to represent the pre-trained parameter. We explore the following methods:

* Continual learning methods: These methods involve regularizing the fine-tuned parameter \(\theta\) towards \(\theta_{0}\). We consider adding the L1 penalty \(\|\theta-\theta_{0}\|_{1}\) [53] and the L2 penalty \(\|\theta-\theta_{0}\|_{2}^{2}\) [82]. We also examine the knowledge distillation (KD) method, which enforces the output of \(f_{\theta}\) to remain close to that of \(f_{\theta_{0}}\) through the penalty \(\|f_{\theta}(\mathbf{x})-f_{\theta_{0}}(\mathbf{x})\|_{2}^{2}\) [42], where \(\mathbf{x}\) represents the input.
* Out-of-distribution (OOD) generalization methods: We consider approaches such as Wise-ft [80], which uses \(f_{(1-\alpha)\theta_{0}+\alpha\theta}\), interpolating between \(\theta_{0}\) and \(\theta\), where \(\alpha\) is between 0 and 1.
* Parameter-efficient fine-tuning methods: We investigate techniques like LoRA [28], which uses low-rank matrices to re-parameterize the update \(\theta-\theta_{0}\).

Our further results in Section 4 provide an affirmative answer to the second question by showing that the L1/L2/KD penalties as well as Wise-ft can effectively mitigate catastrophic forgetting and preserve generality during fine-tuning.
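Concretely, the penalties listed above amount to a few extra terms in the training loss. The following is a schematic PyTorch sketch under our own naming (`penalized_loss`, the toy linear model, and the loss weights are all placeholders), not the implementation used in the experiments.

```python
# Schematic sketch of the L1/L2-toward-theta_0 and KD penalties; all names
# and weights here are illustrative assumptions.
import copy
import torch

def penalized_loss(model, pretrained, task_loss, x, l1=0.0, l2=0.0, kd=0.0):
    loss = task_loss
    # L1 / L2 penalties pull the fine-tuned parameters theta toward theta_0.
    for p, p0 in zip(model.parameters(), pretrained.parameters()):
        if l1 > 0:
            loss = loss + l1 * (p - p0).abs().sum()
        if l2 > 0:
            loss = loss + l2 * (p - p0).pow(2).sum()
    # Knowledge distillation keeps f_theta(x) close to f_theta0(x).
    if kd > 0:
        with torch.no_grad():
            teacher_out = pretrained(x)
        loss = loss + kd * (model(x) - teacher_out).pow(2).mean()
    return loss

# Tiny usage example with a toy model standing in for the foundation model.
model = torch.nn.Linear(8, 2)
pretrained = copy.deepcopy(model)          # stands in for f_{theta_0}
for p in pretrained.parameters():
    p.requires_grad_(False)
x = torch.randn(4, 8)
task_loss = model(x).pow(2).mean()         # placeholder task objective
loss = penalized_loss(model, pretrained, task_loss, x, l1=1e-5, l2=1e-4, kd=0.1)
loss.backward()
```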
Among these methods, Wise-ft achieves the best speciality-generality trade-off. We summarize our main findings as follows:

* In our systematic experiments on both VLMs and LLMs, we have observed clear instances where the foundation models tend to forget their generality during the fine-tuning process to gain speciality for a specific task. Notably, the forgetting of LLMs is more severe on tasks that are significantly different from the fine-tuning task.
* Continual learning methods such as the L1, L2, and KD penalties can effectively mitigate generality forgetting compared to vanilla fine-tuning, while still achieving reasonable performance on the fine-tuned task.
* The model averaging method, Wise-ft, demonstrates the strongest performance in balancing pre-trained generality and fine-tuned speciality across various scenarios.
* LoRA excels in mitigating forgetting and even surpasses Wise-ft when it can effectively solve the fine-tuning task, but it performs poorly compared to other methods when the task is challenging for LoRA.

## 2 Related Works

Foundation Models. Foundation models, including Vision-and-Language Models (VLMs) and Large Language Models (LLMs), are pre-trained using vast amounts of data. While the underlying technology for pre-training these models, such as deep neural networks trained through self-supervised methods on extensive datasets, is not novel, their remarkable capability to generalize and adapt to diverse downstream tasks is unprecedented [6]. An excellent line of VLMs includes CLIP [59], ALIGN [31], BASIC [56] and BLIP [40]. The LLMs include many excellent works, to name a few, GPT [7], LLaMA [75], Galactica [72], and Bloom [67]. It is a common practice to fine-tune the foundation model to obtain better performance on a specific task [20], to follow human instructions [51, 66, 79], and to align with human preferences [3, 51, 21].

Pretraining, fine-tuning, and distributional shift. Before the emergence of foundation models, the pre-training and fine-tuning paradigm had already achieved remarkable accomplishments across numerous applications [25, 59, 19]. However, when deploying pre-trained models into real-world applications and fine-tuning them, a common challenge arises: encountering novel samples from a target distribution that differs from the fine-tuning distribution [2, 23, 85, 43, 89, 90, 44, 70]. To address this issue, several approaches have been proposed. For instance, [80, 12, 15] suggest leveraging the weight ensemble of the pre-trained model and the fine-tuned model to enhance out-of-distribution (OOD) performance. Another strategy, proposed in [38], is the LP-FT technique, which first trains a reasonably good classifier on top of the pre-trained feature extractor (linear probing) before fine-tuning the whole model. This initialization is particularly important because, when the classifier is randomly initialized, the pre-trained features can easily be distorted to accommodate the random classifier during fine-tuning, exacerbating the issue of catastrophic forgetting.

Catastrophic forgetting and continual learning. DNNs tend to lose knowledge of previously learned tasks (e.g., the pretraining task) when they begin to learn a new task (e.g., the fine-tuning task) [50]. Various attempts have been made to alleviate catastrophic forgetting. [82, 64, 1, 68] impose a penalty on the change of the parameters when learning the new task.
[37] gains intuition from a Taylor expansion of the old task's loss at the fine-tuned parameters, and further proposes EWC by incorporating the Hessian matrix into the parameter regularization. Replay-based methods try to approximate and recover the old data distribution. Popular methods in this direction include sampling methods, which store a few old training samples in a small memory buffer [76, 63, 13, 11, 9], and generative methods, which generate samples from the old distributions with a generative model [10]. Knowledge distillation (KD) methods try to keep the predictions of the fine-tuned model close to those of the old model. KD can be naturally combined with experience replay. For example, [61] proposes to perform KD on the samples of new tasks as well as the old samples stored in the buffer. Notably, previous continual learning work focuses on sequential task learning, which learns a sequence of tasks in order and measures the forgetting of older tasks when learning new ones [78]. In contrast, we focus on the generality forgetting of the pre-trained foundation model when fine-tuning on a specific task. Refer to Section A.2 for a detailed discussion.

## 3 Experimental Settings and Methods

### Settings

Consider that the foundation model has been pre-trained on massive data, containing \(M\) tasks \(\{\mathcal{T}^{1},\mathcal{T}^{2},...,\mathcal{T}^{M}\}\). Denote the pre-trained foundation model as \(f_{\theta_{0}}\), where \(\theta_{0}\) is the model parameter. Each task \(\mathcal{T}^{i}\) consists of instances \(\mathbf{z}\). For LLMs, \(\mathbf{z}=\mathbf{x}\) is the auto-regressive sequence of language tokens; and for VLMs, \(\mathbf{z}=(\mathbf{x},\mathbf{y})\) contains a pair of an image input \(\mathbf{x}\) and a language label \(\mathbf{y}\). See Appendix A.1 for more discussion on the definition of tasks. Since the pre-training dataset covers a variety of distributions for each task, we consider that a task \(\mathcal{T}^{i}\) contains samples from \(N^{i}\) domains, i.e., \(\mathcal{T}^{i}=\{\mathcal{D}^{i}_{j}\}_{j=1}^{N^{i}}\), where \(\mathcal{D}^{i}_{j}\) represents the samples \(\mathbf{z}\) from the distribution \(\mathbb{P}^{i}_{j}(\mathbf{z})\), and the distributions of different domains in the same task differ from each other, i.e., \(\mathbb{P}^{i}_{j}(\mathbf{z})\neq\mathbb{P}^{i}_{k}(\mathbf{z})\) for \(j\neq k\). We consider the following two types of catastrophic forgetting (CF) when fine-tuning foundation models:

* Distribution generality forgetting. When the foundation model is fine-tuned on \(\mathcal{D}^{i}_{j}\), i.e., the \(j\)th domain of task \(\mathcal{T}^{i}\), it may forget the remaining domains of task \(\mathcal{T}^{i}\), i.e., \(\{\mathcal{D}^{i}_{k}\}_{k\neq j}\). We are interested in the performance of the fine-tuned foundation model on \(\{\mathcal{D}^{i}_{k}\}_{k\neq j}\).
* Task generality forgetting. When the foundation model is fine-tuned on the domain \(\mathcal{D}^{i}_{j}\) of task \(\mathcal{T}^{i}\), we are interested in the performance on the other tasks, i.e., \(\{\mathcal{T}^{k}\}_{k\neq i}\).

#### 3.1.1 Vision Language Models

For VLMs, we investigate CLIP, a well-known vision-language model. CLIP can perform zero-shot classification on a wide range of datasets, showing a strong ability to generalize across a variety of data distributions. However, the performance of CLIP can still be inferior on a specific task, especially on tasks whose relevant data are insufficient in the training dataset of CLIP [88].
Therefore, CLIP needs to be fine-tuned to enhance downstream task performance. Since the fine-tuning dataset does not have sufficient coverage of the data distributions, the fine-tuning process can weaken the robustness of CLIP to distributional shift. For example, fine-tuning CLIP on the domain \(\mathcal{D}^{i}_{j}\) of task \(\mathcal{T}^{i}\) can significantly boost the performance on \(\mathcal{D}^{i}_{j}\), whereas it potentially leads to worse OOD performance on \(\{\mathcal{D}^{i}_{k}\}_{k\neq j}\). This phenomenon has been studied in the OOD literature [80, 24, 74, 45], whereas few works have studied it in the context of catastrophic forgetting. Following [80, 24, 74, 45], we conduct experiments in the following two settings:

Figure 1: Our setting to investigate the CF of the generality of a pre-trained foundation model.

* (a) Fine-tune CLIP on ImageNet and evaluate the forgetting on five variants of ImageNet with distributional shifts. Specifically, we consider the task \(\mathcal{T}^{1}\) of classification on ImageNet. We fine-tune \(f_{\theta_{0}}\) on ImageNet, i.e., \(\mathcal{D}^{1}_{1}\), and evaluate the distribution generality forgetting on \(\{\mathcal{D}^{1}_{i}\}_{i=2}^{6}\), five variants of ImageNet with natural shifts (ImageNet-V2 [62], ImageNet-R [26], ImageNet Sketch [77], ObjectNet [4], and ImageNet-A [27]).
* (b) Fine-tune CLIP on the "real" domain of DomainNet [55] and assess the extent of CF on the other domains in DomainNet. The DomainNet dataset represents the task \(\mathcal{T}^{1}\), where the "real" domain corresponds to the fine-tuned domain \(\mathcal{D}^{1}_{1}\), while the remaining domains in DomainNet, namely "Clipart", "Infograph", "Painting", "Quickdraw", and "Sketch", are utilized as the testing domains.

We use CLIP with a ViT-B/16 backbone pre-trained by OpenAI. We include more details on the experiments in Appendix E. In the VLM part, our primary focus is the issue of distribution generality forgetting [80, 24, 74, 45] (not task generality). This is because models like CLIP, which are fine-tuned for specific classification tasks, are unlikely to encounter samples from other classes (other tasks) during deployment. For instance, if we fine-tune CLIP to distinguish between cats and dogs, we would not expect this model to classify camels when deployed. However, when it comes to LLMs, we face a different scenario. Take the example of training a chatbot for medical purposes. When fine-tuning the LLM on the medical knowledge task, it is also crucial for the LLM to retain the ability to understand and follow human instructions, which inherently involves tasks distinctly different from the medical domain itself. Therefore, in the context of LLMs, we address both distribution generality forgetting and task generality forgetting to ensure overall competence and versatility.

#### 3.1.2 Large Language Models

For LLMs, we adopt Galactica-1.3B [72] and conduct experiments with the LMFlow framework [20]. As discussed before, consider a language model tuned for the medical domain: it should have expertise in the medical task, and also be able to perform different tasks such as instruction following.

Figure 2: (Left) Illustration samples of the class lemon from ImageNet and 5 variants; (Right) Illustration samples of the class apple from DomainNet.

Therefore, we investigate the scenario of fine-tuning Galactica on the QA task \(\mathcal{T}^{1}\).
Specifically, we fine-tune the model on a medical question-answering (QA) dataset, using MedMCQA [52] as \(\mathcal{D}^{1}_{1}\). We evaluate the forgetting of the pre-trained model in the following aspects:

* Evaluate the distribution generality forgetting on other medical datasets with distributional shift. Specifically, we use PubMedQA [34] and MedQA-USMLE [33] as \(\mathcal{D}^{1}_{2}\) and \(\mathcal{D}^{1}_{3}\). We refer to PubMedQA [34] and MedQA-USMLE [33] as the Medical OOD datasets in the following discussion.
* Evaluate the task generality forgetting in the following aspects:
  * Common sense QA task \(\mathcal{T}^{2}\), which contains four datasets \(\{\mathcal{D}^{2}_{i}\}_{i=1}^{4}\), namely, ARC Easy and ARC Challenge [16] on science exams, Race [39] on reading exams, and PIQA [5] on physical interaction. In contrast, MedMCQA focuses on the medical domain.
  * Instruction following task \(\mathcal{T}^{3}\), which contains three datasets \(\{\mathcal{D}^{3}_{i}\}_{i=1}^{3}\), namely, Alpaca [71], GPT4 instruct [54] and LMFlow [20].

The performance on the QA tasks is evaluated by accuracy, and the performance on instruction following is evaluated by log-likelihood (LL); details are in Appendix B. We also give illustrations of each dataset in Table 1. Notably, the conceptual distance between the fine-tuning dataset MedMCQA and the Medical OOD datasets, the Common Sense QA datasets, and the instruction following datasets increases in that order.

* The Medical OOD datasets are relatively close to MedMCQA since they both involve medical QA tasks. This similarity in domain makes them conceptually closer.
* On the other hand, the Common Sense QA datasets have a larger distance from MedMCQA compared to the Medical OOD datasets. While all these datasets involve QA tasks with choices (A/B/C), Common Sense QA focuses specifically on the common sense domain, which differs from the medical domain of MedMCQA. This difference in domain knowledge contributes to a greater conceptual distance.
* Additionally, the instruction following datasets have the largest distance from MedMCQA. This is because the instruction following datasets contain samples with general instructions, rather than multiple-choice QA questions (A/B/C), which is the format of MedMCQA.

Figure 3 summarizes the conceptual distance between MedMCQA and the Medical OOD datasets, Common Sense QA datasets, and instruction following datasets.

### Methods

#### 3.2.1 Regularization towards Pretrained Weight

Let's recall that \(\theta_{0}\) represents the parameters of the pre-trained foundation model. To address the issue of catastrophic forgetting (CF) during fine-tuning, a straightforward approach is to enforce a constraint on the proximity of \(\theta\) to \(\theta_{0}\); in other words, we ensure that \(\theta\) does not deviate too far from \(\theta_{0}\) [82].

\begin{table} \begin{tabular}{c|c|l|l} \hline \hline Task Type & Dataset Name & \multicolumn{2}{l}{Example} \\ \hline \multirow{6}{*}{Medical} & \multirow{3}{*}{PubMedQA [34]} & _Context_: Middle third clavicular fracture...? \\ & & _Question_: Does comminution play no role in treated middle third clavicular fracture? \\ & & _Output_: yes \\ \cline{2-3} & \multirow{3}{*}{MedMCQA [52]} & _Question_: Severe painful sensorimotor and autonomic neuropathy along with alopecia may suggest poisoning with: \\ & & (A) Thallium (B) Arsenic (C) Lead (D) Copper. \\ & & _Output_: A \\ \cline{2-3} & \multirow{3}{*}{MedQA-USMLE [33]} & _Question_: A 23-year-old pregnant woman at 22 weeks... \\ & & Which of the following is the best treatment for this patient? \\ & & (A) Ampicillin, (B) Ceftriaxone, \\ & & (C) Doxycycline, (D) Nitrofurantoin.
\\ & & _Output_: B \\ \hline \multirow{6}{*}{Common Sense} & \multirow{3}{*}{ARC Easy [16]} & _Question_: What carries oxygen throughout the body? \\ & & (A) white blood cells, (B) brain, \\ & & (C) red blood cells, (D) nerves \\ & & _Output_: C \\ \cline{2-3} & \multirow{3}{*}{ARC Challenge [16]} & _Question_: Which technology was developed most recently? \\ & & (A) cellular telephone, (B) television, \\ & & (C) refrigerator, (D) airplane. \\ & & _Output_: A \\ \cline{2-3} & \multirow{3}{*}{Race [39]} & _Passage_: The rain had continued for a week,... \\ & & _Question_: What did Nancy try to do before she fell over? \\ & & (A) Measure the depth, (B) Look for a tree trunk, \\ & & (C) Protect her cows, (D) Run away \\ & & _Answer_: C \\ \cline{2-3} & \multirow{3}{*}{PIQA [5]} & _Goal_: When boiling butter, when it’s ready, you can \\ & & (Sol1) Pour it onto a plate, (Sol2) Pour it into a jar, \\ & & _Answer_: Sol1 \\ \hline \multirow{6}{*}{Instruction} & \multirow{3}{*}{Alpaca [71]} & _Instruction_: Give three tips for staying healthy. \\ & & _Output_: 1. Eat a balanced diet. 2. Exercise regularly. 3..... \\ \cline{2-3} & \multirow{3}{*}{GPT4 instruct [54]} & _Input_: Compare and contrast the effects of individual...? \\ & & _Output_: Individual performance refers to... \\ \cline{2-3} & \multirow{3}{*}{LMFlow [20]} & _Human_: I think the biggest thing is that it’s in her smile. \\ & & _Assistant_: That sounds very comforting... \\ \cline{1-1} & & _Human_: Ok, can you remind me to change scenes? \\ \cline{1-1} & & _Assistant_: Sure, it’s important to change scenes every... \\ \hline \hline \end{tabular} \end{table} Table 1: Illustrations of datasets of medical QA tasks (PubMedQA, MedMCQA, MedQA-USMLE), common sense QA tasks (ARC Easy/Challenge, Race, PIQA), and instruction following tasks (Alpaca, GPT4 instruct, LMFlow).

We accomplish this by optimizing two penalties:

* The L1 penalty \(\|\theta-\theta_{0}\|_{1}\) [53]. (Footnote: [53] applies a post-processing technique to find the sparsity structure in \(\theta-\theta_{0}\); for simplicity, we use the L1 norm to encourage sparsity [87]. This method is also connected with parameter-efficient fine-tuning; we put it in the category of continual learning since it is close to the penalty \(\|\theta-\theta_{0}\|_{2}^{2}\) [82].)
* The L2 penalty \(\|\theta-\theta_{0}\|_{2}^{2}\) [82].

It is worth noting that the L1 penalty tends to produce sparse solutions, indicating that \(\theta\) can only differ from \(\theta_{0}\) in a limited subset of parameters [87].

#### 3.2.2 Parameter-efficient Fine-tuning

Parameter-efficient fine-tuning aims to achieve performance comparable to traditional fine-tuning while utilizing significantly fewer trainable parameters. One widely adopted method in this domain is LoRA (Low-Rank Adaptation) [28], which effectively represents the modified weights \(\Delta\theta\) using low-rank matrix pairs while keeping most of the pre-trained network parameters frozen. This approach has shown performance on par with full fine-tuning. In our study, we apply LoRA specifically to two weight matrices (\(W_{q}\) and \(W_{v}\)) within the self-attention module of the Transformer architecture.
We constrain the update of a pre-trained weight matrix by representing the update \(\Delta\theta=\theta-\theta_{0}\in\mathbb{R}^{d\times k}\) as \(\Delta\theta=BA\), where \(B\in\mathbb{R}^{d\times r}\), \(A\in\mathbb{R}^{r\times k}\), and the rank \(r\) is much smaller than \(\min(d,k)\). We explore different values of \(r\) as a hyper-parameter. During training, \(\theta_{0}\) remains fixed, and only \(A\) and \(B\) receive gradient updates. We initialize \(A\) with random Gaussian values and set \(B\) to zero.

#### 3.2.3 Knowledge Distillation

Knowledge distillation involves transferring knowledge from a larger model (teacher) to a smaller one (student). In our case, we aim to preserve the generality of the pre-trained model during the fine-tuning process. We utilize the pre-trained model \(f_{\theta_{0}}\) as the teacher and the fine-tuned model as the student. To ensure the student model's predictions or learned features align closely with those of the teacher model, we enforce an L2 regularization constraint on their outputs: \(\|f_{\theta}(\mathbf{x})-f_{\theta_{0}}(\mathbf{x})\|_{2}^{2}\) [8, 29].

Figure 3: Illustration of the conceptual distance and the corresponding (relative) performance change of fine-tuning Galactica on MedMCQA.

#### 3.2.4 Model Averaging

The model averaging method, Wise-ft, introduced in [80], suggests a linear interpolation between the pre-trained parameter \(\theta_{0}\) and the fine-tuned parameter \(\theta\). This results in the model \(f_{(1-\alpha)\theta_{0}+\alpha\theta}\), where \(\alpha\) represents a hyper-parameter ranging from 0 to 1.

## 4 Results

### Vision Language Models

The results of CLIP are presented in Figure 4. The left and right panels of Figure 4 showcase the outcomes of fine-tuning CLIP on ImageNet and DomainNet, respectively. It can be observed that fine-tuning leads to lower generality performance compared to the original pre-trained model. Although this phenomenon has been studied within the OOD community, its direct connection to catastrophic forgetting (CF) remains unclear. In Appendix C, we provide explicit evidence that this phenomenon is closely associated with representation forgetting [42, 37, 32, 17, 78, 83], which is a common form of CF. Figure 4 provides a comparison of different methods for fine-tuning CLIP. The following observations can be made:

* Continual learning methods such as the L1, L2, and KD penalties all show improvement in distribution generality performance, indicating that distribution generality forgetting can be mitigated by simple continual learning techniques.
* Wise-ft stands out as it significantly alleviates distribution generality forgetting and achieves the best distribution generality performance. On ImageNet, Wise-ft surpasses both the pre-trained and fine-tuned models, as well as methods like L1/L2/LoRA/KD, with the highest distribution generality performance exceeding 63%. None of the other methods achieves a distribution generality performance better than 62%. The trend observed on DomainNet is similar to that on ImageNet.
* The KD method achieves better speciality performance than Wise-ft while maintaining a relatively high distribution generality performance. Specifically, KD and Wise-ft are comparable to each other, and there is no consistent superiority of one over the other.
* The trade-off of LoRA is inferior compared to other methods on VLMs. We note that LoRA cannot match the full fine-tuning performance in terms of speciality.
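To make the model averaging method concrete, below is a minimal sketch of the Wise-ft interpolation \(f_{(1-\alpha)\theta_{0}+\alpha\theta}\) from Section 3.2.4, implemented as state-dict averaging. This is a generic recipe under placeholder modules (`pretrained`, `finetuned`), not the code of [80]; sweeping \(\alpha\) traces the speciality-generality curves reported in the figures.

```python
# Minimal Wise-ft-style weight interpolation sketch (illustrative; the toy
# modules below stand in for the pre-trained and fine-tuned models).
import copy
import torch

def wise_ft(pretrained, finetuned, alpha: float):
    """Return a model whose weights are (1 - alpha) * theta_0 + alpha * theta."""
    merged = copy.deepcopy(finetuned)
    sd0 = pretrained.state_dict()
    sd1 = finetuned.state_dict()
    merged.load_state_dict(
        {k: (1 - alpha) * sd0[k] + alpha * sd1[k] for k in sd1}
    )
    return merged

# Usage: sweep alpha to trace the speciality-generality trade-off curve.
pretrained = torch.nn.Linear(8, 2)
finetuned = copy.deepcopy(pretrained)
with torch.no_grad():
    for p in finetuned.parameters():
        p.add_(0.1 * torch.randn_like(p))   # stands in for fine-tuning updates
models = [wise_ft(pretrained, finetuned, a) for a in (0.0, 0.3, 0.7, 1.0)]
```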
Figure 4: On the speciality and generality trade-off of fine-tuning CLIP. (Left) Fine-tune on ImageNet and evaluate the generality by the average performance on the ImageNet variants, i.e., ImageNet-V2, ImageNet-R, ImageNet Sketch, ObjectNet, and ImageNet-A; (Right) fine-tune on the “real” domain of DomainNet and evaluate the generality by the average performance on the “Clipart”, “Infograph”, “Painting”, “Quickdraw”, and “Sketch” domains.

### Large Language Models

Figures 5 and 6 show the results of fine-tuning Galactica on MedMCQA and PubMedQA, respectively. The main results are as follows:

* We did not observe consistent distribution generality forgetting when fine-tuning on the various medical datasets. Specifically, when fine-tuning on MedMCQA, we noted that the performance on the other two datasets from the medical domain, namely PubMedQA and MedQA-USMLE, actually improved simultaneously. However, the opposite effect was observed when fine-tuning on PubMedQA, as the performance on MedMCQA and MedQA-USMLE decreased. We observe consistent task generality forgetting on the Common Sense datasets and Instruction following datasets in both Figures 5 and 6. For example, Figure 5(c) shows that the log-likelihood of instruction tasks drops from about -255 to -310 as we fine-tune Galactica on MedMCQA; Figure 6(c) shows the instruction performance drops to less than -290 when we fine-tune Galactica on PubMedQA.
* **Larger conceptual distance, more severe forgetting.** The results in Figures 5(a)-(c) show that a larger conceptual distance leads to more severe forgetting in the LLM. The conceptual distances between MedMCQA and the other datasets (OOD medical tests, Common Sense QA, and instruction following) are discussed in Section 3.1.2 and illustrated in Figure 3. In Figure 5(a), the task with the smallest conceptual distance (OOD medical) exhibits no forgetting, indicating that the LLM retains its performance on conceptually similar tasks. However, Figure 5(c) reveals significant forgetting (over a 20% drop) in the task with the largest conceptual distance (instruction following). This suggests that the LLM struggles more with tasks that are conceptually further away from the fine-tuning task. The Common Sense QA task falls between the OOD medical and instruction following tasks in terms of forgetting, as shown in the results. These findings highlight that the LLM's forgetting behavior is influenced by the conceptual distance between the fine-tuning task (MedMCQA) and other tasks. Tasks closer in concept to MedMCQA experience less forgetting, while tasks with larger conceptual distances are more prone to forgetting.
* **Model averaging methods achieve strong performance.** Wise-ft [80] consistently addresses the issue of forgetting common sense and instructions. For instance, Figure 5(c) demonstrates that using Wise-ft with \(\alpha=0.3\) effectively enhances the log-likelihood (LL) score, raising it above -270, while the performance on the fine-tuning dataset remains relatively unchanged. Similarly, in Figure 6(c), Wise-ft with \(\alpha=0.3\) improves the LL score from about -290 to approximately -270.
* **The effectiveness of LoRA.** The performance of LoRA varies significantly depending on whether it is fine-tuned on MedMCQA or PubMedQA. Specifically, LoRA demonstrates remarkable mitigation of forgetting in instruction following when fine-tuning on PubMedQA, surpassing even the performance of Wise-ft (e.g., Figure 6(c)).
However, LoRA performs poorly compared to Wise-ft and other methods on MedMCQA (e.g., Figure 5(c)). One notable distinction between PubMedQA and MedMCQA is that LoRA easily achieves performance comparable to full fine-tuning on the fine-tuning dataset for PubMedQA, whereas when fine-tuning on MedMCQA, LoRA's performance on the fine-tuning dataset is significantly inferior to full fine-tuning. We speculate that the PubMedQA fine-tuning task might possess a better low-rank structure, making it easier for LoRA to adapt to the fine-tuning task. On the other hand, MedMCQA is more challenging for LoRA to adapt to, resulting in a larger magnitude of the low-rank matrices and subsequently leading to more significant forgetting.
* The L1, L2, and KD penalties have shown the ability to alleviate CF compared to vanilla full fine-tuning. However, they do not consistently match the effectiveness of Wise-ft.

In Appendix D, we provide additional results that investigate the impact of early stopping, learning rate, and warm-up strategies on CF.

### Discussion with existing works

In this technical report, our primary focus is to conduct extensive experiments on generality forgetting and perform a systematic comparison of existing methods. While we do not claim novelty in our methods or results, we do present some new findings that we believe will contribute to future research in this domain. These findings are briefly discussed as follows:

VLM part. We adopt the term "ID and OOD performance" to maintain consistency with previous research; it refers to the performance related to speciality and distribution generality in the previous sections. Previous OOD works have found that fine-tuning VLMs on ImageNet leads to worsened OOD performance and that Wise-ft can alleviate this issue [80, 24, 74, 45, 73, 2]. However, existing OOD works have not explicitly linked this phenomenon to CF. Specifically, consider a model composed of a featurizer \(\Phi\) and a classifier \(v\), and denote the pre-trained model as \([\Phi_{0},v_{0}]\). A line of existing works suggests that the fine-tuned feature encoder \(\Phi\) is as effective as or better than the initial encoder \(\Phi_{0}\) for the target domain, and that the drop in OOD performance is attributed to the fine-tuned classifier \(v\) not being suitable for the target domain [65, 58, 36]. However, our results and analysis in Appendix C present a different outcome: we find that \(\Phi\) actually forgets important features for the target domains when compared to \(\Phi_{0}\). Additionally, we show that simple methods such as knowledge distillation can achieve performance comparable to Wise-ft, the previous SOTA method. A concurrent work [84] introduces EMT (Evaluating MulTimodality) as a method for assessing CF in multimodal large language models (MLLMs) and reveals that multiple popular MLLMs suffer from CF. Compared with [84], we additionally explore methods to alleviate CF.

Figure 5: Fine-tune on MedMCQA. We evaluate the forgetting in terms of (a) distribution generality forgetting on the other two medical QA datasets, including PubMedQA and MedQA-USMLE, (b) task generality forgetting on common sense tasks, including ARC Easy and Challenge, Race, and PIQA, and (c) instruction following tasks, including Alpaca, GPT4 instruct and LMFlow.

Figure 6: Fine-tune on PubMedQA.
Figure 5: Fine-tune on MedMCQA. We evaluate the forgetting in terms of (a) distribution generality forgetting on the other two medical QA datasets including PubMedQA and MedQA-USMLE, (b) task generality forgetting on common sense tasks including ARC Easy and Challenge, Race, and PIQA, and (c) instruction following tasks including Alpaca, GPT4 instruct, and LMFlow.

Figure 6: Fine-tune on PubMedQA. We evaluate the forgetting in terms of (a) distribution generality forgetting on the other two medical QA datasets including MedMCQA and MedQA-USMLE, (b) task generality forgetting on common sense tasks including ARC Easy and Challenge, Race, and PIQA, and (c) instruction following tasks including Alpaca, GPT4 instruct, and LMFlow.

LLM part. Most research on forgetting in natural language processing (NLP) focuses on sequential pre-training [14, 22, 35, 57, 47] and fine-tuning tasks [69, 60, 81, 86, 49]. They either sequentially train a model \(f_{\theta}\) from scratch on tasks [\(\mathcal{T}_{1}\), \(\mathcal{T}_{2}\),..., \(\mathcal{T}_{K}\)] or sequentially fine-tune a pre-trained model on tasks [\(\mathcal{T}_{1}\), \(\mathcal{T}_{2}\),..., \(\mathcal{T}_{K}\)]. Evaluation of forgetting is based on the model's performance on \(\mathcal{T}_{i}\) after training it on \(\mathcal{T}_{j}\), where \(i<j\). Our setting differs from theirs as we focus on generality forgetting in \(f_{\theta_{0}}\) during fine-tuning on a single task. We visualize the trade-off between generality and specialty in Figures 5 and 6 when fine-tuning LLMs. While such trade-offs have been observed in VLM works [80, 24, 74, 45], we did not find similar results in LLM works. A concurrent study [48] has investigated generality forgetting of LLMs during fine-tuning on sequences of tasks (see Section A.2 for a detailed discussion of the differences in settings), whereas they do not explore methods to alleviate CF. Our results demonstrate that Wise-ft achieves superior performance in mitigating the LLM's catastrophic forgetting, and we observe an intriguing phenomenon where the effectiveness of LoRA in alleviating forgetting depends on the fine-tuning task's difficulty.

## 5 Conclusion and Limitation

In conclusion, our investigation highlights the delicate trade-off between specialty and generality during the fine-tuning of foundation models. To address this challenge, we explore various regularization methods from continual learning, as well as the weight averaging method (Wise-ft) and parameter-efficient fine-tuning techniques like LoRA. Our findings demonstrate that continual learning and Wise-ft methods effectively alleviate the loss of generality, with Wise-ft outperforming others in achieving a balance between specialty and generality. One limitation is that we have not covered rehearsal methods, which replay a small portion of the pre-training dataset.

Limitation. Another notable limitation of our work is that we have not explored the impact of varying model sizes on the forgetting issue and the corresponding methods. We plan to investigate this aspect in future versions.
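For completeness, the regularization baselines compared above (L2 and KD; L1 is analogous) amount to adding a drift or distillation term to the fine-tuning loss. The sketch below is our own reading rather than the report's implementation; the function names are hypothetical, and details such as the distillation temperature follow common practice.

```python
import torch.nn.functional as F

def l2_to_init_penalty(model, init_params, coeff):
    """L2 penalty pulling the current weights back toward the pre-trained ones.
    `init_params` is a dict of detached pre-trained tensors; `coeff` trades off
    fit to the fine-tuning task against staying close to the pre-trained model."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (p - init_params[name]).pow(2).sum()
    return coeff * penalty

def kd_penalty(student_logits, teacher_logits, temperature=1.0):
    """Knowledge-distillation penalty: KL divergence from the frozen pre-trained
    model's soft predictions (teacher) to the fine-tuned model's (student),
    computed on the fine-tuning batch."""
    t = temperature
    teacher = F.softmax(teacher_logits / t, dim=-1)
    student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean") * (t * t)
```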
2309.16045
Minimum Monotone Tree Decomposition of Density Functions Defined on Graphs
Monotone trees - trees with a function defined on their vertices that decreases the further one travels from a root node - are a natural model for a process that weakens the further one gets from its source. Given an aggregation of monotone trees, one may wish to reconstruct the individual monotone components. A natural representation of such an aggregation would be a graph. While many methods have been developed for extracting hidden graph structure from datasets, which makes obtaining such an aggregation possible, decomposing such graphs into the original monotone trees is algorithmically challenging. Recently, a polynomial time algorithm has been developed to extract a minimum cardinality collection of monotone trees (M-Tree Set) from a given density tree - but no such algorithm exists for density graphs that may contain cycles. In this work, we prove that extracting such minimum M-Tree Sets of density graphs is NP-Complete. We additionally prove that three variations of the problem - such as the minimum M-Tree Set such that the intersection between any two monotone trees is either empty or contractible (SM-Tree Set) - are also NP-Complete. We conclude by providing some approximation algorithms, highlighted by a 3-approximation algorithm for computing the minimum SM-Tree Set for density cactus graphs.
Lucas Magee, Yusu Wang
2023-09-27T21:57:52Z
http://arxiv.org/abs/2309.16045v1
# Minimum Monotone Tree Decomposition of Density Functions Defined on Graphs

###### Abstract

Monotone trees - trees with a function defined on their vertices that decreases the further one travels from a root node - are a natural model for a process that weakens the further one gets from its source. Given an aggregation of monotone trees, one may wish to reconstruct the individual monotone components. A natural representation of such an aggregation would be a graph. While many methods have been developed for extracting hidden graph structure from datasets, which makes obtaining such an aggregation possible, decomposing such graphs into the original monotone trees is algorithmically challenging. Recently, a polynomial time algorithm has been developed to extract a minimum cardinality collection of monotone trees (M-Tree Set) from a given density tree - but no such algorithm exists for density graphs that may contain cycles. In this work, we prove that extracting such minimum M-Tree Sets of density graphs is NP-Complete. We additionally prove that three variations of the problem - such as the minimum M-Tree Set such that the intersection between any two monotone trees is either empty or contractible (SM-Tree Set) - are also NP-Complete. We conclude by providing some approximation algorithms, highlighted by a 3-approximation algorithm for computing the minimum SM-Tree Set for density cactus graphs.

## 1 Introduction

A common problem in modern data analysis is taking large, complex datasets and extracting simpler objects that capture the true nature and underlying structure. In this paper we are interested in the case when the input data is the aggregation of a collection of trees. In fact, each tree also has attributes over its nodes (e.g., the strength of a certain signal) which decrease monotonically from its root - we call such a tree a monotone tree. Such trees arise naturally in modeling a process that dissipates as it moves away from the root. One such example is in the construction of neuronal cells: a single neuron has tree morphology, with the cell body (soma) serving as the root. In (tracer-injection based) imaging of brains, the signal often tails off as it moves away from the cell body and out of the injection region, naturally giving rise to a rooted monotone tree. Figure 1 (D) is centered around the soma of a single neuron within a full mouse brain imaging dataset, with branches that get weaker as they get further from the soma. Generally, we are interested in the following: given input data that is the aggregation of a collection of monotone trees, we aim to reconstruct the individual monotone trees. The specific version of the problem we consider in this paper is where the input data is a graph \(G=(V,E)\) with a density function \(f:V\rightarrow\mathbb{R}^{\geq 0}\) defined on its vertices. Our goal is to decompose \((G,f)\) into a collection of monotone trees \((T_{1},f_{1}),\ldots,(T_{k},f_{k})\) whose union sums to the original \((G,f)\) at each \(v\in V\). See Section 2 for precise definitions. A primary motivation for considering graphs to be the input is that graphs are flexible and versatile, and recently, a range of methods have been proposed to extract the hidden graph structure from a wide variety of datasets; see e.g., [12, 13, 18, 1, 20, 8, 15, 4, 22, 6, 16]. In the aforementioned example of neurons, the discrete Morse-based algorithm of [6] has been applied successfully to extract a graph representing the summary of a collection of neurons [21, 2].
To extract the individual neurons from such a summary would be a significant achievement for the neuroscience community - which has developed many techniques to extract individual neuron skeletonizations from imaging datasets; see e.g., [9, 19, 5]. However, going from a graph to a collection of trees poses algorithmic challenges. The monotone-tree decomposition problem has been studied in the work of [3], which develops a polynomial-time algorithm for computing the minimum cardinality set of monotone trees (M-Tree Set) of a density function defined on **a tree** (instead of a graph). However, many applications for such a decomposition have graphs that may contain cycles, with the authors of [3] explicitly mentioning a need for algorithms that can handle such input domains.

**New work.** We consider density functions defined on graphs, which we refer to as _density graphs_. Our goal is to decompose an input density graph \((G,f)\) into as few monotone trees as possible, which we call the _minimum M-Tree Decomposition problem_. See Section 2 for formal definitions and problem setup. Unfortunately, while the minimum M-Tree Decomposition problem can be solved efficiently in polynomial time via an elegant greedy approach when the density graph is itself a tree [3], we show in Section 3 that the problem for graphs in general is NP-Complete. In fact, no polynomial time constant factor approximation algorithm exists for this problem under reasonable assumptions (see Section 3). Additionally, we show NP-Completeness for several variations of the problem (Section 3). We therefore focus on developing approximation algorithms for this problem. In Section 4, we first provide two natural approximation algorithms, but with additive error. For the case of multiplicative error, we provide a polynomial time 3-approximation algorithm for computing the so-called minimum SM-Tree Set of a density cactus graph.

## 2 Preliminaries

### Problem Definition

We will now introduce definitions and notions in order to formally define what we wish to compute. Given a graph \(G(V,E)\), a _density function_ defined on \(G\) is a function \(f:V\rightarrow\mathbb{R}^{\geq 0}\). A _density graph_ \((G,f)\) is a graph \(G\) paired with a density function \(f\) defined on its vertices. A _monotone tree_ is a density tree with a _root_ \(v\in V\) such that the path from the root to every node \(u\in V\) is non-increasing in density values. See Figure 1 for explicit examples of density trees and monotone trees. While multiple nodes may have the global maximum value on the monotone tree, exactly one node is the root. For example, in Figure 1 (B), either node with the global maximum value may be its root, but only one of them is the root. Given a density graph \((G(V,E),f)\), we wish to build a set of monotone subtrees \((T_{1},f_{1}),(T_{2},f_{2}),\ldots,(T_{n},f_{n})\) such that \(T_{i}\subseteq G\) for all \(i\) and \(\sum_{i=1}^{n}f_{i}(v)=f(v)\) for all \(v\in V\). Note that if a node \(v\in V\) is not in a tree \(T_{i}\) then we say that \(f_{i}(v)=0\) and vice versa. We will refer to such a decomposition as a _monotone tree (M-tree) decomposition_ of the density graph, and refer to the set as an _M-Tree Set_ throughout the remainder of the paper. An M-Tree Set is a _minimum M-Tree Set_ for a density graph if there does not exist an M-Tree Set of the density graph with smaller cardinality. An example of a density graph and a minimum M-Tree Set is shown in Figure 2.
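These definitions are mechanical enough to state as a small checker, which may help fix ideas. The encoding (adjacency dicts keyed by vertex) and names below are ours, and the \(T_{i}\subseteq G\) subgraph test is omitted for brevity:

```python
from collections import deque

def is_monotone_tree(adj, f, root):
    """Check that the path from `root` to every node is non-increasing in f.
    `adj` is an adjacency dict of an undirected tree, `f` maps node -> density."""
    seen, queue = {root}, deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                if f[w] > f[u]:          # a child may never exceed its parent
                    return False
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)          # the tree must be connected

def is_mtree_set(f, components, tol=1e-9):
    """Check that `components` = [(adj_i, f_i, root_i), ...] is an M-Tree Set of
    a density graph with density `f`: every piece is a monotone tree and the
    piece densities sum back to f at every vertex."""
    total = {v: 0.0 for v in f}
    for adj_i, f_i, root_i in components:
        if not is_monotone_tree(adj_i, f_i, root_i):
            return False
        for v, val in f_i.items():
            total[v] += val
    return all(abs(total[v] - f[v]) <= tol for v in f)
```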
Figure 1: (A) - (C) contain examples of density trees with relative maxima colored red. (A) shows a monotone tree. (B) shows a monotone tree with multiple nodes having the global maximum density value. (C) shows an example of a density tree that is not a monotone tree. (D) A zoom in on an individual neuron within a full mouse brain imaging dataset. The dataset is an fMOST imaging dataset that was created as part of the Brain Initiative Cell Census Network and is publicly available for download.

Note that a density graph may have many different minimum M-Tree Sets. We abbreviate the cardinality of a minimum M-Tree Set for a density graph \((G,f)\) as \(|\mathsf{minMset}((G,f))|\). There are different types of M-Tree Sets that may be relevant for different applications. A _complete M-Tree (CM-Tree) Set_ is an M-Tree Set with the additional restriction that every edge in the density graph \(G\) must be in at least one tree in the set. A _strong M-Tree (SM-Tree) Set_ is an M-Tree Set such that the intersection between any two trees in the set must be either empty or contractible. We similarly abbreviate the cardinality of a minimum SM-Tree Set of \((G,f)\) as \(|\mathsf{minSMset}((G,f))|\). A _full M-Tree (FM-Tree) Set_ is an M-Tree Set such that for each element \((T_{i}(V_{i},E_{i}),f_{i})\), \(f_{i}(v)=f(v)\) for the root node \(v\in V_{i}\) of \((T_{i},f_{i})\). The (minimum) M-Tree Set in Figure 2 is also a (minimum) CM-Tree Set but is neither an SM-Tree Set nor an FM-Tree Set.

### Greedy Algorithm for Density Trees [3]

We will now briefly describe the algorithm for computing minimum M-Tree Sets for density trees developed in [3], as some of the ideas will be useful in our work. Please refer to [3] for more details. The approach of [3] relies on a so-called _monotone sweeping operation_ to build individual elements of a minimum M-Tree Set of density trees. Algorithm 1 explicitly defines a generalized version of this operation that we will need in a later proof.

```
Input: A density tree \((T(V,E),f)\), a starting node \(v\in V\), and a starting value \(\alpha\) such that \(0<\alpha\leq f(v)\)
Output: A monotone subtree \((T^{\prime},h_{f,v,\alpha})\) and a remainder \((T,R_{v,\alpha}f)\)
(Step 1) Initialize output density subtree \(T^{\prime}\) to only contain the input vertex \(v\), with corresponding density function \(h_{f,v,\alpha}(v)=\alpha\)
(Step 2) Perform DFS starting from \(v\). For each edge (\(u\to w\)) traversed: \[h_{f,v,\alpha}(w)=\begin{cases}h_{f,v,\alpha}(u)&f(w)\geq f(u)\\ \max(0,h_{f,v,\alpha}(u)-(f(u)-f(w)))&\text{otherwise}\end{cases}\]
Return monotone tree \((T^{\prime},h_{f,v,\alpha})\) and remainder density tree \((T,R_{v,\alpha}f)\).
```
**Algorithm 1** monotone-sweep\(((T(V,E),f),v\in V,\alpha)\)

The operation takes a density tree \((T(V,E),f)\), a node \(v\in V\), and a starting function value \(\alpha\) such that \(0<\alpha\leq f(v)\) as input. A monotone subtree \((T^{\prime},h_{f,v,\alpha})\) and the remainder density tree \((T,R_{v,\alpha}f)\), where \(R_{v,\alpha}f(u)=f(u)-h_{f,v,\alpha}(u)\) for all \(u\in V\), is returned. Algorithm 2, which outputs a minimum M-Tree Set of density trees, performs the monotone sweeping operation iteratively from certain nodes, called the _mode-forced_ nodes of the density tree. To compute these mode-forced nodes, one iteratively removes leaves from the tree if their parent has greater or equal density. Such leaves are referred to as _insignificant vertices_. Once it is no longer possible to remove any additional nodes, the leaves of the remaining graph are the mode-forced nodes of the original density tree.
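Since Algorithm 1 is the workhorse of everything that follows, a direct Python transcription may help fix the notation. The adjacency-dict encoding and function name are ours; \(h[u]=0\) encodes \(u\notin T^{\prime}\):

```python
def monotone_sweep(adj, f, v, alpha):
    """Transcription of Algorithm 1 for a density tree given as an adjacency
    dict `adj` with densities `f`. Returns (h, remainder), where h is the
    density of the swept monotone subtree and remainder = f - h."""
    assert 0 < alpha <= f[v]
    h = {u: 0.0 for u in adj}
    h[v] = alpha
    stack, seen = [v], {v}
    while stack:                          # DFS from v; paths in a tree are unique
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                h[w] = h[u] if f[w] >= f[u] else max(0.0, h[u] - (f[u] - f[w]))
                stack.append(w)
    remainder = {u: f[u] - h[u] for u in adj}
    return h, remainder
```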
Figure 2: A density graph (left) together with a minimum M-Tree Set (right). Note that a minimum M-Tree Set is not necessarily unique for a density graph.

An example of a single iteration of the tree algorithm is shown in Figure 3. The running time complexity of Algorithm 2 is \(O(n\cdot|\text{\sf minMset}((T,f))|)\), where \(n\) is the number of nodes in \(T\). We note that all M-Tree Sets of a density tree are also SM-Tree Sets, so Algorithm 2 also outputs a minimum SM-Tree Set.

```
Input: A density tree \((T(V,E),f)\)
Output: A minimum M-Tree Set of \((T(V,E),f)\)
(Step 1) Find a mode-forced vertex \(v\in V\)
(Step 2) Perform monotone-sweep\(((T(V,E),f),v,f(v))\) to build a single element of a minimum M-Tree Set.
(Step 3) Repeat Steps 1 and 2 on remainder \((T,R_{v,f(v)}f)\) until no density remains.
```
**Algorithm 2** tree-alg\(((T(V,E),f))\)

Figure 3: (A) A density graph with mode-forced nodes colored green and insignificant vertices colored yellow. (B) A single element built by the monotone sweep operation from a mode-forced node, as performed in Algorithm 2.

### Additional Property of Monotone Sweeping Operation

Unfortunately, neither Algorithm 1 nor Algorithm 2 can be directly used to compute minimum M-Tree Sets of density graphs with cycles. Nevertheless, we can show Claim 2.1, which will later be of use in developing approximation algorithms in Section 4.

**Claim 2.1**.: _Given a density tree \((T(V,E),f)\), let \(v\in V\). Let \(a,b\in\mathbb{R}^{+}\) such that, without loss of generality, \(0<a<b\leq f(v)\). Let \((T,R_{v,a}f)\) be the remainder of \(\text{\sf monotone-sweep}((T,f),v,a)\). We can define a similar remainder \((T,R_{v,b}f)\). Then we have \(|\text{\sf minMset}((T,R_{v,b}f))|\leq|\text{\sf minMset}((T,R_{v,a}f))|\)._

Proof.: We will prove the claim by contradiction. Assume that \(|\text{\sf minMset}((T,R_{v,b}f))|>|\text{\sf minMset}((T,R_{v,a}f))|\). In particular, we will construct two new density trees, \((T_{a},f_{a})\) and \((T_{b},f_{b})\), as follows: \(T_{a}\) is equal to our starting tree with the addition of two nodes \(v_{a}\) and \(v_{\infty}\), with two additional edges connecting \(v_{a}\) to both \(v_{\infty}\) and \(v\). Set \(f_{a}(v_{a})=a\) and \(f_{a}(v_{\infty})=\infty\). Similarly define \(T_{b}\) and \(f_{b}\). Now imagine we run Algorithm 2 on \((T_{a},f_{a})\). \(v_{\infty}\) is a mode-forced node, and thus we can perform the first iteration in Algorithm 2 on \(v_{\infty}\). Sweeping from \(v_{\infty}\) will leave a remainder with a minimum M-Tree Set of size \(|\text{\sf minMset}((T_{a},f_{a}))|-1\). The remainder is exactly the same as \((T,R_{v,a}f)\) at all nodes \(v\in V\), and is zero at our newly added nodes. Hence, \(|\text{\sf minMset}((T_{a},f_{a}))|=|\text{\sf minMset}((T,R_{v,a}f))|+1\). Similarly, by performing Algorithm 2 on \((T_{b},f_{b})\), we have \(|\text{\sf minMset}((T_{b},f_{b}))|=|\text{\sf minMset}((T,R_{v,b}f))|+1\). Now if our initial assumption is true, namely \(|\text{\sf minMset}((T,R_{v,b}f))|>|\text{\sf minMset}((T,R_{v,a}f))|\), then by the above argument we have that \[|\text{\sf minMset}((T_{b},f_{b}))|>|\text{\sf minMset}((T_{a},f_{a}))|. \tag{1}\] However, we could construct an M-tree set of \((T_{b},f_{b})\) as follows: First construct one monotone tree rooted at \(v_{\infty}\) that leaves no remainder at both \(v_{\infty}\) and \(v_{b}\), then perform the monotone sweep operation starting at \(v\) with starting value \(a\) to build the rest of the component. Note that the remainder after removing this tree is in fact \((T_{a},R_{v,a}f)\), which we can then decompose using the minimum M-tree set of \((T_{a},R_{v,a}f)\). In other words, we can find an M-tree set for \((T_{b},f_{b})\) with \(|\text{\sf minMset}(T,R_{v,a}f)|+1=|\text{\sf minMset}(T_{a},f_{a})|\) elements. This, however, contradicts Eqn (1) (and the correctness of Algorithm 2). Hence our assumption cannot hold, and we must have that \(|\mathsf{minMset}(T,R_{v,b}f)|\leq|\mathsf{minMset}(T,R_{v,a}f)|\). This proves the claim.

We note that while this proof is for M-Tree Sets specifically, the proof for SM-Tree Sets follows identical arguments.
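Putting the pieces of this section together, one possible reading of the mode-forced-node computation and of Algorithm 2 is sketched below. This is our own sketch, not the implementation of [3]; it reuses `monotone_sweep` from above and glosses over tie-breaking and floating-point corner cases:

```python
def mode_forced_nodes(adj, f, tol=1e-12):
    """Iteratively strip insignificant leaves (leaves whose neighbor has greater
    or equal density) from the positive-density part of the tree; the leaves of
    what remains are the mode-forced nodes."""
    alive = {u for u in adj if f[u] > tol}
    def alive_neighbors(u):
        return [w for w in adj[u] if w in alive]
    pruned = True
    while pruned:
        pruned = False
        for u in list(alive):
            nbrs = alive_neighbors(u)
            if len(nbrs) == 1 and f[nbrs[0]] >= f[u]:
                alive.discard(u)
                pruned = True
    return [u for u in alive if len(alive_neighbors(u)) <= 1]

def tree_alg(adj, f, tol=1e-12):
    """Algorithm 2: repeatedly sweep from a mode-forced vertex until no density
    remains; returns the list of swept monotone-tree density functions."""
    f = dict(f)
    pieces = []
    while any(val > tol for val in f.values()):
        v = mode_forced_nodes(adj, f)[0]
        h, f = monotone_sweep(adj, f, v, f[v])
        pieces.append(h)
    return pieces
```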
## 3 Hardness Results

Given that there exists a polynomial time algorithm for computing minimum M-Tree Sets of density trees, it is natural to ask whether or not such an algorithm exists for density graphs. We prove Theorem 3.1, stating that the problem is NP-Complete.

**Theorem 3.1**.: _Given a density graph \((G(V,E),f)\) and a parameter \(k\), determining whether or not there exists an M-Tree Set of size \(\leq k\) is NP-Complete._

Proof.: It is easy to see that this problem is in NP, so we will now show it is also NP-Hard. First we consider a variation of the Set Cover problem where any two sets intersect in at most one element. We refer to this problem as Set Cover Intersect 1 (SC-1). SC-1 is a generalization of the NP-Complete problem of covering points in a plane with as few lines as possible [17], and approximation bounds of SC-1 are well studied in [14]. Given an instance of SC-1 (\(m\) sets \(S_{1},S_{2},\ldots,S_{m}\) covering a universe of \(n\) elements \(e_{1},e_{2},\ldots,e_{n}\), and a number \(k\)), we reduce to an instance of the M-Tree Set decision problem as follows:

* Create a bipartite graph \(G(V=A\cup B,E)\) equipped with a density function \(f:V\rightarrow\mathbb{R}^{\geq 0}\) based on the input SC-1 instance.
* In particular, for each set \(S_{i}\), add a node \(a_{S_{i}}\) to \(A\) and set \(f(a_{S_{i}})=|S_{i}|\).
* For each element \(e_{j}\), add a node \(b_{e_{j}}\) to \(B\) and set \(f(b_{e_{j}})=1\).
* For each set \(S_{i}\), add an edge between \(a_{S_{i}}\) and \(b_{e_{j}}\) for each element \(e_{j}\in S_{i}\).

An example of this reduction is illustrated in Figure 4.
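The reduction is easy to mechanize, which also makes it convenient for testing. A sketch under our own encoding, with nodes tagged by which side of the bipartition they belong to:

```python
def sc1_to_density_graph(sets, universe):
    """Build the bipartite density graph from an SC-1 instance as in the proof
    of Theorem 3.1: one node per set with density |S_i|, one node per element
    with density 1, and an edge whenever the element belongs to the set."""
    adj = {("set", i): set() for i in range(len(sets))}
    adj.update({("elem", e): set() for e in universe})
    f = {("set", i): float(len(s)) for i, s in enumerate(sets)}
    f.update({("elem", e): 1.0 for e in universe})
    for i, s in enumerate(sets):
        for e in s:
            adj[("set", i)].add(("elem", e))
            adj[("elem", e)].add(("set", i))
    return adj, f

# Example: sets {a,b} and {b,c} over universe {a,b,c} (intersections have size
# at most 1, as SC-1 requires). An M-Tree Set of size <= k of the result
# certifies a set cover of size <= k, and vice versa.
adj, f = sc1_to_density_graph([{"a", "b"}, {"b", "c"}], {"a", "b", "c"})
```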
**First Direction: If there is a Set Cover of size \(\leq k\), then there is an M-Tree Set of density graph \((G,f)\) whose cardinality is \(\leq k\).**

Let \(S_{cover}\) be a set cover of size \(n\leq k\). For each \(S_{i}\in S_{cover}\), we will construct a monotone tree \((T_{i},f_{i})\) rooted at \(a_{S_{i}}\). In particular, \(f_{i}(a_{S_{i}})=f(a_{S_{i}})\). Then, for each element \(e_{j}\in S_{i}\), \(T_{i}\) will include \(b_{e_{j}}\) and the edge \((a_{S_{i}},b_{e_{j}})\), with \(f_{i}(b_{e_{j}})=1\). Note that if \(e_{j}\) is an element in multiple sets in \(S_{cover}\), simply pick one \(S_{i}\in S_{cover}\) such that \(e_{j}\in S_{i}\) to be the representative set of \(e_{j}\). Finally, for each set \(S_{l}\notin S_{cover}\), for each element \(e_{j}\in S_{l}\), add the node \(a_{S_{l}}\) and the edge \((b_{e_{j}},a_{S_{l}})\) to \(T_{i}\) with \(f_{i}(a_{S_{l}})=1\), where \((T_{i},f_{i})\) is the monotone tree rooted at the node \(a_{S_{i}}\) where \(S_{i}\in S_{cover}\) is the representative set containing \(e_{j}\).

Firstly, each element in the M-Tree Set is connected by construction. The only nodes in an element \((T_{i},f_{i})\) are the root node \(a_{S_{i}}\), where \(S_{i}\in S_{cover}\), nodes of the form \(b_{e_{j}}\), where \(e_{j}\in S_{i}\), and nodes of the form \(a_{S_{l}}\), where \(S_{l}\notin S_{cover}\) and there exists \(e_{j}\) in both \(S_{i}\) and \(S_{l}\). Edges of the form \((a_{S_{i}},b_{e_{j}})\) are part of the domain by construction and are included in \(T_{i}\). Similarly, edges of the form \((a_{S_{l}},b_{e_{j}})\) are also part of the domain by construction and are included in \(T_{i}\). For each edge \((a_{S_{l}},b_{e_{j}})\in T_{i}\), there must also exist an edge \((a_{S_{i}},b_{e_{j}})\). Thus all nodes in \(T_{i}\) are connected to \(a_{S_{i}}\) - and in particular are at most 2 edges away.

Secondly, each element in the M-Tree Set is a tree. Consider element \((T_{i},f_{i})\). By construction, if a cycle were to exist in \(T_{i}\), it would have to be of the form \(a_{S_{i}},b_{e_{p}},a_{S_{l}},b_{e_{q}},a_{S_{i}}\), where both \(e_{p}\) and \(e_{q}\) are in both \(S_{i}\) and \(S_{l}\). However, such a cycle would imply that two sets have at least two elements in their intersection, which is not possible given we reduced from SC-1.

Next, each element in the M-Tree Set is a monotone tree. \(f_{i}(v)=1\) for all \(v\in T_{i}\) that are not the root \(a_{S_{i}}\) of \((T_{i},f_{i})\), and \(f_{i}(a_{S_{i}})\geq 1\).

Finally, \(f(v)=\sum_{i=1}^{n}f_{i}(v)\) for all \(v\in G\). Each node \(a_{S_{i}}\) such that \(S_{i}\in S_{cover}\) is part of one monotone tree \((T_{i},f_{i})\) and \(f_{i}(a_{S_{i}})=f(a_{S_{i}})\). Each node \(b_{e_{j}}\in B\) is also part of only one monotone tree \((T_{i},f_{i})\) and \(f_{i}(b_{e_{j}})=1=f(b_{e_{j}})\). Finally, for a set \(S_{l}\notin S_{cover}\), \(a_{S_{l}}\) is included in \(m=|S_{l}|\) monotone trees. For each such monotone tree \((T_{i},f_{i})\), \(f_{i}(a_{S_{l}})=1\), thus \(\sum_{i=1}^{n}f_{i}(a_{S_{l}})=m=f(a_{S_{l}})\). Thus, we have proven that there exists an M-Tree Set of \((G,f)\) of size \(\leq k\).

**Second Direction: If there is an M-Tree Set of density graph \((G,f)\) of size \(\leq k\), then there is a Set Cover of size \(\leq k\).**

Let \(\{(T_{i},f_{i})\}\) be an M-Tree Set of density graph \((G,f)\) of size \(k\). Each monotone tree \((T_{i},f_{i})\) in the set has a root node \(m_{i}\). If multiple vertices in \(T_{i}\) have the maximum value of \(f_{i}\) (as seen in Figure 1(B)), simply set one of them to be \(m_{i}\). Each edge in \(T_{i}\) has implicit direction oriented away from \(m_{i}\). First we prove Lemma 3.2.

**Lemma 3.2**.: _Let \(b_{e_{j}}\in B\). Either \(b_{e_{j}}\) is the root of a monotone tree in the M-Tree Set or at least one of its neighbors is the root of a monotone tree in the M-Tree Set._

Proof.: Assume \(b_{e_{j}}\) is not a root of any monotone tree. Consider a monotone tree \((T_{i},f_{i})\) of the M-Tree Set containing \(b_{e_{j}}\). This means that \(f_{i}(b_{e_{j}})>0\). Consider the node \(a_{S_{l}}\) that is the parent of \(b_{e_{j}}\) in \((T_{i},f_{i})\). Assume \(a_{S_{l}}\) is not the root node of \((T_{i},f_{i})\). Because \(a_{S_{l}}\) is not the root of the component, it must have a parent \(b_{e_{d}}\). Consider the remaining density graph \((G,g=f-f_{i})\). By definition of monotone tree, \(0<f_{i}(b_{e_{j}})\leq f_{i}(a_{S_{l}})\leq f_{i}(b_{e_{d}})\). By construction, we also know \(f(a_{S_{l}})=\sum_{e_{j}\in S_{l}}f(b_{e_{j}})\).
Therefore, \(g(a_{S_{l}})>\sum_{e_{j}\in S_{l}}g(b_{e_{j}})\). Because \(a_{S_{l}}\) has more density than the sum of all of its neighbors in \((G,g)\), it is impossible for \(a_{S_{l}}\) to not be the root of at least one monotone tree in any M-Tree Set of \((G,g)\). Thus if \(b_{e_{j}}\) is not the root of any monotone tree in the M-Tree Set, \(a_{S_{l}}\) must be the root of a monotone tree in the M-Tree Set.

We now construct a set cover from the M-Tree Set with the help of Lemma 3.2. Initialize \(S_{cover}\) to be an empty set. For each \(a_{S_{i}}\in A\) that is a root of a monotone tree in the M-Tree Set, add \(S_{i}\) to \(S_{cover}\). Next, for each \(b_{e_{j}}\in B\) that is the root of a monotone tree in the M-Tree Set, if there is not already a set \(S_{i}\in S_{cover}\) such that \(e_{j}\in S_{i}\), choose a set \(S_{l}\) such that \(e_{j}\in S_{l}\) to add to the set cover. Every element must now be covered by \(S_{cover}\). A node \(e_{j}\) that is not the root in any monotone tree in the M-Tree Set must have a neighbor \(a_{S_{l}}\) that is a root in some monotone tree by Lemma 3.2. The corresponding set \(S_{l}\) was added to \(S_{cover}\) - thus \(e_{j}\) is covered. A node \(e_{m}\) such that \(b_{e_{m}}\) is the root of a monotone tree in the M-Tree Set must also be covered by \(S_{cover}\) - as a set was added explicitly to cover \(e_{m}\) if it was not already covered. We have added at most one set to the cover for every monotone tree in the M-Tree Set, therefore \(|S_{cover}|\leq k\).

Combining both directions, we prove that, given an SC-1 instance, we can construct a density graph \((G,f)\) such that there exists a set cover of size \(\leq k\) if and only if the density graph has an M-Tree Set of size \(\leq k\). This proves the problem is NP-Hard, and thus the problem is NP-Complete.

Figure 4: (A) An SC-1 instance with four sets and seven elements. (B) The M-Tree decision problem instance created by following the reduction outlined in the proof of Theorem 3.1. The top row consists of nodes in \(A\subset V\) in the bipartite graph, which are nodes representing sets, while the bottom row consists of nodes in \(B\subset V\) in the bipartite graph, which are nodes representing elements.

### Approximation Hardness

From the proof of Theorem 3.1, it is easy to see that given an instance of SC-1, the size of its optimal set cover is equivalent to the cardinality of the minimum M-Tree Set of the density graph constructed in the reduction. Hence the hardness of approximation results for SC-1 translate to the minimum M-Tree Set problem too. We therefore obtain the following result, stated in Corollary 3.3, which easily follows from a similar result for SC-1. The SC-1 result from [14] is stated in Appendix A. We note that while Corollary 3.3 states the bound in terms of the number \(n\) of relative maxima, a similar bound can be obtained where \(n\) is the number of vertices.

**Corollary 3.3**.: _There exists a constant \(c>0\) such that approximating the minimum M-Tree Decomposition problem within a factor of \(c\frac{\log(n)}{\log(\log(n))}\), where \(n\) is the number of relative maxima on the given density graph, in deterministic polynomial time is possible only if \(NP\subset DTIME(2^{n^{1-\epsilon}})\), where \(\epsilon\) is any positive constant less than \(\frac{1}{2}\)._

Proof.: Under the same assumptions mentioned above, there exists a \(c>0\) such that SC-1 cannot be approximated within a factor of \(c\frac{\log(n)}{\log(\log(n))}\), where \(n\) is the number of elements in the universe [14].
We note that for a given SC-1 instance, performing the reduction to the M-Tree Set decision problem seen in the proof of Theorem 3.1 results in a density graph with at most \(\frac{n(n-1)}{2}+n\) relative maxima - the upper bound on the number of sets in the SC-1 instance. Thus, the number of relative maxima on the density graph is \(O(n^{2})\). For sufficiently large \(n\), we have the following: \(c\frac{\log(n^{2})}{\log(\log(n^{2}))}=2c\frac{\log(n)}{\log(\log(n))+1}<2c\frac{\log(n)}{\log(\log(n))}\). Thus there exists a \(c>0\) such that the minimum M-Tree Decomposition problem cannot be approximated within a factor of \(c\frac{\log(n^{2})}{\log(\log(n^{2}))}\) under the same assumptions mentioned previously. Because the number of relative maxima on the density graph is \(O(n^{2})\), we can substitute the number of relative maxima for \(n^{2}\) to establish our final bound.

### Variations of minimum M-Tree Sets are also NP-Complete

In addition to proving that computing minimum M-Tree Sets of density graphs is NP-Complete, we have also proven Theorem 3.4 in Appendix B. The theorem states that computing the minimum CM-Tree Sets, minimum SM-Tree Sets, and minimum FM-Tree Sets of density graphs is also NP-Complete.

**Theorem 3.4**.: _Given a density graph \((G(V,E),f)\) and a parameter \(k\), determining whether or not there exists a CM-Tree Set, SM-Tree Set, or FM-Tree Set of size \(\leq k\) are all NP-Complete._

It should be noted that Corollary 3.3 can be extended to CM-Tree Sets and FM-Tree Sets. In contrast, SC-1 is not used in the NP-Completeness proof for SM-Tree Sets. Thus, Corollary 3.3 does not apply to minimum SM-Tree Sets, and there is hope that we can develop approximation algorithms with tighter bounds for this problem than for the other variations.

## 4 Algorithms

### Additive Error Approximation Algorithms

Now that we have shown that computing minimum M-Tree Sets of density graphs, as well as several additional variations, is NP-Complete, we focus on developing approximation algorithms. We define two algorithms with different additive error terms. Firstly, we note that a naive upper bound for a given density graph is the number of relative maxima on the graph. We include Algorithm 6 in Appendix C to establish this naive upper bound. Shifting focus to nontrivial approaches, Algorithm 3 computes a minimum M-Tree Set of the density graph restricted to a spanning tree \(T\subseteq G\). We prove that \(|\textsf{minMset}((T,f))|\leq|\textsf{minMset}((G,f))|+2g\), where \(g\) is the _genus_ of \(G\). For a connected graph \(G(V,E)\), its genus is equal to \(|E|-|V|+1\), which is the number of independent cycles on the graph. This approximation error bound for Algorithm 3 is stated in Theorem 4.1.

**Theorem 4.1**.: _Let \((G(V,E),f)\) be a density graph with \(\beta_{1}G=g\). Let \(k^{*}\) be the size of a minimum (S)M-Tree Set of \((G,f)\). Algorithm 3 outputs an (S)M-Tree Set of size at most \(k^{*}+2g\)._

Proof.: We need to prove Lemma 4.2 to provide an upper bound on \(|\mathsf{minMset}(T,f)|\) for any spanning tree \(T\subseteq G\). Algorithm 2 will then output an M-Tree Set of size at most equal to the upper bound, thus completing our proof. The proof is identical for SM-Tree Sets.

**Lemma 4.2**.: _Let \((G(V,E),f)\) be a density graph with \(\beta_{1}G=g\) and \(|\mathsf{minMset}(G,f)|=k^{*}\). For any spanning tree \(T\subseteq G\), \(|\mathsf{minMset}(T,f)|\leq k^{*}+2g\)._

Let \(M=\{(T_{i},f_{i})\}\) be a minimum M-Tree Set of \((G,f)\).
Let \(E_{cut}\) be the set of \(g\) edges that, if removed from \(G\), leave the spanning tree \(T\). Firstly, we note that \(|\mathsf{minMset}(G,f)|\leq|\mathsf{minMset}(T,f)|\), as any M-Tree Set of \((T,f)\) is also an M-Tree Set of \((G,f)\). We will construct an M-Tree Set of \((T,f)\) from a minimum M-Tree Set of \(G\). For each monotone tree \((T_{i},f_{i})\in M\), consider an edge \(e_{j}=(u,v)\in E_{cut}\) that is in \(T_{i}\). There is implicit direction to \(e_{j}\) with respect to the root of \((T_{i},f_{i})\), meaning either (1) (\(u\to v\)) or (2) (\(v\to u\)). If (1) is the case, we can cut the branch rooted at \(v\) off of \((T_{i},f_{i})\) to create two non-intersecting monotone trees. See Figure 5 for an example. We perform a similar operation if (2) is the case, but instead cut the branch rooted at \(u\). Perform this cut for each edge in \(E_{cut}\) to divide \((T_{i},f_{i})\) into, at most, \(|E_{cut}|+1\) non-intersecting monotone trees.

After dividing each tree into at most \(|E_{cut}|+1\) non-intersecting monotone trees, we make 2 key observations - (1) we still have an M-Tree Set of \((G,f)\) and (2) no edge in \(E_{cut}\) is in any monotone tree in the M-Tree Set. Thus the M-Tree Set is also an M-Tree Set of \((T,f)\). We can shrink the size of this M-Tree Set by summing the components that share the same root. In particular, consider an edge \(e_{j}=(u,v)\in E_{cut}\). We have created as many as \(k^{*}\) additional monotone trees rooted at \(u\) and as many as \(k^{*}\) additional monotone trees rooted at \(v\). Sum the monotone trees rooted at \(u\) to create a single monotone tree rooted at \(u\). The sum would clearly still be a monotone tree because all monotone trees are subtrees of the tree \(T\), so no cycle is created and every path from \(u\) remains non-increasing. We can similarly do the same for \(v\), and for all edges in \(E_{cut}\). Thus we have a new M-Tree Set of \((T,f)\), with (at most) an additional monotone tree rooted at each node of each edge in \(E_{cut}\) when compared to the original M-Tree Set of \(G\). Thus \(|\mathsf{minMset}(T,f)|\) is bounded above by \(k^{*}+2g\).
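Algorithm 3 itself is not spelled out here; under our reading of Theorem 4.1, it simply restricts the density graph to an arbitrary spanning tree and decomposes that tree exactly. A sketch reusing `tree_alg` from the Section 2 sketches (names are ours):

```python
from collections import deque

def spanning_tree(adj):
    """Any spanning tree of a connected graph, found here by BFS from an
    arbitrary root; dropping the g cycle-closing edges leaves a tree."""
    root = next(iter(adj))
    tree = {u: set() for u in adj}
    seen, queue = {root}, deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                tree[u].add(w)
                tree[w].add(u)
                queue.append(w)
    return tree

def mtree_set_via_spanning_tree(adj, f):
    """Sketch of Algorithm 3 as we read it: decompose the density graph
    restricted to a spanning tree; by Theorem 4.1 the answer contains at most
    2g more monotone trees than an optimal decomposition of the graph."""
    return tree_alg(spanning_tree(adj), f)
```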
### Approximation Algorithm for Minimum SM-Tree Sets of Density Cactus Graphs

A cactus graph is a graph such that no edge is part of more than one simple cycle [10]. See Figure 6 (A) for an example. Many problems that are NP-hard on graphs belong to P when restricted to cacti - such as vertex cover and independent set [11]. While we do not yet know whether or not computing a minimum M-Tree Set (or any of its variations) of density cactus graphs is NP-hard, we have developed a 3-approximation algorithm for computing the minimum SM-Tree Set of a density cactus graph.

Figure 5: (A) shows a single monotone tree with its root colored red and an edge colored green. Cutting the green edge leaves us with two non-intersecting monotone trees shown in (B).

We first prove Theorem 4.3, which states that for any density cactus graph \((G,f)\), there exists a spanning tree \(T\subseteq G\) such that \(|\mathsf{minSMset}(T,f)|\) is at most 3 times \(|\mathsf{minSMset}(G,f)|\).

```
Input: A density tree \((T(V,E),f)\) and two subtrees \(T_{1},T_{2}\) of \(T\) that share a single node \(v\) as intersection
Output: A tuple \((a,b)\) representing the number of monotone sweeps \(a\) from mode-forced nodes on \(T_{1}\) to make all mode-forced nodes on \(T\) be part of \(T_{2}\), and the remaining function value \(b\) at \(v\) after the monotone sweeps.
While there exists a mode-forced node \(u\in V\) off of \(T_{2}\):
- monotone-sweep\(((T,f),u,f(u))\)
Set \(a=\) number of monotone sweeps performed
Set \(b=\) remaining density on \(v\)
Return \((a,b)\)
```
**Algorithm 4** split-tree-algo\(((T(V,E),f),T_{1},T_{2})\)

**Theorem 4.3**.: _Let \((G(V,E),f)\) be a density cactus graph. There exists a spanning tree \(T\) of \(G\) such that \(|\mathsf{minSMset}(T,f)|\leq 3|\mathsf{minSMset}(G,f)|\)._

Proof.: Let \(M=\{(T_{i},f_{i})\}\) be a minimum SM-Tree Set of \((G,f)\), \(k=|M|\), and \(\beta_{1}G=g\). Consider the graph \(G^{\prime}=\bigcup_{i=1}^{k}T_{i}\). Let \(\beta_{1}G^{\prime}=g^{\prime}\). We note that \(g^{\prime}\leq g\) and that \(M\) is also a minimum SM-Tree Set of \(G^{\prime}\). We will use \(G^{\prime}\) to help construct a spanning tree \(T\) of \(G\) with an SM-Tree Set of the desired cardinality. Note that if \(G^{\prime}\) has no cycles, then there obviously exists a spanning tree \(T\) of \(G\) such that \(|\mathsf{minSMset}(T,f)|=|\mathsf{minSMset}(G,f)|\). Additionally, if \(g^{\prime}=1\), then creating \(T\) by removing any edge from the simple cycle gives \(|\mathsf{minSMset}(T,f)|\leq|\mathsf{minSMset}(G,f)|+2\) (by arguments similar to Lemma 4.2 and Theorem 4.1). Therefore assume \(g^{\prime}\geq 2\). Construct the spanning tree \(T\) as follows:

* Add each edge in \(G\) that is not part of a simple cycle.
* For each simple cycle in \(G\) that is not in \(G^{\prime}\), add all edges of the cycle to \(T\) except for one that is missing in \(G^{\prime}\) (it does not matter which if multiple such edges exist).
* For each simple cycle in \(G\) that is in \(G^{\prime}\), add all edges of that cycle to \(T\) except for one (it does not matter which).

Figure 6: (A) An example of a cactus graph - which is a graph such that no edge is part of more than a single simple cycle. It is essentially a tree of cycles graph - which is a graph such that no vertex is part of more than one simple cycle - with the exception that two simple cycles may share a single vertex. Tree of cycles graphs are cactus graphs, but cactus graphs (such as this one) are not necessarily tree of cycles graphs. (B) An example of an input for Algorithm 4. The density tree is broken into two subtrees (green and blue) that have a single node as intersection (red). Monotone sweeping is performed iteratively at mode-forced nodes only in one of the subtrees. Once the only remaining mode-forced nodes lie on the other tree, the output tuple containing the number of monotone sweeps performed and the remaining density at the intersection node is returned.

Let \(k^{\prime}=|\mathsf{minSMset}(T,f)|\). \(k^{\prime}\) is bounded above by \(k+2g^{\prime}\), because removing edges that are not used in any monotone tree in \(M\) from the domain will not change \(|\mathsf{minSMset}(G^{\prime},f)|\), and removing an edge from a simple cycle in \(G^{\prime}\) will increase \(|\mathsf{minSMset}(G^{\prime},f)|\) by at most 2 (again by Lemma 4.2). \(k\) is also bounded below by \(2+g^{\prime}\): for each cycle in \(G^{\prime}\), the number of monotone trees in \(M\) that contain nodes in that simple cycle must be at least 3 - otherwise the set cannot be an SM-Tree Set. So consider a leaf cycle \(C_{0}\) in \(G^{\prime}\). We know that there are at least 3 monotone trees in \(M\) that cover \(C_{0}\).
For a cycle \(C_{1}\) adjacent to \(C_{0}\) in \(G^{\prime}\), note that there is a single path between the two cycles, and the monotone trees that cover \(C_{0}\) cannot completely cover \(C_{1}\); otherwise \(M\) would not be an SM-Tree Set. There must be at least one monotone tree with nodes on \(C_{1}\) and no nodes on \(C_{0}\). Continuing this traversal over all cycles, it is clear that each cycle forces an additional monotone tree in the SM-Tree Set. Thus, we cannot have an SM-Tree Set of size less than \(2+g^{\prime}\). From above, we have \(\frac{k^{\prime}}{k}\leq\frac{k+2g^{\prime}}{k}\leq\frac{2+g^{\prime}+2g^{\prime}}{2+g^{\prime}}\leq\frac{3g^{\prime}+2}{g^{\prime}+2}<3\).

With Theorem 4.3 proven, we aim to compute the optimal density spanning tree of a density cactus graph. To help compute such an optimal density spanning tree, we first define Algorithm 4. Given a density tree \((T(V,E),f)\) divided into two subtrees \(T_{1}\) and \(T_{2}\) that share a single node \(v\in V\) as intersection, Algorithm 4 performs monotone sweeping operations on the mode-forced nodes of \(T_{1}\) until all mode-forced nodes of \((T,f)\) are on \(T_{2}\). An example of a valid input is seen in Figure 6 (B). The output is a tuple \((a,b)\), where \(a\) is the number of monotone sweeps performed and \(b\) is the remaining density on \(v\) after performing the monotone sweeps. The tuple essentially captures how helpful monotone sweeping from \(T_{1}\) is for building a minimum (S)M-Tree Set on \(T_{2}\).

Algorithm 4 can be used to help compute the desired density spanning tree. In particular, it is used in Algorithm 5 to cut the optimal edge from each cycle, one cycle at a time. We prove Theorem 4.4, which states that Algorithm 5 returns an SM-Tree Set at most 3 times larger than a minimum SM-Tree Set of a density cactus graph. The running time of Algorithm 5 is \(O(n^{3})\), where \(n\) is the number of nodes in the input cactus graph: Algorithm 2 (\(O(n^{2})\)) is performed once for each edge (\(O(n)\)) that is part of a simple cycle.

```
Input: Density cactus graph \((G(V,E),f)\)
Output: SM-Tree Set of \((G,f)\)
If \(G\) is a tree
- Compute an optimal (S)M-Tree Set of \((G,f)\) using Algorithm 2.
Else if \(G\) has only a single cycle
- Compute an optimal-sized (S)M-Tree Set of each density spanning tree of \(G\) and return the smallest cardinality (S)M-Tree Set.
Else (\(G\) has multiple simple cycles)
- Compute a leaf cycle \(C=c_{1},\ldots,c_{m}\) connected to the rest of the cycles at \(c_{i}\)
- Let \(G_{C}=\) the simple cycle \(C\) with all branches off of each node in the cycle - not including the branches off of \(c_{i}\) that do not lead to other cycles in the graph. Let \(G_{\tilde{C}}=G-G_{C}+c_{i}\).
- Fix a spanning tree \(T_{G_{\tilde{C}}}\) of \(G_{\tilde{C}}\).
- For each spanning tree \(T_{i}\) of \(G_{C}\), compute \(\mathsf{split\text{-}tree\text{-}algo}((T_{i}\cup T_{G_{\tilde{C}}},f),T_{i},T_{G_{\tilde{C}}})\)
- Set \(G=G(V,E-e^{*})\) such that \(e^{*}\) is the edge removed from \(C\) that results in the spanning tree with the smallest output of \(\mathsf{split\text{-}tree\text{-}algo}\).
- Iterate until the base case (a single cycle graph) is achieved
```
**Algorithm 5** opt-spanning-tree-algo\(((G(V,E),f))\)
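In the same style as the earlier sketches, Algorithm 4 is short to write down; the lexicographic use of its \((a,b)\) output, fewer sweeps first and then less remaining density at the shared node, is justified by Claims 4.6 and 4.7 below. The encoding and names are ours, reusing `mode_forced_nodes` and `monotone_sweep`:

```python
def split_tree_algo(adj, f, t2_nodes, v):
    """Sketch of Algorithm 4: sweep from mode-forced nodes lying off T2 until
    every mode-forced node lies on T2; return (number of sweeps performed,
    density left at the shared node v). `t2_nodes` is the vertex set of T2."""
    f = dict(f)
    sweeps = 0
    while True:
        off_t2 = [u for u in mode_forced_nodes(adj, f) if u not in t2_nodes]
        if not off_t2:
            # all remaining mode-forced nodes are on T2
            return sweeps, f[v]
        _, f = monotone_sweep(adj, f, off_t2[0], f[off_t2[0]])
        sweeps += 1
```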
**Theorem 4.4**.: _Algorithm 5 outputs an SM-Tree Set that is at most 3 times the size of a minimum SM-Tree Set of the input density cactus graph._

Proof.: Clearly, the algorithm outputs a minimum SM-Tree Set when the input domain is a tree. When \(G\) contains a single cycle, by Lemma 4.2 the outputted SM-Tree Set will have at most 2 more monotone trees than a minimum SM-Tree Set of \(G\). Therefore, we only need to prove that Theorem 4.4 holds when \(G\) contains multiple simple cycles. Because \(G\) is a cactus graph, a leaf cycle \(C=c_{1},\ldots,c_{m}\) exists. Let \(G_{C}\) be the graph of all nodes in the cycle and branches off of those nodes, excluding branches off of \(c_{i}\) that do not lead to other cycles. Let \(G_{\tilde{C}}\) be the graph of \(G\) excluding \(C\) and all branches off of \(C\), except for the node \(c_{i}\) itself. \(G_{C}\) has \(m\) spanning trees, \(T_{1},\ldots,T_{m}\), corresponding to the \(m\) edges of \(C\). Fix a spanning tree \(T_{G_{\tilde{C}}}\) of \(G_{\tilde{C}}\). We next introduce Lemma 4.5.

**Lemma 4.5**.: _Let \(T^{*}\) be the spanning tree of \(G_{C}\) such that the output of \(\textsf{split-tree-algo}(T^{*}\cup T_{G_{\tilde{C}}},T^{*},T_{G_{\tilde{C}}})\) is minimized. Then \(|\textsf{minSMset}((T^{*}\cup T_{G_{\tilde{C}}},f))|\leq|\textsf{minSMset}((T_{k}\cup T_{G_{\tilde{C}}},f))|\) for any spanning tree \(T_{k}\subseteq G_{C}\)._

Proof.: Lemma 4.5 is proven by proving Claim 4.6 and Claim 4.7.

**Claim 4.6**.: _If \(\textsf{split-tree-algo}(T_{j}\cup T_{G_{\tilde{C}}},T_{j},T_{G_{\tilde{C}}})[0]<\textsf{split-tree-algo}(T_{k}\cup T_{G_{\tilde{C}}},T_{k},T_{G_{\tilde{C}}})[0]\) then \(|\textsf{minSMset}((T_{j}\cup G_{\tilde{C}},f))|\leq|\textsf{minSMset}((T_{k}\cup G_{\tilde{C}},f))|\)._

Proof.: Let \(T_{j},T_{k}\) be spanning trees of \(G_{C}\) such that \(a_{j}<a_{k}\), where \(a_{j}=\textsf{split-tree-algo}(T_{j}\cup T_{G_{\tilde{C}}},T_{j},T_{G_{\tilde{C}}})[0]\) and \(a_{k}=\textsf{split-tree-algo}(T_{k}\cup T_{G_{\tilde{C}}},T_{k},T_{G_{\tilde{C}}})[0]\). Let \(s^{*}=|\textsf{minSMset}((T_{G_{\tilde{C}}},f))|\). Algorithm 4 performs Algorithm 2, sweeping from mode-forced nodes on \(T_{j}\), but stops once mode-forced nodes only remain on \(T_{G_{\tilde{C}}}\). Thus it is still constructing minimum SM-Tree Sets but stopping short of completion. The first element of the output of Algorithm 4 indicates the number of iterations required to have only mode-forced nodes on \(T_{G_{\tilde{C}}}\). Hence \(|\textsf{minSMset}((T_{j}\cup T_{G_{\tilde{C}}},f))|\leq a_{j}+s^{*}\). Similarly, \(|\textsf{minSMset}((T_{k}\cup T_{G_{\tilde{C}}},f))|\geq a_{k}+s^{*}-1\). These bounds prove the claim.

**Claim 4.7**.: _If \(\textsf{split-tree-algo}(T_{j}\cup T_{G_{\tilde{C}}},T_{j},T_{G_{\tilde{C}}})[0]=\textsf{split-tree-algo}(T_{k}\cup T_{G_{\tilde{C}}},T_{k},T_{G_{\tilde{C}}})[0]\) and \(\textsf{split-tree-algo}(T_{j}\cup T_{G_{\tilde{C}}},T_{j},T_{G_{\tilde{C}}})[1]<\textsf{split-tree-algo}(T_{k}\cup T_{G_{\tilde{C}}},T_{k},T_{G_{\tilde{C}}})[1]\) then \(|\textsf{minSMset}((T_{j}\cup G_{\tilde{C}},f))|\leq|\textsf{minSMset}((T_{k}\cup G_{\tilde{C}},f))|\)._

Proof.: Let \(T_{j},T_{k}\) be spanning trees of \(G_{C}\) such that \(a_{j}=a_{k}\) and \(b_{j}<b_{k}\), where \((a_{j},b_{j})=\textsf{split-tree-algo}(T_{j}\cup T_{G_{\tilde{C}}},T_{j},T_{G_{\tilde{C}}})\) and \((a_{k},b_{k})=\textsf{split-tree-algo}(T_{k}\cup T_{G_{\tilde{C}}},T_{k},T_{G_{\tilde{C}}})\). \(a_{j}=a_{k}\) indicates that both \(T_{j}\) and \(T_{k}\) require the same number of monotone sweeps to leave mode-forced nodes only on \(T_{G_{\tilde{C}}}\). However, \(b_{j}<b_{k}\) means that \(T_{j}\) is more helpful than \(T_{k}\) for reducing the minimum SM-Tree Set size on the remainder in the same number of monotone sweeps, by Claim 2.1.
This proves the claim. This completes the proof of Lemma 4.5.

It follows from Lemma 4.5 that Algorithm 5 outputs a minimum SM-Tree Set on the density spanning tree that has the smallest minimum SM-Tree Set among all density spanning trees. Combining this with Theorem 4.3 finishes the proof.

## 5 Conclusion

Decomposing density graphs into a minimum M-Tree Set becomes NP-Complete when the input graph is not restricted to trees. We proved that computing the minimum M-Tree, CM-Tree, SM-Tree, and FM-Tree Set of density graphs is NP-Complete. We provided additive error approximation algorithms for the minimum M-Tree Set problem, and developed a 3-approximation algorithm for minimum SM-Tree Sets of density cactus graphs. Future work will be to close the gap between the approximation hardness bounds we have established and the error bounds of the algorithms we have developed.
2302.14520
Large Language Models Are State-of-the-Art Evaluators of Translation Quality
We describe GEMBA, a GPT-based metric for assessment of translation quality, which works both with a reference translation and without. In our evaluation, we focus on zero-shot prompting, comparing four prompt variants in two modes, based on the availability of the reference. We investigate nine versions of GPT models, including ChatGPT and GPT-4. We show that our method for translation quality assessment only works with GPT 3.5 and larger models. Compared to results from the WMT22 Metrics shared task, our method achieves state-of-the-art accuracy in both modes when compared to MQM-based human labels. Our results are valid on the system level for all three WMT22 Metrics shared task language pairs, namely English into German, English into Russian, and Chinese into English. This provides a first glimpse into the usefulness of pre-trained, generative large language models for quality assessment of translations. We publicly release all our code and prompt templates used for the experiments described in this work, as well as all corresponding scoring results, to allow for external validation and reproducibility.
Tom Kocmi, Christian Federmann
2023-02-28T12:23:48Z
http://arxiv.org/abs/2302.14520v2
# Large Language Models Are State-of-the-Art Evaluators of Translation Quality

###### Abstract

We describe GEMBA, a GPT-based metric for assessment of translation quality, which works both with a reference translation and without. In our evaluation, we focus on zero-shot prompting, comparing four prompt variants in two modes, based on the availability of the reference. We investigate seven versions of GPT models, including ChatGPT. We show that our method for translation quality assessment only works with GPT 3.5 and larger models. Compared to results from the WMT22 Metrics shared task, our method achieves state-of-the-art accuracy in both modes when compared to MQM-based human labels. Our results are valid on the system level for all three WMT22 Metrics shared task language pairs, namely English into German, English into Russian, and Chinese into English. This provides a first glimpse into the usefulness of pre-trained, generative large language models for quality assessment of translations. We publicly release all our code and prompt templates used for the experiments described in this work, as well as all corresponding scoring results, to allow for external validation and reproducibility.1

Footnote 1: [https://github.com/MicrosoftTranslator/GEMBA](https://github.com/MicrosoftTranslator/GEMBA)

## 1 Introduction

One of the interesting properties of large language models (LLMs) such as GPT Brown et al. (2020) is their (implicit) support for multilingual Q&A. Prompting the model in the right way allows us to translate text between languages Vilar et al. (2022). This is surprising as GPT has not been fine-tuned for the translation task. Hendy et al. (2023) show that GPT-enabled translation achieves high quality when applied for the translation of high-resource languages, but still lacks in terms of translation quality for under-represented languages. Building on this finding--_if the model can translate, it may be able to differentiate good from bad translations_--we apply GPT for the task of translation quality assessment. In the remainder of this paper, inspired by recent progress on generative, pre-trained large language models (LLMs), we explore how these models can be applied for automated assessment of translation quality. The main research question for this work is: _Can LLMs be used for effective quality assessment of translations?_ We propose GEMBA, which stands for _GPT Estimation Metric Based Assessment_. The metric evaluates each segment translation in isolation and then averages across all obtained scores for the final, system-level score. We define and evaluate several prompt variants for zero-shot assessment of translation quality in two modes, either with a human reference translation, as a quality metric, or without one, as a quality estimation task. We design the main prompts based on the DA+SQM template used for human assessment of translation quality as implemented in the Appraise framework Federmann (2018) for WMT22 Kocmi et al. (2022), building on previous work conducted by Freitag et al. (2021).
The main contributions of this paper are:

* We demonstrate state-of-the-art capabilities of GPT-based translation quality assessment on the latest WMT22 metrics evaluation data (on the system level);
* We experiment with four prompt templates, showing that the least constrained template achieves the best performance;
* We evaluate seven different models of GPT, showing that only GPT 3.5 and larger models are capable of translation quality assessment;
* We show that GEMBA is not yet reliable enough on the segment level and, thus, should be applied for system-level evaluation only.

## 2 The GEMBA Metric

To assess translation quality via prompting an LLM, the following parameters are needed:

* prompt variant (from a pre-defined set)
* source language name, e.g., "Chinese"
* target language name, e.g., "English"
* source segments \(src_{1..N}\)
* candidate translations \(hyp_{1..N}\)
* optionally, reference translations \(ref_{1..N}\)

We generate a GPT request for every segment, querying them as individual zero-shot problems, and then aggregate the results. For this initial proof of concept, we leave improvements such as few-shot queries or document-level context to future work.

### Prompt variants

We experiment with four distinct prompt types, modeling two scoring and two classification tasks. For the scoring tasks, the first is based on **direct assessment** (_GEMBA-DA_) and the second on recent research efforts on **scalar quality metrics** (_GEMBA-SQM_).2 As scoring of translation quality may be an unnatural task for an LLM, we also design two classification tasks. The first is based on a **one-to-five stars ranking** (_GEMBA-stars_), which is a style often used when users are asked to review various services or products. The second prompt asks the LLM to label translation quality as one of five discrete **quality classes** (_GEMBA-classes_).

Footnote 2: Although names are based on existing techniques for human assessment, they do not match perfectly.

For each of these four prompt types, we experiment with two modes that differ with respect to the wording of the corresponding query templates, which either have access to a human reference or not. As an example, we show the _GEMBA-DA_ prompt in Figure 1. Based on token count, this is the least constrained prompt template that we experiment with. The complete set of prompt templates is available in Appendix A. As a naming convention, we mark quality estimation metrics (without reference) with the suffix "[noref]".

### Scoring process

The expected scores are in \([0,100]\) for the _GEMBA-DA_ and _GEMBA-SQM_ prompts, the same as for human assessment (Graham et al., 2013); for _GEMBA-stars_ the output is in \([1,5]\), and _GEMBA-classes_ assigns one of five class labels. We average segment-level scores to obtain system-level scores. For the _GEMBA-classes_ metric variant, we assign classes a numerical value from 0 to 4, based on the label, before averaging. Depending on the GPT model we query, sometimes answers are returned outside these ranges, as text. When we observe such an _invalid_ answer, we add randomness and sample more responses, selecting the first answer matching the output range as the final result.
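Concretely, the per-segment scoring loop described above might look as follows. This is our own sketch, not the released implementation: `query_model` is a placeholder for whatever completion API is used, and the field layout after the instruction (Source/Human reference/Translation) is our guess; the instruction itself is the GEMBA-DA wording shown in Figure 1.

```python
import re

GEMBA_DA_TEMPLATE = (
    "Score the following translation from {source_lang} to {target_lang} "
    "with respect to the human reference on a continuous scale from 0 to 100, "
    'where score of zero means "no meaning preserved" and score of one hundred '
    'means "perfect meaning and grammar".\n\n'
    "Source: {src}\nHuman reference: {ref}\nTranslation: {hyp}\nScore:"
)

def score_segment(query_model, src, hyp, ref, source_lang, target_lang, max_retries=10):
    """Query the model for one segment and keep the first reply that parses to a
    score in [0, 100]. On an invalid reply, re-sample with added randomness
    (here: a higher temperature), mirroring the resampling described above."""
    prompt = GEMBA_DA_TEMPLATE.format(source_lang=source_lang, target_lang=target_lang,
                                      src=src, ref=ref, hyp=hyp)
    for attempt in range(max_retries):
        reply = query_model(prompt, temperature=0.0 if attempt == 0 else 1.0)
        match = re.search(r"\d+(\.\d+)?", reply)
        if match and 0.0 <= float(match.group()) <= 100.0:
            return float(match.group())
    return None  # segment left unscored after exhausting retries

def system_score(segment_scores):
    """The system-level score is the plain average of the segment-level scores."""
    scored = [s for s in segment_scores if s is not None]
    return sum(scored) / len(scored)
```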
### GPT models

We experiment with seven GPT models--_ranging from GPT 2 up to the latest ChatGPT model_--that are described in Table 1.3 We use the Davinci-003 model as the default model for most experiments and compare the performance of other models in Section 4.3. Specifically, we use these models:

Footnote 3: [https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/models](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/models)

**GPT 2**: We use models provided by Radford et al. (2019), assessing if GPT 2 may be useful for quality assessment--_we find that it is not_;

**Ada**: GPT 3. Max request size of 2,048 tokens and training data up to October 2019;

**Babbage**: GPT 3. More capable than Ada;

**Curie**: GPT 3. More capable than Babbage;

**Davinci-002**: GPT 3.5. Max request size of 4,000 tokens and training data up to June 2021. Uses FeedME training;

**Davinci-003**: GPT 3.5. Uses PPO training;

**ChatGPT**: Improved GPT 3.5 model, fine-tuned using Reinforcement Learning from Human Feedback (RLHF).

GPT 3 models are based on Ouyang et al. (2022). Due to API request limits, not all combinations of prompt variants and GPT models may have been evaluated for this work.

\begin{table} \begin{tabular}{l l l} \hline \hline Model name & Abbrev. & Details \\ \hline GPT-2 & — & See Radford et al. (2019); does not work \\ Ada & — & We use text-ada-001; does not work \\ Babbage & Bab & We use text-babbage-001 \\ Curie & Curie & We use text-curie-001 \\ Davinci-002 & Dav2 & We use text-davinci-002 \\ Davinci-003 & Dav3 & We use text-davinci-003 \\ ChatGPT & Chat & We use text-chat-davinci-002 \\ \hline \hline \end{tabular} \end{table} Table 1: Details of all models used in this work. Models are sorted from oldest to newest, which also reflects their respective power. GPT 2 and Ada do not work.

Score the following translation from {source_lang} to {target_lang} with respect to the human reference on a continuous scale from 0 to 100, where score of zero means "no meaning preserved" and score of one hundred means "perfect meaning and grammar".

## 3 Experiments

To measure the performance of the proposed GEMBA metric, we follow the methodology and use test data provided by the WMT22 Metrics shared task Freitag et al. (2022), which hosts an annual evaluation of automatic metrics, benchmarking them against human gold labels. Effectively, we compare GEMBA against the best-performing automatic metrics: COMET Rei et al. (2020, 2022), BLEURT Sellam et al. (2020), or the non-public winner MetricX XXL.

### Test set

We use the MQM 2022 test set, which contains human judgments for the following three translation directions: English into German, English into Russian, and Chinese into English. The test set contains a total of 54 machine translation system outputs or human translations, comprising a total of 106k segments. Translation systems are mainly from participants of the WMT22 General MT shared task Kocmi et al. (2022). The source segments and human reference translations for each language pair contain around 2,000 sentences from four different text domains: news, social, conversational, and e-commerce. The gold standard for scoring translation quality is based on human MQM ratings, annotated by professionals who mark individual errors in each translation, as described in Freitag et al. (2021).

### Evaluation methods

To determine how well automatic metrics correlate with humans, we measure system-level, pairwise accuracy (_accuracy_, Kocmi et al., 2021). For segment-level evaluation, we use Kendall's Tau (\(\tau\), Freitag et al., 2022). Here, accuracy is defined as the number of system pairs ranked correctly by the metric with respect to the human ranking divided by the total number of system pair comparisons. Formally:

\[\text{Accuracy}=\frac{|\text{sign}(\text{metric}\Delta)=\text{sign}(\text{human}\Delta)|}{|\text{all system pairs}|}\]
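Read literally, the displayed formula can be implemented as below (our own sketch; note that, following the formula, pairs tied in both rankings count as agreement):

```python
from itertools import combinations

def sign(x: float) -> int:
    return (x > 0) - (x < 0)

def pairwise_accuracy(metric_scores, human_scores):
    """System-level pairwise accuracy: the fraction of all system pairs on
    which sign(metric delta) equals sign(human delta). Both arguments map a
    system name to its system-level score."""
    systems = sorted(metric_scores)
    pairs = list(combinations(systems, 2))
    agree = sum(
        1 for a, b in pairs
        if sign(metric_scores[a] - metric_scores[b])
        == sign(human_scores[a] - human_scores[b])
    )
    return agree / len(pairs)
```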
Formally:

\[\text{Accuracy}=\frac{|\,\text{sign}(\text{metric}\Delta)=\text{sign}(\text{human}\Delta)\,|}{|\,\text{all system pairs}\,|}\]

The variant of Kendall's Tau used for metric evaluation changed over the years. Initially, Callison-Burch et al. (2011) proposed to use Kendall's Tau-a, ignoring human rankings that tied while penalising ties in automatic metrics:

\[\tau=\frac{\text{Concordant}-\text{Discordant}}{\text{Concordant}+\text{Discordant}}\]

where Concordant is the set of all human segment comparisons for which a given metric suggests the same order of systems and Discordant is the set of all human comparisons for which a given metric disagrees. This definition was later updated by Machacek and Bojar (2014), who handle ties as a separate group, in contrast to Concordant and Discordant. The Metrics shared tasks of Mathur et al. (2020) and Freitag et al. (2021) changed this back to the 2011 version. Last year, Freitag et al. (2022) changed it to Kendall's Tau-b, which makes adjustments for ties. Overall, ties in automatic metrics are rare for non-identical translations but become an issue when a method outputs only a discrete set of scores (as in our case). Additionally, Kendall's Tau is susceptible to noise in gold pairwise rankings (Freitag et al., 2022). We reproduced all scores reported in the WMT22 Metrics shared task findings paper with the official WMT22 script.4 Reported scores match Table 11 of the WMT22 metrics findings paper (Freitag et al., 2022).

Figure 1: The best-performing prompt, based on Direct Assessment, expecting a score between 0–100. Template **portions in bold face** are used only when a human reference translation is available.

## 4 Results

We investigate GEMBA's performance in two modes: with a reference translation and without a reference translation (in a quality estimation setting). Table 2 reports pairwise accuracy on the system level, comparing _GEMBA-DA_ against the best-performing metrics from the WMT22 Metrics shared task (Freitag et al., 2022).

### Reference-based

The results in Table 2 show that our reference-based **GEMBA-Dav3-DA** metric sets a new state of the art. It outperforms all of the other reference-based metrics from the WMT22 Metrics shared task. The observed level of metric performance is unexpected, especially considering that the human labels used as a gold standard are themselves noisy, which makes an accuracy of 100% impossible for an automatic metric to obtain.

### Quality estimation

Table 2 shows that our reference-less metric **GEMBA-Dav3-DA[noref]** achieves the highest performance in the quality estimation mode and strongly outperforms all of the other reference-less metrics. Moreover, it also outperforms all of the other reference-based metrics, performing only slightly worse than **GEMBA-Dav3-DA**. Again, the observed level of assessment quality is unexpectedly high, highlighting the potential of using LLMs for translation quality assessment tasks.

### Comparison of GPT models

We compare the performance of various GPT versions as an automatic metric. Table 3 shows results for all models and all prompt variants we have experimented with. We do not show results for the GPT-2 or Ada models: neither produced replies in the expected scoring range, nor any meaningful replies at all. We list a couple of their answers in Appendix C. Based on our experiments, we conclude that they are not powerful enough to understand the zero-shot prompts.
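Before comparing the remaining models, the pairwise accuracy measure of Section 3.2 can be made concrete with a minimal sketch; the score dictionaries and values below are made up, and this is not the official WMT22 script:

```python
# Pairwise accuracy (Kocmi et al., 2021): the fraction of system pairs that the
# metric ranks in the same order as the human gold ranking.
from itertools import combinations

def pairwise_accuracy(metric_scores: dict, human_scores: dict) -> float:
    concordant, total = 0, 0
    for a, b in combinations(sorted(metric_scores), 2):
        total += 1
        if (metric_scores[a] > metric_scores[b]) == (human_scores[a] > human_scores[b]):
            concordant += 1
    return concordant / total

# Toy usage with made-up numbers: two of three pairs agree with the humans.
metric = {"sysA": 88.0, "sysB": 86.1, "sysC": 70.8}
human = {"sysA": 0.9, "sysB": 0.7, "sysC": 0.8}
print(pairwise_accuracy(metric, human))  # 0.666...
```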
By contrast, the Babbage and Curie models appear to understand what type of answer they should produce, but the quality of their scores seems to be close to random guessing. Thus, both Babbage and Curie are useless for translation quality assessment. The main performance jump occurs for GPT 3.5 and larger models, i.e., Davinci-002, Davinci-003, and ChatGPT. Each of them achieves highly competitive results for all of the prompt variants we have tested. Interestingly, ChatGPT appears to have the lowest quality among these three models. In addition, ChatGPT often replies with a score followed by an explanation of why it has assigned that score. One possible reason for this lower level of performance may lie in the form of the prompt, which was not modified to instruct ChatGPT not to generate an explanation. Therefore, it is possible that different prompts may improve the performance of the ChatGPT model. Unsurprisingly, the best performance is obtained by the most powerful LLM, Davinci-003. This also confirms the findings of Hendy et al. (2023), who demonstrated superior translation capabilities with this model over all other GPT variants.

\begin{table} \begin{tabular}{l c} \hline \hline Metric & Accuracy \\ \hline **GEMBA-Dav3-DA** & 88.0\% \\ **GEMBA-Dav3-DA[noref]** & 86.1\% \\ MetricX XXL & 85.0\% \\ BLEURT-20 & 84.7\% \\ COMET-22 & 83.9\% \\ COMET-20 & 83.6\% \\ UniTE & 82.8\% \\ MS-COMET-22 & 82.8\% \\ MATESE & 81.0\% \\ YiSi-1 & 79.2\% \\ COMETKiwi[noref] & 78.8\% \\ COMET-QE[noref] & 78.1\% \\ BERTScore & 77.4\% \\ UniTE-src[noref] & 75.9\% \\ MS-COMET-QE-22[noref] & 75.5\% \\ MATESE-QE[noref] & 74.8\% \\ f200spBLEU & 74.1\% \\ chrF & 73.4\% \\ BLEU & 70.8\% \\ \hline \hline \end{tabular} \end{table}
Table 2: Results for system-level pairwise accuracy compared to current automatic metrics. Metrics marked as “[noref]” do not use a reference translation.

\begin{table} \begin{tabular}{r r r r r r} \hline \hline & Bab & Curie & Dav2 & Dav3 & Chat \\ \hline DA & 39.1\% & 54.4\% & 85.8\% & 88.0\% & 81.0\% \\ DA[noref] & 55.8\% & 51.5\% & 83.9\% & 86.1\% & 82.1\% \\ SQM & 53.3\% & 40.5\% & 85.8\% & 85.0\% & 85.0\% \\ SQM[noref] & 51.1\% & 41.6\% & 82.8\% & 82.5\% & 81.0\% \\ Stars & 50.0\% & — & 88.3\% & 85.8\% & 84.7\% \\ Stars[noref] & — & — & 79.2\% & 83.2\% & 85.4\% \\ Classes & 47.4\% & 43.4\% & 79.6\% & 85.4\% & 87.2\% \\ Classes[noref] & 35.0\% & 61.7\% & 78.1\% & 78.8\% & 83.6\% \\ \hline \hline \end{tabular} \end{table}
Table 3: System-level pairwise accuracy for most combinations of prompts and GPT models. The evaluation is based on three language pairs and MQM human labels.

### Segment-level performance

All previous results are reported on the system level. We also investigate how well the GEMBA metric performs on the segment level with respect to the human gold annotations. We present Kendall's Tau results for each language pair separately in Table 4 (results for all metrics are in Appendix B). The reference-based **GEMBA-Dav3-DA** is slightly behind the top-performing metrics but continues to correlate highly with human judgment. On the other hand, the quality estimation **GEMBA-Dav3-DA[noref]** has significantly lower segment-level performance than other top-performing metrics. Still, it outperforms string-based metrics (chrF and BLEU). The lower segment-level correlation could be attributed to Kendall's Tau, which penalises ties.
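To see why ties matter, consider the following toy computation; it assumes scipy's `kendalltau`, which computes the tie-adjusted Tau-b by default, and the numbers are made up:

```python
# A discrete metric that never misorders segments still loses Kendall's Tau-b
# credit as soon as it assigns tied scores.
from scipy.stats import kendalltau

human = [1, 2, 3, 4, 5, 6]                 # distinct human quality scores
fine = [1.1, 2.3, 2.9, 4.2, 5.0, 6.1]      # continuous metric, no ties
coarse = [80, 90, 95, 95, 95, 95]          # discrete metric, four-way tie

tau_fine, _ = kendalltau(human, fine)
tau_coarse, _ = kendalltau(human, coarse)
print(tau_fine)    # 1.0: perfect correlation
print(tau_coarse)  # ~0.77: ties lower the score despite no misordering
```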
Our metric, in contrast to other automatic metrics, returns discrete values between 0 and 100. There is therefore a high probability that two translations will obtain an equal score. To investigate this further, we collect all answers across all systems and all three language pairs and then calculate the frequency of each distinct answer value. Several interesting observations emerge from Table 5. The DA reference-based prompt generates only multiples of five, reducing the scale to just 16 distinct values. Over three-quarters of all scores are either 80, 90, or 95. This could reflect the actual quality of the system translations, as the underlying systems are mostly high-quality. When we investigate "DA[noref]", we notice that 79.9% of all scores have the value "95". Despite this, the metric still manages to differentiate the systems from each other and outperforms all other quality estimation metrics on the system level. The SQM-based method uses the scale in a more fine-grained manner, producing several values that are not multiples of five, i.e., 68, 87, 92, and 93. We conjecture that frequent ties and the discrete scale may thus be the reason behind the lower segment-level performance.

### Failure rate

As described earlier, LLMs may return an invalid answer, for example, a textual reply instead of a score, mostly explaining the decision. When such a situation happens, we iteratively increase the temperature--_adding randomness to the model_--and take the first answer matching the expected score output range. This adds non-determinism to our evaluation, and we therefore investigate how frequently this phenomenon happens. Table 6 shows the number of invalid answers. For almost all combinations of models and prompts, LLMs understand the prompt and provide answers in a valid range, with less than 1% of the answers being invalid.5

Footnote 5: Roughly 1,000 answers correspond to 1% of the total volume.

Processing answers is straightforward, as the reply is usually a stand-alone number. On some occasions, LLMs give a numerical score and continue with a textual explanation; in such cases, we parse only the first number. A more complex approach is needed for _GEMBA-stars_ prompts, where the model provides differently formatted answers, which we parse separately. Here are some examples of two-star answers: "2", "two", "★★", "two stars", or "2 stars".
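A minimal sketch of this answer normalization is shown below; the regular expressions and word list are illustrative rather than our exact parsing rules:

```python
# Normalize GEMBA-stars answers to an integer in [1, 5], or None if invalid
# (a None result triggers resampling with a higher temperature).
import re

WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}

def parse_stars(answer: str):
    text = answer.strip().lower()
    digit = re.search(r"[1-5]", text)            # "2", "2 stars"
    if digit:
        return int(digit.group())
    for word, value in WORDS.items():            # "two", "two stars"
        if word in text:
            return value
    stars = text.count("★") + text.count("*")    # "★★", "**"
    if 1 <= stars <= 5:
        return stars
    return None

for answer in ["2", "two", "★★", "two stars", "2 stars", "I liked it"]:
    print(answer, "->", parse_stars(answer))
```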
For non-English target languages, the answer may also be produced in the target language.

## 5 Conclusion

We have presented our work on GEMBA, a GPT estimation metric-based assessment method. Comparing our metrics to other automated metrics from the WMT22 Metrics shared task, we report state-of-the-art performance on the MQM 2022 test set across three language pairs: English to German, English to Russian, and Chinese to English. We intend to continue research on the application of GPT models for quality assessment. Further research will focus on the switch to few-shot prompting (as opposed to our current zero-shot methodology) as well as model fine-tuning, both of which promise to increase GEMBA's accuracy. Furthermore, modifying prompts to support MQM error-based evaluation or post-editing efforts may lead to further improvements. GPT-enhanced evaluation metrics may also allow us to make progress on document-level evaluation, due to their ability to use much larger context windows. This could be beneficial, as there is little research into document-level metrics (Vernikos et al., 2022).

## Limitations

While preliminary results indicate that the GEMBA metric performs very well when compared to other automated metrics evaluated as part of the WMT22 Metrics shared task, it is important to note that these results are based on human labels for _only three language pairs_. We expect that the metric's performance may suffer for other language pairs, mainly under-resourced languages, similar to Hendy et al. (2023), who show low translation quality for such languages. In addition, GEMBA's state-of-the-art performance only holds on the system level, while segment-level scores still have room for improvement. The reported results are indicative of the potential performance LLMs could achieve for the translation quality assessment task in the long run.
## Acknowledgments This work would not have been possible without the help and support from our friend and colleague, Olivier Nano, who provided access to GPT models via Microsoft Azure - _Merci beaucoup, Olivier!_ The authors are also grateful to Matt Post, Vikas Raunak, Shabnam Sadegharmaki, and the Microsoft Translator research team for fruitful discussions and helpful feedback.
2309.04441
Comparative Study of Visual SLAM-Based Mobile Robot Localization Using Fiducial Markers
This paper presents a comparative study of three modes for mobile robot localization based on visual SLAM using fiducial markers (i.e., square-shaped artificial landmarks with a black-and-white grid pattern): SLAM, SLAM with a prior map, and localization with a prior map. The reason for comparing the SLAM-based approaches leveraging fiducial markers is that previous work has shown their superior performance over feature-only methods, with less computational burden compared to methods that use both feature and marker detection, without compromising the localization performance. The evaluation is conducted using indoor image sequences captured with a hand-held camera containing multiple fiducial markers in the environment. The performance metrics include absolute trajectory error and runtime for the optimization process per frame. In particular, for the last two modes (SLAM with a prior map and localization with a prior map), we evaluate their performance by perturbing the quality of the prior map to study the extent to which each mode is tolerant to such perturbations. Hardware experiments show consistent trajectory error levels across the three modes, with the localization mode exhibiting the shortest runtime among them. Yet, with map perturbations, SLAM with a prior map maintains performance, while the localization mode degrades in both aspects.
Jongwon Lee, Su Yeon Choi, David Hanley, Timothy Bretl
2023-09-08T17:05:24Z
http://arxiv.org/abs/2309.04441v1
# Comparative Study of Visual SLAM-Based Mobile Robot Localization Using Fiducial Markers

###### Abstract

This paper presents a comparative study of three modes for mobile robot localization based on visual SLAM using fiducial markers (i.e., square-shaped artificial landmarks with a black-and-white grid pattern): SLAM, SLAM with a prior map, and localization with a prior map. The reason for comparing the SLAM-based approaches leveraging fiducial markers is that previous work has shown their superior performance over feature-only methods, with less computational burden compared to methods that use both feature and marker detection, without compromising the localization performance. The evaluation is conducted using indoor image sequences captured with a hand-held camera containing multiple fiducial markers in the environment. The performance metrics include absolute trajectory error and runtime for the optimization process per frame. In particular, for the last two modes (SLAM with a prior map and localization with a prior map), we evaluate their performance by perturbing the quality of the prior map to study the extent to which each mode is tolerant to such perturbations. Hardware experiments show consistent trajectory error levels across the three modes, with the localization mode exhibiting the shortest runtime among them. Yet, with map perturbations, SLAM with a prior map maintains performance, while the localization mode degrades in both aspects.

## I Introduction

The use of fiducial markers--square-shaped planar artificial landmarks with a black-and-white grid pattern--has been favored for the application of visual simultaneous localization and mapping (SLAM) in scenarios where such markers can be deployed in the given environment. This preference emerges due to the robustness and accuracy exhibited by fiducial marker-based SLAM approaches [1-4] over canonical approaches using visual features (e.g., ORB-SLAM [5]) or pixel values (e.g., DSO [6]). The spectrum of fiducial marker-based SLAM spans from methods exclusively utilizing fiducial marker detection outcomes [1-3] to hybrid techniques incorporating both marker detections and features [4]. As our focus here centers on fiducial marker-based SLAM, it is important to note that the subsequent analysis is dedicated solely to SLAM with fiducial markers. Within this context, three distinct operational modes of fiducial marker-based SLAM are discernible. The first mode, _SLAM_, entails a comprehensive approach that estimates the robot's pose while mapping out the surrounding environment. The second mode, _SLAM with a prior map_, starts from a pre-existing map as an initial reference value. In cases where the map is a priori known and its states remain fixed, the third mode, _localization_, exclusively estimates the robot's pose. A noticeable gap in the existing literature lies in the absence of comparisons between these three modes through quantifiable metrics such as error analysis and processing speed. While certain fiducial marker-based SLAM approaches, such as SPM-SLAM [1] and TagSLAM [2], report error and processing speed for the _SLAM_ mode, they do not provide corresponding data for the _SLAM with a prior map_ and _localization_ modes. Another crucial aspect yet to be addressed pertains to the resilience of the _SLAM with a prior map_ and _localization_ modes in response to variations in marker map quality. Indeed, the performance of these modes is influenced by the fidelity of the marker map.
If the map is considered "ideal" or at least of a high standard, both modes are anticipated to yield comparable or superior error performance while placing a lesser computational burden in contrast to the _SLAM_ mode. This attribute proves particularly advantageous in scenarios requiring real-time robot localization on resource-constrained, onboard platforms. However, if the map quality deviates from such conditions, it may degrade the performance of each operational mode. In this paper, we aim to provide an evaluation of the three different operational modes for robot localization based on SLAM with fiducial markers, in terms of absolute trajectory error and processing speed. In particular, for the last two modes (_SLAM with a prior map_ and _localization_), we evaluate their performance by perturbing the accuracy of the prior map to study the extent to which each mode is tolerant to such perturbations. We present an overview of the structure and content of our paper as follows. Section II outlines our data collection and experimental setup to perform fiducial marker-based SLAM in three different operational modes. We then proceed to report the metrics comparing the three modes in Section III, which include the absolute trajectory error and the runtime for the optimization process per frame. We specifically report the runtime for the optimization process per frame because it is the major component that contributes to variations in processing speed across the three modes, unlike other components, such as fiducial marker detection, which are largely the same across modes. Finally, Section IV summarizes our findings, provides concluding remarks, and discusses the implications of our work.

## II Experiments

### _Data collection_

We collected an indoor dataset comprising multiple sequences of camera images and Vicon Mocap data as ground truth, using a hand-held device equipped with an Intel RealSense D435i and reflective markers. During data collection, five 36h11 AprilTags [7] with side lengths of 0.2m were placed within the environment, as depicted in Fig. 1. The dataset consists of eight sequences: one sequence lasts 60 seconds and is dedicated to generating a prior map using the _SLAM_ mode for the _SLAM with a prior map_ and _localization_ modes; the other seven sequences last 30 seconds each, serving all three modes. The distinguishing factor between these two categories of sequences is that the former captures all AprilTags to create a comprehensive prior map, crucial for the _SLAM with a prior map_ and _localization_ modes, while the latter does not require this complete mapping of all the placed AprilTags.

### _Implementation details_

We used the existing WOLF codebase [3] to execute the _SLAM_ mode (constructing a marker map from scratch), the _SLAM with a prior map_ mode (loading a pre-saved marker map for initial estimation and performing SLAM), and the _localization_ mode (using a pre-saved marker map as a fixed reference for localization). As a preliminary step, the _SLAM_ mode was employed to create a prior map for both the _SLAM with a prior map_ and _localization_ modes. It is important to note that this marker map is not flawless: it results from estimating the poses of fiducial markers rather than using their true references, and hence inherently harbors potential sources of error even before intentional perturbation is introduced.
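For reference, the map perturbation applied in the experiments of Section II-B can be sketched as follows; the marker array layout is hypothetical and not the WOLF map format:

```python
# Shift every marker position by delta_p meters along an independent random
# direction, for delta_p in 0.1 m ... 0.5 m.
import numpy as np

def perturb_marker_map(positions: np.ndarray, delta_p: float, rng) -> np.ndarray:
    """positions: (N, 3) array of marker positions in meters."""
    directions = rng.normal(size=positions.shape)                    # random 3D directions
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)  # normalize to unit length
    return positions + delta_p * directions

rng = np.random.default_rng(0)
markers = np.array([[0.0, 0.0, 1.5], [2.0, 0.5, 1.5]])               # two example markers
for delta_p in [0.1, 0.2, 0.3, 0.4, 0.5]:
    perturbed = perturb_marker_map(markers, delta_p, rng)
    print(delta_p, np.linalg.norm(perturbed - markers, axis=1))      # each offset == delta_p
```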
For the _SLAM with a prior map_ and _localization_ modes, we systematically perturbed the positions of every fiducial marker along random directions within the map, varying from 0.1m to 0.5m in increments of 0.1m. This approach allowed us to investigate the tolerance of each method to variations in the marker map's quality. We assessed the absolute trajectory error using an open-source evaluation tool [8] and the runtime for the optimization process per frame through Ceres Solver [9]. All executions and evaluations were carried out on an octa-core Intel i7-10700 CPU operating at 2.90 GHz, with 32 GB of RAM.

## III Results

Table I presents the absolute trajectory error for each of the three localization modes across all sequences. Specifically, for the _SLAM with a prior map_ and _localization_ modes, we provide results by perturbing the positions of fiducial markers within the prior marker map. Again, the prior map is imperfect and hence inherently perturbed even before intentional perturbation is introduced. The perturbations range from 0.1m to 0.5m, with increments of 0.1m, along random directions. When no perturbation is applied, all three modes yield results that exhibit minimal differences, within a few centimeters. However, as perturbations are introduced to the _localization_ mode, errors increase to tens of centimeters even with the smallest perturbation (i.e., \(\delta\mathbf{p}=0.10\)), while the _SLAM with a prior map_ mode consistently maintains results within a few centimeters. This aligns with the common understanding that the _SLAM with a prior map_ mode is capable of recovering reference marker pose values within the map by concurrently updating both localization and mapping outcomes, whereas the _localization_ mode cannot, as the map is regarded as accurate and hence fixed during optimization, thereby limiting updates to only the localization results.
modes in terms of trajectory error and runtime for the optimization process. Specifically, we introduced perturbations to the map for the _SLAM and localization with prior map_ modes to examine their impact on the aforementioned metrics. When no perturbations were introduced, our hardware experiments show that all three modes exhibited similar levels of trajectory error, while the _localization_ mode showed the shortest runtime. However, in scenarios involving map perturbations, the _SLAM with a prior map_ mode maintained its trajectory error and runtime performance at levels comparable to those in the absence of perturbation. On the other hand, the _localization_ mode experienced deteriorating trajectory error and runtime performance as the magnitude of map perturbation increased. ## Acknowledgment This work is supported by Supernal, LLC.
2309.14652
Pricing Personalized Preferences for Privacy Protection in Constant Function Market Makers
Constant function market makers (CFMMs) are a popular decentralized exchange mechanism and have recently been the subject of much research, but major CFMMs give traders no privacy. Prior work proposes randomly splitting and shuffling trades to give some privacy to all users [Chitra et al. 2022], or adding noise to the market state after each trade and charging a fixed `privacy fee' to all traders [Frongillo and Waggoner 2018]. In contrast, we propose a noisy CFMM mechanism where users specify personal privacy requirements and pay personalized fees. We show that the noise added for privacy protection creates additional arbitrage opportunities. We call a mechanism priceable if there exists a privacy fee that always matches the additional arbitrage loss in expectation. We show that a mechanism is priceable if and only if the noise added is zero-mean in the asset amount. We also show that priceability and setting the right fee are necessary for a mechanism to be truthful, and that this fee is inversely proportional to the CFMM's liquidity.
Mohak Goyal, Geoffrey Ramseyer
2023-09-26T04:12:50Z
http://arxiv.org/abs/2309.14652v1
# Pricing Personalized Preferences for Privacy Protection in Constant Function Market Makers ###### Abstract. Constant function market makers (CFMMs) are a popular decentralized exchange mechanism and have recently been the subject of much research, but major CFMMs give traders no privacy. Prior work proposes randomly splitting and shuffling trades to give some privacy to _all_ users (Chitra et al., 2022), or adding noise to the market state after each trade and charging a _fixed_' privacy fee' to all traders (Frongillo and Waggoner, 2018). In contrast, we propose a noisy CFMM mechanism where users specify personal privacy requirements and pay personalized fees. We show that the noise added for privacy protection creates _additional arbitrage_ opportunities. We call a mechanism _priceable_ if there exists a privacy fee that always matches the additional arbitrage loss in expectation. We show that a mechanism is pracleable if and only if the noise added is zero-mean in the asset amount. We also show that pracleability and setting the right fee are necessary for a mechanism to be _truthful_, and that this fee is inversely proportional to the CFMM's liquidity. Local differential privacy; decentralized finance; automated market makers; mechanism design. + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + 
Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer 
Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: 
Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + Footnote †: journal: Computer Science + + Footnote †: journal: Computer Science non-zero curvature, then its spot price is distorted from \(\hat{p}\) for the privacy of the trade. Then, the arbitrager has an _additional_ arbitrage opportunity to profit off the CFMM by moving the spot price again to \(\hat{p}\). To be sustainable, the CFMM must charge an additional fee for privacy preservation to account for this additional arbitrage. However, finding this fee amount without knowing \(\hat{p}\) is not trivial. We call a private CFMM mechanism _pricoable_ (Definition 4.2) if there exists a fee, independent of the true price, that pays for the additional arbitrage. We show that a noisy CFMM mechanism is priceable if and only if the noise is zero-mean in the asset amount. The proof is technical, but intuitively, a zero-mean noise does not create additional liquidity in expectation. Pricing the additional arbitrage at the post-trade spot price ensures that an arbitrageur does not gain in expectation, regardless of the true price. 
We also define _truthfulness_ of a noise-adding CFMM mechanism, such that a trader without innate privacy requirements has the incentive to leave the CFMM spot price at the external market price \(\hat{p}\) in one trade (Definition 4.1). We show that zero-mean noise distribution is necessary for truthfulness. With the right privacy fee, a zero-mean noise distribution leads to a truthful mechanism. This ensures that the noisy CFMM's spot price will closely follow the price on an external exchange as long as arbitragers are present. For priceable mechanisms, we formulate the privacy fee as a function of the CFMM state, the trading function, and the trader's privacy requirements. We show that a more liquid (Definition 4.9) CFMM requires a smaller privacy fee. In fact, the privacy fee of a trade is inversely proportional to the liquidity (Proposition 4.10). This aligns with the fee-liquidity relationship of Frongillo and Waggoner (2018) in an information aggregation context. ### Related Work #### 1.2.1. Prediction Markets Closest to our work is that of Frongillo and Waggoner (2018), which gives a bounded-loss differentially-private cost-function-based prediction market via a similar mechanism--adding noise to the market after each trade. Our work differs in the following ways: * They mandate the same privacy specification \((\tau,\varepsilon)\) and privacy fee for all traders regardless of trade size. Further, they restrict the maximum size of a trade. We give a framework for specifying personalized privacy requirements in \(\varepsilon_{i}\) and \(\tau_{i}\) for trader \(i\), and find a personalized privacy fee, which decreases with \(\varepsilon_{i}\) and increases with widening the masking interval. Thus, we do not restrict the trade size, except when it causes the CFMM to run out of an asset. * They use the continual observation technique of Dwork et al. (2010) and Chan et al. (2011) wherein old noise drawings are reused cleverly. The benefit of this approach is that the total noise over \(T\) trades is limited to \(O(\log T)\) (informally, many noise terms cancel each other out). However, the drawback is that the noise added after each trade is also \(O(\log T)\). Such a mechanism is not directly useable for CFMMs in DeFi since (1) the total number of trades, \(T\), is often not fixed, (2) storing \(O(T)\) noise drawings in a hidden manner for future re-use may not be possible, (3) the privacy fee which accounts for the additional arbitrage would be \(\Omega(\log T)\) larger since it depends on the magnitude of noise added after individual trades. We do not aim to reduce the total noise variance since, per our assumptions, arbitragers keep the CFMM price close to accurate. In exchange, we use smaller noise variance on individual trades to facilitate a smaller personalized privacy fee. Earlier, Cummings et al. (2016) showed that without a fee, noise-adding bounded-loss cost-function-based market makers must have a 'fast'-growing \(\varepsilon\) as more trades are made by the market (i.e., quickly diminishing privacy guarantees). #### 1.2.2. Decentralized Finance Chitra et al. (2022) give an algorithm for splitting and shuffling trades in a block to ensure privacy. They study the worst-case price discrepancy between their algorithm and a non-private CFMM, and its trade-off with DP guarantees. Our paper differs in the following: * They consider blockchains with "consensus rules for executing trades in a particular order." In comparison, our mechanism does not require this capability. 
Most mainstream blockchains do not have this capability. We, however, require the blockchain to have verifiable randomness, the same as their mechanism. * Their mechanism provides the same privacy guarantees to all traders, whereas we provide a personalized mechanism. * Their shuffling mechanism requires many trades in a block to be able to _hide_ trades, and their privacy guarantees depend on the number of trades in a block. While this is often true, having a mechanism invariant to the number of trades in a block, such as ours, is useful for some contexts. Outside of CFMMs, Penumbra (2023) use batch auctions and homomorphic encryption to enable private swaps in DeFi. Davidow and Manevich (2023) designed a verifiable LDP system for payments. See Dai (2021); Zhou et al. (2022); Zhu et al. (2022) for more perspectives on privacy in DeFi. #### 1.2.3. Differential Privacy We adopt the personalized local differential privacy (PLDP) framework of Chen et al. (2016). The original framework was motivated by a model of users sharing spatial data and requiring the ability to mask their trade in a specified region. For example, user \(A\) may not mind disclosing that they are in New York City but do not want to share the location within New York City. Another user, \(B\), may be willing to disclose that they are in Ghana, but not any more. This framework is better suited for masking trade sizes than simple LDP. The masking interval desired by a trader buying 1 unit \(X\) would generally be different from that desired by a trader selling 100 \(X\). See Dwork et al. (2010) for an overview of DP. See Yang et al. (2020) for a survey of the applications of LDP, which include Google Chrome Browser (Erlingsson et al., 2014) and Apple OS (mac, 2016). ## 2. Preliminaries ### Constant Function Market Makers (CFMMs) A CFMM is an automated market-maker parameterized by a trading function \(f\). We focus the scenario where a CFMM trades between a volatile asset \(X\) and a stable numeraire asset \(Y\), which is the context in which CFMMs are most often employed. A liquidity provider endows the CFMM with nonnegative _reserves_\((x,y)\in\mathbb{R}_{+}^{2}\) of each asset. A CFMM with reserves \(x,y\) and trading function \(f\) accepts any trade \((\Delta_{x},\Delta_{y})\) that results in reserves \(x+\Delta_{x},y-\Delta_{y}\) if and only if \(f(x+\Delta_{x},y-\Delta_{y})=f(x,y)\). The following characterization of trading functions is standard in the literature and is required to ensure that the CFMM works as intended. **Assumption 1**.: _CFMM trading functions are quasi-concave, continuous, and non-decreasing (in both coordinates) on \(\mathbb{R}^{2}_{+}\)._ This characterization implies that given any level curve of the trading function \(f\), the amount \(x\) of asset \(X\) in the reserves, uniquely determines the amount of \(Y\), which we denote \(\mathcal{Y}(x)\) when the level curve of the function \(f\) is clear from the context. Commonly studied CFMMs include the constant product \(f(x,y)=xy\), used by, for example, the exchange Uniswap (Adams et al., 2020), and \(f(x,y)=2-e^{-x}-e^{-y}\) which implements the Logarithmic Market Scoring Rule (LMSR) (Hanson, 2007). CFMMs, in practice, charge a "trading fee" to all trades which is a percentage of the trade size. For simplicity, we assume that any fee is not added to the CFMM reserves - it is taken by the LP. An important quantity for the analysis of CFMMs is the price it provides for an infinitesimal trade, referred to as its spot price. 
**Definition 2.1** (Spot Price).: _At state \((\hat{x},\hat{y})\), the spot price of a CFMM with trading function \(f\) is \(\frac{df}{dx}/\frac{\partial f}{\partial y}\) at \((\hat{x},\hat{y})\)._ _When \(f\) is not differentiable, the spot price is any subgradient of \(f\)._ Assumption 1 implies the price offered to a trader is never better than the spot price. This has an important implication: when the price at the external market deviates, arbitragers align the CFMM's spot price with it. In this sense, a CFMM is a "truthful" mechanism - that an arbitrage cannot do better by "reporting" a false price to the CFMM. We make an additional assumption. **Assumption 2**.: _The CFMM never runs out of an asset._ ### Differential Privacy The traditional notion of DP is for a centralized model ("global" DP), where a trusted data curator holds all users' accurate data and reveals aggregate statistics to queries with the constraint that it cannot be inverted to infer any specific user's data. We consider here a different notion of privacy, called personalized local differential privacy (Chen et al., 2016), wherein each user's data is privatized (usually by adding noise) _before_ being stored with the mechanism. The privacy guarantee is given by a parameter \(\varepsilon\); higher \(\varepsilon\) implies worse privacy protection. **Definition 2.2** (Personalized Local Differential Privacy (PLDP)).: _Given the personalized privacy specification \((\tau_{i},\varepsilon_{i})\) of user \(i\), a randomized algorithm \(\mathcal{A}\) satisfies \((\tau_{i},\varepsilon_{i})-\)PLDP for \(i\) if for any pair of values \(l,l^{\prime}\in\tau_{i}\), and for any measurable subset \(O\subseteq Range(\mathcal{A})\),_ \[Pr[\mathcal{A}(l)\in O]\leq\exp(\varepsilon_{i})\cdot Pr[\mathcal{A}(l^{ \prime})\in O],\] _where the probability space is over the randomness in \(\mathcal{A}\)._ Observe that traditional local differential privacy (LDP) (Kasiviswanathan et al., 2011) is a special case of PLDP where all users have the same \(\tau\) and \(\varepsilon\). A larger masking interval \(\tau\) implies stronger privacy, and within that interval, a smaller \(\varepsilon\) corresponds to a better privacy guarantee. For example, \(\varepsilon=0\) corresponds to the case that all points in masking interval \(\tau\) produce the same distribution of the algorithm's output. ## 3. Model We describe here the model of the adversary, our noisy CFMM mechanism, the external market price, and the associated arbitrage. ### Adversary Attack Model We adopt the following attack model, which was first given by Angeris et al. (2021), and also used by Chitra et al. (2022). 1. [leftmargin=*] 2. Eavesdropper Eve knows the trading function of the CFMM. 3. Eve knows _when_ trader Tod makes a transaction with the CFMM. 4. Eve can query the _spot price_ of CFMM. 5. Eve can query whether a given non-zero trade is _valid_. 6. Eve can do (3) and (4) _before and after_ Tod's trade. Angeris et al. (2021) showed that this much information is sufficient for Eve to infer exactly the amount traded by Tod in a traditional CFMM. ### System Model We borrow here the model of some private smart contract systems (such as Hawk (Kosba et al., 2016)), which delegate private computation to a "manager," that can be implemented via trusted execution environments (TEE) or multiparty computation (MPC) protocols, as in, e.g., Banerjee et al. (2021) and Baum et al. (2022). 
We specifically require a way for the CFMM to maintain private state, and to execute code on that private state. This private information includes not only the CFMM reserves but also a private account \(\mathcal{H}\), used in the mechanism discussed below. We also require a source of unpredictable (pseudo)randomness, such as the output of a VRF (Micali et al., 1999). We also assume the existence of arbitrageurs, who may trade on the CFMM and on external, highly liquid exchanges, and that arbitrageurs are aware of the market price on external exchanges. ### Privacy Protection Mechanism for CFMMs The noisy CFMM randomly distorts its state after every trade. **Noisy CFMM Mechanism** 1. [leftmargin=*] 2. The CFMM maintains a "hidden" private account \(\mathcal{H}\) of reserves. This account is used to make "noise trades"as follows. 3. At some point in time, the CFMM state is \((x,\mathcal{Y}(x))\). 4. Trader \(i\) makes a trade selling \(\Delta_{i}\) units of X to the CFMM and specifies their privacy requirements \((\tau_{i},\varepsilon_{i})\) such that \(\Delta_{i}\in\tau_{i}\). 5. Trader \(i\) gets \(\mathcal{Y}(x)-\mathcal{Y}(x+\Delta_{i})\) units of \(Y\) in exchange for \(\Delta_{i}\) units of X, and pays a fee, which is the sum of a "trading fee" and a "privacy fee" \(\gamma_{i}\). The fee is not added to the CFMM state. 6. The CFMM immediately makes a random trade of \(\eta_{i}\sim\mathcal{D}_{i}\) units of \(X\) with the hidden account \(\mathcal{H}\). 7. The new CFMM state is \(x^{\prime}=x+\Delta_{i}+\eta_{i},y^{\prime}=\mathcal{Y}(x^{\prime})\). This state can be queried by the next trader (and Eve, via the attack of SS3.1). We assume that an external operator supports the CFMM to ensure that \(\mathcal{H}\) never runs dry, but if \(\mathcal{H}\) cannot support the maximum possible noise for a trade request, the CFMM rejects the request. We use noise trade distributions of bounded support - we will give examples in SS4.1. We consider noise trade distributions which, for a given CFMM, depend only on the trade amount \(\Delta_{i}\) and privacy specification \((\tau_{i},\varepsilon_{i})\). Importantly, it is independent of the CFMM state. This setting is standard in the DP literature. The privacy fee \(\gamma_{i}\) depends on the CFMM state and the noise trade distribution \(\mathcal{D}_{i}\). Since our mechanism must also supports non-private trades as traditional CFMMs do, the noise trade \(\eta_{i}\) and the privacy fee \(\gamma_{i}\) are zero when \(\varepsilon_{i}=\infty\) or \(\tau_{i}=\{\Delta_{i}\}\) for user \(i\). ### Arbitrage and Additional Arbitrage In this paper, we study the case where an external market with infinite liquidity exists, and arbitrageurs can trade both with the CFMM and the external market. We consider the external market price as the "true" price. This setting is standard in DeFi literature on CFMMs, for example, in Evans et al. (2021); Goyal et al. (2023); Milionis et al. (2022). Our result also holds when arbitrageurs trade on private knowledge of future or true prices, which is important when a bigger external market does not exist. Whenever the CFMM spot price deviates from the true price, an arbitrageur makes a risk-less profit by aligning CFMM's spot price with the true price. We now illustrate the _additional_ arbitrage problem with the noisy CFMM mechanism. Suppose for the moment that the privacy fee were zero and that Tod has exclusive access to the CFMM. Tod can exploit the noisy CFMM mechanism in the following manner. #### 3.4.1. 
Arbitrage on Noisy CFMM 1. [leftmargin=*] 2. Tod observes the CFMM spot price \(p\) and the true price \(\hat{p}\). 3. If \(p\neq\hat{p}\), they make a trade with the CFMM, such that the post-trade CFMM spot price \(\tilde{p}\) satisfies \(\hat{p}=\hat{p}\). They specify some non-trivial privacy requirements. Tod makes a risk-free profit by taking the reverse trade on the external market. 4. The CFMM executes its noise trade and ends up with a spot price \(p^{\prime}\), which is not equal to \(\hat{p}\) with nonzero probability. 5. Steps 2 and 3 repeat until a zero noise trade is drawn. Observe that the arbitrage profit of Step 2 is not due to the privacy feature of the CFMM. In this paper, we do not account for this component of the CFMM's loss. In a traditional CFMM, arbitrage stops at Step 2. In a noisy CFMM, additional arbitrage profit is created in Step 3, which is the subject of study of this paper. Definition 3.1 (Additional Arbitrage).: _A CFMM \(C\) and a noisy CFMM \(\tilde{C}\) have identical states and trading functions. For any true price \(\hat{p}\), the additional arbitrage is the difference of the maximum possible profit in expectation from trading with \(\tilde{C}\) and \(C\)._ ## 4. Truthfulness and Priceability of Noisy CFMM Mechanism The strategy above is not the only possible thing trader Tod might do. For example, they could conceivably leave the CFMM at a spot price \(\hat{p}\) different from the true price \(\hat{p}\) and make a sequence of multiple trades with the noisy CFMM for a higher overall profit. This motivates the following property. Definition 4.1 (Truthfulness).: _A noisy CFMM mechanism is truthful if a trader with no privacy requirements, strictly increasing utility in Y, no utility for X, exclusive access to the CFMM, and access to the external market, always maximizes their utility by trading with the CFMM only once such that the post-trade spot price is equal to the true price of X, and with privacy level \(\varepsilon=\infty\)._ This definition does not apply to traders who trade for intrinsic reasons without regard to price movements (so-called "uninformed traders"). It also does not apply to traders with privacy requirements since we do not quantify the value that traders attach to privacy protection. It applies to strategic and informed traders who perform arbitrage without privacy requirements. Although defined narrowly, our notion of truthfulness is important for two reasons: (1) it explains if the noisy CFMM will closely follow the true price and therefore be attractive to uninformed traders, and (2) it paves the way for us to compute correct privacy fees and give a minimal characterization of acceptable noise trade distributions. We define a property to study the economic feasibility of the privacy feature on CFMMs. Definition 4.2 (Priceability).: _A noisy CFMM mechanism is priceable if there always exists a privacy fee which depends only on the CFMM state, its trading function, and the noise trade distribution such that the mechanism has the following property._ _For any true price and CFMM state, the additional arbitrage is zero._ The crucial feature of Definition 4.2 is that the privacy fee cannot depend on the true price \(\hat{p}\). Observe that for a privacy fee of zero, the noisy CFMM creates strictly greater arbitrage profit than a traditional CFMM when the CFMM has non-zero curvature (SS3.4.1). Truthfulness and priceability have different motivations. 
Truthfulness ensures that the CFMM spot price follows the true price and that the CFMM is attractive to uninformed traders. Priceability ensures that the CFMM does not lose money to the additional arbitrage created by the privacy feature. The two properties are nevertheless closely related technically; the following observations follow from their definitions.

Observation 1.: _Truthfulness implies priceability._

Observation 2.: _There exists a privacy fee function under which a priceable noisy CFMM mechanism is truthful._

Before discussing our results characterizing the noise trade distribution and privacy fee, we first give an alternate formulation of a CFMM. A level curve of a trading function can be seen as a map from the amount of X in the reserves to the CFMM spot price. We denote this map by \(P(x)\) when the trading function and the level curve are clear from the context. When the trading function is not differentiable, \(P(x)\) is the largest subgradient at that point. Assumptions 1 and 2 imply that \(P(x)\) has the following properties.

Observation 3.: _The spot price \(P(x)\) is monotonically decreasing, with \(P(x)\rightarrow\infty\) as \(x\to 0\) and \(P(x)\to 0\) as \(x\rightarrow\infty\)._

We similarly define \(P^{-1}(p)\) as the largest \(x\) for which the spot price is \(p\), when the trading function and its level curve are clear from the context (since we do not re-invest fees into the reserves, our CFMM is path independent and stays on the initial level curve). We make the following assumption for technical ease.

Assumption 3.: _The trading fee is zero._

This assumption is for expository clarity and is not _required_. When a \(c\) percent trading fee is charged, the truthfulness property becomes _approximate_, in that an arbitrageur corrects the CFMM spot price only if it is more than \(c\) percent away from the true price. In practice, \(c\) is generally below \(1\%\).
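As a running illustration of the map \(P\), the following Python sketch instantiates \(P(x)\) and \(P^{-1}(p)\) for a constant-product curve \(x\cdot y=K\) (our choice for concreteness; the value of \(K\) is arbitrary) and numerically evaluates the truthful arbitrage profit integral that appears in the proofs below.

```python
import math

K = 10_000.0            # illustrative level-curve constant for x * y = K

def P(x):
    """Spot price for Y(x) = K / x: P(x) = -Y'(x) = K / x**2.

    It is decreasing, with P(x) -> inf as x -> 0 and P(x) -> 0 as
    x -> inf, matching Observation 3."""
    return K / x**2

def P_inv(p):
    """Largest reserve x at which the spot price equals p."""
    return math.sqrt(K / p)

def truthful_profit(p, p_true, n=100_000):
    """Midpoint-rule evaluation of the truthful arbitrageur's profit,
    the integral of (P(a) - p_true) da from P_inv(p) to P_inv(p_true);
    valid whether the spot price starts above or below the true price."""
    a0, a1 = P_inv(p), P_inv(p_true)
    h = (a1 - a0) / n
    return sum((P(a0 + (i + 0.5) * h) - p_true) * h for i in range(n))

print(truthful_profit(p=1.0, p_true=1.21))   # positive arbitrage profit
```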
Turning to our results, we first describe what does not work for truthfulness.

**Theorem 4.3**: _If the noise trade distribution \(\mathcal{D}(\Delta,\varepsilon,\tau)\) has non-zero mean for some \((\Delta,\varepsilon,\tau)\), then the noisy CFMM mechanism is not truthful for any finite privacy fee._

Proof.: Denote the trade size, privacy level, and masking interval for which \(\mathcal{D}\) has non-zero mean by \((\tilde{\Delta},\tilde{\varepsilon},\tilde{\tau})\), and denote the mean by \(\mu\). We consider two cases, where \(\mu\) is either positive or negative.

**Case 1:** \(\mu>0\). Here we construct a scenario where the true price \(\hat{p}\) exceeds the initial spot price \(p\). The arbitrageur's profit under the truthful strategy is:

\[\int_{P^{-1}(p)}^{P^{-1}(\hat{p})}(P(a)-\hat{p})\ da. \tag{1}\]

Consider the following strategy for the arbitrageur:

1. Make a trade of \((\tilde{\Delta},\tilde{\varepsilon},\tilde{\tau})\), and pay the privacy fee \(\gamma\).
2. Make a non-private trade with post-trade CFMM spot price \(\hat{p}\).

When the noise trade drawn is \(\eta\), the profit with this strategy is:

\[-\gamma+\int_{P^{-1}(p)}^{P^{-1}(p)+\tilde{\Delta}}(P(a)-\hat{p})\ da+\int_{P^{-1}(p)+\tilde{\Delta}+\eta}^{P^{-1}(\hat{p})}(P(a)-\hat{p})\ da.\]

The excess gain from deviating from the truthful strategy is

\[-\gamma+\int_{P^{-1}(p)+\tilde{\Delta}+\eta}^{P^{-1}(p)+\tilde{\Delta}}(P(a)-\hat{p})\ da\;=\;-\gamma+\int_{P^{-1}(p)+\tilde{\Delta}+\eta}^{P^{-1}(p)+\tilde{\Delta}}P(a)\ da+\eta\hat{p}.\]

Recall that \(\mathbb{E}(\eta)=\mu>0\) in this case. The first two terms are bounded and independent of \(\hat{p}\). For large enough \(\hat{p}\), the positive third term dominates in expectation, and the result follows for this case.

**Case 2:** \(\mu<0\). Here we construct a scenario where the true price \(\hat{p}\) is less than the initial spot price \(p\). The arbitrageur's profit under the truthful strategy is the same as (1). Consider the following three-step strategy for the arbitrageur:

1. Make a non-private trade with the CFMM to buy X such that its spot price becomes \(p^{\prime}\) (where \(p^{\prime}\) is a large value that will be set precisely later).
2. Make a trade of \((\tilde{\Delta},\tilde{\varepsilon},\tilde{\tau})\), and pay the privacy fee \(\gamma\).
3. Make a non-private trade with post-trade CFMM spot price \(\hat{p}\).

When the noise trade drawn is \(\eta\), the profit with this strategy is:

\[-\gamma+\int_{P^{-1}(p)}^{P^{-1}(p^{\prime})}(P(a)-\hat{p})\ da+\int_{P^{-1}(p^{\prime})}^{P^{-1}(p^{\prime})+\tilde{\Delta}}(P(a)-\hat{p})\ da+\int_{P^{-1}(p^{\prime})+\tilde{\Delta}+\eta}^{P^{-1}(\hat{p})}(P(a)-\hat{p})\ da.\]

Subtracting the truthful profit (1), the excess gain from deviating from the truthful strategy is:

\[-\gamma+\int_{P^{-1}(p^{\prime})+\tilde{\Delta}+\eta}^{P^{-1}(p^{\prime})+\tilde{\Delta}}(P(a)-\hat{p})\ da\;=\;-\gamma+\eta\hat{p}+\int_{P^{-1}(p^{\prime})+\tilde{\Delta}+\eta}^{P^{-1}(p^{\prime})+\tilde{\Delta}}P(a)\ da.\]

Since \(P(\cdot)\) is monotonically decreasing and positive, this is

\[\geq-\gamma+\eta\hat{p}-\eta P\big{(}P^{-1}(p^{\prime})+\tilde{\Delta}\big{)}.\]

Recall that \(\mathbb{E}(\eta)=\mu<0\). We construct the case where the true price \(\hat{p}\) is smaller than \(\frac{1}{|\mu|}\). We set \(p^{\prime}\) in the arbitrageur's strategy such that \(-\mu P(P^{-1}(p^{\prime})+\tilde{\Delta})\) is larger than \(\gamma-\mu\hat{p}\). Such a \(p^{\prime}\) always exists under Assumptions 1 and 2. Since our noise trade distributions have bounded support, a large enough level curve of the CFMM exists where the noise trade is supported at spot price \(p^{\prime}\). This implies that, in expectation, the arbitrageur can obtain a larger profit with our 3-step strategy than under the truthful strategy. This completes the proof.

Discussion: We need two different arbitrage strategies for the \(\mu>0\) and \(\mu<0\) cases due to an asymmetry between X and Y. The noise trades are in units of X, and when the true price of X is small, the loss from these noise trades will be small when they are made in a state close to the true price. To show that the arbitrageur can nonetheless make a large profit, we need an extra step in their strategy.
This result shows that the noise trade distribution being zero-mean is necessary for truthfulness. For non-zero-mean noise trade distributions, the proof shows that there exists a true price \(\hat{p}\) under which the arbitrageur is better off paying the privacy fee in exchange for the additional arbitrage that the noise trade creates. Also note the following corollary.

**Corollary 4.4**: _If the noise trade distribution \(\mathcal{D}(\Delta,\varepsilon,\tau)\) is not zero-mean for some \((\Delta,\varepsilon,\tau)\), the noisy CFMM is not priceable._

This result establishes that the noise trade distribution being zero-mean is necessary for priceability too. We later show that it is also sufficient (Corollary 4.7). Before discussing that result, we define a quantity termed the 'noise fee' as a function of the CFMM state, trading function, and noise trade distribution. It captures the expected arbitrage gain from making the trade that "reverses the noise" when the CFMM spot price is the true price.

**Definition 4.5** (Noise fee): _For CFMM state \((\tilde{x},\mathcal{Y}(\tilde{x}))\), trade \(\tilde{\Delta}\), privacy specification \((\tilde{\tau},\tilde{\varepsilon})\) and noise trade distribution \(\mathcal{D}\), the noise fee \(\Gamma\) is: \(\Gamma=\int_{-\infty}^{\infty}\mathcal{D}(\eta)\int_{\tilde{x}+\tilde{\Delta}+\eta}^{\tilde{x}+\tilde{\Delta}}\big{(}P(a)-P(\tilde{x}+\tilde{\Delta})\big{)}\ da\ d\eta\)._

_For zero-mean noise trade distributions, the noise fee simplifies to \(\Gamma=\int_{-\infty}^{\infty}\mathcal{D}(\eta)\int_{\tilde{x}+\tilde{\Delta}+\eta}^{\tilde{x}+\tilde{\Delta}}P(a)\ da\ d\eta\)._

Note that the noise fee is defined so that it accounts for the additional arbitrage _when_ the post-trade spot price is equal to the true price. Importantly, it is oblivious to the actual true price of X. However, as we show in the next result (Theorem 4.6), for zero-mean \(\mathcal{D}\), the noise fee is the "right" choice for the privacy fee, and it makes the noisy CFMM mechanism truthful.

**Theorem 4.6**.: _If the noise trade distribution is zero-mean, a privacy fee exists under which the noisy CFMM mechanism is truthful. The minimum such privacy fee is the noise fee \(\Gamma\) (Definition 4.5)._

Appendix A gives a proof, which shows that for any finitely-bounded sequence of trades made by an arbitrageur, even one adaptive to the noise, the expected gain in excess of that of the truthful strategy is equal to the noise fee. The proof invokes Doob's optional stopping theorem on martingales (Grimmett and Stirzaker, 2020, Chapter 12.5). Thus, risk-neutral arbitrageurs should treat a noisy CFMM the same as a traditional CFMM. Privacy therefore does not hamper the basic functionality of a CFMM of offering close-to-true prices in the presence of arbitrageurs. The following corollary follows from Observation 1.

**Corollary 4.7**.: _A noisy CFMM mechanism is priceable if the noise trade distribution \(\mathcal{D}\) is always zero-mean._

Therefore, for zero-mean noise trade distributions, the mechanism can protect the CFMM from additional arbitrage _without observing_ the true price. The privacy feature does not harm the returns of risk-neutral LPs when the privacy fee is set correctly.
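The noise fee of Definition 4.5 is straightforward to evaluate numerically. The sketch below does so for a discrete (finitely supported) noise distribution, which suffices for the binary mechanism introduced in §4.1; the constant-product curve is again only an illustrative choice of ours.

```python
def integral(f, a, b, n=20_000):
    """Midpoint-rule integral of f from a to b (handles a > b)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def noise_fee(P, x_post, noise_pmf):
    """Noise fee Gamma of Definition 4.5 for post-trade reserve
    x_post = x~ + Delta~ and a discrete noise distribution given as
    (eta, probability) pairs."""
    p_post = P(x_post)
    return sum(prob * integral(lambda a: P(a) - p_post, x_post + eta, x_post)
               for eta, prob in noise_pmf)

# Example: constant-product curve, symmetric zero-mean two-point noise.
K = 10_000.0
P = lambda x: K / x**2
print(noise_fee(P, x_post=105.0, noise_pmf=[(-5.0, 0.5), (5.0, 0.5)]))
```

The result is strictly positive whenever the curve has curvature around the post-trade state, consistent with the constant-sum (zero-curvature) case providing privacy for free.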
### Choice of Noise Trade Distributions

We first discuss a noise distribution from the LDP literature which is well-suited to our application.

**Definition 4.8** (Binary Mechanism of Duchi et al. (2018)).: _For privacy specification \((\tau=[l,u],\varepsilon)\) and data (trade) \(\Delta\), add noise_

1. \(\left(\frac{u+l}{2}-\Delta-\frac{u-l}{2}\frac{e^{\varepsilon}+1}{e^{\varepsilon}-1}\right)\) _with probability_ \(\frac{1}{2}\left[1-\Delta^{\prime}\frac{e^{\varepsilon}-1}{e^{\varepsilon}+1}\right]\)_;_
2. \(\left(\frac{u+l}{2}-\Delta+\frac{u-l}{2}\frac{e^{\varepsilon}+1}{e^{\varepsilon}-1}\right)\) _with probability_ \(\frac{1}{2}\left[1+\Delta^{\prime}\frac{e^{\varepsilon}-1}{e^{\varepsilon}+1}\right]\)_._

_Here \(\Delta^{\prime}=\frac{2\Delta-(u+l)}{u-l}\)._

Importantly, the noise is zero-mean and of bounded magnitude. Kairouz et al. (2014) showed that in the high-privacy regime, i.e., for all \(\varepsilon\) less than some \(\varepsilon^{*}\), the binary mechanism is optimal for minimizing the 'information loss' due to privacy.

We are also interested in designing noise distributions that minimize the privacy fee. Observe that the privacy fee is a convex (linear) function of \(\mathcal{D}(\eta)\) for all \(\eta\), for every state \(\tilde{x}\) and trade \(\tilde{\Delta}\). The privacy requirements can be expressed as convex constraints in \(\mathcal{D}(\eta)\), as in Kairouz et al. (2014). The mean being zero is also a linear constraint. Given a state and trading function, we can therefore write a convex program to find the "cheapest" noise trade distribution that preserves \((\tau,\varepsilon)\)-PLDP. However, since noise distributions have to be independent of the CFMM state, finding a distribution that works reasonably well for all states is not obvious and is the subject of future work.

### Relation Between Privacy Fee and Liquidity

Notice from Definition 4.5 that the privacy fee is zero in a region where the spot price is constant. This matches the intuition that the constant-sum CFMM provides privacy for free due to its lack of curvature: it has "infinite" liquidity at a price point. We define liquidity at a price as follows.

**Definition 4.9** (Liquidity).: _When the spot price \(P(\cdot)\) is differentiable and \(\frac{dP(x)}{dx}\big{|}_{x=P^{-1}(p)}\neq 0\), the liquidity \(L(p)\) is \(\left(1/\frac{dP(x)}{dx}\right)\Big{|}_{x=P^{-1}(p)}\). \(L(p)=0\) when \(P(\cdot)\) is not differentiable, and \(L(p)\) is undefined when \(\frac{dP(x)}{dx}\big{|}_{x=P^{-1}(p)}=0\)._

For a given trade, the privacy fee captures the additional arbitrage profit that can be captured by reversing the associated noise trade. Intuitively, this quantity is larger if the noise trade creates a larger "impact" on the CFMM spot price, and this impact is larger when the liquidity is smaller. We illustrate this intuition with an example of the widely adopted constant-product "Uniswap" CFMM and the binary mechanism in Appendix B. The intuition also holds more generally and formally, as in the following result.

**Proposition 4.10**.: _When the noise trade distribution is zero-mean, the privacy fee for a trade \(\tilde{\Delta}\) at spot price \(p\) is approximately inversely proportional to the CFMM liquidity at \(p\) when \(\tilde{\Delta}\) and the maximum possible size of the noise trade are \(o(P^{-1}(p))\)._

Proof.: The privacy fee equals the noise fee by Theorem 4.6. We take a first-order approximation of the inverse of the liquidity. In Definition 4.5 of the noise fee, the price difference \(\big{(}P(a)-P(P^{-1}(p)+\tilde{\Delta})\big{)}\) over the interval \(a\in[P^{-1}(p)+\tilde{\Delta}+\eta,\,P^{-1}(p)+\tilde{\Delta}]\) is approximately inversely proportional to the liquidity at price \(p\) when the trade \(\tilde{\Delta}\) and noise trade \(\eta\) are \(o(P^{-1}(p))\).
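A sampler for the binary mechanism of Definition 4.8 is a few lines of Python; the empirical check at the end illustrates the zero-mean property that Corollaries 4.4 and 4.7 show to be necessary and sufficient for priceability. Parameter values are arbitrary.

```python
import math, random

def binary_mechanism(delta, lo, hi, eps):
    """Zero-mean, bounded noise for trade `delta` with privacy
    specification (tau = [lo, hi], eps), following Definition 4.8."""
    assert lo <= delta <= hi and eps > 0.0
    center = (hi + lo) / 2.0
    spread = (hi - lo) / 2.0 * (math.exp(eps) + 1.0) / (math.exp(eps) - 1.0)
    d_norm = (2.0 * delta - (hi + lo)) / (hi - lo)    # Delta' in [-1, 1]
    skew = (math.exp(eps) - 1.0) / (math.exp(eps) + 1.0)
    p_plus = 0.5 * (1.0 + d_norm * skew)              # prob. of the + branch
    sign = 1.0 if random.random() < p_plus else -1.0
    return center - delta + sign * spread

samples = [binary_mechanism(3.0, 0.0, 10.0, eps=1.0) for _ in range(200_000)]
print(sum(samples) / len(samples))  # approximately 0: the noise is zero-mean
```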
This result shows that more liquid CFMMs are better suited to the needs of privacy-seeking traders. However, such CFMMs are prone to larger divergence loss and also require a larger capital investment from LPs. A holistic analysis of the design trade-offs of a noisy CFMM mechanism, possibly with custom-made trading functions, is the subject of future work.

## 5. Conclusions

In this work, we develop a noisy CFMM mechanism for privacy protection. Users can specify their desired privacy level and a masking interval for their trade and pay a personalized privacy fee. We find the minimum fee required to ensure that arbitrageurs cannot exploit the CFMM's privacy feature. We also show that the noise being zero-mean in trade size is a necessary and sufficient condition for such a minimal fee to exist (we call this condition the priceability of the mechanism). We further show that our noisy CFMM is truthful under this fee, i.e., arbitrageurs are incentivized to align the CFMM's spot price with the external market price. For future work, it would be useful to design noise trade distributions that reduce the cost implied by the privacy fee. Efficient instantiations of our model, particularly the requirement of a private, secure account for conducting the noise trades, are also avenues for future research.
2301.00005
Intrinsic Motivation in Dynamical Control Systems
Biological systems often choose actions without an explicit reward signal, a phenomenon known as intrinsic motivation. The computational principles underlying this behavior remain poorly understood. In this study, we investigate an information-theoretic approach to intrinsic motivation, based on maximizing an agent's empowerment (the mutual information between its past actions and future states). We show that this approach generalizes previous attempts to formalize intrinsic motivation, and we provide a computationally efficient algorithm for computing the necessary quantities. We test our approach on several benchmark control problems, and we explain its success in guiding intrinsically motivated behaviors by relating our information-theoretic control function to fundamental properties of the dynamical system representing the combined agent-environment system. This opens the door for designing practical artificial, intrinsically motivated controllers and for linking animal behaviors to their dynamical properties.
Stas Tiomkin, Ilya Nemenman, Daniel Polani, Naftali Tishby
2022-12-29T05:20:08Z
http://arxiv.org/abs/2301.00005v1
# Intrinsic Motivation in Dynamical Control Systems ###### Abstract Biological systems often choose actions without an explicit reward signal, a phenomenon known as intrinsic motivation. The computational principles underlying this behavior remain poorly understood. In this study, we investigate an information-theoretic approach to intrinsic motivation, based on maximizing an agent's empowerment (the mutual information between its past actions and future states). We show that this approach generalizes previous attempts to formalize intrinsic motivation, and we provide a computationally efficient algorithm for computing the necessary quantities. We test our approach on several benchmark control problems, and we explain its success in guiding intrinsically motivated behaviors by relating our information-theoretic control function to fundamental properties of the dynamical system representing the combined agent-environment system. This opens the door for designing practical artificial, intrinsically motivated controllers and for linking animal behaviors to their dynamical properties. _Keywords--_ information capacity \(|\) sensitivity gain \(|\) stabilization \(|\) predictive information ## Introduction Living organisms are able to generate behaviors that solve novel challenges without prior experience. Can this ability be explained by a single, generic mechanism? One proposal is that novel, useful behaviors can be generated through _intrinsic motivation_[1], which is defined informally as a set of computational algorithms that are derived directly from the intrinsic properties of the organism-environment dynamics and not specifically learned. Increasingly, there is a move away from reinforcement learning and its extrinsically specified reward structure [2, 3] in the theory and practice of artificial agents, robots, and machine learning more generally [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. A specific class of such intrinsic motivation algorithms for artificial systems is known as _empowerment maximization_. It proposes that agents should maximize the mutual information [21] between their potential actions and a subsequent future state of the world [22]. This corresponds to maximizing the diversity of future world states achievable as a result of the chosen actions, potentiating a broader set of behavior options in the future. Intrinsically motivated synthetic agents develop behaviors that are atypical for inanimate engineered systems and often resemble those of simple living systems. Interestingly, potentiating future actions is also a key part of the success of modern reward-based training algorithms [23, 8, 24]. Despite the successes of empowerment maximization, it remains unclear how well it can be used as a general intrinsic motivation principle. There are many different versions of intrinsic motivation related to empowerment, and their relation to each other is unknown [25, 20, 23]. Additionally, most work on empowerment maximization has relied on simulation case studies and ad hoc approximations, and analytical results are scarce. In order to gain insight, it is important to link empowerment to other, better-understood characterizations of the systems in question. Finally, calculating the mutual information between two interlinked processes in the general case is a challenging task [26, 27], which has so far limited the use of empowerment maximization to simple cases. 
In this work, we unify different versions of intrinsic motivation related to the empowerment maximization paradigm. Here our main contribution is in showing analytically that empowerment-like quantities are linked to the sensitivity of the agent-environment dynamics to the agent's actions. This connects empowerment maximization to well-understood properties of dynamical systems. Since highly sensitive regions of the dynamics potentiate many diverse future behaviors, the connection to dynamical systems also explains why empowerment-based intrinsic motivations succeed in generating behaviors that resemble those of living systems. The analytical results allow us to develop a practical computational algorithm for calculating empowerment for complex scenarios in the continuous time limit, which is the second major contribution of the paper. We apply the algorithm to standard benchmarks used in intrinsic motivation research [28, 29, 14]. Specifically, a controller based on the efficient calculation of empowerment manages to balance an inverted pendula without extrinsic rewards. This opens the door for designing complex robotic intrinsically motivated agents with systematically computed -- rather than heuristically estimated -- empowerment.

## Results

### Preliminaries

**Notation** We consider an agent that takes on states \(x(t)\in\mathcal{X}:=\mathbb{R}^{d_{x}}\), evolving in time under the dynamics \(f\) with (small) stochastic perturbations \(\eta(t)\in\mathbb{R}^{d_{x}}\). Via its (small) actions, \(a(t)\in\mathcal{A}:=\mathbb{R}^{d_{a}}\), filtered through the control gain \(g\), the agent can affect the dynamics of the system:

\[dx(t)=f(x(t))dt+g(x(t))da(t)+d\eta\;. \tag{1}\]

Here \(d\eta\) denotes the system noise, modeled as a Wiener process. The agent's actions \(a(t)\) are modeled by a stochastic control process with variance \(\sigma_{t}^{2}\) controlled by the agent and with a mean of zero. This models the potential effect of actions centered around the null action. To compute various quantities of interest, we will consider a discretized version of this system, for which we adopt a modified notation. To distinguish it from the continuous version, we replace the continuous time in parentheses by an integer index, \(x_{k}:=x(t+k\cdot\Delta t)\). Here \(\Delta t\) denotes the physical time step, and we adopted the convention that \(x_{0}=x(t)\), so that the index corresponding to the current physical time, \(t\), is chosen as \(0\). We will consider trajectories of a fixed duration, and the agent will apply actions over a part of that trajectory. We denote by \(T_{e}\) the time index of the very last state of the trajectory, which we also refer to as the _time horizon_. We further use \(T_{a}\) to denote the (discretized) duration of the action sequence. Then state, control, and perturbation trajectories at finite equidistant times \(t+k\cdot\Delta t\) are denoted by \(x_{0}^{T_{e}}=\{x_{k}\}_{k=0}^{T_{e}}\), \(a_{0}^{T_{a}}\equiv\{a_{k}\}_{k=0}^{T_{a}}\), and \(\eta_{0}^{T_{e}}=\{\eta_{k}\}_{k=0}^{T_{e}}\), respectively. For consistency with the control theory literature, we write a trajectory in the reverse order, e.g., \(x_{0}^{T_{e}}=(x_{T_{e}},\ldots,x_{0})\). When we wish to emphasize the continuous nature of the underlying process, we will write \(t_{e}\equiv t+T_{e}\cdot\Delta t\) and \(t_{a}\equiv t+T_{a}\cdot\Delta t\) for explicitly continuous times.
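For readers who prefer code, a minimal Euler-Maruyama discretization of (1) looks as follows; the noise scale, the use of NumPy, and the function names are our choices, and `f` and `g` are user-supplied callables, with `g(x)` returning the \(d_{x}\times d_{a}\) gain matrix.

```python
import numpy as np

def simulate(f, g, x0, actions, dt=1e-3, noise_std=1e-2, seed=0):
    """Discretize dx = f(x) dt + g(x) da + d(eta), Eq. (1), by
    Euler-Maruyama. `actions` holds the increments da_k applied at
    steps k = 0, ..., T_a - 1; returns the state trajectory."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for a in actions:
        d_eta = noise_std * np.sqrt(dt) * rng.standard_normal(x.shape)
        x = x + f(x) * dt + g(x) @ np.atleast_1d(a) + d_eta
        trajectory.append(x.copy())
    return np.stack(trajectory)
```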
**Reinforcement Learning vs. Intrinsic Motivation** To elicit a desired behavior in an agent, one typically uses reinforcement learning (RL). RL is task-specific, and an agent needs _extrinsic_ feedback about its performance from a _reward_ function to learn the behavior. The precise construction of this reward function is critical for achieving the desired performance in a short training time [2]. Some of the complications include a significant degree of arbitrariness when choosing amongst reward functions with equivalent performance [31] and the difficulty of translating an often vague desired behavior into a concrete reward function. Furthermore, complex behaviors consist of combinations of shorter sequences. Designing a reward function capable of partitioning the solution into such parts, and hence learning it in a realistic time, is hard [32]. In contrast to this, in living systems, acquisition of skills often starts with task-unspecific learning. This endows organisms with _potentiating_ skills, which are not rewarding on their own. This is then followed by task-oriented specialization, which combines task-unspecific behaviors into complex and explicitly rewarding tasks [1, 33]. While specific tasks are often refined with the help of extrinsic reinforcement, the potentiating tasks usually are intrinsically motivated [9].

**Empowerment** The type of intrinsic motivation we focus on is _empowerment_. Empowerment is based on information-theoretic quantities [4, 23, 34, 35, 36, 37, 38, 39, 40]. It defines a pseudo-utility function on the state space, based on the system dynamics only, without resorting to a reward. Formally, we express the dynamics of the system by the conditional probability distribution \(p(x_{T_{e}}\mid a_{0}^{T_{e}-1},x_{0})\) of the resulting state when one starts in a state \(x_{0}\) and subsequently carries out an action sequence \(a_{0}^{T_{e}-1}\). Then the empowerment \(\mathcal{C}(x_{0})\) is a function of the starting state, \(x_{0}\). It is given by the maximally achievable mutual information (the channel capacity [21]) between the control action sequence of length \(T_{e}\) and the final state when starting in the state \(x_{0}\):

\[\mathcal{C}(x_{0}):=\max_{p(a_{0}^{T_{e}-1}|x_{0})}\;I(X_{T_{e}};A_{0}^{T_{e}-1}|x_{0}). \tag{2}\]

Here \(p(\cdot)\) denotes a probability density or a probability distribution function, and \(I\) is the mutual information [21]

\[I(X_{T_{e}};A_{0}^{T_{e}-1}|x_{0})=H(X_{T_{e}}|x_{0})-H(X_{T_{e}}\mid A_{0}^{T_{e}-1},x_{0}). \tag{3}\]

\(H\) is the entropy, and conditioning an entropy on a random variable means the entropy of the conditional distribution, averaged over the conditioning variable. The empowerment \(\mathcal{C}(x_{0})\) depends on both the state, \(x_{0}\), and the time horizon, \(T_{e}\). However, for notational convenience, we omit all parameters from the notation except for the dependency on \(x_{0}\). Locally maximizing empowerment (e.g., by following its gradient over \(x_{0}\)) guides an agent to perform actions atypical within the natural dynamics of the system. Indeed, since empowerment measures the diversity of achievable future states, maximizing it increases this diversity ("empowers" the agent - hence the name). Thus it is expected to be particularly useful for learning potentiating tasks [9]. Crucially, empowerment quantifies the relation between the final state and the _intentional_ control, rather than the diversity of states due to the stochasticity of the system.
In particular, it is not just the entropy of a passive diffusion process in the state variables, but of the subprocess that the agent can actively generate. Furthermore, it quantifies diversity due to _potential_ future action sequences, which are not then necessarily carried out.

Figure 1: Unified view on information-theoretic intrinsic motivation, for a discretized process sequence. Starting at time \(x_{0}\) (i.e., \(x(t)\)), potential actions are applied for \(T_{a}\) times; following that, after waiting for \(\Delta T\) time steps, the future system trajectory is considered until \(T_{e}\). A controlled Lyapunov exponent is a Lyapunov exponent, but only in directions controlled by the agent, cf. (11). "Kicked CEF" refers to a variant of Causal Entropic Forcing [30], with the addition that an action kicks the system at the beginning of a trajectory. For more details see _Generalized Empowerment_.

Empowerment is typically used in the form of the _empowerment maximization principle_ [17], which treats \(\mathcal{C}(x_{0})\) as a pseudo-utility function. At each time step, an agent chooses an action to greedily optimize its empowerment at the next time step. That is, the agent climbs up in its empowerment landscape, eventually achieving a local maximum of \(\mathcal{C}\):

\[a^{*}\big{(}x(t)\big{)}=\underset{a\in\mathcal{A}}{\operatorname{argmax}}\ \mathbb{E}_{\eta}\big{[}\mathcal{C}\big{(}f(x(t))+g(x(t))a\Delta t^{\prime}+d\eta\big{)}\big{]}\;. \tag{4}\]

Here \(\mathcal{A}\) is the set of permitted actions, and \(\Delta t^{\prime}\) is a small time step used to simulate the actual behavior of the system (and which is selected independently from the time step \(\Delta t\) used to discretize (1)). An empowerment-maximizing agent generates its behavior by repeating this action selection procedure at each decision step it takes. Crucially, no general analytical solutions or efficient algorithms for numerical estimation of empowerment for arbitrary dynamical systems are known, limiting adoption of the empowerment maximization principle. Our goal is to provide a method to calculate it under specific approximations.

### Empowerment in dynamical systems

**The linear response approximation** To relate empowerment to traditional quantities used to describe dynamical systems, we assume that the control signal \(a\) in (1) is small. This is true in some of the most interesting cases, where the challenge is to solve a problem with only _weak controls_ that cannot easily "force" a solution. Under this assumption, (1) is approximated by linear time-variant dynamics around the trajectories of the autonomous dynamics (i.e., for \(a=0\)). To proceed, we now introduce the following notation. We define \(\bar{x}_{s}\) as the \(s\)-th step of the trajectory in the discretized deterministic approximation of the dynamics (1), given by

\[\bar{x}_{s}=f(\bar{x}_{s-1})+g(\bar{x}_{s-1})\Delta a_{s-1} \tag{5}\]

with \(\bar{x}_{0}=x_{0}\equiv x(t)\). For example, \(\bar{x}_{3}=f(f(f(\bar{x}_{0})+g(\bar{x}_{0})\Delta a_{0})+g(\bar{x}_{1})\Delta a_{1})+g(\bar{x}_{2})\Delta a_{2}\). We denote this recursive mapping from \(\bar{x}_{0}\) to \(\bar{x}_{s}\) by \(F\), \(\bar{x}_{s}=F(\bar{x}_{0};\Delta a_{0}^{s-1})\).
Then the sensitivity of the state at time step \(s\) to the action at time step \(r\) can be calculated via the iterated differentiation chain rule applied to the state derivative of the dynamics \(F\):

\[\frac{\partial\bar{x}_{s}}{\partial a_{r}}=\left[\prod_{q=r+2}^{s}\nabla_{x}f(\bar{x}_{q-1})\right]g(\bar{x}_{r}), \tag{6}\]

where \(\nabla_{x}f(\bar{x}_{q})\) is the \(d_{x}\times d_{x}\) Jacobian matrix, which approximates \(f\) up to the linear order in the state and the control. Specifically, the \((i,j)\)-th entry of \(\nabla_{x}f(\bar{x}_{q})\) is \(\frac{\partial f_{i}(x)}{\partial x_{j}}\big{|}_{x=\bar{x}_{q}}\), where the indices \(i,j\) stand for components of the vectors \(x\) and \(f\). For \(s=r+1\), the expression in (6) evaluates to \(\frac{\partial\bar{x}_{r+1}}{\partial a_{r}}=g(\bar{x}_{r})\). Now we define the linear response of the sequence of the system's states \(x_{s_{1}}^{s_{2}}\) to the sequence of the agent's actions \(\Delta a_{r_{1}}^{r_{2}}\):

\[\mathcal{F}_{r_{1},r_{2}}^{s_{1},s_{2}}(x_{0})=\begin{bmatrix}\frac{\partial\bar{x}_{s_{2}}}{\partial a_{r_{2}}}&\cdots&\frac{\partial\bar{x}_{s_{2}}}{\partial a_{r_{1}}}\\ \vdots&\ddots&\vdots\\ \frac{\partial\bar{x}_{s_{1}}}{\partial a_{r_{2}}}&\cdots&\frac{\partial\bar{x}_{s_{1}}}{\partial a_{r_{1}}}\end{bmatrix}. \tag{7}\]

The generalized empowerment \(\mathcal{C}^{T_{e},T_{a},\Delta T}(x_{0})\) is defined, analogously to (2), as the channel capacity between an action subsequence of duration \(T_{a}\) and the state subsequence observed after a delay \(\Delta T\) up to the horizon \(T_{e}\) (cf. Fig. 1); in the linear response approximation it reduces to the capacity of a parallel Gaussian channel over the singular values of the corresponding submatrix of \(\mathcal{F}\):

\[\mathcal{C}^{T_{e},T_{a},\Delta T}(x_{0})=\max_{\{\sigma_{i}\geq 0:\,\sum_{i}\sigma_{i}\leq P\}}\ \frac{1}{2}\sum_{i}\ln\big{(}1+\sigma_{i}\,\rho_{i}^{2}(x_{0})\big{)}. \tag{10}\]

Here \(\rho_{i}(x_{0})\) are the singular values of the appropriate submatrix \(\mathcal{F}_{r_{1}^{\prime},r_{2}^{\prime}}^{s_{1}^{\prime},s_{2}^{\prime}}(x_{0})\); for example, the traditional empowerment corresponds to the red-dashed submatrix in (7). Further, \(P\) is the _power_ of the control signal \(\Delta a\) over the whole control period, and \(\sigma_{i}\geq 0\) is that part of the overall power of the control signal which is associated with the \(i\)-th singular value (called the _channel power_). The channel power can be computed by the usual _water-filling_ procedure [21]. Note that here we denote \(P\) as power, as per control-theoretic convention, but since we fix the time interval over which it is applied, the units of \(P\) are those of energy. As per our _weak control_ assumption, we assume \(P\) to be suitably small. With (10), calculation of any generalized empowerment becomes tractable, at least in principle. This also shows explicitly that the (generalized) empowerment is a function of the sensitivity matrix \(\mathcal{F}\), and with it of quantities used to characterize dynamics, such as the Lyapunov exponents.

To compute \(\mathcal{C}^{T_{e},T_{a},\Delta T}(x_{0})\) efficiently for an arbitrary dynamical system (1), arbitrarily long time horizons, and arbitrarily small discretization steps, we start by discretizing the time and calculating the linear response matrix \(\mathcal{F}\). While in this paper we do this by analytical differentiation, numerical differentiation can be used whenever \(f\) is unknown. We then calculate the singular values of \(\mathcal{F}\); this is straightforward on modern computers for dimensionalities of up to a few hundred. Finally, we apply the "water filling" procedure to find the set of channel powers \(\sigma_{i}\) that match the available total power \(P\) in (10), and from there we calculate the (generalized) empowerment value. We will employ this approach for all examples in this paper.
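The procedure just described fits in a short NumPy sketch. The finite-difference response matrix replaces the paper's analytical differentiation (as the text notes, numerical differentiation is admissible when \(f\) is unknown); helper names such as `response_matrix` are ours.

```python
import numpy as np

def rollout(f, g, x0, da):
    """Deterministic discrete map of Eq. (5): x_s = f(x_{s-1}) + g(x_{s-1}) da_{s-1}."""
    x = np.asarray(x0, float)
    for k in range(len(da)):
        x = f(x) + g(x) @ da[k]
    return x

def response_matrix(f, g, x0, Ta, Te, d_a, h=1e-6):
    """Finite-difference estimate of the linear response of the final state
    x_{Te} to the actions a_0^{Ta-1}, around the zero-action trajectory."""
    da0 = np.zeros((Te, d_a))
    base = rollout(f, g, x0, da0)
    cols = []
    for k in range(Ta):
        for j in range(d_a):
            da = da0.copy()
            da[k, j] = h
            cols.append((rollout(f, g, x0, da) - base) / h)
    return np.stack(cols, axis=1)          # shape: d_x  x  (Ta * d_a)

def empowerment(F, power):
    """Water-filling channel capacity, Eq. (10), from the singular values
    of the response matrix F, for total control power `power`."""
    rho = np.linalg.svd(F, compute_uv=False)
    rho = rho[rho > 1e-12]
    inv = 1.0 / rho**2                     # ascending, since rho descends
    for m in range(len(inv), 0, -1):       # try m active channels
        nu = (power + inv[:m].sum()) / m   # candidate water level
        if nu > inv[:m].max():
            sigma = np.maximum(0.0, nu - inv)
            return 0.5 * np.sum(np.log1p(sigma * rho**2))
    return 0.0
```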
**Connecting Generalized Empowerment to Related Quantities** Generalized empowerment with different durations of action and observation sequences is related to various quantities describing dynamical systems, including those defining intrinsic motivation [20, 23, 41, 8]. For example, Causal Entropic Forcing (CEF) [20] is defined as actions that maximize the entropy of future trajectories of a system. With \(T_{a}=1\) and \(\Delta T=0\), \(\mathcal{C}^{T_{e},T_{a},\Delta T}\) in (9) measures the immediate consequences of a single action on a trajectory with a fixed time horizon \(T_{e}\). Maximizing \(\mathcal{C}^{T_{e},T_{a},\Delta T}\) is then equivalent to choosing actions that maximize _susceptibility_, and not the entropy of trajectories with a given time horizon. In other words, one can interpret \(\mathcal{C}^{T_{e},1,0}\) as a "kicked", or agent-controllable, version of CEF, where just the first action can be selected by the agent at any time, and uncontrolled future variability is discarded in action planning (see Fig. 1 for an illustration). Such kicked CEF corresponds to the green submatrix in (7).

Now consider the top right corner (blue) of (7) with \(T_{e}=T_{a}=1\), or, equivalently, \(s_{2}^{\prime}=s_{2}\) and \(s_{1}^{\prime}=s_{2}^{\prime}-1\). In the limit of a very long horizon, \(s_{2}\rightarrow\infty\), the appropriate submatrix of \(\mathcal{F}\) is

\[\Lambda\equiv\lim_{s_{2}\rightarrow\infty}\left[\Bigl{(}\frac{\partial\bar{x}_{s_{2}}}{\partial a_{r_{1}}}\Bigr{)}^{\dagger}\Bigl{(}\frac{\partial\bar{x}_{s_{2}}}{\partial a_{r_{1}}}\Bigr{)}\right]^{\frac{1}{2s_{2}}}, \tag{11}\]

where \(\dagger\) is the transpose, and \(\frac{\partial\bar{x}_{s_{2}}}{\partial a_{r_{1}}}\) is given by (6). In the special case where the control gain is the identity, \(g(x)=\mathbb{1}\), the logarithm of the eigenvalues of \(\Lambda\) reduces to the usual characteristic Lyapunov exponents of the dynamical system [42]. However, once a more general control gain is applied, the action-controlled perturbation \(a_{r_{1}}\) may be able to affect only a part of the state space. This means that \(\Lambda\) not only is a generalized empowerment with specific indices, but is also a specialization of the concept of Lyapunov exponents to the controllable subspace. Thus we refer to the log-spectrum of \(\Lambda\) as the _controlled Lyapunov exponents_, cf. Fig. 1. In summary, (9) and the linearization (7) provide a unified view of various sensitivities of the dynamics to the controls, and hence of various versions of intrinsic motivation.
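The controlled Lyapunov exponents of (11) can be estimated directly from the chain-rule product (6). The sketch below uses finite-difference Jacobians; being a sketch, it omits the re-orthonormalization that a careful implementation would need for long horizons, where the raw matrix product overflows.

```python
import numpy as np

def controlled_lyapunov_exponents(f, g, x0, steps, h=1e-6):
    """Log-spectrum of Lambda, Eq. (11): Lyapunov-type exponents restricted
    to the directions excited through the control gain g. `f` is the
    discrete map of Eq. (5); `g(x)` returns the d_x x d_a gain matrix."""
    d = len(x0)
    def jac(x):                       # finite-difference Jacobian of f
        J = np.empty((d, d))
        fx = f(x)
        for j in range(d):
            e = np.zeros(d); e[j] = h
            J[:, j] = (f(x + e) - fx) / h
        return J
    x = np.asarray(x0, float)
    M = g(x)                          # d x_{r+1} / d a_r = g(x_r), per Eq. (6)
    x = f(x)
    for _ in range(steps - 1):
        M = jac(x) @ M                # accumulate the product in Eq. (6)
        x = f(x)
    eig = np.linalg.eigvalsh(M.T @ M)
    return np.log(np.maximum(eig, 1e-300)) / (2 * steps)
```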
### Intrinsic motivation in power-constrained agents

An agent controlling a system with unconstrained actions can trivially reach any state in a controllable dynamical system [43] by simply forcing its desired outcome without sophisticated control. Thus, to render the setup interesting, we consider only power-constrained, or _weak_, agents. To show that empowerment maximization, in the linearized regime, is an efficient control principle, we use it to stabilize a family of inverted pendula (single pole, double pole, and cart-pole), which are simple, paradigmatic models of important phenomena, such as human walking [44]. Solutions to the stabilization problem are known. They require accumulating energy by swinging the pendulum back and forth into resonance without overshooting, and then keeping the pendulum upright. When details of the system are not specified _a priori_, this solution needs to be learned by the agent. Finding such an indirect control policy by traditional reinforcement learning is nontrivial [3], since the increasing oscillations require a long time for the balancing to take place, and the acquisition of informative rewards indicating success is significantly delayed. As we will show, it is precisely in such situations that intrinsic motivation based on empowerment is especially useful, since it is determined from only comparatively local properties of the dynamics along the present trajectory and its potential future variations.

**Inverted pendulum** We start with the relatively simple task of swinging up and stabilizing an inverted pendulum without an external reward. With an angle of \(\theta\) (in radians) from the upright vertical, the equations of motion of the pendulum are

\[\begin{pmatrix}d\theta(t)\\ d\dot{\theta}(t)\end{pmatrix}=\begin{pmatrix}\dot{\theta}(t)\,dt\\ \frac{g}{l}\sin(\theta(t))\,dt+\frac{da(t)}{ml^{2}}+\frac{dW(t)}{ml^{2}}\end{pmatrix}, \tag{12}\]

where \(\dot{\theta}\) is the angular velocity of the pendulum, \(m\) is its mass, \(l\) is its length, \(a\) is the torque applied by the agent, \(g\) is the free-fall acceleration, and \(dW(t)\) is a Wiener process. We apply a (stochastically chosen) control signal \(a(t)\) for the duration \(T_{e}\) and observe the final state \(\tilde{\theta}=\theta+\tilde{\eta}_{\text{obs}}\), where \(\tilde{\eta}_{\text{obs}}\) is the standard Gaussian observation noise at the final state. Empowerment is then given by the maximally achievable mutual information between \(a(t)\) and \(\tilde{\theta}\) at a given power level for \(a(t)\), i.e., the channel capacity between the two. The observation noise effectively determines the resolution at which the end state is considered. Note that in our linear approximation the process noise \(dW(t)\) undergoes the same gain sequence as the control signal, and thus it rescales the empowerment landscape and changes the behavior of the system. Thus, to compare empowerment values in different states, it is essential to include the observation noise.
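In code, the pendulum (12) in the discrete-map form of (5) reads as below; the parameter values are illustrative, not taken from the paper, and the commented usage assumes the `response_matrix` and `empowerment` helpers sketched earlier.

```python
import numpy as np

# Pendulum of Eq. (12), state (theta, theta_dot), theta from upright.
m, l, grav, dt = 1.0, 1.0, 9.81, 1e-3

def f(x):
    theta, omega = x
    return np.array([theta + omega * dt,
                     omega + (grav / l) * np.sin(theta) * dt])

def g(x):
    # Torque enters the angular velocity only, scaled by 1 / (m l^2).
    return np.array([[0.0], [1.0 / (m * l**2)]])

# Empowerment of the hanging-down state over a 0.5 s horizon:
# F = response_matrix(f, g, np.array([np.pi, 0.0]), Ta=500, Te=500, d_a=1)
# print(empowerment(F, power=1.0))
```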
We now apply our empowerment-based control protocol, (4), to the inverted pendulum. We calculate the empowerment landscape by using the time-discretized version of Eqs. (1, 12). For this, we map the deterministic part of the dynamics (\(f,g\) in (1)) onto discrete time as per (5). We then compute the channel capacity by applying (10) using the singular values from (8), where states are given by \((\theta,\dot{\theta})\in\mathbb{R}^{2}\), and actions consist of applying a torque \(a\). The landscapes for the original empowerment, the controlled Lyapunov exponent, and the kicked CEF versions of the problem, all with the time horizon \(t_{e}=0.5\,\mathrm{s}\) and the discretization \(\Delta t=10^{-3}\,\mathrm{s}\), are shown in Fig. 2. Then, from each state, we choose the control action to greedily optimize the generalized empowerment. The panels in the upper row of this figure also show trajectories obtained this way. The lower row shows time traces of the control signal derived from the generalized empowerment maximization. In all cases, initially, the agent drives the pendulum at the maximum allowable torque, which we set to be power-constrained to \(\pm 1\,\mathrm{N}\,\mathrm{m}\). Around 13, 10, and 10 seconds after the start (for the three versions of the empowerment, respectively), the pendulum accumulates enough energy to reach the vertical, and the agents reduce the torques to very small values, \(a\ll 1\,\mathrm{N}\,\mathrm{m}\), which are now sufficient to keep the pendulum in the upright position and prevent it from falling. It is striking that the generalized empowerment landscapes and their induced trajectories are qualitatively similar to those that would be generated by an optimal value function, derived by standard optimal control techniques based on a reward specifically designed to achieve the top position [3].

In our analysis, we chose a particular discretization \(\Delta t=10^{-3}\) s, and we need to show that our results depend only weakly on this choice. For this, we repeat our analysis at different \(\Delta t\). Figure 3 shows the dependence of the maximum value of the original empowerment (black dot in the left panel of Fig. 2) on \(\Delta t\). To the extent that the estimate converges to a well-defined number linearly as \(\Delta t\to 0\), the discrete-time dynamics provides a consistent approximation to the continuous-time dynamics.

**Double Pendulum** Now we show that the empowerment maximization formalism is capable of dealing with more challenging problems, such as a power-constrained control of a (potentially chaotic) double pendulum [16], Fig. 4, with equations of motion:

\[d\ddot{\theta}_{1}(t)=-\frac{1}{d_{1}(t)}\bigg{(}d_{2}(t)\ddot{\theta}_{2}(t)+\phi_{1}(t)\bigg{)}, \tag{13}\]

\[d\ddot{\theta}_{2}(t)=\frac{1}{m_{2}\ell_{c_{2}}^{2}+I_{2}-\frac{d_{2}^{2}(t)}{d_{1}(t)}}\Big{(}da(t)+dW(t)+\frac{d_{2}(t)}{d_{1}(t)}\phi_{1}(t)-m_{2}\ell_{1}\ell_{c_{2}}\dot{\theta}_{1}(t)^{2}\sin\theta_{2}(t)-\phi_{2}(t)\Big{)},\]

with

\[d_{1}(t)=m_{1}\ell_{c_{1}}^{2}+m_{2}\big{(}\ell_{1}^{2}+\ell_{c_{2}}^{2}+2\ell_{1}\ell_{c_{2}}\cos\theta_{2}(t)\big{)}+I_{1}+I_{2},\]
\[d_{2}(t)=m_{2}\big{(}\ell_{c_{2}}^{2}+\ell_{1}\ell_{c_{2}}\cos\theta_{2}(t)\big{)}+I_{2},\]
\[\phi_{1}(t)=-m_{2}\ell_{1}\ell_{c_{2}}\dot{\theta}_{2}(t)^{2}\sin\theta_{2}(t)-2m_{2}\ell_{1}\ell_{c_{2}}\dot{\theta}_{2}(t)\dot{\theta}_{1}(t)\sin\theta_{2}(t)+(m_{1}\ell_{c_{1}}+m_{2}\ell_{1})g\cos\theta_{1}(t)+\phi_{2}(t),\]
\[\phi_{2}(t)=m_{2}\ell_{c_{2}}g\cos\big{(}\theta_{1}(t)+\theta_{2}(t)\big{)}.\]

We add Wiener noise, \(dW(t)\), and permit the controller to apply a scalar control signal \(|a(t)|\leq 1\) at the joint between the two links. In the equations of motion, \(m_{i}\), \(\ell_{i}\), \(\ell_{c_{i}}\), and \(I_{i}\) stand for the mass, the length, the length to the center of mass, and the moment of inertia of the \(i\)-th link, \(i\in[1,2]\), respectively.

Figure 3: Convergence of the method for \(\Delta t\to 0\) and \(t_{e}=0.5\,\mathrm{s}\). As the time resolution is refined fourfold at every stage, one arrives at a well-defined value for the empowerment estimate as \(\Delta t\to 0\). The numerical stability of this limit approximation is consistent throughout the landscape.

Figure 2: Intrinsic motivation based control in the power-constrained regime. Top row: generalized empowerment landscapes in the linear approximation for the empowerment (left), controlled Lyapunov exponent (middle), and kicked CEF (right) versions of the problem, plotted against \(\theta\) (horizontal axis) and \(\dot{\theta}\) (vertical axis), measured in rad and rad/s, respectively.
Black dots in each panel are the final state, and white lines are the trajectories of the pendulum, starting at the bottom, denoted by the red dots. Bottom row: the control signals chosen from the generalized empowerment maximization as a function of time. Here the time horizon is \(t_{e}=0.5\,\mathrm{s}\).

Figure 4 shows the landscape for the original empowerment for selected slices of the phase space. This landscape is more complex than for the single pendulum. Nonetheless, it retains the property that, following the local gradient in the state space directly, one ultimately reaches the state of maximum empowerment, which is precisely where both links of the pendulum are balanced upright. The vertical position, however, is a priori not sufficient to guarantee the balancing, since the control only applies torque at the joint linking the pendulum halves. That is, the controller cannot move the pendulum in arbitrary directions through the state space. Surprisingly, this concern notwithstanding, the algorithm still balances the pendulum, cf. Fig. 4.

**Cart-Pole** We have additionally verified that empowerment maximization also balances an inverted pendulum on a moving cart, cf. Fig. 5. Here the control signal (force) is applied to the cart; thus the pendulum is affected only indirectly. The dynamics of this system is:

\[d\ddot{x}(t)=\frac{m\sin\theta(t)\big{(}\ell\dot{\theta}^{2}(t)+g\cos\theta(t)\big{)}+da(t)+dW(t)}{M+m\sin^{2}\theta(t)}, \tag{14}\]

\[d\ddot{\theta}(t)=\frac{-da(t)\cos\theta(t)-m\ell\dot{\theta}^{2}(t)\cos\theta(t)\sin\theta(t)-(M+m)g\sin\theta(t)}{\ell\big{(}M+m\sin^{2}\theta(t)\big{)}},\]

where \(x(t)\), \(\theta(t)\), \(m\), \(M\), \(\ell\), \(g\), and \(|a(t)|\leq 1\) are the \(x\) coordinate of the center of mass of the cart, the angle of the pole, the pole mass, the cart mass, the pole length, the free-fall acceleration, and the force applied to the cart.

## Discussion

In this study, we focused on a class of intrinsic motivation models that mimic the decision-making abilities of biological organisms in various situations without explicit reward signals. We used an information-theoretic formulation in which the controller starts with knowledge of the (stochastic) dynamical equations describing the agent and the environment, and then selects actions that "empower" the agent. That is, the controller improves its ability to affect the system in the future, as measured by the mutual information between the action sequence and the subsequent responses. This leads the system to the most sensitive points in the state space, which we showed solves a problem known to be difficult for simple reinforcement learning algorithms: balancing inverted pendula. Depending on which subsets of the past actions and future responses are used to drive the intrinsic motivation, our approach interpolates between the original formulation of empowerment maximization, maximization of the "kicked" version of Causal Entropic Forcing, and maximization of the "controlled" subset of the Lyapunov exponents of the agent-environment pair. This provides insight into which properties of the dynamical system are responsible for the behaviors produced by these different motivation functions.

Figure 4: **Top left:** Double pendulum with control torque on the joint between the links, with dynamics given by (13). **Top right:** Slices through the empowerment landscape of a double pendulum. Each subplot shows a particular slice of the 4D landscape, when the two other coordinates are zero.
For example, the plot with axes \(\dot{\theta}_{2}\), \(\dot{\theta}_{1}\) is shown for \(\theta_{2}=0\,\text{rad}\) and \(\theta_{1}=0\,\text{rad}\). Bottom: Traversing the state space of the double pendulum according to (4). The first and the second 15 s are shown with different scales for the instantaneous empowerment. The initial and the final positions are both links down and both links up, respectively. Torque is applied to the middle joint only.

One big challenge in using information-theoretic quantities is computing them, which can be difficult to do either analytically or from data. Our paper makes a significant contribution to solving this problem in the context of empowerment by providing an explicit algorithm for computing various versions of empowerment, for arbitrary lengths of pasts and futures, using the small-noise/small-control approximation to the dynamics, while still treating the dynamics as nonlinear. This is often the most interesting regime, modeling weak, power-constrained controllers. Crucially, our algorithm is local, so that climbing up the empowerment gradient only requires estimation of the dynamics in the vicinity of the current state of the system. This should be possible in real control applications by using the data directly, possibly with the help of deep neural networks to approximate the relevant dynamical landscapes [45, 46, 47]. Therefore, knowing the exact form of the dynamical system, which could be a potential limitation of our approach, is not strictly required. This opens up opportunities for scaling our method to more complex scenarios.

Our work suggests that, in addition to the Lyapunov spectrum, defined via the trajectory divergence in time due to a small _arbitrary_ perturbation, one may want to consider the _optimal_ Lyapunov spectrum, where the initial perturbation is _optimally_ aligned with the controllable directions in the dynamics. We defer a systematic study of optimal Lyapunov spectra to future work.

A potential extension of our analysis relates to social interactions. Interacting agents have their own intrinsic motivations and affect each other's ability to achieve their goals. Understanding how multiple agents interact, each trying to empower itself in the presence of others, and whether and when this leads to cooperation or conflict, is a promising area for future research. Crucially, the ability to affect someone else's empowerment may provide insight into what distinguishes social interactions from purely physical interactions among nearby individuals.

###### Acknowledgements.

ST was supported in part by California State University, and the College of Engineering at SJSU. IN was supported in part by the Simons Foundation Investigator award, the Simons-Emory Consortium on Motor Control, and NIH grant 2R01NS084844. DP acknowledges partial support by the EC H2020-641321 socSMCs FET Proactive project and the Pazy Foundation.
2303.18027
Evaluating GPT-4 and ChatGPT on Japanese Medical Licensing Examinations
As large language models (LLMs) gain popularity among speakers of diverse languages, we believe that it is crucial to benchmark them to better understand model behaviors, failures, and limitations in languages beyond English. In this work, we evaluate LLM APIs (ChatGPT, GPT-3, and GPT-4) on the Japanese national medical licensing examinations from the past five years, including the current year. Our team comprises native Japanese-speaking NLP researchers and a practicing cardiologist based in Japan. Our experiments show that GPT-4 outperforms ChatGPT and GPT-3 and passes all six years of the exams, highlighting LLMs' potential in a language that is typologically distant from English. However, our evaluation also exposes critical limitations of the current LLM APIs. First, LLMs sometimes select prohibited choices that should be strictly avoided in medical practice in Japan, such as suggesting euthanasia. Further, our analysis shows that the API costs are generally higher and the maximum context size is smaller for Japanese because of the way non-Latin scripts are currently tokenized in the pipeline. We release our benchmark as Igaku QA as well as all model outputs and exam metadata. We hope that our results and benchmark will spur progress on more diverse applications of LLMs. Our benchmark is available at https://github.com/jungokasai/IgakuQA.
Jungo Kasai, Yuhei Kasai, Keisuke Sakaguchi, Yutaro Yamada, Dragomir Radev
2023-03-31T13:04:47Z
http://arxiv.org/abs/2303.18027v2
# Evaluating GPT-4 and ChatGPT on Japanese Medical Licensing Examinations

###### Abstract

As large language models (LLMs) gain popularity among speakers of diverse languages, we believe that it is crucial to benchmark them to better understand model behaviors, failures, and limitations in languages beyond English. In this work, we evaluate LLM APIs (ChatGPT, GPT-3, and GPT-4) on the Japanese national medical licensing examinations from the past five years, including the current year. Our team comprises native Japanese-speaking NLP researchers and a practicing cardiologist based in Japan. Our experiments show that GPT-4 outperforms ChatGPT and GPT-3 and passes all six years of the exams, highlighting LLMs' potential in a language that is typologically distant from English. However, our evaluation also exposes critical limitations of the current LLM APIs. First, LLMs sometimes select _prohibited choices_ (禁忌肢) that should be strictly avoided in medical practice in Japan, such as suggesting euthanasia. Further, our analysis shows that the API costs are generally higher and the maximum context size is smaller for Japanese because of the way non-Latin scripts are currently tokenized in the pipeline. We release our benchmark as Igaku QA as well as all model outputs and exam metadata. We hope that our results and benchmark will spur progress on more diverse applications of LLMs.1

Footnote 1: [https://github.com/jungokasai/IgakuQA](https://github.com/jungokasai/IgakuQA).

## 1 Introduction

Much recent work builds approaches to various language tasks on the impressive abilities of large language models (e.g., GPT-3 and 4, Brown et al., 2020; OpenAI, 2023; BLOOM, Scao et al., 2022). Large language models (LLMs) can perform downstream tasks by conditioning generation on a small number of task demonstrations or task instructions, without the need for any parameter updates (Brown et al., 2020; Wei et al., 2022; Su et al., 2023).

A 75-year-old man living alone was diagnosed with lung cancer with lung and bone metastases three months ago. He is currently receiving home care as per his wish to spend his last days at home peacefully. He has been almost bedridden for the past two weeks and has been receiving visits from a visiting nurse service and a home helper as he cannot even go to the toilet. Five days ago, he developed lower back pain, and the visiting physician administered drug therapy, but his pain has worsened. Today, when the doctor visited him for treatment, the patient strongly appealed, "Please let me die today." What is the appropriate response of the doctor to the patient's appeal today?

a. "Don't say things like that, let's do our best."
b. "I will assist you in departing peacefully right away."
c. "Let's consider hospitalization to a hospice right away."
d. "There is a curative treatment for advanced lung cancer, so please don't worry."
e. "Could you please tell me why you feel that way?"

This option shows consideration for the patient's wishes and emphasizes the importance of supporting the patient. Palliative measures, including euthanasia, are recognized under the Medical Practitioners' Act. However, depending on the patient's condition, it may be necessary to consider other options, such as treatment in a hospice specializing in palliative care, drug therapy to alleviate the patient's pain and anxiety, and providing psychological support. Regardless of the choice of option, it is important to respect the patients' wishes. Therefore, option e, "Could you please tell me why you feel that way?" may also need to be considered among the options.
Figure 1: Example problem from the Japanese medical licensing exam where ChatGPT chooses a **prohibited choice** (禁忌肢) because **euthanasia is illegal in Japan**. Test takers who choose four or more prohibited choices would fail regardless of their exam total scores (§2.2). The problem and the ChatGPT output above are in Japanese and translated into English by the authors for readability. Almost all exam problems are five-choice questions over a wide range of topics in medicine and public health. See Fig. 3 for the numbers of problems broken down by category.

Recent work (Kung et al., 2022; Choi et al., 2023; Kung et al., 2022; Nori et al., 2023; Bubeck et al., 2023, _inter alia_) started to evaluate the performance of these LLM-based approaches across diverse specialty areas and disciplines, beyond conventional natural language processing benchmarks, such as the GLUE tasks (Wang et al., 2018, 2019), semantic parsing (Zelle and Mooney, 1996; Yu et al., 2018), and grammatical error correction (Napoles et al., 2017; Coyne and Sakaguchi, 2023). These evaluation results suggest that LLMs have the potential to transform many applications and industries in the future. Meanwhile, many of these diverse benchmarks are limited to the English language. While the training data for LLMs are often English-centric (Brown et al., 2020; Wang and Komatsuzaki, 2021; Zhang et al., 2022; Touvron et al., 2023), many LLMs exhibit _multilinguality_. For example, ChatGPT and GPT-4 have shown competitive performance on the WMT machine translation benchmark for some language pairs (Jiao et al., 2023; Barrault et al., 2020). LLM-based services, such as ChatGPT, are now used by non-English speakers every day. We thus argue that it has become increasingly critical to benchmark LLMs on diverse specialized domains in _non-English_ languages.

This work makes a first step towards this goal. Specifically, we evaluate LLMs (GPT-3 and 4 and ChatGPT) on Japanese medical licensing examinations from the past five years (2018-2023), including the current year, and release the data as the Igaku QA (医学QA) benchmark. The exam takes place every year for final-year medical school students in Japan. It consists of 400 multiple-choice questions2 and covers a wide range of topics in medicine and public health. Final-year students who pass the exam are given the Japanese medical license and enter a two-year residency program as doctors.

Footnote 2: A couple of arithmetic questions directly ask for numbers (e.g., daily urine protein excretion) rather than correct choices.

More detail and the passing criteria are discussed in §2. Importantly, our Igaku QA benchmark does _not_ rely on any translation of English resources and comes solely from the Japanese medical licensing examinations. Our project is led by native Japanese-speaking NLP researchers and a practicing cardiologist based in Japan. Many previous benchmarks in non-English languages are created by translating existing English datasets (Conneau et al., 2018; Artetxe and Schwenk, 2019; Artetxe et al., 2020; Lewis et al., 2020; Longpre et al., 2021; Ponti et al., 2020). Evaluation on such datasets can introduce serious problems. The content of these translation-based datasets often becomes English-centric and substantially diverges from actual use cases by native speakers of the target language (Mohammad et al., 2016; Clark et al., 2020; Asai et al., 2021, 2022).
This divergence becomes particularly severe in medical applications. For instance, medical practice must follow the rules and laws of the country. As illustrated by an exam problem in Fig. 1 (the original problem and the ChatGPT output were in Japanese but translated by the authors for readability), **euthanasia is illegal in Japan, and doctors should not suggest it in their clinical practice in Japan (Choice b in Fig. 1)**. However, we observed that ChatGPT chose this option. Indeed, in the Japanese medical licensing exam, 20+ choices are considered _prohibited_ (禁忌肢), and test takers who choose four or more prohibited choices will fail regardless of their total scores (see more detail on the evaluation criteria in §2.2). Medical practice also requires knowledge about specific statistics or systems in the country (e.g., _what are the responsibilities and duties of a public health center (保健所) in Japan?_). Our approach avoids these potential pitfalls in designing evaluation for non-English languages and provides useful evaluation data to the research community.

Our experiments (§3) show that, unlike the previous language models, GPT-4 can successfully pass the Japanese medical licensing examinations over the past five years, including the current year. This result suggests the potential of non-English AI applications in medical support, education, and assessment as LLMs continue to improve in the future. Nonetheless, GPT-4 still substantially underperforms the majority vote among the medical school students. Moreover, though the results on Japanese are as promising as the recent findings on the United States Medical Licensing Examination (USMLE; Kung et al., 2022), there are significant limitations in Japanese (and similarly distant languages): increased API costs and smaller context window sizes due to tokenization, and lack of customization specific to the country (§3.3). We hope that our evaluation results and Igaku QA benchmark will spur further research on clinical applications of LLMs, especially in non-English languages in the world.

## 2 Background

In this section, we briefly discuss the medical licensure process in Japan and its difference from the US system (§2.1). We then describe the exam structure, evaluation criteria, and topics that are covered (§2.2), as well as our Igaku QA collection process (§2.3). More example problems are presented in §3.3.

### National Medical Practitioners Qualifying Examination (NMPQE)

Fig. 2 illustrates the standard timeline of the medical licensing process in Japan in comparison to the United States. Students in Japan typically take the National Medical Practitioners Qualifying Examination (NMPQE) in their final year of the six-year medical school education. The exam covers a wide range of topics and assesses students' knowledge about clinical and social medicine and public health. Note that hands-on clinical exposure typically happens after passing the exam and obtaining the license. This differs from the US system, where the licensing process consists of three steps (Step 1, foundational sciences; Step 2 CK, clinical knowledge; Step 3, generalist medical practice) and students enter a residency program _during_ this process. For a more comprehensive discussion and the historical context of the differences between the United States and Japan, see Kuwabara et al. (2015).

Figure 2: Standard timeline comparisons of the medical licensing processes in Japan and the United States. The Japanese system involves only **one national licensing examination (NMPQE)** towards the end of the six-year medical school education. The United States Medical Licensing Examination (USMLE) consists of three steps, which are taken over a period of time during medical school and residency.
### Details of NMPQE

Since 2018,³ the Japanese medical licensing exam has been structured in the following format: it consists of Parts A–F, each comprising 50–75 multiple-choice questions with five answer choices, totaling 400 questions. Some problems require selecting two or three choices, in which case all choices need to be correctly selected to earn the point(s). Note that there are a few exceptions: a small number of problems are arithmetic questions that ask for numbers directly or contain more than five choices. In 2022, a total of 10,061 people took the exam, and 91.7% of them passed.⁴

Footnote 3: The exams from 2017 or earlier had Parts A–I with more problems in total.

Footnote 4: [https://www.mhlw.go.jp/stf/shingi2/0000197611_00004.html](https://www.mhlw.go.jp/stf/shingi2/0000197611_00004.html). See Appendix §B for more exam statistics.

**Prohibited Choices (禁忌肢)** In the multiple-choice questions, 25+ choices are marked as _prohibited choices_ (禁忌肢). These are choices that correspond to decisions that should be strictly avoided in medical practice in Japan. For example, euthanasia is illegal in Japan, and doctors are not allowed to suggest it in their medical practice (see Choice b in Fig. 1). Similarly, when a patient desires to have children in the future and there is a viable alternative, a total hysterectomy is considered a prohibited choice.

**Evaluation Criteria** Parts A–F split into two sections: the required section (Parts B and E) and the general section (the others). In the required section, one point is awarded for each general question and three points for each clinical practical question. In the general section, one point is awarded for each question.⁵

Footnote 5: Although the total number of problems remains the same from year to year, a few questions are often disregarded due to their difficulty or ambiguity.

A student passes the exam if and only if all of the following three criteria are satisfied (a sketch of these rules in code is given after the list):

* The score on the required section is 80% of the section total or higher.
* The score on the general section is 70% of the section total or higher.
* Only up to three prohibited choices (禁忌肢) may be selected.
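For concreteness, the following is a minimal sketch of the pass/fail logic, not the authors' released evaluation code; the function name and the fixed 80%/70% ratios are our assumptions, and the published per-year passing scores in Table 1 remain the authoritative cutoffs.

```python
def passes_nmpqe(required_score: int, required_total: int,
                 general_score: int, general_total: int,
                 num_prohibited: int,
                 required_ratio: float = 0.80,
                 general_ratio: float = 0.70,
                 max_prohibited: int = 3) -> bool:
    """Apply the three NMPQE passing criteria described above."""
    ok_required = required_score >= required_ratio * required_total
    ok_general = general_score >= general_ratio * general_total
    ok_prohibited = num_prohibited <= max_prohibited
    return ok_required and ok_general and ok_prohibited

# GPT-4's 2022 numbers from Table 1: Req. 164/197, Gen. 228/297, 1 prohibited choice.
print(passes_nmpqe(164, 197, 228, 297, 1))  # True
```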
**Categories** Fig. 3 plots the numbers of problems broken down by category from 2022. The categorization is based on exam preparation books that are widely used by Japanese medical students. The exam problems span 28 categories that cover a wide range of topics in medicine. See Appendix §B for the statistics for the earlier exams and Japanese-to-English translations of the category names. The distribution is very similar over the past five years.

Figure 3: Breakdown of the exam problems by category from the year 2022. The categorization is based on books widely used by Japanese medical students.

Figure 4: Our prompt for GPT-3. English translations are provided here for readability. We use three in-context examples that are randomly sampled from the Japanese medical licensing exam in 2006.

Figure 5: Our prompt for ChatGPT and GPT-4. English translations are provided here for readability. We use three in-context examples that are randomly sampled from the Japanese medical licensing exam in 2006.

Chain-of-Thought-style prompting has been reported to help multi-hop question answering (Wei et al., 2022; Press et al., 2023). However, similar to the findings in law school exams, where Chain-of-Thought prompting did not improve performance (Choi et al., 2023), we did not find any improvements from adding intermediate steps of explanations on ChatGPT.¹¹ Lastly, we also provide the **Student Majority** baseline that picks the choice(s) selected by the highest percentage of test takers.

Footnote 11: We release all model outputs in our experiments and leave it to future work to explore better methods to produce intermediate reasoning steps.

**Evaluation Methods** We perform automatic evaluations by exact matching. As discussed in §2.2, almost all problems are multiple-choice questions, with a few exceptions that require numbers. Exact matching is a reliable metric on Igaku QA since there are no free-form answers, contrasting with open-ended generation tasks that often require human evaluations or advanced metrics (Kasai et al., 2022a,b; Khashabi et al., 2022; Hu et al., 2023). There were a small number of cases where an LLM failed to follow the format specified by the in-context examples (e.g., outputting text instead of choosing an option). This formatting issue was limited in our case, but there are ways to force a strict answer format, which later work can explore (Nori et al., 2023).

### Results

Table 1 presents the results on the Japanese medical licensing examinations from 2018 through 2023. We see a consistent trend across the years: GPT-4 achieves the best performance, followed by ChatGPT/ChatGPT-EN and then GPT-3. Moreover, GPT-4 and ChatGPT-EN are the only models that never select more than three prohibited choices in a year. **GPT-4 manages to pass the exam in all six years but substantially underperforms the student majority baseline.** ChatGPT-EN outperforms ChatGPT to a certain degree in the majority of cases, suggesting limitations of LLMs' multilinguality when translation is not done explicitly.

### Analysis, Discussion, and Examples

**Tokenization and API Cost** Throughout our experiments, we found that **use in Japanese typically requires more tokens (roughly 2x) than in English, meaning that LLM APIs cost more for Japanese both financially and computationally.** For instance, the example in Fig. 5 results in a total of 779 tokens, but the English counterpart uses only 447 tokens on GPT-4. This is because GPT-4 (and other OpenAI APIs) splits each Japanese character into multiple tokens. In addition to the increased API cost, this tokenization scheme makes the context window for Japanese substantially smaller than that for English. We thus argue that tokenization will be crucial to improving efficiency, accessibility, and long-context performance in typologically diverse languages (e.g., Japanese, Chinese, and Vietnamese) beyond English. Future work can explore methods like vocabulary swapping (Mosin et al., 2023; Jain et al., 2023). A rough sketch of how this token overhead can be measured is given below.
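The overhead is easy to reproduce with the open-source `tiktoken` tokenizer that mirrors the tokenization of these APIs; the two sentences below are our own illustrative paraphrases of the Fig. 1 vignette, not the actual prompts behind the 779-vs-447 counts.

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

ja = "75歳の男性。肺癌の肺転移と骨転移のため在宅医療を受けている。"
en = "A 75-year-old man is receiving home care for lung cancer with lung and bone metastases."

# Japanese text of comparable content typically encodes to roughly 2x more
# tokens, which raises the API cost and shrinks the effective context window.
print(len(enc.encode(ja)), len(enc.encode(en)))
```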
In one exam problem on diabetes during pregnancy, the model selected "use oral hypoglycemic agents if dietary therapy is ineffective," but this is considered a prohibited choice; there are significant concerns about suggesting the use of oral hypoglycemic agents during pregnancy due to the potential dangers, including fetal teratogenesis, hypoglycemia, hyperbilirubinemia, and polycythemia (Sutherland et al., 1974; Langer et al., 2000; Kavitha et al., 2013). This result demonstrates critical challenges when LLMs are applied to specialized, high-stakes applications, such as medicine, finance, and law.

**Geographic and Temporal Context** We also found that several problems require geographic and/or temporal context, departing from conventional question answering datasets. For instance, the problem in Fig. 7 requires Japan-specific knowledge. Open-book approaches or retrieval augmentation can be used to further improve the performance on these problems (Kasai et al., 2022c). While evaluation on geographic or temporal context is not the main focus of the Igaku QA benchmark, it is one of the challenges that large-scale question answering systems face in real-world applications (Zhang and Choi, 2021; Jang et al., 2022a,b; Liska et al., 2022; Kasai et al., 2022c).

**GPT vs. Medical Students** Fig. 8 compares the student accuracy (the ratio of the students who select the correct choice(s)) and the GPT-4 result (green: correct; red: wrong) for each problem from 2022. We find a correlation between the student accuracy and the likelihood of a correct prediction, suggesting that GPT-4 struggles on questions that are also difficult for humans. We see similar patterns for the other models (Appendix §C).

| Model | 2018 (Req./Gen./P.) | 2019 | 2020 | 2021 | 2022 | 2023 |
|---|---|---|---|---|---|---|
| ChatGPT | 123 / 143 / 1 | 100 / 150 / 5 | 118 / 148 / 2 | 143 / 154 / 3 | 124 / 163 / 2 | 120 / 140 / – |
| ChatGPT-EN | 123 / 158 / 2 | 117 / 157 / 3 | 116 / 147 / 2 | 110 / 167 / 0 | 140 / 187 / 1 | 142 / 159 / – |
| GPT-3 | 105 / 104 / 5 | 93 / 117 / 5 | 97 / 111 / 4 | 94 / 109 / 3 | 106 / 111 / 6 | 86 / 113 / – |
| GPT-4 | 161 / 221 / 0 | 170 / 215 / 1 | 168 / 219 / 0 | 173 / 225 / 0 | 164 / 228 / 1 | 170 / 221 / – |
| Student Majority | 196 / 276 / 0 | 196 / 274 / 0 | 195 / 276 / 0 | 200 / 277 / 0 | 195 / 287 / 0 | – |
| Total | 200 / 299 / 33 | 200 / 296 / 40 | 197 / 299 / 26 | 200 / 300 / 26 | 197 / 297 / 26 | 200 / 295 / – |
| Passing Score | 160 / 208 / 3 | 160 / 209 / 3 | 158 / 217 / 3 | 160 / 209 / 3 | 157 / 214 / 3 | 160 / 220 / – |

Table 1: Results on the Japanese medical licensing examinations from 2018 through 2023. In each cell, Req. / Gen. / P. indicate the score on the required section, the score on the general section, and the number of selected prohibited choices (禁忌肢), respectively. Dashes mark statistics that were not yet available for 2023.

## 4 Related Work

We evaluated GPT LLM APIs on the Japanese national medical licensing examinations that students take at the end of their six-year medical school education. Here we discuss the connections to well-studied clinical natural language processing (for non-English languages in particular), multilingual language modeling, and open-domain question answering.

**Clinical Natural Language Processing Beyond English** Similar to many other applications of natural language processing (NLP), English is by far the most resource-rich language in clinical NLP (Neveol et al., 2018).
For example, many advanced NLP tools, such as part-of-speech taggers (Smith et al., 2005; Tsuruoka et al., 2005; Divita et al., 2006, _inter alia_), are developed for biomedical applications in the English language. Some efforts in clinical NLP for non-English languages include: core NLP models and pipelines, e.g., parsing (Nishimoto et al., 2008), abbreviation/vocabulary expansion (Shinohara et al., 2013; Ahltorp et al., 2016), question answering (Ito et al., 2016), and pretrained transformers (Wada et al., 2020; Kawazoe et al., 2021) for Japanese biomedical text; datasets and resources (Rebholz-Schuhmann et al., 2013; Neveol et al., 2014; Aramaki et al., 2014; Kors et al., 2015); and crosslingual transfer (Deleger et al., 2009; Papaioannou et al., 2022). As LLMs and generative models become increasingly powerful and popular among English speakers and speakers of other languages like Japanese, evaluations of these models should be diversified accordingly. Benchmarks that have been developed to assess the qualifications and skills of human experts, such as bar or medical licensing examinations, can be useful in this regard. For a more comprehensive survey of clinical NLP in languages other than English, see Neveol et al. (2018).

**Multilingual Language Models** Much recent work on multilingual NLP has hypothesized that although each language is unique, different languages manifest similar characteristics (e.g., morphological, lexical, syntactic) that can be exploited by training a single, _polyglot_ model with data from multiple languages (Ammar, 2016). This polyglot approach has proven successful in various NLP tasks, including syntactic dependency parsing (Ammar et al., 2016), semantic role labeling (Mulcaire et al., 2018), named entity recognition (Xie et al., 2018), and language modeling for phonetic sequences (Tsvetkov et al., 2016) and for speech recognition (Ragni et al., 2016). More recently, researchers developed multilingual pretrained language models
(Mulcaire et al., 2019, 2021; Xue et al., 2021; Liu et al., 2020) that can be used for machine translation or crosslingual transfer in downstream tasks. Though there are variants that use crosslingual supervision (e.g., Lample and Conneau, 2019), many of these polyglot models can benefit from joint training on different languages without any explicit supervision. We suspect that similar polyglot language modeling is happening in LLMs, such as ChatGPT and GPT-4, which we tested on our Igaku QA benchmark in Japanese, a language typologically distant from English.

**Open-Domain and Multilingual Question Answering** Much prior work proposed datasets for open-domain QA in English and beyond (Clark et al., 2020; Asai et al., 2021, 2022; Longpre et al., 2021; Zhang et al., 2021). Several works pointed out the problem of translation-based question answering evaluations (Clark et al., 2020; Asai et al., 2021): questions raised mainly by English speakers can diverge from the information needs of speakers of other languages. For instance, these translation-based benchmarks can overly represent English-centric topics, such as American politics, sports, and culture. To mitigate this English-centric problem, some datasets only sample questions from native speakers of each language (Clark et al., 2020). Consistent with such data creation methods, our Igaku QA consists of problems that are written by native Japanese speakers to evaluate the qualifications and skills for medical practice in the country.

Figure 8: Student (test taker) accuracy vs. GPT-4 results. All problems from 2022 are sorted by the student accuracy, and the bar is green when GPT-4 predicts the correct choice(s) and red otherwise. We see a correlation between the student accuracy and the likelihood of a correct prediction, and similar patterns for the other models (Appendix §C).

## 5 Conclusion

We presented our evaluations of the GPT APIs on the Japanese medical licensing examinations from 2018 through 2023. The newest model, GPT-4, outperforms the others and manages to pass the examinations. Through our benchmark, we highlighted several important limitations of the current LLM APIs when they are applied to a specialized domain in Japanese, a language typologically distant from English. We open-source our benchmark as Igaku QA, as well as the model outputs and meta information, for future research.

## Limitations

This work evaluates large language models on Japanese medical licensing examinations. We highlight several core limitations of our evaluations: **reproducibility and potential data leakage**, **language coverage**, and **scope of evaluation**. First, as our experiments are performed using black-box LLM APIs, our results are not fully reproducible, and the results may change with updates to the APIs. Further, since the language model training data and setups are not well documented, there are potential risks of data leakage that overestimate the performance of LLMs. To mitigate these issues, we release all model outputs and experimental settings as well as the Igaku QA benchmark. This way, if there is any update in the APIs, we can easily update our results and analyze changes in behaviors after the update. **We have also included results from the current year, which we believe postdates the training data of GPT-4, to address potential data leakage. We observed performance consistent with the previous years.** Clearly, our benchmark is limited to the Japanese language and the Japanese medical licensing examination. It is an important research avenue to explore evaluations in more languages and domains.
Nonetheless, evaluation in the medical domain requires expertise, including knowledge specific to the country, its medical system, and standard medical practice. As discussed in this paper, there are potential risks if benchmarks are simply translated to various languages. The second author of this work is a doctor in a Japanese hospital, and such interdisciplinary efforts are necessary. Lastly, we note limitations in the scope of our evaluations. For example, we did not use image information during evaluations because the current OpenAI LLM APIs do not support image input. While some problems with images can be solved based solely on the problem text, many problems with images (and, of course, medical practice in general) need multimodal reasoning. We leave it to future work to test models in multimodal settings. Despite these challenges, we believe that it is important to benchmark black-box LLMs; they are increasingly used by people around the world across various disciplines. We hope that our evaluations and the Igaku QA benchmark will contribute to a better understanding of their behaviors, failures, and potential risks and benefits in diverse areas.

## In Memory of Professor Dragomir Radev

The day after I completed this manuscript, I was eagerly awaiting your usual email and feedback. To my great shock, I received the unexpected and sad news of your sudden passing. I went to your office at Yale to leave white lilies, which I believe symbolize the purity of your life-long commitment to mentorship, education, and research. LILY (Language, Information, and Learning at Yale) is also the name of your NLP lab at Yale University. I am extremely fortunate to have been part of the LILY lab since its beginning in Spring 2017.

I still vividly remember the day I visited your office for the first time. At the time, I was in my senior year with almost no prior research experience. Despite this, you kindly offered to mentor me on my research project. After graduation, I sought your advice as to what I should do next. Soon after, you and Professor Bob Frank from Yale Linguistics very kindly secured funding and offered me a research assistant position. This experience became the foundation of my NLP research career. During my Ph.D. at the University of Washington, we continued to meet regularly and collaborate on many exciting projects.

Among many other things, your attitude towards research has always struck me as passionate and open-minded. The field of NLP has experienced many changes since I started. We used to talk a lot about building core NLP models using LSTMs. Now, we are seeing tremendous progress from large language models. You always showed great enthusiasm for the latest advancements and how they are transforming the way we approach NLP problems. Your passion for this field was contagious. You consistently encouraged me to be open to new ideas, even when I was skeptical or anxious about new directions. As you led by example, no matter how the field changes, researchers have a responsibility to demonstrate the limitations and potential of new technologies for society. As one of the researchers who were extremely lucky to have you as an advisor, I feel obliged to pay it forward by continuing to support younger generations of scholars. Thank you so much for the amazing six years. May you find eternal rest and peace.

## Acknowledgements

We thank Noriyuki Kojima and Koji Shiono for their helpful feedback on this work.
2309.13129
AntiBARTy Diffusion for Property Guided Antibody Design
Over the past decade, antibodies have steadily grown in therapeutic importance thanks to their high specificity and low risk of adverse effects compared to other drug modalities. While traditional antibody discovery is primarily wet lab driven, the rapid improvement of ML-based generative modeling has made in-silico approaches an increasingly viable route for discovery and engineering. To this end, we train an antibody-specific language model, AntiBARTy, based on BART (Bidirectional and Auto-Regressive Transformer) and use its latent space to train a property-conditional diffusion model for guided IgG de novo design. As a test case, we show that we can effectively generate novel antibodies with improved in-silico solubility while maintaining antibody validity and controlling sequence diversity.
Jordan Venderley
2023-09-22T18:30:50Z
http://arxiv.org/abs/2309.13129v1
# AntiBARTy Diffusion for Property Guided Antibody Design

###### Abstract

Over the past decade, antibodies have steadily grown in therapeutic importance thanks to their high specificity and low risk of adverse effects compared to other drug modalities. While traditional antibody discovery is primarily wet-lab driven, the rapid improvement of ML-based generative modeling has made in-silico approaches an increasingly viable route for discovery and engineering. To this end, we train an antibody-specific language model, AntiBARTy, based on BART (Bidirectional and Auto-Regressive Transformer) and use its latent space to train a property-conditional diffusion model for guided IgG de novo design. As a test case, we show that we can effectively generate novel antibodies with improved in-silico solubility while maintaining antibody validity and controlling sequence diversity.

## 1 Introduction

Industrial antibody discovery traditionally relies on phage display-based libraries or hybridoma technology using transgenic mice in order to obtain target-specific binders for antigens of interest. Once isolated and sequenced, these high-affinity antibodies are used as a starting point for lead optimization, in which properties important for e.g. safety, developability, and manufacturability are also optimized while attempting to either maintain or further improve affinity. While advancements in next-generation sequencing (NGS) technology have enabled high-throughput sequence determination, different approaches for sequencing/sorting come with tradeoffs between information completeness and throughput [11; 25]. In practice, despite throughput limitations, hybridoma-derived, single-cell sorted antibodies constitute most of the candidates selected from discovery for further engineering, and much of the diversity offered by e.g. bulk repertoires is often under-sampled or not fully utilized.

Recent progress in machine learning is well-poised to take advantage of the wealth of data offered by NGS. By training on large sequence corpora, generative models can build strong priors on sequence space that can then be effectively sampled. The ability to condition this sampling on biophysical properties, targets, or intra-complex chains offers a powerful route for augmenting the discovery process. Since property data is often limited in volume, it is advantageous to construct this prior distribution via pretraining and then guide the sampling process towards favorable property modes by bootstrapping another generative model onto the frozen latent space. This gives rise to the following strategy, in which we:

1. train an antibody-specific language model, AntiBARTy, based on BART (Bidirectional and Auto-Regressive Transformer), on large antibody sequence corpora; and
2. use its latent space to train a property-conditional diffusion model for classifier-free guided IgG de novo design.

Collectively, we refer to this approach as AntiBARTy Diffusion and demonstrate that it can effectively generate novel antibodies with improved (in-silico) properties.

## 2 Related Work

### Language Models

With the availability of massive antibody sequence databases such as the Observed Antibody Space (OAS) and the wide-spread success of transformers in language modeling [32], it is unsurprising that numerous works have trained transformer-based models for antibodies. Typically these fall into two categories: encoder-only architectures for representation learning [28; 14; 33; 22; 31; 23; 2] and decoder-only architectures for generative modeling [30].
BART (Bidirectional and Auto-Regressive Transformers) offers a less traditional approach and generalizes the encoder-only and decoder-only architectures [15]. It conceptualizes pre-training as a decorruption task in which corrupted sequences are fed to the encoder and decorrupted with the decoder. These corruptions can be arbitrary but canonically consist of token masking, deletion, insertion, and infilling; in the large-corruption limit, BART reduces to a pure (decoder-only) language model.

### Diffusion Models

Diffusion models [10] have risen to prominence through their SOTA performance in image generation and their impressive conditioning mechanisms [26; 24; 29]. In protein design, structure-based SE(3)-equivariant diffusion models have seen great experimental success, designing strong de novo binders without any experimental optimization, e.g., mutagenesis [34]. Joint sequence- and structure-based diffusion has also been proposed for general proteins [16]. While some diffusion models have been explored for antibodies [19], experimental success is much more difficult in this domain due to the sparsity of publicly available antibody structures and the low structural resolution but high sequence variability of the complementarity-determining region (CDR) loops that drive antigen binding. Unlike structures, there are billions of antibody sequences publicly available [21]. Melding the guidance capabilities of diffusion models with strong transformer-based priors on sequences offers the potential for computationally cheap and effective conditioning. Diffusion in the latent space of BART has been previously explored for natural language modeling, but to our knowledge not for any protein-related tasks [18]. Compared to this work, we propose the use of pooled embeddings for conditioning, which we found yielded better downstream sequence quality, and we also leverage classifier-free guidance during sampling.

For sequence-based diffusion of antibodies, a recent work proposed a form of controllable, categorical diffusion with Bayesian optimization for fixed-length sequence design using an allotted edit budget. Excitingly, it reports experimental success in improving antibody binding affinities as part of lead optimization [6]. While our work, AntiBARTy Diffusion, was not designed to work within a fixed edit budget, it does allow for variable-sequence-length guidance. Since indels play an important role in affinity maturation, it is natural to include them in the design process, and they are a native component of AntiBARTy through indel-type corruption [35]. Additionally, our use of classifier-free guidance does not require an explicit discriminator, which can in some cases be difficult to train [9].

## 3 Methods

### AntiBARTy

We train a BART-style transformer [15] on all human IgG heavy and light sequences extracted from the Observed Antibody Space, totaling 254M heavy-chain Fvs and 342M light-chain Fvs [21]. In preprocessing, we remove chains outside the range of [100, 140] amino acids as well as any chains with unknown (X) amino acids. For added diversity, we augment this dataset with 28M similarly preprocessed sequences from UniProtKB [1]. All sequences are prepended with either a <heavy>, <light>, or <protein> tag and appended with an <EOS> token. Both encoder and decoder contain 6 attention layers, with the model totaling roughly 16M parameters. Sequence corruption (masking, indels, infilling) is performed on-the-fly during training, in alignment with best practices [17].
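As an illustration, here is a minimal sketch of such on-the-fly corruption applied to an amino-acid string; the corruption rate, its equal split across mask/delete/insert, and the helper names are our assumptions, and the actual AntiBARTy recipe additionally includes span infilling.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
MASK = "#"  # stand-in for the tokenizer's <mask> symbol

def corrupt(seq: str, p: float = 0.15, rng: random.Random = random.Random(0)) -> str:
    """Randomly mask, delete, or insert residues, BART-style."""
    out = []
    for aa in seq:
        r = rng.random()
        if r < p / 3:
            out.append(MASK)                     # token masking
        elif r < 2 * p / 3:
            continue                             # token deletion (an indel)
        elif r < p:
            out.append(rng.choice(AMINO_ACIDS))  # token insertion (an indel)
            out.append(aa)
        else:
            out.append(aa)
    return "".join(out)

print(corrupt("EVQLVESGGGLVQPGGSLRLSCAAS"))
```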
For accelerated performance we employ Flash Attention [3] and use torch's DistributedDataParallel to parallelize across 4 Nvidia A100 GPUs. We train for 6 epochs over the course of 6 days. In order to mitigate known quality issues with OAS (e.g., missing portions near the N-terminus), after pretraining we briefly fine-tune our model on the paired subset of OAS, which is of higher quality. This effectively realigns the distributional properties of the model and promotes valid antibody generation. Importantly, during fine-tuning we also modify the cross-attention mechanism to operate on max-pooled embeddings over the sequence dimension of the encoder output. This deviates from previous work that addressed the variable-length nature of natural language by sampling from empirical length distributions during inference [18]. For our case, we found that the use of pooled embeddings led to improved sequence quality during sampling, especially when used in conjunction with diffusion-generated encodings. After fine-tuning our language model, we freeze the parameters.

As an aside, we note that the autoregressive nature of BART gives us access to exact likelihoods instead of the quasi-likelihoods of encoder-only architectures like BERT. This is especially advantageous in the computation of evo-velocities for repertoire analysis, since it can properly handle correlated indels and mutations [28; 8]. We will explore this in future work.

### AntiBARTy Diffusion

Once the language model has been frozen, we work with the normalized, continuous latent space offered by the encoder to train a property-conditional denoising diffusion probabilistic model (DDPM) [10]. Our diffusion model uses a U-Net backbone [27] with a depth of 3 layers and fuses the autoencoded input with learned class and time embeddings during upsampling. It contains roughly 3M parameters. We jointly train the conditional and unconditional models using the AntiBARTy embeddings from the previous AntiBARTy fine-tuning set (unconditional) and those from a property dataset (conditional) of in-silico solubility scores calculated using Protein-Sol [7] for a subset of VH chains in the paired OAS subset. We identify low (\(<0.45\)) and high (\(>0.7\)) solubility classes and use roughly 20k samples from each class for training. During training we mask out the property class embeddings with a probability of 0.1, and to address the conditional/unconditional class imbalance we upsample the conditional training set by a factor of 10. A schematic of the full AntiBARTy architecture is provided in Fig. 1.

Figure 1: Schematic of the architecture used for AntiBARTy Diffusion.

Once trained, we may effectively generate new antibodies by drawing a random latent vector from a standard multivariate normal distribution, denoising à la classifier-free guidance with a chosen guidance strength [9], and decoding with our AntiBARTy decoder. The decoder input is initialized with a <heavy> token. During decoding, we opted to decode greedily using Gumbel sampling with temperature 0.1 [12]. A sketch of this guided sampling loop is given below.
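In the following schematic sketch, the `unet` callable (predicting noise from the latent, the timestep, and the class), its signature, and the linear beta schedule are stand-ins rather than the trained 3M-parameter model, while the noise combination in the middle is the standard classifier-free guidance rule of [9].

```python
import torch

@torch.no_grad()
def cfg_sample(unet, cond, w, shape, T=1000):
    """Denoise a latent drawn from N(0, I) with classifier-free guidance of strength w."""
    betas = torch.linspace(1e-4, 2e-2, T)
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)
    z = torch.randn(shape)
    for t in reversed(range(T)):
        eps_c = unet(z, t, cond)           # conditional noise estimate
        eps_u = unet(z, t, None)           # unconditional (class embedding masked out)
        eps = eps_u + w * (eps_c - eps_u)  # classifier-free guidance
        # DDPM posterior mean, then add noise except at the final step
        z = (z - betas[t] / torch.sqrt(1.0 - abar[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)
    return z  # to be decoded by the frozen AntiBARTy decoder, seeded with a <heavy> token
```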
## 4 Results

To evaluate the quality of sequences generated by our language model and to investigate the effect of fine-tuning, we fine-tuned a separate version of AntiBARTy to generate heavy or light chains only, conditioning on the <heavy> or <light> token in the encoder. Sampling 1k sequences, we observe that the model can successfully recapitulate the statistics of the high-quality training set despite the abundance of N-terminal truncations in the full dataset used during pretraining; see Fig. 2(a). As a coarse quality check, we pass these sequences through ANARCI [5] and observe that they are all classified appropriately as heavy or light chains, with the distributions of assigned germline species matching that of the train set (predominantly human but occasionally non-human for the light chains). All generated sequences are unique from each other, and more detailed comparisons regarding distance from the train set are discussed in our evaluation of AntiBARTy Diffusion below.

To evaluate the property-guided de novo design capabilities of AntiBARTy Diffusion, we use it to generate low/high-solubility sequences as determined by Protein-Sol [7] using the approach described above. In Fig. 2(b) we plot the solubility distributions of 5k unguided and 5k guided low/high-solubility class-conditional samples. The unconditional distribution matches the full training distribution (not shown), and the low/high-solubility samples exhibit marked differences in solubility over the unconditioned distribution. Interestingly, a larger guidance strength was required for the low-solubility samples in order to produce a shift in solubility of similar magnitude to their high-solubility counterparts. An ANARCI quality check confirms all synthetic sequences as heavy chains, with more than 99.9% assigned a human germline. Similarly, more than 99.9% of the generated samples are unique and do not already exist in the solubility training set. We subsequently focus on the high-solubility guided and unconditioned samples. For each sample, we find the closest sequence (in the Levenshtein sense) from the solubility dataset and plot the distance distributions in Fig. 2(c).

Figure 2: (a) Length distributions of heavy-chain conditional and light-chain conditional samples from AntiBARTy. (b) Distribution of unconditional and low/high-solubility conditional samples from AntiBARTy Diffusion. (c) Distribution of the distance of each AntiBARTy Diffusion sample to the closest sequence in the solubility training set.

In order to explore mode coverage, we use UMAP to embed our diffusion-generated encodings into 2D and plot the unconditional (Fig. 3(a)) and high-solubility guided (Fig. 3(b)) samples on top of the training set colored by solubility. We find that the high-solubility guided samples successfully avoid the low-solubility modes and aggregate in the high-solubility regions of the latent space.

Figure 3: AntiBARTy Diffusion-generated embeddings of unconditional (a) and high-solubility conditional (b) samples projected into 2D using UMAP. The training set is also shown and colored according to solubility.

## 5 Conclusion

We proposed AntiBARTy Diffusion for property-guided de novo antibody design and demonstrated its success in property guidance for in-silico solubilities. In future work, we plan to experimentally validate our approach and employ it for B-cell receptor repertoire analysis. We also plan to extend our method to enable infilling and the modification of existing antibodies using a combination of order-agnostic decoding [4] and more sophisticated control mechanisms [13; 36; 20].
2309.14646
Concentration of dimension in extremal points of left-half lines in the Lagrange spectrum
We prove that for any $\eta$ that belongs to the closure of the interior of the Markov and Lagrange spectra, the sets $k^{-1}((-\infty,\eta])$ and $k^{-1}(\eta)$, which are the sets of irrational numbers with best constant of Diophantine approximation bounded by $\eta$ and exactly $\eta$ respectively, have the same Hausdorff dimension. We also show that, as $\eta$ varies in the interior of the spectra, this Hausdorff dimension is a strictly increasing function.
Carlos Gustavo Moreira, Christian Camilo Silva Villamil
2023-09-26T03:55:43Z
http://arxiv.org/abs/2309.14646v2
# Concentration of dimension in extremal points of left-half lines in the Lagrange spectrum

###### Abstract.

We prove that for any \(\eta\) that belongs to the closure of the interior of the Markov and Lagrange spectra, the sets \(k^{-1}((-\infty,\eta])\) and \(k^{-1}(\eta)\), which are the sets of irrational numbers with best constant of Diophantine approximation bounded by \(\eta\) and exactly \(\eta\), respectively, have the same Hausdorff dimension. We also show that, as \(\eta\) varies in the interior of the spectra, this Hausdorff dimension is a strictly increasing function.

Key words and phrases: Hausdorff dimension, horseshoes, Lagrange spectrum, surface diffeomorphisms. The first author was partially supported by CNPq and FAPERJ. The second author was partially supported by CNPq and Viana's Louis D. prize.

Given \(\alpha\in\mathbb{R}\setminus\mathbb{Q}\), set

\[k(\alpha)=\sup\left\{k>0:\left|\alpha-\frac{p}{q}\right|<\frac{1}{kq^{2}}\text{ has infinitely many rational solutions }\frac{p}{q}\right\}=\limsup_{p\in\mathbb{Z},\,q\in\mathbb{N},\,p,q\to\infty}|q(q\alpha-p)|^{-1}\in\mathbb{R}\cup\{\infty\},\]

the best constant of Diophantine approximation of \(\alpha\). The _classical Lagrange spectrum_ is the set

\[L=\{k(\alpha):\alpha\in\mathbb{R}\setminus\mathbb{Q},\,k(\alpha)<\infty\},\]

and the _classical Markov spectrum_ is the set

\[M=\left\{\left(\inf_{(x,y)\in\mathbb{Z}^{2}-\{(0,0)\}}|q(x,y)|\right)^{-1}<\infty:q(x,y)=ax^{2}+bxy+cy^{2},\ b^{2}-4ac=1\right\},\]

which consists of the reciprocals of the minimal absolute values over non-trivial integer vectors \((x,y)\in\mathbb{Z}^{2}-\{(0,0)\}\) of indefinite binary quadratic forms \(q(x,y)\) with unit discriminant.

Perron gave in [19] the following dynamical characterizations of these classical spectra in terms of symbolic dynamical systems. Given a bi-infinite sequence \(\theta=(a_{n})_{n\in\mathbb{Z}}\in(\mathbb{N}^{*})^{\mathbb{Z}}\), let

\[\lambda_{i}(\theta):=[0;a_{i+1},a_{i+2},\dots]+a_{i}+[0;a_{i-1},a_{i-2},\dots].\]

The _Markov value_ \(m(\theta)\) of \(\theta\) is \(m(\theta):=\sup_{i\in\mathbb{Z}}\lambda_{i}(\theta)\) and the _Lagrange value_ \(\ell(\theta)\) of \(\theta\) is \(\ell(\theta):=\limsup_{i\to\infty}\lambda_{i}(\theta)\). Then the Markov spectrum is the set

\[M=\{m(\theta)<\infty:\theta\in(\mathbb{N}^{*})^{\mathbb{Z}}\}\]

and the Lagrange spectrum is the set

\[L=\{\ell(\theta)<\infty:\theta\in(\mathbb{N}^{*})^{\mathbb{Z}}\}.\]

It follows from these characterizations that \(M\) and \(L\) are closed subsets of \(\mathbb{R}\) and that \(L\subset M\). Markov showed in [11] that

\[L\cap(-\infty,3)=M\cap(-\infty,3)=\left\{k_{1}=\sqrt{5}<k_{2}=2\sqrt{2}<k_{3}=\frac{\sqrt{221}}{5}<\dots\right\},\]

where \(k_{n}^{2}\in\mathbb{Q}\) for every \(n\in\mathbb{N}\) and \(k_{n}\to 3\) as \(n\to\infty\).
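As a quick numerical illustration of Perron's characterization, the following sketch computes \(m(\theta)\) for a periodic sequence, where the supremum is attained over one period; truncating the continued fractions at depth \(K\) introduces an error of order \(2^{1-K}\) (cf. Lemma 2.1 below). The constant sequences \(\overline{1}\) and \(\overline{2}\) recover \(k_{1}=\sqrt{5}\) and \(k_{2}=2\sqrt{2}\) above.

```python
from fractions import Fraction

def cf(digits):
    """Value of the finite continued fraction [0; d_1, ..., d_n]."""
    x = Fraction(0)
    for d in reversed(digits):
        x = 1 / (d + x)
    return x

def markov_value(period, K=60):
    """m(theta) for the periodic bi-infinite sequence with the given period."""
    n = len(period)
    best = 0.0
    for i in range(n):
        fwd = [period[(i + 1 + j) % n] for j in range(K)]  # a_{i+1}, a_{i+2}, ...
        bwd = [period[(i - 1 - j) % n] for j in range(K)]  # a_{i-1}, a_{i-2}, ...
        best = max(best, float(cf(fwd)) + period[i] + float(cf(bwd)))
    return best

print(markov_value([1]))  # 2.2360... = sqrt(5)   = k_1
print(markov_value([2]))  # 2.8284... = 2*sqrt(2) = k_2
```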
M. Hall proved in [6] that

\[C_{4}+C_{4}=[\sqrt{2}-1,4(\sqrt{2}-1)],\]

where, for each positive integer \(N\), \(C_{N}\) is the set of numbers in \([0,1]\) whose continued fraction coefficients are bounded by \(N\), i.e., \(C_{N}=\{x=[0;a_{1},\dots,a_{n},\dots]\in[0,1]:a_{i}\leq N,\ \forall i\geq 1\}\). Together with Perron's characterizations, this implies that \(L\) and \(M\) contain the whole half-line \([6,+\infty)\). Freiman determined in [5] the precise beginning of Hall's ray (the biggest half-line contained in \(L\)), which is

\[\frac{2221564096+283748\sqrt{462}}{491993569}=4.52782956616\ldots\]

The first author proved in [15] several results on the geometry of the Markov and Lagrange spectra, for example that the map \(d:\mathbb{R}\rightarrow[0,1]\) given by

\[d(\eta)=HD(L\cap(-\infty,\eta))=HD(M\cap(-\infty,\eta))\]

is continuous, surjective and such that \(d(3)=0\) and \(d(\sqrt{12})=1\). Moreover,

\[d(\eta)=\min\{1,2D(\eta)\},\]

where \(D(\eta)=HD(k^{-1}((-\infty,\eta)))=HD(k^{-1}((-\infty,\eta]))\) is a continuous surjective function from \(\mathbb{R}\) to \([0,1)\), and

\[\lim_{\eta\rightarrow\infty}HD(k^{-1}(\eta))=1.\]

Recently, the estimate

\[t_{1}^{*}:=\sup\{s\in\mathbb{R}:d(s)<1\}=3.334384\dots\]

was given in [17]. In particular, any \(t\in\mathbb{R}\) that belongs to the interior of the Markov and Lagrange spectra must satisfy \(t>t_{1}^{*}\).

Now, let \(\varphi:S\to S\) be a diffeomorphism of a \(C^{\infty}\) compact surface \(S\) with a mixing horseshoe \(\Lambda\), and let \(f:S\rightarrow\mathbb{R}\) be a differentiable function. For \(x\in S\), following the above characterization of the classical spectra, we define the _Lagrange value_ of \(x\) associated to \(f\) and \(\varphi\) as the number \(\ell_{\varphi,f}(x)=\limsup_{n\rightarrow\infty}f(\varphi^{n}(x))\), and the _Markov value_ of \(x\) associated to \(f\) and \(\varphi\) as the number \(m_{\varphi,f}(x)=\sup_{n\in\mathbb{Z}}f(\varphi^{n}(x))\). The sets

\[L_{\varphi,f}(\Lambda)=\{\ell_{\varphi,f}(x):x\in\Lambda\}\]

and

\[M_{\varphi,f}(\Lambda)=\{m_{\varphi,f}(x):x\in\Lambda\}\]

are called the _Lagrange spectrum_ of \((\varphi,f,\Lambda)\) and the _Markov spectrum_ of \((\varphi,f,\Lambda)\).

It turns out that dynamical Markov and Lagrange spectra associated to hyperbolic dynamics are closely related to the classical Markov and Lagrange spectra. Several results on Markov and Lagrange dynamical spectra associated to horseshoes in dimension 2, analogous to previously known results on the classical spectra, were obtained recently: in [18] it is shown that typical dynamical spectra associated to horseshoes with Hausdorff dimension larger than one have non-empty interior (as the classical ones do). In [16] it is shown that typical Markov and Lagrange dynamical spectra associated to horseshoes have the same minimum, which is an isolated point in both spectra and is the image by the function of a periodic point of the horseshoe. In [12], in the context of _conservative_ diffeomorphisms, it is proven (as a generalization of the results in [4]) that, for typical choices of the dynamics and of the function, the intersections of the corresponding dynamical Markov and Lagrange spectra with half-lines \((-\infty,t)\) have the same Hausdorff dimension, and this defines a continuous function of \(t\) whose image is \([0,\min\{1,D\}]\), where \(D\) is the Hausdorff dimension of the horseshoe.
For more information and results on classical and dynamical Markov and Lagrange spectra, we refer to the books [3] and [9].

In this paper, we use the fact that dynamical Markov and Lagrange spectra associated to conservative horseshoes on surfaces are natural generalizations of the classical Markov and Lagrange spectra. In fact, the classical Markov and Lagrange spectra are not compact sets, so they cannot be dynamical spectra associated to horseshoes. However, in [7] it is shown that, for any \(N\geq 2\) with \(N\neq 3\), the initial segments of the classical spectra up to \(\sqrt{N^{2}+4N}\) (i.e., \(M\cap(-\infty,\sqrt{N^{2}+4N}]\) and \(L\cap(-\infty,\sqrt{N^{2}+4N}]\)) coincide with the sets \(M(N)\) and \(L(N)\) given, in the notation used in Perron's characterization of \(M\) and \(L\), by

\[M(N)=m(\Sigma(N))=\{m(\theta):\theta\in\Sigma(N)\}\]

and

\[L(N)=\ell(\Sigma(N))=\{\ell(\theta):\theta\in\Sigma(N)\},\]

where \(\Sigma(N):=\{1,2,\ldots,N\}^{\mathbb{Z}}\). It is also proved there that \(M(N)\) and \(L(N)\) are dynamical Markov and Lagrange spectra associated to a smooth real function \(f\) and to a horseshoe \(\Lambda(N)\) defined by a smooth conservative diffeomorphism \(\varphi\), and that they are naturally associated to continued fractions with coefficients bounded by \(N\). Here we use this relation between classical and dynamical spectra in order to better understand the fractal geometry (Hausdorff dimension) of the preimages of half-lines under the function \(k\). We can state our main result as:

**Theorem 1.1**.: _Define \(T:=int(L)=int(M)\). For any \(\eta\in\overline{T}\), \(D(\eta)=HD(k^{-1}(\eta))\), i.e.,_

\[HD(k^{-1}((-\infty,\eta)))=HD(k^{-1}((-\infty,\eta]))=HD(k^{-1}(\eta)).\]

_Even more,_

* _if_ \(\eta\) _is accumulated from the left by points of_ \(T\)_, then_ \[D(\eta)>D(t),\ \ \forall t<\eta;\]
* _if_ \(\eta\) _is accumulated from the right by points of_ \(T\)_, then_ \[D(\eta)<D(t),\ \ \forall t>\eta.\]

_In particular, \(D|_{X}\) is strictly increasing, where \(X\) is \(T\) or any interval contained in \(\overline{T}\)._

## 2. Preliminaries

### Continued fractions and regular Cantor sets

The continued fraction expansion of an irrational number \(\alpha\) is denoted by

\[\alpha=[a_{0};a_{1},a_{2},\dots]=a_{0}+\frac{1}{a_{1}+\frac{1}{a_{2}+\frac{1}{\ddots}}},\]

so that the Gauss map \(G:(0,1)\to[0,1)\), \(G(x)=\frac{1}{x}-\left\lfloor\frac{1}{x}\right\rfloor\), acts on continued fraction expansions by

\[G([0;a_{1},a_{2},\dots])=[0;a_{2},\dots].\]

For an irrational number \(\alpha=\alpha_{0}\in(0,1)\), the continued fraction expansion \(\alpha=[0;a_{1},\dots]\) is recursively obtained by setting \(a_{n}=\lfloor\alpha_{n}\rfloor\) and \(\alpha_{n+1}=\frac{1}{\alpha_{n}-a_{n}}=\frac{1}{G^{n}(\alpha_{0})}\). The rational approximations

\[\frac{p_{n}}{q_{n}}:=[0;a_{1},\dots,a_{n}]\in\mathbb{Q}\]

of \(\alpha\) satisfy the recurrence relations

\[p_{n}=a_{n}p_{n-1}+p_{n-2}\ \text{ and }\ q_{n}=a_{n}q_{n-1}+q_{n-2},\quad n\geq 0, \tag{2.1}\]

with the convention that \(p_{-2}=q_{-1}=0\) and \(p_{-1}=q_{-2}=1\).
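A direct transcription of the recurrence (2.1), as a sanity-check sketch (the function name is ours):

```python
from fractions import Fraction

def convergents(digits):
    """(p_n, q_n) for [0; a_1, ..., a_n], via the recurrence (2.1) with a_0 = 0."""
    p_prev, q_prev = 1, 0            # p_{-1}, q_{-1}
    p, q = 0, 1                      # p_0,  q_0
    out = []
    for a in digits:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        out.append((p, q))
    return out

cs = convergents([1, 2, 2, 2, 2, 2])
(p, q), (p1, q1) = cs[-1], cs[-2]
print(Fraction(p, q))             # 70/99, a convergent of [0;1,2,2,...] = 1/sqrt(2)
print(p * q1 - p1 * q)            # -1, since p_n q_{n-1} - p_{n-1} q_n = (-1)^{n-1}
print(Fraction(1, q * (q + q1)))  # the interval length |I(1,2,2,2,2,2)| defined below
```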
If \(0<a_{j}\leq N\) for all \(j\), it follows that

\[\frac{p_{n}}{N+1}\leq p_{n-1}\leq p_{n}\ \text{ and }\ \frac{q_{n}}{N+1}\leq q_{n-1}\leq q_{n},\quad n\geq 1.\]

Given a finite sequence \((a_{1},a_{2},\dots,a_{n})\in(\mathbb{N}^{*})^{n}\), we define

\[I(a_{1},a_{2},\dots,a_{n})=\{x\in[0,1]:x=[0;a_{1},a_{2},\dots,a_{n},\alpha_{n+1}],\ \alpha_{n+1}\geq 1\};\]

then, by (2.1), \(I(a_{1},a_{2},\dots,a_{n})\) is the interval with extremities \([0;a_{1},a_{2},\dots,a_{n}]=\frac{p_{n}}{q_{n}}\) and \([0;a_{1},a_{2},\dots,a_{n}+1]=\frac{p_{n}+p_{n-1}}{q_{n}+q_{n-1}}\), and so

\[|I(a_{1},a_{2},\dots,a_{n})|=\left|\frac{p_{n}}{q_{n}}-\frac{p_{n}+p_{n-1}}{q_{n}+q_{n-1}}\right|=\frac{1}{q_{n}(q_{n}+q_{n-1})},\]

because \(p_{n}q_{n-1}-p_{n-1}q_{n}=(-1)^{n-1}\). Also, for \((a_{0},a_{1},\dots,a_{n})\in(\mathbb{N}^{*})^{n+1}\) we set

\[I(a_{0};a_{1},\dots,a_{n})=\{x\in\mathbb{R}:x=[a_{0};a_{1},a_{2},\dots,a_{n},\alpha_{n+1}],\ \alpha_{n+1}\geq 1\};\]

clearly, as \(I(a_{0};a_{1},\dots,a_{n})=a_{0}+I(a_{1},a_{2},\dots,a_{n})\), we have

\[|I(a_{0};a_{1},\dots,a_{n})|=|I(a_{1},a_{2},\dots,a_{n})|. \tag{2.2}\]

An elementary result for comparing continued fractions is the following lemma.

**Lemma 2.1**.: _Let \(\alpha=[a_{0};a_{1},\dots,a_{n},a_{n+1},\dots]\) and \(\tilde{\alpha}=[a_{0};a_{1},\dots,a_{n},b_{n+1},\dots]\). Then:_

* \(|\alpha-\tilde{\alpha}|<1/2^{n-1}\)_;_
* _if_ \(a_{n+1}\neq b_{n+1}\)_, then_ \(\alpha>\tilde{\alpha}\) _if and only if_ \((-1)^{n+1}(a_{n+1}-b_{n+1})>0\)_._

The next lemma is from [15] (see Lemma A.1).

**Lemma 2.2**.: _If \(a_{0},a_{1},a_{2},\dots,a_{n},a_{n+1},\dots\) and \(b_{n+1},b_{n+2},\dots\) are positive integers bounded by \(N\in\mathbb{N}\) and \(a_{n+1}\neq b_{n+1}\), then_

\[|[a_{0};a_{1},a_{2},\dots,a_{n},a_{n+1},\dots]-[a_{0};a_{1},a_{2},\dots,a_{n},b_{n+1},\dots]|>c(N)/q_{n-1}^{2}>c(N)|I(a_{1},a_{2},\dots,a_{n})|\]

_for some positive constant \(c(N)\)._

For the sequel, the following application of Lemma 2.1 will also be useful.

**Lemma 2.3**.: _Given \(R,N\in\mathbb{N}\), let \(\beta^{1},\beta^{2},\beta^{3}\in\Sigma(N)^{+}:=\{1,2,\dots,N\}^{\mathbb{N}}\) be such that \([0;\beta^{1}]<[0;\beta^{2}]<[0;\beta^{3}]\), and suppose that two sequences \(\alpha=(\alpha_{n})_{n\in\mathbb{Z}}\) and \(\tilde{\alpha}=(\tilde{\alpha}_{n})_{n\in\mathbb{Z}}\) in \(\Sigma(N)\) satisfy \(\alpha_{0},\dots,\alpha_{2R+1}=\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{2R+1}\). Then for all \(j\leq 2R+1\) we have_

\[\lambda_{0}(\sigma^{j}(\dots,\alpha_{-2},\alpha_{-1};\alpha_{0},\dots,\alpha_{2R+1},\beta^{2}))<\max\{m(\dots,\alpha_{-2},\alpha_{-1};\alpha_{0},\dots,\alpha_{2R+1},\beta^{1}),m(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{2R+1},\beta^{3})\}+1/2^{R-1}.\]

Proof.: It is just an application of Lemma 2.1.
Indeed, for \(j\leq R+1\),

\[\lambda_{0}(\sigma^{j}(\dots,\alpha_{-1};\alpha_{0},\dots,\alpha_{2R+1},\beta^{2}))<\lambda_{0}(\sigma^{j}(\dots,\alpha_{-1};\alpha_{0},\dots,\alpha_{2R+1},\beta^{1}))+1/2^{R-1}\leq\max\{m(\dots,\alpha_{-1};\alpha_{0},\dots,\alpha_{2R+1},\beta^{1}),m(\dots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{2R+1},\beta^{3})\}+1/2^{R-1}.\]

For \(R+1<j\leq 2R+1\), if \([\alpha_{j};\dots,\alpha_{2R+1},\beta^{2}]<[\tilde{\alpha}_{j};\dots,\tilde{\alpha}_{2R+1},\beta^{3}]\), then

\[\lambda_{0}(\sigma^{j}(\dots,\alpha_{-2},\alpha_{-1};\alpha_{0},\dots,\alpha_{2R+1},\beta^{2}))<\lambda_{0}(\sigma^{j}(\dots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{2R+1},\beta^{3}))+1/2^{R}\leq\max\{m(\dots,\alpha_{-1};\alpha_{0},\dots,\alpha_{2R+1},\beta^{1}),m(\dots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{2R+1},\beta^{3})\}+1/2^{R}.\]

And for \(R+1<j\leq 2R+1\), if \([\alpha_{j};\dots,\alpha_{2R+1},\beta^{2}]<[\alpha_{j};\dots,\alpha_{2R+1},\beta^{1}]\), then

\[\lambda_{0}(\sigma^{j}(\dots,\alpha_{-1};\alpha_{0},\dots,\alpha_{2R+1},\beta^{2}))<\lambda_{0}(\sigma^{j}(\dots,\alpha_{-1};\alpha_{0},\dots,\alpha_{2R+1},\beta^{1}))\leq\max\{m(\dots,\alpha_{-1};\alpha_{0},\dots,\alpha_{2R+1},\beta^{1}),m(\dots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{2R+1},\beta^{1})\}.\]

This proves the result.

We end this subsection with a definition.

**Definition 2.4**.: A set \(K\subset\mathbb{R}\) is called a \(C^{1+\alpha}\)-_regular Cantor set_, \(\alpha>0\), if there exist a collection \(\mathcal{P}=\{I_{1},I_{2},\dots,I_{r}\}\) of compact intervals and a \(C^{1+\alpha}\)-expanding map \(\psi\), defined in a neighbourhood of \(\cup_{1\leq j\leq r}I_{j}\), such that:

* \(K\subset\cup_{1\leq j\leq r}I_{j}\) and \(\cup_{1\leq j\leq r}\partial I_{j}\subset K\);
* for every \(1\leq j\leq r\), \(\psi(I_{j})\) is the convex hull of a union of \(I_{l}\)'s; for \(l\) sufficiently large, \(\psi^{l}(K\cap I_{j})=K\); and

\[K=\bigcap_{n\geq 0}\psi^{-n}(\bigcup_{1\leq j\leq r}I_{j}).\]

More precisely, we also say that the triple \((K,\mathcal{P},\psi)\) is a \(C^{1+\alpha}\)-regular Cantor set.

For example, in our context of sets of continued fractions, let, as before, \(G\) be the Gauss map and \(C_{N}=\{x=[0;a_{1},a_{2},\dots]:a_{i}\leq N,\ \forall i\geq 1\}\). Then

\[C_{N}=\bigcap_{n\geq 0}G^{-n}(I_{N}\cup\dots\cup I_{1}),\]

where \(I_{j}=[a_{j},b_{j}]\) with \(a_{j}=[0;j,\overline{1,N}]\) and \(b_{j}=[0;j,\overline{N,1}]\). That is, \(C_{N}\) is a regular Cantor set.
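A quick floating-point sketch of this example (the helper names and truncation depths are ours):

```python
import math

def G(x):
    """The Gauss map G(x) = 1/x - floor(1/x)."""
    return 1.0 / x - math.floor(1.0 / x)

def cf(digits):
    """Value of the finite continued fraction [0; d_1, ..., d_n]."""
    x = 0.0
    for d in reversed(digits):
        x = 1.0 / (d + x)
    return x

digits = [2, 1, 3, 1, 1, 2] * 8           # a truncated point of C_3
x = cf(digits)
print(abs(G(x) - cf(digits[1:])) < 1e-9)  # True: G shifts the digit sequence

N = 4
a_2 = cf([2] + [1, N] * 30)               # a_2 = [0; 2, 1, N, 1, N, ...]
b_2 = cf([2] + [N, 1] * 30)               # b_2 = [0; 2, N, 1, N, 1, ...]
print(a_2 < b_2)                          # True: I_2 = [a_2, b_2]
```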
### Results on Dynamical Markov and Lagrange spectra

Let \(\varphi:S\to S\) be a diffeomorphism of a \(C^{\infty}\) compact surface \(S\) with a mixing horseshoe \(\Lambda\) and let \(f:S\rightarrow\mathbb{R}\) be a differentiable function. Fix a Markov partition \(\{R_{a}\}_{a\in\mathcal{A}}\) with sufficiently small diameter, consisting of rectangles \(R_{a}\sim I_{a}^{s}\times I_{a}^{u}\) delimited by compact pieces \(I_{a}^{s}\), \(I_{a}^{u}\) of stable and unstable manifolds of certain points of \(\Lambda\); see [1, Theorem 2, page 172]. Recall that the stable and unstable manifolds of \(\Lambda\) can be extended to locally invariant \(C^{1+\alpha}\) foliations in a neighborhood of \(\Lambda\) for some \(\alpha>0\). Using these foliations, it is possible to define projections \(\pi_{a}^{u}:R_{a}\to I_{a}^{s}\times\{i_{a}^{u}\}\) and \(\pi_{a}^{s}:R_{a}\rightarrow\{i_{a}^{s}\}\times I_{a}^{u}\) of the rectangles into the connected components \(I_{a}^{s}\times\{i_{a}^{u}\}\) and \(\{i_{a}^{s}\}\times I_{a}^{u}\) of the stable and unstable boundaries of \(R_{a}\), where \(i_{a}^{u}\in\partial I_{a}^{u}\) and \(i_{a}^{s}\in\partial I_{a}^{s}\) are fixed arbitrarily. In this way, we obtain the unstable and stable Cantor sets

\[K^{u}=\bigcup_{a\in\mathcal{A}}\pi_{a}^{s}(\Lambda\cap R_{a})\ \text{ and }\ K^{s}=\bigcup_{a\in\mathcal{A}}\pi_{a}^{u}(\Lambda\cap R_{a}).\]

In fact, \(K^{u}\) and \(K^{s}\) are \(C^{1+\alpha}\) dynamically defined, associated to some expanding maps \(\psi_{s}\) and \(\psi_{u}\). The stable and unstable Cantor sets \(K^{s}\) and \(K^{u}\) are closely related to the fractal geometry of the horseshoe \(\Lambda\). For instance, it is well known that

\[HD(\Lambda)=HD(K^{s})+HD(K^{u})\]

and that in the conservative case

\[HD(K^{s})=HD(K^{u}).\]

The study of the intersections of the spectra with half-lines is related to the study of the fractal dimensions of the set

\[\Lambda_{t}=\bigcap_{n\in\mathbb{Z}}\varphi^{-n}(\{y\in\Lambda:f(y)\leq t\})=\{x\in\Lambda:m_{\varphi,f}(x)=\sup_{n\in\mathbb{Z}}f(\varphi^{n}(x))\leq t\}\]

for \(t\in\mathbb{R}\). Following this, we also consider the subsets \(\Lambda_{t}\) through their projections on the stable and unstable Cantor sets of \(\Lambda\):

\[K_{t}^{u}=\bigcup_{a\in\mathcal{A}}\pi_{a}^{s}(\Lambda_{t}\cap R_{a})\ \text{ and }\ K_{t}^{s}=\bigcup_{a\in\mathcal{A}}\pi_{a}^{u}(\Lambda_{t}\cap R_{a}).\]

The following result is shown in [12].

**Theorem 2.5**.: _Let \(r\geq 2\) and \(\varphi\in\text{Diff}^{2}(S)\) be a conservative diffeomorphism preserving a smooth form \(\omega\), and take \(\Lambda\) a mixing horseshoe of \(\varphi\). If \(f\in C^{r}(S,\mathbb{R})\) satisfies \(\nabla f(z)\neq 0\) for all \(z\in\Lambda\), then the functions_

\[t\mapsto HD(K_{t}^{u})\ \text{ and }\ t\mapsto HD(K_{t}^{s})\]

_are equal and continuous. Even more, one has_

\[HD(\Lambda_{t})=2HD(K_{t}^{u}).\]

### The horseshoe \(\Lambda(N)\)

Given an integer \(N\geq 2\), write \(\tilde{C}_{N}=\{1,2,\dots,N\}+C_{N}\) and define

\[\Lambda(N)=C_{N}\times\tilde{C}_{N}.\]

If \(x=[0;a_{1},a_{2},\dots]\) and \(y=[a_{0};a_{-1},a_{-2},\dots]\), we take \(\varphi:\Lambda(N)\rightarrow\Lambda(N)\) given by

\[\varphi(x,y)=(G(x),a_{1}+1/y)=([0;a_{2},a_{3},\dots],a_{1}+[0;a_{0},a_{-1},\dots]).\]

Also, we equip \(\Lambda(N)\) with the real map \(f(x,y)=x+y\). We note that \(\varphi\) can be extended to a \(C^{\infty}\)-diffeomorphism on a diffeomorphic copy of the 2-dimensional sphere \(\mathbb{S}^{2}\). Notice also that \(\varphi\) is conjugated to the restriction to \(C_{N}\times C_{N}\) of the map \(\psi:(0,1)\times(0,1)\rightarrow[0,1)\times(0,1)\) given by

\[\psi(x,y)=\left(G(x),\frac{1}{y+\lfloor 1/x\rfloor}\right),\]

and following [2] and [20] we know that \(\psi\) has an invariant measure equivalent to the Lebesgue measure. In particular, \(\varphi\) also has an invariant measure equivalent to the Lebesgue measure, and hence \(\varphi\) is conservative. Indeed, if \(\mathcal{S}=\{(x,y)\in\mathbb{R}^{2}\,|\,0<x<1,\ 0<y<1/(1+x)\}\) and \(T:\mathcal{S}\rightarrow\mathcal{S}\) is given by

\[T(x,y)=(G(x),x-x^{2}y),\]

then \(T\) preserves the Lebesgue measure in the plane.
If \(h:\mathcal{S}\rightarrow[0,1)\times(0,1)\) is given by \(h(x,y)=(x,y/(1-xy))\), then \(h\) is a conjugation between \(T\) and \(\psi\) (and thus \(\psi\) preserves the smooth measure \(h_{*}(\text{Leb})\)). For \(\Lambda(N)\) we have the Markov partition \(\{R_{a}\}_{a\in\mathcal{A}}\), where \(\mathcal{A}=\{1,2,\ldots,N\}\) and \(R_{a}\) is such that \(R_{a}\cap\Lambda(N)=C_{N}\times(C_{N}+a)=C_{N}\times C_{N}+(0,a)\). It is clear then that \(\varphi|_{\Lambda(N)}\) is topologically conjugate to \(\sigma:\Sigma(N)\rightarrow\Sigma(N)\) (via a map \(\Pi:\Lambda(N)\rightarrow\Sigma(N)\)), and that, in terms of sequences, \(f\) becomes \(\tilde{f}:\Sigma(N)\rightarrow\mathbb{R}\) given by

\[\tilde{f}(\theta)=[0;a_{1}(\theta),a_{2}(\theta),...]+a_{0}(\theta)+[0;a_{-1}(\theta),a_{-2}(\theta),...]=\lambda_{0}(\theta),\]

where \(\theta=(a_{i}(\theta))_{i\in\mathbb{Z}}\), and so

\[L_{\varphi,f}(\Lambda(N))=L(N)\ \ \text{and}\ \ M_{\varphi,f}(\Lambda(N))=M(N).\]

In this context, let \(\alpha=(a_{s_{1}},a_{s_{1}+1},...,a_{s_{2}})\in\mathcal{A}^{s_{2}-s_{1}+1}\) be any word, where \(s_{1},s_{2}\in\mathbb{Z},\ s_{1}<s_{2}\), and fix \(s_{1}\leq s\leq s_{2}\). Define then

\[R(\alpha;s):=\bigcap\limits_{m=s_{1}-s}^{s_{2}-s}\varphi^{-m}(R_{a_{m+s}}).\]

Note that if \(x\in R(\alpha;s)\cap\Lambda(N)\), then the symbolic representation of \(x\) is of the form \((...a_{s_{1}}...a_{s-1};a_{s},a_{s+1}...a_{s_{2}}...)\), where the entry immediately to the right of the semicolon occupies the \(0\)-th position. Finally, let us consider \(A_{N}=[0;\overline{N,1}]\) and \(B_{N}=[0;\overline{1,N}]\). As

\[NA_{N}+A_{N}B_{N}=1\ \text{and}\ B_{N}+B_{N}A_{N}=1,\]

we have \(A_{N}=\dfrac{B_{N}}{N}\). Thus \(B_{N}=\frac{-N+\sqrt{N^{2}+4N}}{2}\), \(A_{N}=\frac{-N+\sqrt{N^{2}+4N}}{2N}\), and then

\[\max f|_{\Lambda(N)}=2B_{N}+N=\sqrt{N^{2}+4N},\ \min f|_{\Lambda(N)}=2A_{N}+1=\frac{\sqrt{N^{2}+4N}}{N}.\]

## 3. Proof of the result

### Connection of subhorseshoes

For what follows, it will be useful to give the following definition.

**Definition 3.1**.: Given \(\Lambda^{1}\) and \(\Lambda^{2}\) subhorseshoes of a horseshoe \(\Lambda\) and \(t\in\mathbb{R}\), we say that \(\Lambda^{1}\)_connects_ with \(\Lambda^{2}\), or that \(\Lambda^{1}\) and \(\Lambda^{2}\)_connect_ before \(t\), if there exist a subhorseshoe \(\tilde{\Lambda}\subset\Lambda\) and some \(q<t\) with \(\Lambda^{1}\cup\Lambda^{2}\subset\tilde{\Lambda}\subset\Lambda_{q}\).

**Lemma 3.2**.: _Suppose \(\Lambda^{1}\) and \(\Lambda^{2}\) are subhorseshoes of \(\Lambda\) and that, for some \(x,y\in\Lambda\), we have \(x\in W^{u}(\Lambda^{1})\cap W^{s}(\Lambda^{2})\) and \(y\in W^{u}(\Lambda^{2})\cap W^{s}(\Lambda^{1})\).
If for some \(t\in\mathbb{R}\) it is true that_

\[\Lambda^{1}\cup\Lambda^{2}\cup\mathcal{O}(x)\cup\mathcal{O}(y)\subset\Lambda_{t},\]

_then for every \(\epsilon>0\), \(\Lambda^{1}\) and \(\Lambda^{2}\) connect before \(t+\epsilon\)._

Proof.: Take a Markov partition \(\mathcal{P}\) for \(\Lambda\) with diameter small enough that \(\max f|\bigcup\limits_{P\in\mathcal{R}}P<t+\epsilon\), where \(\mathcal{R}=\{P\in\mathcal{P}:P\cap(\Lambda^{1}\cup\Lambda^{2}\cup\mathcal{O}(x)\cup\mathcal{O}(y))\neq\emptyset\}\), and consider

\[\Lambda_{\mathcal{R}}=\bigcap\limits_{n\in\mathbb{Z}}\varphi^{-n}(\bigcup\limits_{P\in\mathcal{R}}P).\]

Evidently \(\Lambda^{1}\cup\Lambda^{2}\cup\mathcal{O}(x)\cup\mathcal{O}(y)\subset\Lambda_{\mathcal{R}}\subset\Lambda_{t+\epsilon}\), so the lemma will be proved if we show that \(\Lambda^{1}\) and \(\Lambda^{2}\) belong to the same transitive component of \(\Lambda_{\mathcal{R}}\). Let \(x_{1}\in\Lambda^{1},\ x_{2}\in\Lambda^{2}\) and \(\rho_{1},\rho_{2}>0\). Take

\[\eta=\frac{1}{2}\min\{\rho_{1},\rho_{2},\min\{d(P,Q):P,Q\in\mathcal{R}\ \text{and}\ P\neq Q\}\}.\]

By the shadowing lemma there exists \(0<\delta\leq\eta\) such that every \(\delta\)-pseudo orbit of \(\Lambda\) is \(\eta\)-shadowed by the orbit of some element of \(\Lambda\). On the other hand, as \(\varphi|_{\Lambda^{1}}\) is transitive and \(x\in W^{u}(\Lambda^{1})\), there exist \(y_{1}\in\Lambda^{1}\cap B(x_{1},\delta)\) and \(N_{1},M_{1}\in\mathbb{N}\) such that \(d(\varphi^{M_{1}}(y_{1}),\varphi^{-N_{1}}(x))<\delta\); analogously, as \(\varphi|_{\Lambda^{2}}\) is transitive and \(x\in W^{s}(\Lambda^{2})\), there exist \(y_{2}\in\Lambda^{2}\) and \(N_{2},M_{2}\in\mathbb{N}\) such that \(d(\varphi^{N_{2}}(x),y_{2})<\delta\) and \(d(x_{2},\varphi^{M_{2}}(y_{2}))<\delta\). Define then the \(\delta\)-pseudo orbit

\[\ldots,\varphi^{-1}(y_{1});y_{1},\varphi(y_{1}),\ldots,\varphi^{M_{1}-1}(y_{1}),\varphi^{-N_{1}}(x),\ldots,\varphi^{N_{2}-1}(x),y_{2},\varphi(y_{2}),\ldots\]

Then there exists \(w\in\Lambda\) that \(\eta\)-shadows this pseudo-orbit. Moreover, as the \(\delta\)-pseudo orbit has all its terms in \(\bigcup\limits_{P\in\mathcal{R}}P\) and \(\eta\leq\frac{1}{2}\min\{d(P,Q):P,Q\in\mathcal{R}\ \text{and}\ P\neq Q\}\), we also have \(\mathcal{O}(w)\subset\bigcup\limits_{P\in\mathcal{R}}P\); that is, \(w\in\Lambda_{\mathcal{R}}\), and furthermore

\[w\in B(x_{1},\rho_{1})\quad\text{and}\quad\varphi^{M_{1}+N_{1}-1+N_{2}+M_{2}}(w)\in B(x_{2},\rho_{2}).\]

The proof that there exist \(w\in B(x_{2},\rho_{2})\) and \(M\in\mathbb{N}\) such that \(\varphi^{M}(w)\in B(x_{1},\rho_{1})\) is analogous.

**Corollary 3.3**.: _Suppose \(\Lambda^{1}\) and \(\Lambda^{2}\) are subhorseshoes of \(\Lambda\) with \(\Lambda^{1}\cup\Lambda^{2}\subset\Lambda_{t}\) for some \(t\in\mathbb{R}\). If \(\Lambda^{1}\cap\Lambda^{2}\neq\emptyset\), then for every \(\epsilon>0\), \(\Lambda^{1}\) and \(\Lambda^{2}\) connect before \(t+\epsilon\)._

Proof.: If \(\Lambda^{1}\cap\Lambda^{2}\neq\emptyset\), then every \(w\in\Lambda^{1}\cap\Lambda^{2}\) satisfies \(w\in W^{u}(\Lambda^{1})\cap W^{s}(\Lambda^{2})\) and \(w\in W^{u}(\Lambda^{2})\cap W^{s}(\Lambda^{1})\), and then we have the conclusion.

**Corollary 3.4**.: _Let \(\Lambda^{1}\), \(\Lambda^{2}\) and \(\Lambda^{3}\) be subhorseshoes of \(\Lambda\) and \(t\in\mathbb{R}\). If \(\Lambda^{1}\) connects with \(\Lambda^{2}\) before \(t\) and \(\Lambda^{2}\) connects with \(\Lambda^{3}\) before \(t\),
then \(\Lambda^{1}\) also connects with \(\Lambda^{3}\) before \(t\)._

Proof.: By hypothesis we have two subhorseshoes \(\Lambda^{1,2}\) and \(\Lambda^{2,3}\) and \(q_{1},q_{2}<t\) with

\[\Lambda^{1}\cup\Lambda^{2}\subset\Lambda^{1,2}\subset\Lambda_{q_{1}}\text{ and }\ \Lambda^{2}\cup\Lambda^{3}\subset\Lambda^{2,3}\subset\Lambda_{q_{2}}.\]

Applying Corollary 3.3 to \(\Lambda^{1,2}\) and \(\Lambda^{2,3}\), with \(\tilde{t}=\max\{q_{1},q_{2}\}\) and \(\epsilon=t-\tilde{t}\), we have the result.

### Dimension estimates

Fix an integer \(m\geq 1\) and consider the horseshoe

\[\Lambda:=\Lambda(m+3)=C(m+3)\times\tilde{C}(m+3)\]

equipped with the diffeomorphism \(\varphi\) and the map \(f\) given in the previous section. Also, consider

\[\eta\in(m+1+[0;\overline{1}]+[0;1,m+2,\overline{1,m+3}],m+4)\cap\overline{T}\]

which is accumulated from the left by points of \(T\). Given \(t\in(m+1+[0;\overline{1}]+[0;1,m+2,\overline{1,m+3}],\eta)\cap T\) and \(0<\epsilon<\eta-t\), take \(\ell(t,\epsilon)\in\mathbb{N}\) sufficiently large that, for the set

\[C(t,\epsilon)=\{\alpha=(a_{0},a_{1}\cdots,a_{2\ell(t,\epsilon)})\in\{1,2,\cdots,m+3\}^{2\ell(t,\epsilon)+1}:R(\alpha;\ell(t,\epsilon))\cap\Lambda_{t+\epsilon/4}\neq\emptyset\},\]

if \(\alpha\in C(t,\epsilon)\) and \(x,y\in R(\alpha;\ell(t,\epsilon))\), then \(|f(x)-f(y)|<\epsilon/4\). Define

\[P(t,\epsilon):=\bigcap\limits_{n\in\mathbb{Z}}\varphi^{-n}(\bigcup\limits_{\alpha\in C(t,\epsilon)}R(\alpha;\ell(t,\epsilon))).\]

Note that by construction \(\Lambda_{t+\epsilon/4}\subset P(t,\epsilon)\subset\Lambda_{t+\epsilon/2}\) and, \(P(t,\epsilon)\) being a hyperbolic set of finite type, it admits a decomposition

\[P(t,\epsilon)=\bigcup_{x\in\mathcal{X}}\tilde{\Lambda}_{x}\]

where \(\mathcal{X}\) is a finite index set and, for \(x\in\mathcal{X}\), \(\tilde{\Lambda}_{x}\) is either a subhorseshoe or a transient set, i.e., a set of the form \(\tau=\{x\in M:\alpha(x)\subset\tilde{\Lambda}_{i_{1}}\ \text{and}\ \omega(x)\subset\tilde{\Lambda}_{i_{2}}\}\), where \(\tilde{\Lambda}_{i_{1}}\) and \(\tilde{\Lambda}_{i_{2}}\), with \(i_{1},i_{2}\in\mathcal{X}\), are subhorseshoes. As for every transient set \(\tau\) as before we have

\[HD(\tau)=HD(K^{s}(\tilde{\Lambda}_{i_{1}}))+HD(K^{u}(\tilde{\Lambda}_{i_{2}}))\]

and for every subhorseshoe \(\tilde{\Lambda}_{i}\), \(\varphi\) being conservative, one has

\[HD(\tilde{\Lambda}_{i})=HD(K^{s}(\tilde{\Lambda}_{i}))+HD(K^{u}(\tilde{\Lambda}_{i}))=2HD(K^{u}(\tilde{\Lambda}_{i})),\]

therefore

\[HD(P(t,\epsilon))=\max_{x\in\mathcal{X}}HD(\tilde{\Lambda}_{x})=\max_{\begin{subarray}{c}x\in\mathcal{X}:\ \tilde{\Lambda}_{x}\ is\\ subhorseshoe\end{subarray}}HD(\tilde{\Lambda}_{x}). \tag{3.1}\]

Now, it was proved in [15] that, for \(s\leq\max f|_{\Lambda}\),

\[D(s)=HD(k^{-1}(-\infty,s])=HD(K^{u}_{s})\]

and, by Theorem 2.5, we have

\[HD(K^{u}_{s})=\frac{1}{2}HD(\Lambda_{s}).\]

Then, for some \(x\in\mathcal{X}\), \(HD(\tilde{\Lambda}_{x})\geq 1\), because \(\Lambda_{t}\subset P(t,\epsilon)\) and

\[t^{*}_{1}=\sup\{s\in\mathbb{R}:\min\{1,HD(\Lambda_{s})\}<1\}=\sup\{s\in\mathbb{R}:HD(\Lambda_{s})<1\}<t.\]

We will show that any subhorseshoe contained in \(P(t,\epsilon)\) with Hausdorff dimension greater than or equal to \(1\) connects with the periodic orbit \(\xi\), given by the kneading sequence \((1)_{i\in\mathbb{Z}}\), before any time bigger than \(t+\epsilon/2\).
To do that, take any \(\delta>0\) and write

\[\tilde{P}(t,\epsilon)=\bigcup_{\begin{subarray}{c}x\in\mathcal{X}:\ \tilde{\Lambda}_{x}\ is\\ subhorseshoe\end{subarray}}\tilde{\Lambda}_{x}=\bigcup_{i\in\mathcal{I}}\tilde{\Lambda}_{i}\cup\bigcup_{j\in\mathcal{J}}\tilde{\Lambda}_{j}\]

where

\[\mathcal{I}=\{i\in\mathcal{X}:\tilde{\Lambda}_{i}\text{ is a subhorseshoe and it connects with }\xi\text{ before }t+\epsilon/2+\delta\}\]

and

\[\mathcal{J}=\{j\in\mathcal{X}:\tilde{\Lambda}_{j}\text{ is a subhorseshoe and it does not connect with }\xi\text{ before }t+\epsilon/2+\delta\}.\]

By Lemma 3.2, given \(j\in\mathcal{J}\), as \(\tilde{\Lambda}_{j}\cup\xi\subset\Lambda_{t+\epsilon/2}\), we cannot have at the same time the existence of two points \(x\in W^{u}(\tilde{\Lambda}_{j})\cap W^{s}(\xi)\) and \(y\in W^{u}(\xi)\cap W^{s}(\tilde{\Lambda}_{j})\) such that \(\mathcal{O}(x)\cup\mathcal{O}(y)\subset\Lambda_{t+\epsilon/2+\delta/2}\). Without loss of generality, suppose that there is no \(x\in W^{u}(\tilde{\Lambda}_{j})\cap W^{s}(\xi)\) with \(m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2\) (the argument for the other case is similar). We will show that this condition restricts the letters that may appear in the sequences that determine the unstable Cantor set of \(\tilde{\Lambda}_{j}\). Let us begin by fixing \(R\in\mathbb{N}\) large enough that \(1/2^{R-1}<\delta/2\), and consider the set \(\mathcal{C}_{2R+1}=\{I(a_{0};a_{1},\dots,a_{2R+1}):I(a_{0};a_{1},\dots,a_{2R+1})\cap K^{u}(\tilde{\Lambda}_{j})\neq\emptyset\}\); clearly \(\mathcal{C}_{2R+1}\) is a covering of \(K^{u}(\tilde{\Lambda}_{j})\). We will give a mechanism to construct coverings \(\mathcal{C}_{k}\), \(k\geq 2R+1\), that can be used to _efficiently_ cover \(K^{u}(\tilde{\Lambda}_{j})\) as \(k\) goes to infinity. Indeed, suppose that for some \(k\geq 2R+1\) and \(I(a_{0};a_{1},\dots,a_{k})\in\mathcal{C}_{k}\), the word \((a_{0},a_{1},\dots,a_{k})\) has continuations with a forced first letter; that is, for every \(\alpha=(\alpha_{n})_{n\in\mathbb{Z}}\in\Pi(\tilde{\Lambda}_{j})\) with \(\alpha_{0},\alpha_{1},\dots,\alpha_{k}=a_{0},a_{1},\dots,a_{k}\), one has \(\alpha_{k+1}=a_{k+1}\) for some fixed \(a_{k+1}\). Then we can refine the original cover \(\mathcal{C}_{k}\) by replacing the interval \(I(a_{0};a_{1},\dots,a_{k})\) with the interval \(I(a_{0};a_{1},\dots,a_{k},a_{k+1})\). On the other hand, suppose \((a_{0},a_{1},\dots,a_{k})\) has two continuations with different initial letters, say \(\gamma_{k+1}=(a_{k+1},a_{k+2},\dots)\) and \(\beta_{k+1}=(a_{k+1}^{*},a_{k+2}^{*},\dots)\) with \(a_{k+1}\neq a_{k+1}^{*}\). Take \(\alpha=(\alpha_{n})_{n\in\mathbb{Z}}\in\Pi(\tilde{\Lambda}_{j})\) and \(\tilde{\alpha}=(\tilde{\alpha}_{n})_{n\in\mathbb{Z}}\in\Pi(\tilde{\Lambda}_{j})\) such that \(\alpha=(\dots,\alpha_{-2},\alpha_{-1};a_{0},a_{1},\dots,a_{k},\gamma_{k+1})\) and \(\tilde{\alpha}=(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};a_{0},a_{1},\dots,a_{k},\beta_{k+1})\).
If \(a_{k+1}=i\), then necessarily either \(a_{k+1}^{*}=i+1\) or \(a_{k+1}^{*}=i-1\): if, for example, \(a_{k+1}+1<a_{k+1}^{*}\), we can set \(s=a_{k+1}+1\) and therefore, by Lemma 3.5, as \([0;\beta_{k+1}]<[0;s,\overline{1}]<[0;\gamma_{k+1}]\), we would have for all \(j\leq k\)

\[\lambda_{0}(\sigma^{j}(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{k},s,\overline{1}))\leq\max\{m(\dots,\alpha_{-1};\alpha_{0},\dots,\alpha_{k},\gamma_{k+1}),m(\dots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{k},\beta_{k+1})\}+1/2^{R-1}<t+\epsilon/2+\delta/2.\]

For \(j=k+1\),

\[\lambda_{0}(\sigma^{j}(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{k},s,\overline{1}))=[0;\tilde{\alpha}_{k},\dots,\tilde{\alpha}_{0},\tilde{\alpha}_{-1},\dots]+s+[0;\overline{1}]\]
\[<[0;\tilde{\alpha}_{k},\dots,\tilde{\alpha}_{0},\tilde{\alpha}_{-1},\dots]+s+1\]
\[<[0;\tilde{\alpha}_{k},\dots,\tilde{\alpha}_{0},\tilde{\alpha}_{-1},\dots]+a_{k+1}^{*}+[0;a_{k+2}^{*},a_{k+3}^{*},\dots]\]
\[=\lambda_{0}(\sigma^{k+1}(\dots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{k},\beta_{k+1}))\leq m(\dots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{k},\beta_{k+1})\leq t+\epsilon/2\]

and for \(j>k+1\), clearly

\[\lambda_{0}(\sigma^{j}(\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{k},s,\overline{1}))<3<t+\epsilon/2.\]

Then, taking \(x=\Pi^{-1}((\dots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\dots,\tilde{\alpha}_{k},s,\overline{1}))\), one would have

\[x\in W^{u}(\tilde{\Lambda}_{j})\cap W^{s}(\xi)\mbox{ and }m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2,\]

which is a contradiction. The case \(a_{k+1}-1>a_{k+1}^{*}\) is quite similar. Now, suppose \(a_{k+1}=i\) and \(a_{k+1}^{*}=i+1\).
We affirm that \(a_{k+2}=1\), because otherwise, by Lemma 3.5, as \([0;\beta_{k+1}]<[0;i,\overline{1}]<[0;\gamma_{k+1}]\), we would have again for all \(j\leq k\)

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},i,\overline{1}))<t+\epsilon/2+\delta/2.\]

For \(j>k+1\), once more

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},i,\overline{1}))<t+\epsilon/2\]

and for \(j=k+1\),

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},i,\overline{1}))=[0;\tilde{\alpha}_{k},\ldots,\tilde{\alpha}_{0},\tilde{\alpha}_{-1},\ldots]+i+[0;\overline{1}]\]
\[<[0;\tilde{\alpha}_{k},\ldots,\tilde{\alpha}_{0},\tilde{\alpha}_{-1},\ldots]+i+1\]
\[<[0;\tilde{\alpha}_{k},\ldots,\tilde{\alpha}_{0},\tilde{\alpha}_{-1},\ldots]+a_{k+1}^{*}+[0;a_{k+2}^{*},a_{k+3}^{*},\ldots]\]
\[=\lambda_{0}(\sigma^{k+1}(\ldots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},\beta_{k+1}))\leq m(\ldots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},\beta_{k+1})\leq t+\epsilon/2.\]

Then for \(x=\Pi^{-1}((\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},i,\overline{1}))\) one would get the contradiction

\[x\in W^{u}(\tilde{\Lambda}_{j})\cap W^{s}(\xi)\mbox{ and }m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2.\]

Moreover, we have \(a_{k+3}\in\{m+1,m+2,m+3\}\), because if \(a_{k+3}=\ell\leq m\), then \([0;\beta_{k+1}]<[0;i,1,\ell+1,\overline{1}]<[0;\gamma_{k+1}]\) and by Lemma 3.5 we would have for all \(j\leq k\)

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},i,1,\ell+1,\overline{1}))<t+\epsilon/2+\delta/2.\]

For \(j=k+1\),

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},i,1,\ell+1,\overline{1}))=[0;\tilde{\alpha}_{k},\ldots,\tilde{\alpha}_{0},\tilde{\alpha}_{-1},\ldots]+i+[0;1,\ell+1,\overline{1}]\]
\[<[0;\tilde{\alpha}_{k},\ldots,\tilde{\alpha}_{0},\tilde{\alpha}_{-1},\ldots]+a_{k+1}^{*}+[0;a_{k+2}^{*},a_{k+3}^{*},\ldots]\]
\[=\lambda_{0}(\sigma^{k+1}(\ldots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},\beta_{k+1}))\leq m(\ldots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},\beta_{k+1})\leq t+\epsilon/2\]

and for \(j>k+1\),

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},i,1,\ell+1,\overline{1}))<m+1+[0;\overline{1}]+[0;1,m+2,\overline{1,m+3}]<t+\epsilon/2,\]

so taking \(x=\Pi^{-1}((\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},i,1,\ell+1,\overline{1}))\) one would have

\[x\in W^{u}(\tilde{\Lambda}_{j})\cap W^{s}(\xi)\mbox{ and }m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2,\]

which is again a contradiction.
In a similar way, we must have \(a_{k+2}^{*}\in\{m+1,m+2,m+3\}\), because if \(a_{k+2}^{*}=\ell\leq m\), then \([0;\beta_{k+1}]<[0;i+1,\ell+1,\overline{1}]<[0;\gamma_{k+1}]\) and, as before, we would have for all \(j\leq k\)

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},i+1,\ell+1,\overline{1}))<t+\epsilon/2+\delta/2,\]

for \(j=k+1\),

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},i+1,\ell+1,\overline{1}))=[0;\tilde{\alpha}_{k},\ldots,\tilde{\alpha}_{0},\tilde{\alpha}_{-1},\ldots]+i+1+[0;\ell+1,\overline{1}]\]
\[<[0;\tilde{\alpha}_{k},\ldots,\tilde{\alpha}_{0},\tilde{\alpha}_{-1},\ldots]+a_{k+1}^{*}+[0;a_{k+2}^{*},a_{k+3}^{*},\ldots]\]
\[=\lambda_{0}(\sigma^{k+1}(\ldots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},\beta_{k+1}))\leq m(\ldots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},\beta_{k+1})\leq t+\epsilon/2\]

and for \(j>k+1\),

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},i+1,\ell+1,\overline{1}))<m+1+[0;\overline{1}]+[0;1,m+2,\overline{1,m+3}]<t+\epsilon/2,\]

which again yields a contradiction. In particular, in this case, we can refine the cover \(\mathcal{C}_{k}\) by replacing the interval \(I(a_{0};a_{1},\ldots,a_{k})\) with the six intervals \(I(a_{0};a_{1},\ldots,a_{k},i,1,m+1)\), \(I(a_{0};a_{1},\ldots,a_{k},i,1,m+2)\), \(I(a_{0};a_{1},\ldots,a_{k},i,1,m+3)\), \(I(a_{0};a_{1},\ldots,a_{k},i+1,m+1)\), \(I(a_{0};a_{1},\ldots,a_{k},i+1,m+2)\) and \(I(a_{0};a_{1},\ldots,a_{k},i+1,m+3)\), for one and only one \(i=1,\ldots,m+2\). Observe that, in fact, some of the intervals considered in the last paragraph may not actually occur. For example, if \(\eta=m+3\) then \(t+\epsilon/2<m+3\); therefore the letter \(m+3\) cannot appear in the kneading sequence of any point of \(\tilde{\Lambda}_{j}\). But this will not affect our argument. Indeed, we affirm that this procedure does not increase the \(0.49\)-sum, \(H_{0.49}(\mathcal{C}_{k})=\sum\limits_{I\in\mathcal{C}_{k}}|I|^{0.49}\), of the cover \(\mathcal{C}_{k}\) of \(K^{u}(\tilde{\Lambda}_{j})\). That is, by Lemma 2.2, we need to prove that

\[\sum_{j=m+1}^{m+3}|I(a_{1},\ldots,a_{k},i,1,j)|^{0.49}+\sum_{j=m+1}^{m+3}|I(a_{1},\ldots,a_{k},i+1,j)|^{0.49}<|I(a_{1},\ldots,a_{k})|^{0.49}\]

or

\[\sum_{j=m+1}^{m+3}\left(\frac{|I(a_{1},\ldots,a_{k},i,1,j)|}{|I(a_{1},\ldots,a_{k})|}\right)^{0.49}+\sum_{j=m+1}^{m+3}\left(\frac{|I(a_{1},\ldots,a_{k},i+1,j)|}{|I(a_{1},\ldots,a_{k})|}\right)^{0.49}<1 \tag{3.2}\]

where \(i=1,\ldots,m+2\). In this direction, we have the following lemmas.

**Lemma 3.5**.: _Given \(a_{0},a_{1},\ldots,a_{n},a,b,c\in\{1,\ldots,m+3\}\), we have_

\[\frac{|I(a_{1},\ldots,a_{n},a,b)|}{|I(a_{1},\ldots,a_{n})|}=\frac{1+r}{(ab+1+br)(ab+a+1+(b+1)r)}\]

_and_

\[\frac{|I(a_{1},\ldots,a_{n},a,b,c)|}{|I(a_{1},\ldots,a_{n})|}=\frac{1+r}{(abc+c+a+(bc+1)r)(abc+c+a+ab+1+(bc+b+1)r)},\]

_where \(r\in(0,1)\)._

Proof.: Recall that the length of \(I(b_{1},\ldots,b_{m})\) is

\[|I(b_{1},\ldots,b_{m})|=\frac{1}{q_{m}(q_{m}+q_{m-1})},\]

where \(q_{s}\) is the denominator of \([0;b_{1},\ldots,b_{s}]\).
We also have the recurrence formula

\[q_{s+2}=b_{s+2}q_{s+1}+q_{s}.\]

Using these facts two and three times, respectively, we obtain

\[|I(a_{1},\ldots,a_{n},a,b)|=\frac{1}{((ab+1)q_{n}+bq_{n-1})((ab+a+1)q_{n}+(b+1)q_{n-1})}\]

and

\[|I(a_{1},\ldots,a_{n},a,b,c)|=\frac{1}{((abc+c+a)q_{n}+(bc+1)q_{n-1})((abc+c+a+ab+1)q_{n}+(bc+b+1)q_{n-1})},\]

so we conclude

\[\frac{|I(a_{1},\ldots,a_{n},a,b)|}{|I(a_{1},\ldots,a_{n})|}=\frac{q_{n}(q_{n}+q_{n-1})}{((ab+1)q_{n}+bq_{n-1})((ab+a+1)q_{n}+(b+1)q_{n-1})}=\frac{1+r}{(ab+1+br)(ab+a+1+(b+1)r)}\]

and

\[\frac{|I(a_{1},\ldots,a_{n},a,b,c)|}{|I(a_{1},\ldots,a_{n})|}=\frac{q_{n}(q_{n}+q_{n-1})}{((abc+c+a)q_{n}+(bc+1)q_{n-1})((abc+c+a+ab+1)q_{n}+(bc+b+1)q_{n-1})}=\frac{1+r}{(abc+c+a+(bc+1)r)(abc+c+a+ab+1+(bc+b+1)r)}\]

with \(r=\frac{q_{n-1}}{q_{n}}\in(0,1)\).

**Lemma 3.6**.: _Fix \(x,y,z,w>0\). Then_

\[\frac{d}{dr}\left(\frac{1+r}{(x+yr)(z+wr)}\right)=\frac{(x-y)(z-w)-yw(r+1)^{2}}{(ywr^{2}+(xw+yz)r+xz)^{2}}<\frac{(x-y)(z-w)-yw}{(ywr^{2}+(xw+yz)r+xz)^{2}}\]

_for \(r\geq 0\)._

Proof.: This is a straightforward computation.

Using the previous lemmas, the facts that \(i\geq 1\), \(m\geq 1\) and \(r\in(0,1)\), and that for \(j\in\{m+1,m+2,m+3\}\)

\[(2j+1-(j+1))(2j+3-(j+2))-(j+1)(j+2)=j(j+1)-(j+1)(j+2)<0,\]

for the first sum one has

\[\sum_{j=m+1}^{m+3}\left(\frac{|I(a_{1},\ldots,a_{k},i,1,j)|}{|I(a_{1},\ldots,a_{k})|}\right)^{0.49}=\sum_{j=m+1}^{m+3}\left(\frac{1+r}{(ij+j+i+(j+1)r)(ij+j+2i+1+(j+2)r)}\right)^{0.49}\]
\[\leq\sum_{j=m+1}^{m+3}\left(\frac{1+r}{(2j+1+(j+1)r)(2j+3+(j+2)r)}\right)^{0.49}\]
\[<\sum_{j=m+1}^{m+3}\left(\frac{1}{(2j+1)(2j+3)}\right)^{0.49}\]
\[\leq\left(\frac{1}{5\times 7}\right)^{0.49}+\left(\frac{1}{7\times 9}\right)^{0.49}+\left(\frac{1}{9\times 11}\right)^{0.49}<0.412\]

and for the second sum

\[\sum_{j=m+1}^{m+3}\left(\frac{|I(a_{1},\ldots,a_{k},i+1,j)|}{|I(a_{1},\ldots,a_{k})|}\right)^{0.49}=\sum_{j=m+1}^{m+3}\left(\frac{1+r}{((i+1)j+1+jr)((i+1)j+i+2+(j+1)r)}\right)^{0.49}\]
\[\leq\sum_{j=m+1}^{m+3}\left(\frac{1+r}{(2j+1+jr)(2j+3+(j+1)r)}\right)^{0.49}\]
\[<\sum_{j=m+1}^{m+3}\left(\frac{2}{(2j+1)(2j+3)}\right)^{0.49}\]
\[\leq\left(\frac{2}{5\times 7}\right)^{0.49}+\left(\frac{2}{7\times 9}\right)^{0.49}+\left(\frac{2}{9\times 11}\right)^{0.49}<0.579,\]

which proves (3.2) and lets us conclude that \(HD(K^{u}(\tilde{\Lambda}_{j}))\leq 0.49\) (these numerical bounds are verified in the short script after the \(\Lambda(2)\) estimates below). Finally, as we are in the conservative setting,

\[HD(\tilde{\Lambda}_{j})=2HD(K^{u}(\tilde{\Lambda}_{j}))<0.99.\]

Fix \(\delta=\epsilon/2\). By definition, for \(i\in\mathcal{I}\), \(\tilde{\Lambda}_{i}\) connects with \(\xi\) before \(t+\epsilon\), so we can apply Corollary 3.4 at most \(|\mathcal{I}|-1\) times to see that there exist a subhorseshoe \(\tilde{\Lambda}(t,\epsilon)\subset\Lambda\) and some \(q(t,\epsilon)<t+\epsilon\) such that

\[\bigcup_{i\in\mathcal{I}}\tilde{\Lambda}_{i}\subset\tilde{\Lambda}(t,\epsilon)\subset\Lambda_{q(t,\epsilon)}.\]

Now, remember that any subhorseshoe \(\tilde{\Lambda}\subset\Lambda\), being locally maximal, satisfies

\[W^{s}(\tilde{\Lambda})=\bigcup_{y\in\tilde{\Lambda}}W^{s}(y)\ \ \mbox{and}\ \ W^{u}(\tilde{\Lambda})=\bigcup_{y\in\tilde{\Lambda}}W^{u}(y).\]

Then, for every \(x\in\Lambda\) with \(\omega(x)\subset\tilde{\Lambda}\), there exists \(y\in\tilde{\Lambda}\) with \(\lim_{n\to\infty}d(f(\varphi^{n}(x)),f(\varphi^{n}(y)))=0\), and so \(\ell_{\varphi,f}(x)=\ell_{\varphi,f}(y)\).
Using this, we have

\[\ell_{\varphi,f}(P(t,\epsilon))=\ell_{\varphi,f}(\tilde{P}(t,\epsilon))=\bigcup_{i\in\mathcal{I}}\ell_{\varphi,f}(\tilde{\Lambda}_{i})\cup\bigcup_{j\in\mathcal{J}}\ell_{\varphi,f}(\tilde{\Lambda}_{j}).\]

On the other hand,

\[HD(\bigcup_{j\in\mathcal{J}}\ell_{\varphi,f}(\tilde{\Lambda}_{j}))=\max_{j\in\mathcal{J}}HD(\ell_{\varphi,f}(\tilde{\Lambda}_{j}))\leq\max_{j\in\mathcal{J}}HD(f(\tilde{\Lambda}_{j}))\leq\max_{j\in\mathcal{J}}HD(\tilde{\Lambda}_{j})<1,\]

so \(int(\bigcup_{j\in\mathcal{J}}\ell_{\varphi,f}(\tilde{\Lambda}_{j}))=\emptyset\). Also, as was proved in Lemma 5.2 of [12], for \(\tilde{t}\leq\max f|_{\Lambda}\),

\[L\cap(-\infty,\tilde{t})=\bigcup_{s<\tilde{t}}\ell_{\varphi,f}(\Lambda_{s}),\]

therefore

\[t\in int(m_{\varphi,f}(\Lambda_{t+\epsilon/4}))=int(M\cap(-\infty,t+\epsilon/4))=int(L\cap(-\infty,t+\epsilon/4))=int(\bigcup_{s<t+\epsilon/4}\ell_{\varphi,f}(\Lambda_{s}))=int(\ell_{\varphi,f}(\Lambda_{t+\epsilon/4}))\subset int(\ell_{\varphi,f}(P(t,\epsilon)))\]

and then we must have

\[t<\sup(\bigcup_{i\in\mathcal{I}}\ell_{\varphi,f}(\tilde{\Lambda}_{i}))\leq\sup(\ell_{\varphi,f}(\tilde{\Lambda}(t,\epsilon)))\leq\sup f(\tilde{\Lambda}(t,\epsilon))=\max f|_{\tilde{\Lambda}(t,\epsilon)}.\]

We have thus proved the following result.

**Proposition 3.7**.: _Given \(t\in(m+1+[0;\overline{1}]+[0;1,m+2,\overline{1,m+3}],\eta)\cap T\) and \(\epsilon<\eta-t\), there exist some \(q(t,\epsilon)<t+\epsilon\) and a subhorseshoe \(\tilde{\Lambda}(t,\epsilon)\subset\Lambda_{q(t,\epsilon)}\) with \(HD(\tilde{\Lambda}(t,\epsilon))\geq 1\) such that_

1. \(HD(\Lambda_{t})\leq HD(\tilde{\Lambda}(t,\epsilon))\)
2. _for every subhorseshoe_ \(\tilde{\Lambda}\subset\Lambda_{t}\) _with_ \(HD(\tilde{\Lambda})\geq 0.99\) _one has_ \(\tilde{\Lambda}\subset\tilde{\Lambda}(t,\epsilon)\)
3. \(t<\max f|_{\tilde{\Lambda}(t,\epsilon)}\)_._

### Putting unstable Cantor sets into \(k^{-1}(\eta)\)

Let \(\eta\in(m+1+[0;\overline{1}]+[0;1,m+2,\overline{1,m+3}],m+4)\cap\overline{T}\) be accumulated from the left by points of \(T\) and let \(\epsilon>0\) be such that \(m+1+[0;\overline{1}]+[0;1,m+2,\overline{1,m+3}]<\eta-\epsilon\). Take any strictly increasing sequence \(\{t_{n}\}_{n\geq 0}\) of points of \(T\) such that \(t_{0}>\eta-\epsilon\) and \(\lim\limits_{n\to\infty}t_{n}=\eta\). Proposition 3.7 lets us find a sequence of subhorseshoes \(\{\Lambda^{n}\}_{n\geq 0}=\{\tilde{\Lambda}(t_{n},(t_{n+1}-t_{n})/2)\}_{n\geq 0}\) with the following properties:

1. \(HD(\Lambda_{t_{n}})\leq HD(\Lambda^{n})\)
2. \(\Lambda^{n}\subset\Lambda^{n+1}\)
3. \(t_{n}<\max f|_{\Lambda^{n}}<t_{n+1}\).

Now we will construct a local homeomorphism \(\theta:K^{u}(\Lambda^{0})\to k^{-1}(\eta)\) with local Hölder inverse and exponent arbitrarily close to one. Given \(n\geq 0\), since \(\Lambda^{n}\) is a mixing horseshoe (because \(\xi\subset\Lambda^{n}\)), we can find some \(c(n)\in\mathbb{N}\) such that, given two letters \(a\) and \(b\) in the alphabet \(\mathcal{A}(\Lambda^{n})\) of \(\Lambda^{n}\), there exists some finite word \((a_{1},\ldots,a_{c(n)})\) of size \(c(n)\) (in the letters of \(\mathcal{A}(\Lambda^{n})\)) such that \((a,a_{1},\ldots,a_{c(n)},b)\) is admissible; for each pair \(a\) and \(b\) we always consider one fixed such word \((a_{1},\ldots,a_{c(n)})\). Also, as \(\Lambda^{n}\) is a subhorseshoe of \(\Lambda\), it is the invariant set of some rectangles determined by a set of words of size \(2p(n)+1\) for some \(p(n)\in\mathbb{N}\).
Now, take \(n\geq 1\) and consider the kneading sequence \(\{x_{r}^{n}\}_{r\in\mathbb{Z}}\) of some point \(x_{n}\in\Lambda^{n}\) such that \(f(x_{n})=\max f|_{\Lambda^{n}}\). Also take \(r(n)>p(n+1)+p(n)+p(n-1)\) big enough that for any \(\alpha=(a_{0},a_{1}\cdots,a_{2r(n)})\in\{1,2,\cdots,m+3\}^{2r(n)+1}\) and \(x,y\in R(\alpha;r(n))\) we have \(|f(x)-f(y)|<\min\{(t_{n+1}-\max f|_{\Lambda^{n}})/2,(\max f|_{\Lambda^{n}}-t_{n})/2\}.\) Finally, set \(s(n)=\sum_{k=1}^{n}(2r(k)+2c(k)+1)\). Given \(a=[a_{0};a_{1},a_{2},\ldots]\in K^{u}(\Lambda^{0})\), for \(n\geq 1\) set \(a^{(n)}:=(a_{s(n)!+1},\ldots,a_{s(n+1)!})\), so one has

\[a=[a_{0};a_{1},a_{2},\ldots]=[a_{0};a_{1},\ldots,a_{s(1)!},a^{(1)},a^{(2)},\ldots,a^{(n)},\ldots].\]

Define then

\[\theta(a):=[a_{0};a_{1},\ldots,a_{s(1)!},h_{1},a^{(1)},h_{2},a^{(2)},\ldots,h_{n},a^{(n)},h_{n+1},\ldots]\]

where

\[h_{n}=(c_{1}^{n},x_{-r(n)}^{n},\ldots,x_{-1}^{n},x_{0}^{n},x_{1}^{n},\ldots,x_{r(n)}^{n},c_{2}^{n})\]

and \(c_{1}^{n}\) and \(c_{2}^{n}\) are words in the original alphabet \(\mathcal{A}=\{1,\ldots,m+3\}\) with \(|c_{1}^{n}|=|c_{2}^{n}|=c(n)\) such that \((a_{0},a_{1},\ldots,a_{s(1)!},h_{1},a^{(1)},h_{2},\ldots,h_{n},a^{(n)})\) appears in the kneading sequence of some point of \(\Lambda^{n}\). It is easy to see, using the construction of \(\theta\), that for every \(a\in K^{u}(\Lambda^{0})\), \(k(\theta(a))=\eta\), so we have defined a map

\[\theta:K^{u}(\Lambda^{0})\to k^{-1}(\eta),\quad a\mapsto\theta(a),\]

which is clearly continuous and injective. On the other hand, given any small \(\rho>0\), because of the growth of the factorial map we have \(|\tilde{a}_{1}-\tilde{a}_{2}|=O(|\theta(\tilde{a}_{1})-\theta(\tilde{a}_{2})|^{1-\rho})\) for any \(\tilde{a}_{1},\tilde{a}_{2}\in K^{u}(\Lambda^{0})\) with \(|\tilde{a}_{1}-\tilde{a}_{2}|\) small. Indeed, if \(\tilde{a}_{1}\) and \(\tilde{a}_{2}\) are such that the letters in their continued fraction expressions agree up to the \(s\)-th letter, and \(n\in\mathbb{N}\) is maximal such that \(s(n)!<s\), then, because \(|h_{k}|=2r(k)+2c(k)+1\), \(\theta(\tilde{a}_{1})\) and \(\theta(\tilde{a}_{2})\) coincide exactly in their first

\[s+\sum_{k=1}^{n}(2r(k)+2c(k)+1)=s+s(n)\]

letters. Now, suppose \(\alpha\) and \(\beta\) are finite words of positive integers bounded by \(N\in\mathbb{N}\), and let \(I_{N}(\alpha)\) be the convex hull of \(I(\alpha)\cap C_{N}\).
The so-called bounded distortion property lets us conclude that, for some constant \(C_{N}>1\),

\[C_{N}^{-1}|I_{N}(\alpha)||I_{N}(\beta)|\leq|I_{N}(\alpha\beta)|\leq C_{N}|I_{N}(\alpha)||I_{N}(\beta)|\]

and, for some positive constants \(\lambda_{1},\lambda_{2}<1\), one has

\[C_{N}^{-1}\lambda_{1}^{|\alpha|}\leq|I_{N}(\alpha)|\leq C_{N}\lambda_{2}^{|\alpha|}.\]

So, if \(s\) is big enough that \(s(n)/(s+s(n))<\frac{\rho\log\lambda_{2}}{\log\lambda_{1}-4\log C_{m+3}}\), using Lemma 2.2 we have, for some constant \(\tilde{C}(m+3)\),

\[|\theta(\tilde{a}_{1})-\theta(\tilde{a}_{2})|^{1-\rho}\geq\tilde{C}(m+3)^{1-\rho}|I(a_{1},\ldots,a_{s(1)!},h_{1},a^{(1)},\ldots,a^{(n-1)},h_{n},a_{s(n)!+1},\ldots,a_{s})|^{1-\rho}\]
\[\geq\tilde{C}(m+3)^{1-\rho}|I_{m+3}(a_{1},\ldots,a_{s(1)!},h_{1},a^{(1)},\ldots,a^{(n-1)},h_{n},a_{s(n)!+1},\ldots,a_{s})|^{1-\rho}\]
\[=\tilde{C}(m+3)^{1-\rho}|I_{m+3}(a_{1},\ldots,a_{s(1)!},h_{1},a^{(1)},\ldots,a^{(n-1)},h_{n},a_{s(n)!+1},\ldots,a_{s})|\cdot|I_{m+3}(a_{1},\ldots,a_{s(1)!},h_{1},a^{(1)},\ldots,a^{(n-1)},h_{n},a_{s(n)!+1},\ldots,a_{s})|^{-\rho}\]
\[\geq\frac{1}{C_{m+3}^{2n}}\tilde{C}(m+3)^{1-\rho}|I_{m+3}(a_{1},\ldots,a_{s(1)!})||I_{m+3}(a^{(1)})|\ldots|I_{m+3}(a^{(n-1)})||I_{m+3}(a_{s(n)!+1},\ldots,a_{s})||I_{m+3}(h_{1})|\ldots|I_{m+3}(h_{n})|\cdot|I_{m+3}(a_{1},\ldots,a_{s(1)!},h_{1},a^{(1)},\ldots,a^{(n-1)},h_{n},a_{s(n)!+1},\ldots,a_{s})|^{-\rho}\]
\[\geq\frac{1}{C_{m+3}^{3n}}\tilde{C}(m+3)^{1-\rho}|I_{m+3}(a_{1},a_{2},\ldots,a_{s})||I_{m+3}(h_{1})|\ldots|I_{m+3}(h_{n})|\cdot|I_{m+3}(a_{1},\ldots,a_{s(1)!},h_{1},a^{(1)},\ldots,a^{(n-1)},h_{n},a_{s(n)!+1},\ldots,a_{s})|^{-\rho}\]
\[\geq\tilde{C}(m+3)^{1-\rho}|I_{m+3}(a_{1},a_{2},\ldots,a_{s})|e^{(\log\lambda_{1}-4\log C_{m+3})s(n)}\cdot|I_{m+3}(a_{1},\ldots,a_{s(1)!},h_{1},a^{(1)},\ldots,a^{(n-1)},h_{n},a_{s(n)!+1},\ldots,a_{s})|^{-\rho}\]
\[\geq\tilde{C}(m+3)^{1-\rho}|I_{m+3}(a_{1},a_{2},\ldots,a_{s})|e^{\rho(s+s(n))\log\lambda_{2}}\cdot|I_{m+3}(a_{1},\ldots,a_{s(1)!},h_{1},a^{(1)},\ldots,a^{(n-1)},h_{n},a_{s(n)!+1},\ldots,a_{s})|^{-\rho}\]
\[\geq\frac{\tilde{C}(m+3)^{1-\rho}}{C_{m+3}^{\rho}}|I_{m+3}(a_{1},a_{2},\ldots,a_{s})|\cdot|I_{m+3}(a_{1},\ldots,a_{s(1)!},h_{1},a^{(1)},\ldots,a^{(n-1)},h_{n},a_{s(n)!+1},\ldots,a_{s})|^{\rho}\cdot|I_{m+3}(a_{1},\ldots,a_{s(1)!},h_{1},a^{(1)},\ldots,a^{(n-1)},h_{n},a_{s(n)!+1},\ldots,a_{s})|^{-\rho}\]
\[\geq\frac{\tilde{C}(m+3)^{1-\rho}}{C_{m+3}^{\rho}}|\tilde{a}_{1}-\tilde{a}_{2}|.\]

Therefore the map \(\theta^{-1}:\theta(K^{u}(\Lambda^{0}))\to K^{u}(\Lambda^{0})\) is locally a Hölder map with exponent \(1-\rho\), and then

\[HD(K^{u}(\Lambda^{0}))=HD(\theta^{-1}(\theta(K^{u}(\Lambda^{0}))))\leq\frac{1}{1-\rho}HD(\theta(K^{u}(\Lambda^{0})))\leq\frac{1}{1-\rho}HD(k^{-1}(\eta)).\]

Letting \(\rho\) go to zero, we obtain

\[HD(K^{u}(\Lambda^{0}))\leq HD(k^{-1}(\eta)).\]

Now, as we indicated before, for \(s\leq\max f|_{\Lambda}\) one has

\[HD(k^{-1}(-\infty,s])=\frac{1}{2}HD(\Lambda_{s}),\]

therefore

\[HD(k^{-1}(-\infty,\eta-\epsilon])=\frac{1}{2}HD(\Lambda_{\eta-\epsilon})\leq\frac{1}{2}HD(\Lambda_{t_{0}})\leq\frac{1}{2}HD(\Lambda^{0})=HD(K^{u}(\Lambda^{0}))\leq HD(k^{-1}(\eta)).\]

Letting \(\epsilon\) tend to zero, we have

\[HD(k^{-1}(-\infty,\eta])\leq HD(k^{-1}(\eta))\]

and, as the other inequality is clearly true, the first part of the result is proven for \(\eta\in(m+1+[0;\overline{1}]+[0;1,m+2,\overline{1,m+3}],m+4)\cap\overline{T}\)
which is accumulated from the left by points of \(T\). For the second part of the theorem, we need the following lemma, whose proof is essentially the same as that of Lemma 2.5 of [8].

**Lemma 3.8**.: _Let \((K,\mathcal{P},\psi)\) be a \(C^{\alpha}\)-regular Cantor set. If \(\mathcal{P}^{{}^{\prime}}\neq\mathcal{P}\) is a finite subcollection of \(\mathcal{P}\) that is also a Markov partition for \(\psi\), then the Cantor set determined by \(\psi\) and \(\mathcal{P}^{{}^{\prime}}\),_

\[\tilde{K}=\bigcap_{n\geq 0}\psi^{-n}\left(\bigcup_{I\in\mathcal{P}^{{}^{\prime}}}I\right),\]

_satisfies \(HD(\tilde{K})<HD(K)\)._

**Corollary 3.9**.: _Let \(\Lambda\) be a mixing horseshoe associated with a \(C^{2}\)-diffeomorphism \(\varphi:S\to S\) of some surface \(S\). Then for any proper mixing subhorseshoe \(\tilde{\Lambda}\subset\Lambda\),_

\[HD(\tilde{\Lambda})<HD(\Lambda).\]

Proof.: Refine the original Markov partition \(\mathcal{P}\) of \(\Lambda\) in such a way that some \(\mathcal{P}^{{}^{\prime}}\subset\mathcal{P}\), \(\mathcal{P}^{{}^{\prime}}\neq\mathcal{P}\), is a Markov partition for \(\tilde{\Lambda}\). Apply Lemma 3.8 to the maps \(\psi_{s}\) and \(\psi_{u}\) that define the stable and unstable Cantor sets, in order to obtain

\[HD(\tilde{\Lambda})=HD(K^{s}(\tilde{\Lambda}))+HD(K^{u}(\tilde{\Lambda}))<HD(K^{s}(\Lambda))+HD(K^{u}(\Lambda))=HD(\Lambda).\]

Given any \(t<\eta\), take \(n\in\mathbb{N}\) big enough that \(t<t_{n}\). Now, as \(\max f|_{\Lambda^{n}}<t_{n+1}\) and \(t_{n+1}<\max f|_{\Lambda^{n+1}}\), \(\Lambda^{n}\) is a proper subhorseshoe of \(\Lambda^{n+1}\), therefore

\[HD(k^{-1}(-\infty,t])=\frac{1}{2}HD(\Lambda_{t})\leq\frac{1}{2}HD(\Lambda^{n})<\frac{1}{2}HD(\Lambda^{n+1})\leq\frac{1}{2}HD(\Lambda_{t_{n+2}})\leq\frac{1}{2}HD(\Lambda_{\eta})=HD(k^{-1}(-\infty,\eta]).\]

Hence the map \(t\mapsto HD(k^{-1}(-\infty,t])\) is strictly increasing. As \(m\geq 1\) was arbitrary, we have the result for \(\eta\in(2+[0;\overline{1}]+[0;1,3,\overline{1,4}],\infty)\cap\overline{T}=(3.4109\ldots,\infty)\cap\overline{T}\) which is accumulated from the left by points of \(T\). For \(\eta\in(t_{1}^{*},3.4109\ldots]\cap\overline{T}\) accumulated from the left by points of \(T\), consider the horseshoe \(\Lambda=\Lambda(2)\) (note that \(\max f|_{\Lambda(2)}=\sqrt{12}>3.4109\ldots\)). As before, given \(t\in(t_{1}^{*},\eta)\cap T\), \(0<\epsilon<\eta-t\) and \(\delta>0\), we consider the set

\[\tilde{P}(t,\epsilon)=\bigcup_{\begin{subarray}{c}x\in\mathcal{X}:\ \tilde{\Lambda}_{x}\ is\\ subhorseshoe\end{subarray}}\tilde{\Lambda}_{x}=\bigcup_{i\in\mathcal{I}}\tilde{\Lambda}_{i}\cup\bigcup_{j\in\mathcal{J}}\tilde{\Lambda}_{j}\]

where, for \(i\in\mathcal{I}\), \(\tilde{\Lambda}_{i}\) connects with \(\xi\) before \(t+\epsilon/2+\delta\) and, for \(j\in\mathcal{J}\), \(\tilde{\Lambda}_{j}\) does not connect with \(\xi\) before \(t+\epsilon/2+\delta\). Once more, given \(j\in\mathcal{J}\), we will suppose that there is no \(x\in W^{u}(\tilde{\Lambda}_{j})\cap W^{s}(\xi)\) with \(m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2\). Following the above procedure, given \(k\in\mathbb{N}\) large enough, we construct the covers \(\mathcal{C}_{k}\) of \(K^{u}(\tilde{\Lambda}_{j})\) in such a way that, given \(I(a_{0};a_{1},\dots,a_{k})\in\mathcal{C}_{k}\), if \((a_{0},a_{1},\dots,a_{k})\) has continuations with forced first letter \(a_{k+1}\), we replace the interval \(I(a_{0};a_{1},\dots,a_{k})\) with the interval \(I(a_{0};a_{1},\dots,a_{k},a_{k+1})\).
On the other hand, suppose \((a_{0},a_{1},\dots,a_{k})\) has two continuations with different initial letters, say \(\gamma_{k+1}=(1,a_{k+2},\dots)\) and \(\beta_{k+1}=(2,a_{k+2}^{*},\dots)\). Take \(\alpha=(\alpha_{n})_{n\in\mathbb{Z}}\in\Pi(\tilde{\Lambda}_{j})\) and \(\tilde{\alpha}=(\tilde{\alpha}_{n})_{n\in\mathbb{Z}}\in\Pi(\tilde{\Lambda}_{j})\) such that \(\alpha=(\dots,\alpha_{-2},\alpha_{-1};a_{0},a_{1},\dots,a_{k},\gamma_{k+1})\) and \(\tilde{\alpha}=(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};a_{0},a_{1},\ldots,a_{k},\beta_{k+1})\). We affirm that \(a_{k+2}=1\), because otherwise, by Lemma 3.5, as \([0;\beta_{k+1}]<[0;\overline{1}]<[0;\gamma_{k+1}]\), we would have for all \(j\leq k\)

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},\overline{1}))<t+\epsilon/2+\delta/2,\]

and for \(j\geq k+1\)

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},\overline{1}))<3<t+\epsilon/2.\]

Then for \(x=\Pi^{-1}((\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},\overline{1}))\) one would get the contradiction

\[x\in W^{u}(\tilde{\Lambda}_{j})\cap W^{s}(\xi)\mbox{ and }m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2.\]

In a similar way, we must have \(a_{k+2}^{*}=2\), because if \(a_{k+2}^{*}=1\), then \([0;\beta_{k+1}]<[0;2,2,\overline{1}]<[0;\gamma_{k+1}]\) and by Lemma 3.5 we would have for all \(j\leq k\)

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},2,2,\overline{1}))<t+\epsilon/2+\delta/2,\]

for \(j=k+1\),

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},2,2,\overline{1}))=[0;\tilde{\alpha}_{k},\ldots,\tilde{\alpha}_{0},\tilde{\alpha}_{-1},\ldots]+2+[0;2,\overline{1}]\]
\[<[0;\tilde{\alpha}_{k},\ldots,\tilde{\alpha}_{0},\tilde{\alpha}_{-1},\ldots]+a_{k+1}^{*}+[0;a_{k+2}^{*},a_{k+3}^{*},\ldots]\]
\[=\lambda_{0}(\sigma^{k+1}(\ldots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},\beta_{k+1}))\leq m(\ldots,\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},\beta_{k+1})\leq t+\epsilon/2\]

and for \(j>k+1\),

\[\lambda_{0}(\sigma^{j}(\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},2,2,\overline{1}))<2+[0;\overline{1}]+[0;2,\overline{2,1}]=3.0406\ldots<t+\epsilon/2,\]

so taking \(x=\Pi^{-1}((\ldots,\tilde{\alpha}_{-2},\tilde{\alpha}_{-1};\tilde{\alpha}_{0},\ldots,\tilde{\alpha}_{k},2,2,\overline{1}))\) one would have

\[x\in W^{u}(\tilde{\Lambda}_{j})\cap W^{s}(\xi)\mbox{ and }m_{\varphi,f}(x)\leq t+\epsilon/2+\delta/2,\]

which is again a contradiction. In particular, in this case, we can refine the cover \(\mathcal{C}_{k}\) by replacing the interval \(I(a_{0};a_{1},\ldots,a_{k})\) with the intervals \(I(a_{0};a_{1},\ldots,a_{k},1,1)\) and \(I(a_{0};a_{1},\ldots,a_{k},2,2)\). By Lemma 3.5 we have the inequality

\[\left(\frac{|I(a_{1},\ldots,a_{k},1,1)|}{|I(a_{1},\ldots,a_{k})|}\right)^{0.49}+\left(\frac{|I(a_{1},\ldots,a_{k},2,2)|}{|I(a_{1},\ldots,a_{k})|}\right)^{0.49}=\left(\frac{1+r}{(2+r)(3+2r)}\right)^{0.49}+\left(\frac{1+r}{(5+2r)(7+3r)}\right)^{0.49}<\left(\frac{2}{2\times 3}\right)^{0.49}+\left(\frac{2}{5\times 7}\right)^{0.49}<0.9,\]

which lets us conclude that \(HD(\tilde{\Lambda}_{j})<0.99\) again.
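The constants and power sums used in these estimates are concrete enough to check by machine. The following Python sketch (our addition, purely illustrative and not part of the argument) evaluates the periodic continued fractions and the \(0.49\)-power sums displayed above, both for the \(\Lambda(m+3)\) refinement (worst case \(m=1\)) and for the \(\Lambda(2)\) refinement.

```python
def cf(digits):
    """Evaluate the finite continued fraction [0; d1, d2, ...]."""
    x = 0.0
    for a in reversed(digits):
        x = 1.0 / (a + x)
    return x

def cf_periodic(head, period, depth=60):
    """Approximate [0; head, period, period, ...] by truncating the periodic tail."""
    return cf((list(head) + list(period) * depth)[:depth])

# Tail constant in the Lambda(2) case: 2 + [0;1,1,...] + [0;2,2,1,2,1,...]
print(2 + cf_periodic([], [1]) + cf_periodic([2], [2, 1]))   # ~3.0406...

# 0.49-power sums for the six-interval refinement (m = 1, so j in {2, 3, 4}
# and (2j+1)(2j+3) runs over 5*7, 7*9, 9*11):
s1 = sum((1 / ((2 * j + 1) * (2 * j + 3))) ** 0.49 for j in (2, 3, 4))
s2 = sum((2 / ((2 * j + 1) * (2 * j + 3))) ** 0.49 for j in (2, 3, 4))
print(s1, s2)   # about 0.4117 and 0.5782: below 0.412 and 0.579, so s1 + s2 < 1

# 0.49-power sum for the two-interval refinement in the Lambda(2) case:
print((2 / (2 * 3)) ** 0.49 + (2 / (5 * 7)) ** 0.49)         # about 0.83 < 0.9
```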
The rest of the proof follows the same lines as the previous one. Finally, if \(\eta\in\overline{T}\) is accumulated from the right by points of \(T\), then, as before, we can consider (depending on the region to which \(\eta\) belongs) some horseshoe \(\Lambda=\Lambda(\eta)\). Take any strictly decreasing sequence \(\{t_{n}\}_{n\geq 1}\) of points of \(T\) such that \(\lim\limits_{n\to\infty}t_{n}=\eta\) and \(t_{1}<\max f|_{\Lambda}\); take also \(\epsilon>0\) small enough that \(HD(\Lambda_{\eta-\epsilon})>0.99\), and take any \(t_{0}\in(\eta-\epsilon,\eta)\). The techniques we developed then allow us to construct a sequence \(\{\Lambda^{n}\}_{n\geq 0}\) of subhorseshoes of \(\Lambda\) with the following properties:

1. \(\max f|_{\Lambda^{0}}<\eta\)
2. \(\max f|_{\Lambda^{1}}<\max f|_{\Lambda}\)
3. \(t_{n+1}<\max f|_{\Lambda^{n+1}}<t_{n},\ \forall n\geq 1\)
4. \(HD(\Lambda_{t_{n}})\leq HD(\Lambda^{n}),\ \forall n\geq 0\)
5. \(\Lambda^{0}\subset\Lambda^{n+1}\subset\Lambda^{n},\ \ \forall n\geq 1\).

Therefore, we can define a map

\[\theta:K^{u}(\Lambda^{0})\to k^{-1}(\eta),\quad a\mapsto\theta(a),\]

given by

\[\theta(a)=[a_{0};a_{1},\dots,a_{s(1)!},h_{1},a^{(1)},h_{2},a^{(2)},\dots,h_{n},a^{(n)},h_{n+1},\dots]\]

where

\[a=[a_{0};a_{1},a_{2},\dots]=[a_{0};a_{1},\dots,a_{s(1)!},a^{(1)},a^{(2)},\dots,a^{(n)},\dots]\]

and the sequences \(\{s(n)\}_{n\geq 1}\) and \(\{h_{n}\}_{n\geq 1}\) are defined as before and are such that \((a^{(n)},h_{n+1},a^{(n+1)},h_{n+2},\dots)\) appears in the kneading sequence of some point of \(\Lambda^{n+1}\). It is easy to see, using the construction of \(\theta\), that for every \(a\in K^{u}(\Lambda^{0})\), \(k(\theta(a))=\eta\) and, arguing as before, that \(\theta\) is a local homeomorphism with local Hölder inverse and exponent arbitrarily close to one. This lets us show that \(HD(k^{-1}(-\infty,\eta])=HD(k^{-1}(\eta))\). For the second part, Corollary 3.9 lets us conclude once more that \(HD(\Lambda^{n+1})<HD(\Lambda^{n})\) for \(n\geq 1\), and then that \(HD(k^{-1}(-\infty,\eta])<HD(k^{-1}(-\infty,t])\) for all \(t>\eta\), as we wanted to show.
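As a final illustration of the symbolic machinery above (our addition; the helper names are ours), one can check numerically that the Markov value of the \(2\)-periodic kneading word \((N,1)\), computed as \(\sup_{j}\lambda_{0}(\sigma^{j}\theta)\), agrees with the formula \(\max f|_{\Lambda(N)}=2B_{N}+N=\sqrt{N^{2}+4N}\) derived earlier for the horseshoe \(\Lambda(N)\).

```python
import math

def cf(digits):
    """Evaluate the finite continued fraction [0; d1, d2, ...]."""
    x = 0.0
    for a in reversed(digits):
        x = 1.0 / (a + x)
    return x

def lambda0(period, j, depth=80):
    """lambda_0(sigma^j theta) for the bi-infinite sequence theta obtained by
    repeating `period`: [0; a_{j+1}, a_{j+2}, ...] + a_j + [0; a_{j-1}, a_{j-2}, ...]."""
    n = len(period)
    right = [period[(j + 1 + k) % n] for k in range(depth)]
    left = [period[(j - 1 - k) % n] for k in range(depth)]
    return cf(right) + period[j % n] + cf(left)

for N in (2, 3, 5):
    markov_value = max(lambda0([N, 1], j) for j in range(2))
    print(N, markov_value, math.sqrt(N ** 2 + 4 * N))   # last two columns agree
```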
2309.09171
On the Connection Between Riemann Hypothesis and a Special Class of Neural Networks
The Riemann hypothesis (RH) is a long-standing open problem in mathematics. It conjectures that non-trivial zeros of the zeta function all have real part equal to 1/2. The extent of the consequences of RH is far-reaching and touches a wide spectrum of topics including the distribution of prime numbers, the growth of arithmetic functions, the growth of Euler totient, etc. In this note, we revisit and extend an old analytic criterion of the RH known as the Nyman-Beurling criterion which connects the RH to a minimization problem that involves a special class of neural networks. This note is intended for an audience unfamiliar with RH. A gentle introduction to RH is provided.
Soufiane Hayou
2023-09-17T05:50:12Z
http://arxiv.org/abs/2309.09171v1
# On the Connection Between Riemann Hypothesis and a Special Class of Neural Networks

###### Abstract

The Riemann hypothesis (\(\mathcal{RH}\)) is a long-standing open problem in mathematics. It conjectures that non-trivial zeros of the zeta function all lie on the line \(\text{Re}(z)=1/2\). The extent of the consequences of \(\mathcal{RH}\) is far-reaching and touches a wide spectrum of topics including the distribution of prime numbers, the growth of arithmetic functions, the growth of Euler's totient, etc. In this note, we revisit and extend an old analytic criterion of the \(\mathcal{RH}\) known as the Nyman-Beurling criterion which connects the \(\mathcal{RH}\) to a minimization problem that involves a special class of neural networks. This note is intended for an audience unfamiliar with \(\mathcal{RH}\). A gentle introduction to \(\mathcal{RH}\) is provided.

## 1 Introduction

The Riemann hypothesis conjectures that the non-trivial zeros of the Riemann zeta function are located on the line \(\text{Re}(z)=\frac{1}{2}\) in the complex plane \(\mathbb{C}\). This is a long-standing open problem in number theory, first formulated by Riemann (1859). The Riemann zeta function was first defined for complex numbers \(z\) with real part greater than \(1\) by \(\zeta(z)=\sum_{n=1}^{\infty}\frac{1}{n^{z}},z\in\mathbb{C},\text{Re}(z)>1\). However, it is the extension of the zeta function \(\zeta\) to the whole complex plane \(\mathbb{C}\) that is considered in the statement of \(\mathcal{RH}\). This extension is called the _analytic continuation_ of the zeta function (details are provided in Appendix A). There is strong empirical evidence that \(\mathcal{RH}\) holds. Recent numerical verification by Platt and Trudgian (2021) showed that \(\mathcal{RH}\) is at least true in the region \(\{z=a+ib\in\mathbb{C}:a\in(0,1),b\in(0,\gamma]\}\) where \(\gamma=3\cdot 10^{12}\), meaning that all zeros of the zeta function with imaginary parts in \((0,\gamma]\) have real part equal to \(\frac{1}{2}\). Several other theoretical insights seem to support \(\mathcal{RH}\); we invite the reader to check Appendix A for a short summary of relevant results and insights. In this note, we are interested in a specific criterion of the \(\mathcal{RH}\), i.e. an equivalent statement of \(\mathcal{RH}\). This criterion is known as the Nyman-Beurling criterion (Nyman, 1950; Beurling, 1955), which states that \(\mathcal{RH}\) holds if and only if a special class of functions is dense in \(L_{2}(0,1)\). This class of functions can be seen as a special kind of neural networks with one-dimensional input. In this note, we show that the sufficient condition can be easily extended to \(L_{2}((0,1)^{d})\). Specifically, we introduce a new class of neural networks and show that the density of this class in \(L_{2}((0,1)^{d})\), for any \(d\geq 2\), implies \(\mathcal{RH}\). The necessary condition in general dimension \(d\geq 2\) remains an open question.

## 2 Riemann Hypothesis

The Riemann zeta function was originally defined for complex numbers \(z\) with real part greater than \(1\) by

\[\zeta(z)=\sum_{n=1}^{\infty}\frac{1}{n^{z}},\quad z\in\mathbb{C},\text{Re}(z)>1. \tag{1}\]

The above definition of the Riemann zeta function excludes the region of interest \(\{z\in\mathbb{C}:\text{Re}(z)=\frac{1}{2}\}\) since the series in Eq. (1) diverges when \(\text{Re}(z)\leq 1\). Indeed, \(\mathcal{RH}\) is stated for an extension of the zeta function to the whole complex plane \(\mathbb{C}\).
This extension is called the analytic continuation, and it is unique by the Identity theorem (Walz, 2017). To give the reader some intuition of how such an extension is defined, let us show how we can extend \(\zeta\) to the region \(\{z\in\mathbb{C}:\text{Re}(z)>0\}\). Observe that the function \(\zeta\) satisfies the following identity

\[(1-2^{1-z})\zeta(z)=\sum_{n=1}^{\infty}\frac{1}{n^{z}}-2\sum_{n=1}^{\infty}\frac{1}{(2n)^{z}}=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^{z}},\]

where the right-hand side is defined for any complex number \(z\) such that \(\text{Re}(z)>0\). Using similar techniques, we can show that for any \(z\in\mathbb{C}\) such that \(\text{Re}(z)\in(0,1)\),

\[\zeta(z)=2^{z}\pi^{z-1}\sin\left(\frac{\pi z}{2}\right)\Gamma(1-z)\zeta(1-z), \tag{2}\]

which helps extend \(\zeta\) to complex numbers with negative real part. A step-by-step explanation of the analytic continuation of the \(\zeta\) function is provided in Appendix A.

Zeros of the \(\zeta\) function. From Eq. (2), we have \(\zeta(-2k)=0\) for any integer \(k\geq 1\). The negative even integers \(\{-2k\}_{k\geq 1}\) are thus called _trivial zeros_ of the Riemann zeta function, since the result follows from the simple fact that \(\sin\left(-\pi k\right)=0\) for all integers \(k\geq 1\). The other zeros of \(\zeta\) are called non-trivial zeros, and their properties remain poorly understood. The \(\mathcal{RH}\) conjectures that they all lie on the line \(\text{Re}(z)=\frac{1}{2}\).

Riemann Hypothesis (\(\mathcal{RH}\))._All non-trivial zeros of \(\zeta\) have a real part equal to \(\frac{1}{2}\)._

Whether \(\mathcal{RH}\) holds is still an open question. The consequences of the Riemann hypothesis are various (see Appendix A) and numerous equivalent results exist in the literature. In the next section, we revisit an old analytic criterion of \(\mathcal{RH}\) that involves a special type of functions that can be seen as single-layer neural networks.

### A _Neural Network_ Criterion for \(\mathcal{RH}\)

For \(p>1,d\in\mathbb{N}\backslash\{0\}\), and some set \(S\subset\mathbb{R}^{d}\), let \(L_{p}(S)\) denote the set of real-valued functions \(f\) defined on \(S\) such that \(|f|^{p}\) is Lebesgue integrable, i.e. \(L_{p}(S)=\{f:S\to\mathbb{R}:\int_{S}|f|^{p}d\mu<\infty\},\) where \(\mu\) is the Lebesgue measure on \(\mathbb{R}^{d}\). We denote by \(\|.\|_{p}\) the standard Lebesgue norm defined by \(\|f\|_{p}=\left(\int_{S}|f|^{p}d\mu\right)^{1/p}\) for \(f\in L_{p}(S)\). For some \(k\geq 1\), let \(I_{k}\stackrel{{ def}}{{=}}(0,1)^{k}=(0,1)\times\cdots\times(0,1)\) where the product contains \(k\) terms. Let \(\rho\) denote the fractional part function given by \(\rho(x)=x-\lfloor x\rfloor\) for \(x\in\mathbb{R}\). Consider the following class of functions defined on the interval \(I_{1}\):

\[\mathcal{N}=\{f(x)=\sum_{i=1}^{m}c_{i}\rho\left(\frac{\beta_{i}}{x}\right),x\in I_{1}:m\geq 1,c\in\mathbb{R}^{m},\beta\in I_{m},c^{T}\beta=0\}.\]

In machine learning nomenclature, \(\mathcal{N}\) consists of single-layer neural networks with a constrained parameter space and a specific non-linearity (or activation function) that depends on the fractional part \(\rho\). The parameters \((c,\beta)\) belong to the set \(\{c\in\mathbb{R}^{m},\beta\in(0,1)^{m},c^{T}\beta=0\}\). The values \((\rho(\beta_{i}/x))_{1\leq i\leq m}\) act as the neurons (post-activations) in the neural network. In Fig. 1, we depict neuron values for different choices of \(\beta_{i}\).
The graphs show fluctuations when \(x\) is close to \(0\), which is to be expected since the function \(x\to\rho(\beta_{i}/x)\) fluctuates indefinitely between \(0\) and \(1\) as \(x\) goes to zero, whenever \(\beta_{i}\neq 0\). In Fig. 1 (right), we show an example of a function from the class \(\mathcal{N}\) given by \(f(x)=\rho(0.7/x)-\rho(0.3/x)-4\rho(0.1/x)\). We observe that \(f\) is a step function, which might be surprising at first glance. However, it is easy to see that \(\mathcal{N}\) consists only of step functions. This is due to the constraint on the parameters \(c,\beta\), and the fact that \(\rho(x)=x-\lfloor x\rfloor\). Now, we are ready to state the main results that draw an interesting connection between \(\mathcal{RH}\) and the class \(\mathcal{N}\).

**Theorem 1** (Nyman (1950)): _The \(\mathcal{RH}\) is true if and only if \(\mathcal{N}\) is dense in \(L_{2}(I_{1})\)._

Beurling (1955) later extended this result by showing that for any \(p>1\), the \(\zeta\) function has no zeroes in the set \(\{z\in\mathbb{C}:\text{Re}(z)>1/p\}\) if and only if the set \(\mathcal{N}\) is dense in \(L_{p}(I_{1})\).

**Theorem 2** (Beurling (1955)): _The Riemann zeta function is free from zeros in the half plane \(Re(z)>\frac{1}{p},1<p<\infty\), if and only if \(\mathcal{N}\) is dense in \(L_{p}(I_{1})\)._

The intuition behind this connection is rather simple. The number of fluctuations of the function \(x\to\rho(\beta/x)\) near \(0\) is closely related to the \(\zeta\) function. To understand the machinery of the proofs of Theorem 1 and Theorem 2, we provide a sketch of the proof by Beurling (1955) for the sufficient condition in Appendix B. Using the same techniques, we derive the following result on zero-free regions of the zeta function.

**Lemma 1** (Nyman-Beurling zero-free regions): _Let \(f\in\mathcal{N}\) and \(\delta=\|1-f\|_{2}\) be the distance between the constant function \(1\) on \(I_{1}\) and \(f\). Then, the region \(\{z\in\mathbb{C},\text{Re}(z)>\frac{1}{2}\left(1+\delta^{2}|z|^{2}\right)\}\) is free of zeroes of the Riemann zeta function \(\zeta\)._

The condition that \(\mathcal{N}\) should be dense in \(L_{2}(I_{1})\) can be replaced by the following weaker condition: the constant function \(1\) on \(I_{1}\) can be approximated to arbitrary accuracy with functions from \(\mathcal{N}\). This is because from the constant function \(1\), one can construct an approximation of any step function, which in turn can approximate any function in \(L_{2}(I_{1})\). A discussion of the empirical implications of Theorem 1 is provided in Appendix B. In the next section, we show that the sufficient condition of Theorem 2 can be easily generalized to networks with multi-dimensional inputs, i.e. the case \(d\geq 2\).

## 3 A sufficient condition in the multi-dimensional case

Let \(d\geq 1\) and consider the following class of neural networks with inputs in \(I_{d}\):

\[\mathcal{N}_{d}=\{f(x)=\sum_{j=1}^{d}\sum_{i=1}^{m}c_{i,j}\rho\left(\frac{\beta_{i,j}}{x_{j}}\right),x\in I_{d}:m\geq 1,c\in\mathbb{R}^{m\times d},\beta\in I_{m\times d},c^{T}\beta=0\},\]

where \(c=(c_{1,1},c_{2,1},\ldots,c_{m,1},\ldots,c_{m,d})^{\top}\in\mathbb{R}^{md}\) is the flattened vector of \((c_{\cdot,j})_{1\leq j\leq d}\), and \(\beta\) is flattened in the same way, so that \(c^{T}\beta=0\) is a constraint on the flattened vectors. Notice that we recover the Nyman-Beurling class \(\mathcal{N}\) when \(d=1\).
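Before turning to the zero-free regions, here is a short sketch (our addition, not from the paper) of how members of \(\mathcal{N}_{d}\) can be evaluated numerically. It checks, on the one-dimensional example \(f(x)=\rho(0.7/x)-\rho(0.3/x)-4\rho(0.1/x)\) from the text, that the constraint \(c^{T}\beta=0\) holds and that \(f\) only takes integer values, i.e. that it is a step function.

```python
import numpy as np

def frac(t):
    """Fractional part rho(t) = t - floor(t)."""
    return t - np.floor(t)

def nd_network(X, c, beta):
    """Evaluate f(x) = sum_{j<=d} sum_{i<=m} c[i,j] * rho(beta[i,j] / x_j).

    X: points in (0,1)^d with shape (n, d); c, beta: arrays of shape (m, d).
    Membership in N_d additionally requires the flattened vectors to satisfy c . beta = 0."""
    assert abs(float(np.sum(c * beta))) < 1e-12, "constraint c^T beta = 0 violated"
    return np.sum(c * frac(beta / X[:, None, :]), axis=(1, 2))

# One-dimensional (d = 1) example from the text: c = (1, -1, -4), beta = (0.7, 0.3, 0.1).
c = np.array([[1.0], [-1.0], [-4.0]])
beta = np.array([[0.7], [0.3], [0.1]])
X = np.random.default_rng(0).uniform(0.02, 1.0, size=(6, 1))
print(nd_network(X, c, beta))   # integer values (up to rounding): f is a step function
```

For this \(d=1\) example the integer values are no accident: since \(c^{T}\beta=0\), the linear part \(\sum_{i}c_{i}\beta_{i}/x\) cancels and \(f\) reduces to \(-\sum_{i}c_{i}\lfloor\beta_{i}/x\rfloor\), which is integer-valued.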
Using this class, we can generalize the zero-free region result given by Lemma 1 to a multi-dimensional setting in the case \(p=2\).1

Footnote 1: The choice of \(p=2\) is arbitrary, and a similar result to that of Theorem 2 can be obtained for any \(p>1\).

**Lemma 2** (zero-free regions for general \(d\geq 1\)): _Let \(d\geq 1\) and \(f\in\mathcal{N}_{d}\). Let \(\delta=\|1-f\|_{2}\) be the \(L_{2}\) distance between the constant function \(1\) on \(I_{d}\) and \(f\). Then, the region \(\{z\in\mathbb{C},\mathrm{Re}(z)>\frac{1}{2}\left(1+\delta^{2/d}|z|^{2}\right)\}\) is free of zeroes of the Riemann zeta function \(\zeta\)._

In Fig. 2, we depict the zero-free regions from Lemma 2. The smaller the constant \(\delta\), the larger the region. The multi-dimensional input case (\(d\geq 2\)) can therefore be interesting if we can better approximate the constant function \(1\) with functions from \(\mathcal{N}_{d}\). More precisely, the result of Lemma 2 is relevant if, for some \(d\geq 2\), we could find \(\delta\) such that \(\delta^{2/d}<\delta_{1}^{2}\), where \(\delta_{1}\) is the approximation error in the one-dimensional case \(d=1\). In this case, the zero-free region obtained with \(d\geq 2\) will be larger than the one obtained with \(d=1\). We refer the reader to Section 4 for a more in-depth discussion about the empirical implications of the multi-dimensional case. Notice that if \(\delta\) can be chosen arbitrarily small, then the zero-free region in Lemma 2 can be extended to the whole half-plane \(\{\mathrm{Re}(z)>1/2\}\). This is a generalization of the sufficient condition of Theorem 2 in the multi-dimensional case.

**Corollary 3** (Sufficient condition for \(d\geq 1\)): _Let \(d\geq 1\). Assume that the class \(\mathcal{N}_{d}\) is dense in \(L_{2}(I_{d})\). Then, the region \(\{\mathrm{Re}(z)>1/2\}\) is free of the zeroes of the Riemann zeta function \(\zeta\)._

### Open problem: The necessary condition for \(d\geq 2\)

By considering the class \(\mathcal{N}_{d}\), we generalized the sufficient condition of Beurling's criterion to the multi-dimensional input case \(d\geq 2\). However, it is unclear whether a similar necessary condition holds. Proving that \(\mathcal{RH}\) implies the density of \(\mathcal{N}_{d}\) in \(L_{2}(I_{d})\) is challenging. A function \(f\in\mathcal{N}_{d}\) can be expressed as \(f(x)=\sum_{i=1}^{d}f_{i}(x_{i})\) for \(x=(x_{1},\ldots,x_{d})^{\top}\in I_{d}\), where the \(f_{i}\) are functions with one-dimensional inputs. This special additive form of functions from \(\mathcal{N}_{d}\) makes it harder to use arguments similar to the one-dimensional case (Theorem 2) to prove density results.

Figure 2: Zero-free regions of the form \(\{\mathrm{Re}(z)>\frac{1}{2}(1+\Delta|z|^{2})\}\) as stated in Lemma 1, Lemma 2, and Lemma 4.

## 4 Discussion on the Implications and Limitations

In this section, we discuss some empirical implications of Lemma 1 and Lemma 2.

Probabilistic zero-free regions. Notice that Lemmas 1 and 2 require access to the distance \(\|1-f\|_{2}\), which is generally intractable. However, we can approximate this quantity using Monte Carlo samples and obtain high-probability bounds for this norm. Hence, the best we can do with such a criterion is to verify the non-existence of zeroes of \(\zeta\) in some region _with high probability_. Indeed, using Hoeffding's inequality, we have the following result.

**Lemma 4**: _Let \(d\geq 1\), \(N\geq 1\) and \(X_{1},X_{2},\ldots,X_{N}\) be iid uniform random variables on \(I_{d}\)._
Let \(f\in\mathcal{N}_{d}\) (where for \(d=1\), we denote \(\mathcal{N}_{d}=\mathcal{N}\)) such that \(f(x)=\sum_{j=1}^{d}\sum_{i=1}^{m}c_{i,j}\rho\left(\frac{\beta_{i,j}}{x_{j}}\right)\) for all \(x\in I_{d}\), for some \(m\geq 1,\beta\in I_{m\times d},c\in\mathbb{R}^{m\times d}\). Then, for any \(\alpha\in(0,1)\), with probability at least \(1-\alpha\), the region \(R_{N}\stackrel{\text{def}}{=}\{\text{Re}(z)>\frac{1}{2}\left(1+\Delta_{N}(f)^{1/d}|z|^{2}\right)\}\) is free of the zeroes of \(\zeta\), where \(\Delta_{N}(f)=\frac{1}{N}\sum_{i=1}^{N}(1-f(X_{i}))^{2}+(1+\|c\|_{1}^{2})\sqrt{\frac{2\log(2/\alpha)}{N}}\), with \(\|c\|_{1}=\sum_{i=1}^{m}\sum_{j=1}^{d}|c_{i,j}|\)._

The proof follows from a simple application of Hoeffding's concentration inequality to control the deviations of the empirical risk \(N^{-1}\sum_{i=1}^{N}(1-f(X_{i}))^{2}\). Hoeffding's lemma requires that the random variables \((1-f(X_{i}))^{2}\) are bounded, which is straightforward since \((1-f(X_{i}))^{2}\leq 2(1+f(X_{i})^{2})\leq 2(1+\|c\|_{1}^{2})\) almost surely. The result of Lemma 4 has an important implication for the choice of the sample size. Indeed, to have the coefficient \(\Delta_{N}(f)^{1/d}\) of order \(\epsilon\) with high probability, a necessary condition is that \(N\) grows at least as fast as \(\epsilon^{-2d}\).

**When is the multi-dimensional variant better than the one-dimensional criterion?** For some \(d\geq 2\), the multi-dimensional criterion given in Lemma 2 improves on the one given in Lemma 1 only if \(\inf_{f\in\mathcal{N}_{d}}\|1-f\|_{2}^{1/d}<\inf_{f\in\mathcal{N}}\|1-f\|_{2}\). Under this condition, the zero-free region is larger with \(d\geq 2\). For empirical verification of the \(\mathcal{RH}\) and for the same probability threshold \(\alpha\), Lemma 4 implies that the multi-dimensional setting is better than the one-dimensional counterpart whenever \(\inf_{f\in\mathcal{N}_{d}}\Delta_{N}(f)^{1/d}<\inf_{f\in\mathcal{N}}\Delta_{N}(f)\). We discuss the feasibility of such conditions in the next paragraph.

**What does it take to improve upon existing numerical verifications of \(\mathcal{RH}\)?** The high-probability zero-free regions from Lemma 4 are of the form \(\{\text{Re}(z)>\frac{1}{2}(1+\Delta|z|^{2})\}\) for some constant \(\Delta>0\). Using a different analytical criterion for the \(\mathcal{RH}\), Platt and Trudgian (2021) showed that the region \(\{a+ib:a\in(0,1),b\in(0,\gamma],\gamma\approx 3\cdot 10^{12}\}\) is free of the zeroes of \(\zeta\). Hence, using Lemma 4 to improve this result requires that the region \(R_{N}\cap\{a+ib,a\in(0,1),b\in(0,\gamma]\}\) contains complex numbers \(z\) with imaginary part larger than order \(10^{12}\). Let \(z=a+ib\in\mathbb{C}\). Having \(z\in R_{N}\) implies that \(b^{2}<-a^{2}+\Delta_{N}(f)^{-1/d}(2a-1)\). For the region of interest where \(a\in(0,1)\), and assuming that \(\Delta_{N}(f)\) is small enough, the right-hand side is of order \(\Delta_{N}(f)^{-1/d}(2a-1)\), which is maximized for \(a=1\) and equal to \(\Delta_{N}(f)^{-1/d}\). Thus, to improve upon existing work (Platt and Trudgian, 2021) (at least with some high-probability certificate), we need \(b^{2}\), and hence \(\Delta_{N}(f)^{-1/d}\), to be of order \(10^{24}\), which means that \(\Delta_{N}(f)^{1/d}\) should be at most of order \(10^{-24}\). This requires minimizing the empirical risk \(N^{-1}\sum_{i=1}^{N}(1-f(X_{i}))^{2}\) down to order \(10^{-24d}\), and since the Hoeffding term alone decays as \(N^{-1/2}\), this forces a minimum sample size of order \(10^{48d}\), which is infeasible with current compute resources.
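As a rough illustration of how such a certificate could be computed in practice, the sketch below draws Monte Carlo samples, evaluates the empirical risk, adds the Hoeffding term of Lemma 4, and tests whether a given point lies in the resulting region \(R_{N}\). This is a minimal sketch under our own naming (`delta_N`, `in_R_N`); it is not code from the paper.

```
import numpy as np

rho = lambda x: x - np.floor(x)                       # fractional part

def delta_N(f, c, N, d=1, alpha=0.05, seed=0):
    """Certificate Delta_N(f) of Lemma 4: empirical L2 risk on N iid
    uniform samples on I_d plus the Hoeffding deviation term at level alpha."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(1e-12, 1.0, size=(N, d))          # avoid division by zero at 0
    risk = np.mean((1.0 - f(X)) ** 2)                 # estimates ||1 - f||_2^2
    hoeffding = (1.0 + np.abs(c).sum() ** 2) * np.sqrt(2.0 * np.log(2.0 / alpha) / N)
    return risk + hoeffding

def in_R_N(z, Delta, d=1):
    """Membership test for R_N = {Re(z) > (1 + Delta^(1/d) * |z|^2) / 2}."""
    return z.real > 0.5 * (1.0 + Delta ** (1.0 / d) * abs(z) ** 2)

# d = 1 example with the step function of Fig. 1 (right):
beta = np.array([0.7, 0.3, 0.1])
c = np.array([1.0, -1.0, -4.0])                       # satisfies c^T beta = 0
f = lambda X: rho(beta / X) @ c                       # X has shape (N, 1)

Delta = delta_N(f, c, N=100_000)
print(Delta, in_R_N(0.9 + 5.0j, Delta))
```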
2302.14534
Spacerini: Plug-and-play Search Engines with Pyserini and Hugging Face
We present Spacerini, a tool that integrates the Pyserini toolkit for reproducible information retrieval research with Hugging Face to enable the seamless construction and deployment of interactive search engines. Spacerini makes state-of-the-art sparse and dense retrieval models more accessible to non-IR practitioners while minimizing deployment effort. This is useful for NLP researchers who want to better understand and validate their research by performing qualitative analyses of training corpora, for IR researchers who want to demonstrate new retrieval models integrated into the growing Pyserini ecosystem, and for third parties reproducing the work of other researchers. Spacerini is open source and includes utilities for loading, preprocessing, indexing, and deploying search engines locally and remotely. We demonstrate a portfolio of 13 search engines created with Spacerini for different use cases.
Christopher Akiki, Odunayo Ogundepo, Aleksandra Piktus, Xinyu Zhang, Akintunde Oladipo, Jimmy Lin, Martin Potthast
2023-02-28T12:44:10Z
http://arxiv.org/abs/2302.14534v2
# Spacerini: Plug-and-play Search Engines with Pyserini and Hugging Face

###### Abstract.

We present Spacerini, a modular framework for the seamless building and deployment of interactive search applications, designed to facilitate the qualitative analysis of large-scale research datasets. Spacerini integrates features from both the Pyserini toolkit and the Hugging Face ecosystem to ease the indexing of text collections and their deployment as search engines for ad-hoc exploration, and to make the retrieval of relevant data points quick and efficient. The user-friendly interface enables searching through massive datasets in a no-code fashion, making Spacerini broadly accessible to anyone looking to qualitatively audit their text collections. This is useful both to IR researchers aiming to demonstrate the capabilities of their indexes in a simple and interactive way, and to NLP researchers looking to better understand and audit the failure modes of large language models. The framework is open source and available on GitHub: [https://github.com/castorini/hf-spacerini](https://github.com/castorini/hf-spacerini), and includes utilities to load, pre-process, index, and deploy local and web search applications. A portfolio of applications created with Spacerini for a multitude of use cases can be found by visiting [https://hf.co/spacerini](https://hf.co/spacerini).

information retrieval, search interface, text auditing
## 2. Background

Large scale, predominantly web-mined text datasets have been proliferating in NLP recently, giving rise to publications [10, 17, 25, 27] which often contain interesting analyses of the specific datasets being presented but usually lack any comparison to existing resources beyond basic metrics such as dataset sizes or the languages they contain. As discussed in Section 1, in the face of increased scrutiny of the models trained on the datasets in question, the topic of data understanding and governance has been gaining more traction, being accepted as an important part of research. Efforts such as those of Mitchell et al. [21] contribute frameworks for more standardised and reproducible metrics and measurements of datasets, and we position ourselves as a complementary continuation of their work, focusing on a more curatorial and qualitative assessment that might not readily fit under the umbrella of "measurements". We therefore aim to fill the gap in the evaluation landscape by facilitating qualitative, rather than quantitative, analysis of large scale data collections. Similarly to the authors of Gradio [1], a Python package for fast development of Machine Learning demos, we believe that the accessibility of data and model analysis tools is crucial to building both the understanding of and the trust in the underlying resources. The potential of relevance-based interfaces to massive textual corpora, the creation of which can be facilitated by leveraging toolkits such as Pyserini [20], has previously been tapped into by researchers at the Allen Institute for AI, who propose a C4 [27] search engine1. Similar interfaces have also been found useful in more specialised domains, e.g. in COVID-related datasets [33], news quotes [30], or medical literature [24]. However, while these solutions are undeniably useful, they remain very contextual: dataset-specific and project-specific. We believe Spacerini to be the first generalizable tool which proposes an end-to-end pipeline automating the route from raw text to qualitative analysis.

Footnote 1: [https://c4-search.apps.allenai.org/](https://c4-search.apps.allenai.org/)

Footnote 2: gradio-demo.py

## 3. Spacerini

Spacerini is a modular framework that integrates Pyserini with the Hugging Face ecosystem to streamline the process of going from any Hugging Face text dataset--either local or hosted on the Hugging Face Hub--to a search interface driven by a Pyserini index that can be deployed for free on the Hugging Face Hub. In what follows, we deconstruct an example script3 to showcase the different features enabled by Spacerini.
When run end-to-end, the script pulls a dataset from the Hugging Face Hub, pre-processes it, indexes it, creates a Gradio-based search interface, and pushes that to the Web as a Hugging Face Spaces demo. This is only meant as a feature-complete demo; we don't expect most people to want to integrate every step into their workflows, but rather to cherry-pick and decide what best to use depending on context.

### Loading Data

All our workflows are backed by the Hugging Face datasets library [19], itself based on the extremely efficient Apache Arrow format. Datasets is a mature library which provides a standardized interface to any tabular dataset, in particular to tens of thousands of community datasets hosted on the Hugging Face Hub4. The datasets library gives fine-grained control over the lifecycle of tabular datasets, which we choose to abstract away through a set of opinionated data loading functions that cover the use cases we deem relevant to information retrieval. We also add new functionality, such as the ability to load any document dataset from the ir_datasets library; for example, MS MARCO can be loaded as a Hugging Face datasets.Dataset object with a single call to a function from the data subpackage (a hedged sketch of such a call is given below). We include wrappers to load database tables, pandas DataFrames [26, 32], and text datasets on disk, as well as the ability to load any dataset either in memory-mapped mode or in streaming mode: the former makes it possible to handle larger-than-memory datasets, and the latter larger-than-disk datasets that can be streamed from a remote location such as the Hugging Face Hub.

Footnote 4: [https://hf.co/datasets](https://hf.co/datasets)
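The loading one-liner referenced above did not survive extraction; the following is a hedged sketch of what it could look like. The function name `load_ir_dataset` and its signature are our assumptions, not confirmed API; consult the Spacerini repository for the actual call.

```
# Hedged sketch of the ir_datasets loading one-liner referenced above.
# `load_ir_dataset` and its signature are assumptions, not confirmed API.
from spacerini.data import load_ir_dataset  # hypothetical helper

msmarco = load_ir_dataset("msmarco-passage")  # a Hugging Face datasets.Dataset
```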
### Pre-processing

Spacerini also provides a preprocess subpackage which offers a range of customizable pre-processing options for preparing datasets. This module includes a sharding utility that enables the partitioning of large datasets into smaller, more manageable chunks for efficient parallel processing.

```
from spacerini.preprocess import shard_dataset

shard_dataset(
    hf_dataset=hf_dset,
    shard_size="1GB",
    column_to_index="text",
    shards_path="msmarco-shards",
)
```

Figure 1. Search interface for the Extreme Summarization (xsum) dataset deployed on Hugging Face Spaces using Spacerini.

### Indexing

Spacerini's index subpackage leverages Pyserini to provide very efficient Lucene indexing and allows users to easily and quickly index large datasets: shards produced in the pre-processing step, any text format accepted by Pyserini, or streaming text datasets such as those returned by Spacerini's data subpackage. This subpackage also exposes several tokenization options using existing language-specific analyzers4 as well as Hugging Face subword tokenizers [22].

Footnote 4: [https://lucene.apache.org/core/9.5_0/analysis/common/index.html](https://lucene.apache.org/core/9.5_0/analysis/common/index.html)

```
from spacerini.index import index_json_shards

index_json_shards(
    shards_path="msmarco-shards",
    index_path="app/index",
)
```

### Template-based Search Interfaces

Having indexed a collection, one can easily spin up a frontend using the frontend subpackage and one of many provided templates5. These are built using cookiecutter,6 a Python template library for software projects. We provide a few batteries-included frontend templates based on both the Gradio7 and Streamlit8 demo app frameworks, both of which are natively supported by Hugging Face Spaces. Figure 1 showcases a search engine built using one of our Streamlit templates.

Footnote 5: [https://github.com/castorini/hf-spacerini/tree/main/templates](https://github.com/castorini/hf-spacerini/tree/main/templates)

Footnote 6: [https://github.com/cookiecutter/cookiecutter](https://github.com/cookiecutter/cookiecutter)

```
from spacerini.frontend import create_app

cookiecutter_vars = {
    "dset_text_field": "text",
    "metadata_field": "docid",
    "space_title": "MS MARCO Search",
    "local_app": "app",
}

create_app(
    template="gradio-vanilla",
    extra_context_dict=cookiecutter_vars,
    output_dir=".",
)
```

### Deployment to Hugging Face Spaces

The local apps developed in the previous subsection can then be pushed to Hugging Face Spaces and hosted there for free. One can then further customize the running app from the browser, for example to add functionality not provided by the chosen template.

```
from spacerini.frontend import create_space_from_local

create_space_from_local(
    space_slug="msmarco-passage-search",
    organization="spacerini",
    space_sdk="gradio",
    local_dir=LOCAL_APP,
    delete_after_push=True,
)
```

### Sharing Indexes as Hugging Face Datasets

Orthogonal to the workflow presented so far is the ability to upload Lucene indexes to the Hugging Face Hub as shareable dataset repositories, enabling reproducible retrieval experiments.

```
from spacerini.index import push_index_to_hub

push_index_to_hub(
    dataset_slug="lucene-english-analyzer-msmarco",
    index_path="index",
)
```

Any index can then just as easily be downloaded for local use:

```
from spacerini.index import load_index_from_hub

index_path = load_index_from_hub("lucene-fr-analyzer-")
```

### Search and Pagination

Search features are provided by the search subpackage and leverage the memory-mapping feature of Arrow tables to load the entire table of results--no matter how big--only materializing the specific shard that corresponds to the requested result page.

```
from spacerini.search import result_indices, result_page

ix = result_indices(
    "Lorem Ipsum",
    INDEX_PATH,
    num_results=1_000,
)

last_results_page = result_page(
    hf_dset,
    ix,
    page=-1,
    results_per_page=20,
).to_pandas()
```

## 4. Use Cases and Examples

Spacerini is designed to enable qualitative analysis of large-scale textual corpora without the need for extensive engineering work. The tool can be used in dataset auditing campaigns, such as those carried out by Kreutzer et al. [15], or in data annotation efforts. It may be applied to inspect failures of large-scale language model predictions and find potential sources of memorized generations. AI Ethics researchers can employ the tool to find evidence supporting their hypotheses about the content of the models' training datasets. Given its tight integration with Pyserini, Spacerini can also be leveraged by IR researchers to experiment with modifications of their retrieval pipelines in user studies (e.g. by exposing retrieval options such as BM25 and RM3 parameters in the user interface) or to deploy demos of their working prototypes. Reproducibility of IR experiments is further enhanced thanks to the index sharing abilities introduced in Section 3.6. Spacerini can also be leveraged by digital humanists, archivists, and librarians looking to index their collections. Indeed, GLAM (galleries, libraries, archives, and museums) collections are increasingly being made available as datasets.
Furthermore, there is a growing interest in the digital humanities in training and using language models, as demonstrated by the success of projects such as the _BERT for Humanists Project_9. This makes it especially pressing to have easy ways to critically examine the data that goes into such models. In the context of the digital and computational humanities, indexing data relevant to these efforts is often not an easy task; it is typically project-based and contingent on precarious funding arrangements. Having a project-agnostic tool like Spacerini could prove valuable to this community and a useful addition to toolkits such as the GLAM Workbench [28].

Footnote 9: [https://www.bertforhumanists.org/](https://www.bertforhumanists.org/)

Footnote 10: [https://docs.alephdata.org/](https://docs.alephdata.org/)

Given its low engineering barrier to entry, Spacerini can be a good addition to IR courses with a practical component, where students are tasked with developing search engines, by providing an easy-to-deploy interface for their retrieval systems. Spacerini can also be leveraged by organizers of shared tasks such as MIRACL [34] and Touché [6], who want to help participants explore the datasets without forcing them to download large volumes of data. It can also be used as a platform for participants to deploy working prototypes with a unified interface. Spacerini can help data journalists and digital investigative journalists index, explore, and understand open data, in a similar vein to the functionality provided by the Aleph suite.11 Providing technical tools to data journalists is crucial in uncovering matters of public interest, as was evident from the role played by the collaborative use of the Neo4j graph database in unraveling the corrupt networks surrounding tax havens [7].

Finally, three features of Hugging Face Spaces make them especially attractive for users: (1) they can leverage private datasets, meaning that one can provide search access to a dataset without sharing the underlying data, (2) they can be seamlessly embedded into HTML--specifically Gradio-based Spaces, which can be embedded as _Web Components_12 so that users can easily integrate a Spacerini-based search feature into their own sites13, and (3) Gradio-based Spaces expose a FastAPI14 endpoint that can be queried to access the functionality of the space, making deployed search engines accessible through HTTP calls.

Footnote 11: [https://developer.mozilla.org/en-US/docs/Web/Web_Components](https://developer.mozilla.org/en-US/docs/Web/Web_Components)

Footnote 12: For example: [https://scikit.github.io/search-engine/](https://scikit.github.io/search-engine/)

Footnote 13: [https://github.com/dimogloglistari](https://github.com/dimogloglistari)

Footnote 14: [https://huggingface.co/docs/hub/spaces-overview](https://huggingface.co/docs/hub/spaces-overview)

## 5. Limitations and Future Plans

The main limitation of the off-the-shelf variant of Spacerini is the disk space limit imposed by Hugging Face Spaces, which is currently set to 50 GB.15 While this is not enough to accommodate entire corpora such as ROOTS or The Pile, such large datasets are most often amalgamations of constituent datasets which can each be studied independently. This limit has no bearing on Spacerini search apps deployed locally. Should users still want more disk space for their Spaces-hosted indexes, they are welcome to apply for community grants offered by Hugging Face.
Planned improvements to the library include automating the creation of dataset cards (or rather "index cards") when pushing an index to the Hugging Face Hub, more support for the dense indexing features provided by Pyserini, as well as more fine-grained support of document tokenization. We also look forward to community contributions both to the codebase and to the frontend templates.

## 6. Conclusion

We presented Spacerini, a modular framework that enables the quick and free deployment and serving of template-based search indexes as interactive applications. The need for such a tool is especially pressing as large language models have come to consume inordinate amounts of text data, reinforcing the need for a qualitative exploration and understanding of datasets to assess them in a way that is not possible with quantitative analyses alone. Spacerini leverages features from both the Pyserini toolkit and the Hugging Face ecosystem to facilitate the creation and hosting of user-friendly search systems for text datasets. Users can easily index their collections and deploy them as ad-hoc search interfaces, making the retrieval of relevant data points a quick and efficient process. The user-friendly interface enables non-technical users to effectively search massive datasets, making Spacerini a valuable tool for anyone looking to audit their text collections qualitatively. The framework is open-source and available on GitHub: [https://github.com/castorini/hf-spacerini](https://github.com/castorini/hf-spacerini). Finally, we emphasize that Spacerini is a first step in the direction of systematic dataset auditing, and more work is still needed to create standardized structures that leverage tools such as Spacerini to properly document the different axes of interest that are appropriate for a given usage context.

## Acknowledgments

We are grateful to Daniel van Strien, Yuvraj Sharma, Omar Sanseviero, Julien Chaumond, Lucain Pouget, Abubakar Abid, and Leandro von Werra for their tireless support and invaluable advice throughout this project.
2305.19909
The effect of spatial sampling on the resolution of the magnetostatic inverse problem
In magnetoencephalography, linear minimum norm inverse methods are commonly employed when a solution with minimal a priori assumptions is desirable. These methods typically produce spatially extended inverse solutions, even when the generating source is focal. Various reasons have been proposed for this effect, including intrinsic properties of the minimum norm solution, effects of regularization, noise, and limitations of the sensor array. In this work, we express the lead field in terms of the magnetostatic multipole expansion and develop the minimum-norm inverse in the multipole domain. We demonstrate the close relationship between numerical regularization and explicit suppression of spatial frequencies of the magnetic field. We show that the spatial sampling capabilities of the sensor array and regularization together determine the resolution of the inverse solution. For the purposes of stabilizing the inverse estimate, we propose the multipole transformation of the lead field as an alternative or complementary means to purely numerical regularization.
Jussi Nurminen, Andrey Zhdanov, Wan Jin Yeo, Joonas Iivanainen, Julia Stephen, Amir Borna, Jim McKay, Peter D. D. Schwindt, Samu Taulu
2023-05-31T14:44:22Z
http://arxiv.org/abs/2305.19909v1
# The effect of spatial sampling on the resolution of the magnetostatic inverse problem

###### Abstract

In magnetoencephalography, linear minimum norm inverse methods are commonly employed when a solution with minimal a priori assumptions is desirable. These methods typically produce spatially extended inverse solutions, even when the generating source is focal. Various reasons have been proposed for this effect, including intrinsic properties of the minimum norm solution, effects of regularization, noise, and limitations of the sensor array. In this work, we express the lead field in terms of the magnetostatic multipole expansion and develop the minimum-norm inverse in the multipole domain. We demonstrate the close relationship between numerical regularization and explicit suppression of spatial frequencies of the magnetic field. We show that the spatial sampling capabilities of the sensor array and regularization together determine the resolution of the inverse solution. For the purposes of stabilizing the inverse estimate, we propose the multipole transformation of the lead field as an alternative or complementary means to purely numerical regularization.

## 1 Introduction

Non-invasive neuroimaging is the preferred method for investigation of the anatomy and dynamics of the human brain, as invasive recordings are only feasible during certain clinical procedures. However, images obtained in a non-invasive fashion are always limited to reconstruction based on measurements external to the head. Such reconstructions are prone to errors and imprecision due to e.g. measurement noise, instrument errors, and the assumptions made in the reconstruction algorithms. Magnetoencephalography (MEG) (_1_) allows the measurement of magnetic fields generated by neural currents with a high temporal resolution, but the spatial resolution of the reconstructed currents is limited by the mechanisms mentioned above (_2_). Interestingly, even in the absence of noise and instrumentation errors, the spatial resolution of the reconstructed images would still be limited by the discrete spatial sampling of the magnetic field. The sampling capabilities of multichannel MEG systems have been investigated from different points of view. Ahonen et al. (_3_) considered the spatial aliasing of the recorded magnetic field in the presence of measurement noise. Kemppainen and Ilmoniemi (_4_) and Nenonen et al. (_5_) applied total information based on Shannon's theory of communication to compare MEG sensor arrays. Vrba et al. (6) and Tierney et al. (7) studied the effect of the number of channels on source localization. Iivanainen et al. (8) analyzed the effects of sampling on the spatial frequency content of the measured fields, and derived an information-maximizing algorithm for optimizing sensor placement. For estimating the source distribution generating the MEG signal, methods based on the minimum norm estimate (MNE) (9, 10) are widely used. They are known to result in significant blurring of point-like source distributions, which may be quantified in terms of e.g. point-spread functions and spatial resolution or dispersion. Previous work has attributed the limited spatial resolution of the MNE inverse to various causes such as regularization, noise, the sampling provided by the sensor array, and the intrinsic properties of the inverse method (11-15). However, the relative contributions of these factors remain unknown.
MNE-based methods depend on the properties of the MEG lead field matrix, which expresses the signal space topography of each elementary cortical source; the estimate is then a combination of these elementary sources (9). In this work, we apply the magnetostatic multipole expansion (16) to transform the lead field from the sensor domain to the multipole domain, where field components are ordered hierarchically by their spatial frequency. In this way, the spatial properties of the lead field matrix can be explicitly controlled and related to the sampling capabilities of the sensor array. We contrast this approach with the traditionally employed numerical regularization of the lead field matrix, which corresponds to implicit spatial filtering.

## 2 Multipole expansion of the magnetic field

As a foundation, we describe the basic properties of the magnetostatic multipole expansion (16, 17). In neuromagnetic measurements, we can define a bounded region that contains the magnetic field sources of interest. Outside this region, assumed to be free of sources, the magnetic field can be expanded as

\[\vec{B}(\vec{r})=\sum_{l=1}^{\infty}\sum_{m=-l}^{l}\alpha_{lm}\frac{\vec{\nu}_{lm}}{r^{l+2}}, \tag{1}\]

where \(\alpha_{lm}\) are the expansion coefficients, \(\vec{\nu}_{lm}\) are modified vector spherical harmonics (VSH) (18), and \(r\) is the distance from the chosen expansion origin. The integers \(l\) and \(m\) are the degree and order of the harmonic function, respectively. VSH terms with increasing \(l\) represent progressively higher spatial frequencies. Changing from continuous fields to a discrete sensor space representation, i.e. evaluating both sides of eq. 1 at a finite set of sensor locations, we obtain

\[\boldsymbol{\phi}=\sum_{l=1}^{\infty}\sum_{m=-l}^{l}\alpha_{lm}\boldsymbol{a}_{lm}. \tag{2}\]

Here \(\boldsymbol{\phi}\) is an \(N\)-dimensional signal space vector representing the field values measured by the \(N\) sensors. \(\boldsymbol{a}_{lm}\) are the \(N\)-dimensional multipole basis vectors, usually computed by integration of the continuous basis functions \(\vec{\nu}_{lm}/r^{l+2}\) over the sensor geometries. If we also limit the sum to \(l\leq L\), we may express eq. 2 in matrix form as

\[\boldsymbol{\phi}=\boldsymbol{S}\boldsymbol{x}, \tag{3}\]

where

\[\mathbf{S}=[\mathbf{a}_{1,-1}\ \mathbf{a}_{1,0}\ \mathbf{a}_{1,1}\ \ldots\ \mathbf{a}_{L,L}] \tag{4}\]

\[\mathbf{x}=[\alpha_{1,-1}\ \alpha_{1,0}\ \alpha_{1,1}\ \ldots\ \alpha_{L,L}]^{\intercal}. \tag{5}\]

The estimated multipole components \(\mathbf{\hat{x}}\) corresponding to a given magnetic field measurement \(\mathbf{\phi}^{\prime}\) can then be obtained e.g. via the Moore-Penrose pseudoinverse as

\[\mathbf{\hat{x}}=\mathbf{S}^{\dagger}\mathbf{\phi}^{\prime}. \tag{6}\]

In this way, the measurement is expressed as a linear superposition of field components oscillating at progressively higher spatial frequencies. Due to the \(r^{-l-2}\) amplitude decay term in eq. 1, components with higher \(l\) value fall off faster with increasing distance from the source region. In practice, this means that components with increasingly high spatial frequencies become progressively weaker at the sensors. This is also the justification for truncating the representation of the signals to a finite value \(l\leq L\), beyond which the components cannot be reliably measured due to insufficient signal-to-noise ratio (SNR). Due to the typical geometry of the measurement, the VSH expansion is well suited for MEG.
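To make eqs. 3-6 concrete, here is a minimal numerical sketch with a random placeholder basis matrix \(\boldsymbol{S}\); real applications would instead compute the columns \(\boldsymbol{a}_{lm}\) by integrating the VSH basis functions over the sensor geometries, which this sketch does not attempt.

```
import numpy as np

# Toy illustration of eqs. 3-6: with a placeholder (here random) multipole
# basis matrix S of shape (n_sensors x n_multipole), a noise-free measurement
# phi = S x is inverted for the coefficients via the Moore-Penrose pseudoinverse.
rng = np.random.default_rng(0)
n_sensors, L = 306, 8
n_multipole = L * (L + 2)                # number of (l, m) pairs with l <= L

S = rng.standard_normal((n_sensors, n_multipole))
x_true = rng.standard_normal(n_multipole)
phi = S @ x_true                         # eq. 3: phi = S x

x_hat = np.linalg.pinv(S) @ phi          # eq. 6: x_hat = S^dagger phi
assert np.allclose(x_hat, x_true)        # exact recovery in the noise-free case
```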
The expansion origin can be chosen so that all neural currents are located inside a spherical volume whose radius is the distance from the expansion origin to the nearest sensor [16].

## 3 The inverse problem

### The minimum norm pseudoinverse

In the following, we denote the \(N\)-dimensional signal space vector comprising the signals from all sensors as \(\mathbf{\phi}\), and the \(M\)-dimensional vector comprising the amplitudes of all possible sources as \(\mathbf{s}\). The sources have fixed orientation and location, and are typically obtained by discretization of the cerebral source volume. Due to the linear superposition principle, the total signal space vector resulting from the \(M\) sources can be written as

\[\mathbf{\phi}=\mathbf{\Gamma}\mathbf{s}, \tag{7}\]

where \(\mathbf{\Gamma}\) is the \(N\times M\)-dimensional lead field matrix, containing the signal space vectors corresponding to the individual sources. The lead field matrix depends on the source locations and orientations, as well as the choice of the forward model, such as a spherical or a boundary-element model. The inversion of this forward equation seeks an estimate \(\mathbf{\hat{s}}\) of the true source distribution \(\mathbf{s}\). It is well known that the magnetic field outside the head, even if we are able to characterize it fully, gives us only limited information about the neuronal current distribution, and thus the inverse problem cannot be solved uniquely. Further, our measurement \(\mathbf{\phi}\) is always limited by noise and the instrument sampling capabilities. Broadly, there are two categories of inverse methods: the first category imposes strong a priori assumptions on the source distribution (e.g. assuming a single focal source), while the second category does not. For the latter category of methods, the uniqueness of the inverse solution may be guaranteed in various ways, such as by minimizing the overall energy of the current distribution and the measured field. As pointed out in [12], in the absence of a priori information and noise weighting, several such methods all reduce to the well-known Moore-Penrose pseudoinverse, i.e.

\[\mathbf{\hat{s}}=\mathbf{\Gamma}^{\dagger}\mathbf{\phi}=\mathbf{\Gamma}^{\intercal}\left(\mathbf{\Gamma}\mathbf{\Gamma}^{\intercal}\right)^{-1}\mathbf{\phi}, \tag{8}\]

where we assume that there are more sources than measurements (\(M>N\)). This solution has been termed the minimum-norm pseudoinverse (MNP).

### Regularization of the inverse

For realistic measurement and source geometries, the minimum-norm pseudoinverse of eq. 8 turns out to be excessively sensitive to noise, necessitating regularization. The commonly employed Tikhonov regularized solution may be written as

\[\mathbf{\hat{s}}=\mathbf{\Gamma}^{\intercal}\left(\mathbf{\Gamma}\mathbf{\Gamma}^{\intercal}+\lambda\mathbf{I}\right)^{-1}\mathbf{\phi}, \tag{9}\]

where \(\lambda\) is a regularization parameter to be determined. Another frequently used regularization method is the truncated singular value decomposition (TSVD). In fact, both methods can be expressed in terms of the singular value decomposition. Let \(U\Sigma V^{\intercal}\) be the singular value decomposition of \(\mathbf{\Gamma}\), where \(U=[\mathbf{u}_{1},\ldots,\mathbf{u}_{N}]\) is an \(N\times N\) orthonormal matrix whose columns span the range of \(\mathbf{\Gamma}\), \(\Sigma\) is an \(N\times M\) diagonal matrix of singular values \(\sigma_{i}\) and \(V=[\mathbf{v}_{1},\ldots,\mathbf{v}_{M}]\) is an \(M\times M\) orthonormal matrix.
The pseudoinverse of eq. 8 may then be written as \[\mathbf{\hat{s}}=\sum_{i=1}^{N}\frac{\mathbf{u}_{i}^{\intercal}\mathbf{\phi}}{\sigma_{i}} \mathbf{v}_{i}. \tag{10}\] In the TSVD, the terms corresponding to the smallest singular values are eliminated by truncating the sum, i.e. \[\mathbf{\hat{s}}_{\text{TSVD}}=\sum_{i=1}^{K}\frac{\mathbf{u}_{i}^{\intercal}\mathbf{\phi }}{\sigma_{i}}\mathbf{v}_{i}, \tag{11}\] where the regularization parameter \(K\) represents the truncation point. On the other hand, the Tikhonov regularized solution can be shown to equal a weighted version of the sum: \[\mathbf{\hat{s}}_{\text{Tikhonov}}=\sum_{i=1}^{N}f_{i}\frac{\mathbf{u}_{i}^{\intercal }\mathbf{\phi}}{\sigma_{i}}\mathbf{v}_{i}, \tag{12}\] where the weighting factors \(f_{i}=\frac{\sigma_{i}^{2}}{\sigma_{i}^{2}+\lambda^{2}}\) depend on the regularization parameter \(\lambda\): terms corresponding to singular values significantly smaller than \(\lambda\) will be suppressed. The effect of both regularization methods is to suppress the contribution of terms corresponding to small singular values, which suffer from the largest inaccuracy. A disadvantage of numerical regularization methods is the somewhat arbitrary choice of the regularization parameter. The regularization parameter \(\lambda\) can be related to the assumed measurement SNR, but nevertheless is still a free parameter. The Tikhonov regularized solution (eq. 9) can be viewed as a special case of the generalized MNE where the source and noise covariance matrices are proportional to diagonal matrices i.e. the source amplitudes and noise at the sensors are independent Gaussian random variables with variances \(a^{2}\) and \(b^{2}\). In this case the regularization parameter \(\lambda\) is related to their ratio: \(\lambda=b^{2}/a^{2}\), the inverse of the assumed SNR. MNP (eq. 8), on the other hand, can be viewed as the limiting case of the Tikhonov regularized solution in the limit of infinite SNR (\(a^{2}/b^{2}\rightarrow\infty\)). ### The lead field in the multipole domain For traditional MEG sensor array geometries, the degrees of freedom obtainable from the measurement are considerably fewer than the number of sensors. Thus, the sensor-based lead field has significant redundancy. As an alternative, the lead field may also be expressed in the multipole domain. This may be accomplished either by directly computing the multipole component for each element of the source space, or alternatively by first computing the conventional sensor-based lead field and then transforming it according to \[\boldsymbol{\Gamma}_{x}=\boldsymbol{S}^{\dagger}\boldsymbol{\Gamma}. \tag{13}\] The columns of \(\boldsymbol{\Gamma}\), i.e. the forward fields, are thus expressed in terms of multipole components, rather than sensor amplitudes. The dimension of \(\boldsymbol{\Gamma}_{x}\) is \(N_{\mathrm{x}}\times M\), where \(N_{\mathrm{x}}\) is the number of multipole components which can be chosen to express the essential degrees of freedom contained in the measurement, typically \(N_{\mathrm{x}}\ll N\). Thus, \(\boldsymbol{\Gamma}_{x}\) is a more economical description of the source forward fields, while retaining all information obtainable by the sensor array. This is reflected by the typically significantly lower condition number of \(\boldsymbol{\Gamma}_{x}\). Analogously to eq. 
8, a source estimate can be obtained via the multipole-domain lead field as

\[\boldsymbol{\hat{s}}=\boldsymbol{\Gamma}_{x}^{\dagger}\boldsymbol{x}, \tag{14}\]

where \(\boldsymbol{x}\) are the multipole coefficients corresponding to a measurement, typically obtained by eq. 6. This form could be directly used as a source estimate. However, we note that inserting eqs. 6 and 13 will lead to

\[\boldsymbol{\hat{s}}=(\boldsymbol{S}^{\dagger}\boldsymbol{\Gamma})^{\dagger}\boldsymbol{S}^{\dagger}\boldsymbol{\phi}=\boldsymbol{\Gamma}^{\dagger}\boldsymbol{S}\boldsymbol{S}^{\dagger}\boldsymbol{\phi}. \tag{15}\]

Since \(\boldsymbol{S}\boldsymbol{S}^{\dagger}\) is an orthogonal projection operator, the multipole domain source estimate is mathematically equivalent to first projecting \(\boldsymbol{\phi}\) onto the range of \(\boldsymbol{S}\) and subsequently applying the conventional pseudoinverse of eq. 8. However, from the numerical point of view, \(\boldsymbol{\Gamma}_{x}\) typically has better inversion properties and thus it is advantageous to use it instead of \(\boldsymbol{\Gamma}\). Finally, we note that it is common to restrict the sensor-based inverse solution of eq. 8 to a subset of sensors, e.g. in the case where some sensors are noisy or malfunctioning, or when a solution corresponding to a region of interest is desired. Similarly, we have the freedom to choose which multipole components to use in the inverse. Thus, we can use a subset \(\boldsymbol{x}^{\prime}\) of the estimated multipole coefficients and limit the lead field to these components, resulting in

\[\boldsymbol{\hat{s}}=\boldsymbol{\Gamma}_{x}^{\prime}{}^{\dagger}\boldsymbol{x}^{\prime}. \tag{16}\]

For example, we might choose to only include spatial frequencies up to a certain threshold, or weight the components according to SNR.

### The resolution matrix and its properties

For a general linear inverse method, the source estimate may be written

\[\mathbf{\hat{s}}=\mathbf{M}\mathbf{\phi}, \tag{17}\]

where \(\mathbf{M}\) is the linear estimator. Inserting eq. 7, we obtain

\[\mathbf{\hat{s}}=\mathbf{M}\mathbf{\Gamma}\mathbf{s}\equiv\mathbf{\Omega}\mathbf{s}, \tag{18}\]

where \(\mathbf{\Omega}=\mathbf{M}\mathbf{\Gamma}\) may be termed the resolution matrix, first applied in the context of biomagnetic inverse problems in [11]. This equation expresses the source estimate as a weighted combination of columns of \(\mathbf{\Omega}\). Thus, the columns can be interpreted as unit estimates, or equivalently as point-spread functions (PSFs) corresponding to each source. On the other hand, the estimate \(\mathbf{\hat{s}}(k)\) for the \(k\)th source is

\[\mathbf{\hat{s}}(k)=\sum_{j}\mathbf{\Omega}_{kj}s_{j}, \tag{19}\]

where \(\mathbf{\Omega}_{ij}\) are the elements of the resolution matrix. From this form, it is seen that the \(k\)th row of the matrix describes the (undesirable) contribution of other sources to \(\mathbf{\hat{s}}(k)\). In previous literature, the rows have been correspondingly termed resolution kernels, or sometimes cross-talk functions (CTFs) [11, 12, 13, 14]. For the simple minimum norm pseudoinverse,

\[\mathbf{\Omega}=\mathbf{\Gamma}^{\dagger}\mathbf{\Gamma}, \tag{20}\]

i.e., the resolution matrix is symmetric, and thus the PSFs and CTFs for a given source are identical. This also holds for the Tikhonov regularized version of eq. 9, in which case we get

\[\mathbf{\Omega}=\mathbf{\Gamma}^{\intercal}\left(\mathbf{\Gamma}\mathbf{\Gamma}^{\intercal}+\lambda\mathbf{I}\right)^{-1}\mathbf{\Gamma}. \tag{21}\]
The spatial extent of the resolution matrix PSFs is of interest, since it determines the spatial blurring of the total source estimate. It has previously been quantified in terms of PSF spatial dispersion (SD), with slightly differing definitions [15, 19, 20, 14]. Here we define it for the \(k\)th source as

\[\text{SD}(k)=\sqrt{\frac{\sum_{i=1}^{M}d_{ik}^{2}\mathbf{\Omega}_{ik}^{2}}{\sum_{i=1}^{M}\mathbf{\Omega}_{ik}^{2}}}, \tag{22}\]

where \(d_{ik}\) is the Euclidean distance between source nodes \(i\) and \(k\). Finally, as noted in the previous section, we can also apply the inverse in the multipole domain. Inserting eq. 13, we obtain a direct expression for the multipole-based resolution matrix as

\[\mathbf{\Omega}_{x}=\mathbf{\Gamma}_{x}^{\dagger}\mathbf{\Gamma}_{x}=\mathbf{\Gamma}^{\dagger}(\mathbf{S}^{\dagger})^{\dagger}\mathbf{S}^{\dagger}\mathbf{\Gamma}=\mathbf{\Gamma}^{\dagger}\mathbf{SS}^{\dagger}\mathbf{\Gamma}, \tag{23}\]

where we can also apply Tikhonov regularized inversion similarly to eq. 21, if necessary.

## 4 Results

### Effect of spatial frequencies on the minimum norm solution

We computed the MNE inverse solutions for forward fields of single sources in a single-layer boundary element model using the MNE-Python software package [21, 22]. The sources (N=7498) were placed approximately uniformly on the MRI-derived cortical surface of the MNE-Python _sample_ subject, using subsampling of the cortical surface. The source orientations were constrained according to the orientation of the local cortical surface normal. To represent the measurement of the magnetic fields, the radial components of the forward fields were computed at 1000 locations on a full spherical surface (R=120 mm) around the head. Note that compared to typical MEG sensor arrays, this array represents a relatively high fidelity of spatial sampling. We used it here to study the limits of attainable spatial resolution, without being significantly limited by the sampling of the magnetic field. From the computed lead fields, we determined the sensor-based and multipole-based resolution matrices according to eqs. 20 and 23. The multipole-based matrices were computed for maximum expansion degrees \(L=1-16\). The PSFs are the columns of the resolution matrices. The spatial dispersion values were computed from the PSFs according to eq. 22. Figure 1 shows the point spread function for the multipole-based inverse with different \(L\) values, as well as for the sensor-based inverse. As higher spatial frequencies are included in the inverse, the PSF becomes more focal, converging towards the sensor-based inverse, which includes all spatial frequencies. To facilitate comparison, all the PSFs were computed using Tikhonov regularization with \(\lambda=10^{-11}\). Figure 2 shows the spatial dispersion of the point spread function for different cortical locations, as a function of the multipole expansion order. In accordance with Figure 1, spatial dispersion is reduced with increasing \(L\) cutoff and converges towards the sensor space result as higher spatial frequencies are included in the inverse. As in the previous step, Tikhonov regularization with \(\lambda=10^{-11}\) was used.

Figure 1: Point spread functions of the multipole and sensor based inverses for a representative superficial source. Note that due to the differing peak magnitudes of the PSFs, the plots are scaled individually according to their respective maxima. \(L\) indicates the spherical harmonics degrees used for computation of the multipole-based resolution matrices. SD indicates the spatial dispersion of the point spread function.
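For readers who want to reproduce these quantities, the sketch below computes the minimum-norm resolution matrix of eq. 20 and the spatial dispersion of eq. 22 for a toy random lead field; the dimensions and source positions are placeholders, not the study's actual geometry.

```
import numpy as np

# Sketch of eq. 20 (minimum-norm resolution matrix) and eq. 22 (spatial
# dispersion of a PSF) for a toy random lead field.
rng = np.random.default_rng(0)
n_sensors, n_sources = 100, 500
G = rng.standard_normal((n_sensors, n_sources))    # toy lead field Gamma
pos = rng.uniform(-0.1, 0.1, size=(n_sources, 3))  # toy source positions (m)

Omega = np.linalg.pinv(G) @ G                      # resolution matrix, eq. 20

def spatial_dispersion(Omega, pos, k):
    """SD(k) of eq. 22: RMS distance of the k-th PSF from source k."""
    psf2 = Omega[:, k] ** 2                        # squared k-th PSF (a column)
    d2 = np.sum((pos - pos[k]) ** 2, axis=1)       # squared distances d_ik^2
    return np.sqrt(np.sum(d2 * psf2) / np.sum(psf2))

print(spatial_dispersion(Omega, pos, k=0))         # SD in the same units as pos
```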
As in the previous step, Tikhonov regularization with \(\lambda=10^{-11}\) was used. ### Effect of regularization on the spatial frequency spectrum To illustrate the effect of the regularization parameter, we recomputed the PSFs of figure 1 with \(\lambda=10^{-8}\), as shown in figure 3. It is evident that including spatial frequencies beyond about \(L=7\) no longer improves the focality of the solution, since the stronger regularization effectively eliminates these frequencies. The effect of numerical regularization vs. \(L\)-filtering of the lead field is further illustrated in figure 4. Reducing the amount of numerical regularization has an effect very similar to increasing \(L\). Figure 3: Point spread functions of the multipole and sensor based inverses for a representative superficial source for \(\lambda=10^{-8}\). The plots are scaled individually according to their respective maxima. Figure 2: Spatial dispersion of the resolution matrix PSFs as a function of source location. \(L\) indicates the spherical harmonics degrees used for computation of the multipole-based resolution matrices. The plots are all equally scaled. ### The singular value decomposition of the lead field and the magnetostatic multipole components According to section 3.2, the lead field is spanned by its left-side singular vectors \(\mathbf{u}_{i}\). They may be viewed as elementary forward fields, from which any measurable field can be built. On the other hand, we have previously shown that any field can also be expressed in terms of the VSH basis vectors \(\mathbf{a}_{L,m}\). In fact, in turns out that \(\mathbf{u}_{i}\) and \(\mathbf{a}_{L,m}\) are highly similar. Figure 5 illustrates the first 20 \(\mathbf{a}_{L,m}\) and \(\mathbf{u}_{i}\) vectors for our measurement geometry. It is seen that decreasing singular values and increasing \(L\) values correspond to increasingly high spatial frequencies. These high frequencies decay the fastest and produce the weakest signals at the sensors. Accordingly, there is a close relationship between the traditional Tikhonov or TSVD regularized inverse solution, and the solution based on the the multipole-domain lead field. In the former, spatial frequencies are implicitly limited by the regularization parameter, which determines the SVD components included in the inverse. In the latter case, the spatial frequencies are explicitly determined by the VSH degree cutoff \(L\). ### The multipole transformation as physics-based regularization and the effect of SNR From the above discussion, it follows that we can use elimination of high spatial frequencies ("\(L\)-filtering") as a regularization method. If the lead field is limited Figure 4: Mean spatial dispersion of resolution matrix PSFs over the whole cortical source space, as a function of \(\lambda\) (for sensor-based inverse) and L (for multipole-based inverse). For multipole based inverse, no numerical regularization was applied. Figure 5: Left-hand singular vectors \(\mathbf{u}_{i}\) of the lead field matrix, corresponding to the 20 largest singular values. The plots are individually scaled. Figure 6: First 20 vector spherical harmonic (VSH) basis functions \(\mathbf{a}_{L,m}\) corresponding to the lowest spatial frequencies. The captions indicate the (L, m) values for each basis vector. The plots are individually scaled. to relatively low spatial frequencies, further numerical regularization may not be necessary. 
Thus, the multipole-based transformation of the lead field offers an alternative, physics-based method of regularizing the lead field matrix. To demonstrate this, we selected the same source whose PSF is illustrated in Fig. 1 and added uniform Gaussian noise to its forward field. Here we define the SNR as the ratio of signal vector norms \[\text{SNR}=\|\mathbf{\phi}\|/\|\mathbf{n}\|, \tag{24}\] where \(\mathbf{\phi}\) is the signal space vector corresponding to the source, and \(\mathbf{n}\) is a realization of Gaussian noise. Next, we performed the inverse in the multipole domain for various \(L\) values. The noisy multipole-domain signal is computed as \[\mathbf{x}_{n}=\mathbf{S}^{\dagger}(\mathbf{\phi}+\mathbf{n}) \tag{25}\] and the lead field as \[\mathbf{\Gamma}_{x}=\mathbf{S}^{\dagger}\mathbf{\Gamma}. \tag{26}\] The included spatial frequencies are determined by the choice of \(L\), which in turn determines \(\mathbf{S}\). The source estimate is then obtained by \[\mathbf{\hat{s}}=\mathbf{\Gamma}_{x}^{\dagger}\mathbf{x}_{n}. \tag{27}\] Note that here the direct pseudoinverse of \(\mathbf{\Gamma}_{x}\) is used (i.e. no numerical regularization) to evaluate the efficiency of \(L\)-filtering.

The results are shown in fig. 7. We note that \(L=6\) yields a reasonable solution for all SNR values, though the focality is limited. \(L=8\) requires an SNR of 5 for a reasonable solution. For \(L=9\), SNR=10 is needed for an accurate solution, but the resulting solution is quite focal. In summary, inverse solutions limited to low \(L\) values are better conditioned, producing reasonable results even in the case of low SNR. However, due to the limited spatial frequencies, they are unable to reproduce focal sources.

Figure 7: Inverse solutions without numerical regularization for a single source as a function of \(L\) and SNR. Plots are individually scaled according to their respective maxima. Some solutions appear very small in amplitude despite scaling, which is caused by an invisible spurious peak outside the displayed cortical region. To remove the effect of random noise topography, the same noise realization was used for the different SNR values, scaled according to SNR.

## 5 Discussion

The effect of conventional numerical regularization on MNE-type inverse solutions is to impose a limit on the spatial frequencies contained in the solution. The choice of regularization parameter sets an ultimate bound on the attainable focality of the inverse solution, by suppressing high spatial frequencies that are needed to fully characterize focal source distributions. On the other hand, noise imposes a practical bound on the regularization parameter, since insufficient regularization in case of noisy data will lead to spurious solutions. The magnetostatic multipole expansion provides an economical and spatially organized description of the measured fields, which has well-established applications in interference suppression and movement compensation [16, 23]. Its potential uses in source modelling have remained largely unexplored so far; however, see [24], where the multipole expansion was utilized in the context of beamformers. Here we demonstrate the utility of multipole-based signal representation for inverse solutions that rely on lead field inversion. It is advantageous to limit the modelled forward fields, in this case the lead field, to the spatial frequencies that can be reliably detected by the instrument.
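In practical terms, the noise test of eqs. 24-27 amounts to only a few operations; the following minimal sketch (placeholder arrays, not the analysis code used in this work) reproduces the procedure:

```
import numpy as np

def multipole_domain_estimate(G, S, phi, snr, rng):
    # Scale a noise realization to the requested SNR (eq. 24), project
    # the noisy signal and the lead field into the multipole domain
    # (eqs. 25-26), and invert with a plain pseudoinverse (eq. 27):
    # no numerical regularization, so L-filtering alone conditions
    # the solution.
    n = rng.standard_normal(phi.shape)
    n *= np.linalg.norm(phi) / (snr * np.linalg.norm(n))
    S_pinv = np.linalg.pinv(S)
    x_n = S_pinv @ (phi + n)          # eq. 25
    G_x = S_pinv @ G                  # eq. 26
    return np.linalg.pinv(G_x) @ x_n  # eq. 27
```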
To this end, we have demonstrated that choosing a suitable spatial frequency cutoff and performing the inverse in the multipole domain may eliminate the need for numerical regularization. Thus, the multipole transformation may be interpreted as a physics-based regularization method that allows the exclusion of high spatial frequencies in an exact way. Any sensor array has a practical upper limit \(L\) of spatial frequencies that it is capable of measuring. The limit is determined not only by the sampling capabilities of the array and its calibration accuracy but also by its noise level, since high spatial frequencies are weakest at the sensors and beyond a certain limit will be buried in noise. For example, in the case of the commercially available 306-channel MEGIN (formerly Elekta) TRIUX instrument, it has been shown that \(L=8\) provides sufficient characterization even for the most superficial source configurations. Thus, the MEGIN MaxFilter software by default limits the signal spatial frequencies to \(L=8\). Our results indicate that the additional resolution provided by high spatial frequencies such as \(L>10\) would require extremely high SNR values, which are not obtainable in standard MEG studies with human subjects. However, emerging sensor arrays based on optically pumped magnetometers are likely to benefit from the inclusion of higher spatial frequencies in source modelling, due to the proximity of the sensors to the head and the correspondingly higher SNR. In future studies, it would also be desirable to explore inverse solutions optimally weighted by spatial frequency, or more broadly, by the degree and order of the multipole basis components. For example, the sensitivity of a sensor array to different multipole components varies according to array geometry; thus, giving more weight to components with the highest signal-to-noise would be expected to yield improved results in source reconstruction. ## 6 Acknowledgements This work was supported by grants R21-EB033577-01 and U01-EB028656-04 from the National Institutes of Health. S. Taulu's work is also funded in part by the Bezos Family Foundation and the R. B. and Ruth H. Dunn Charitable Foundation. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy National Nuclear Security Administration under contract DENA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy and the United States Government. The content is solely the responsibility of the authors.
2302.14252
Compressed Decentralized Proximal Stochastic Gradient Method for Nonconvex Composite Problems with Heterogeneous Data
We first propose a decentralized proximal stochastic gradient tracking method (DProxSGT) for nonconvex stochastic composite problems, with data heterogeneously distributed on multiple workers in a decentralized connected network. To save communication cost, we then extend DProxSGT to a compressed method by compressing the communicated information. Both methods need only $\mathcal{O}(1)$ samples per worker for each proximal update, which is important to achieve good generalization performance on training deep neural networks. With a smoothness condition on the expected loss function (but not on each sample function), the proposed methods can achieve an optimal sample complexity result to produce a near-stationary point. Numerical experiments on training neural networks demonstrate the significantly better generalization performance of our methods over large-batch training methods and momentum variance-reduction methods and also, the ability of handling heterogeneous data by the gradient tracking scheme.
Yonggui Yan, Jie Chen, Pin-Yu Chen, Xiaodong Cui, Songtao Lu, Yangyang Xu
2023-02-28T02:24:43Z
http://arxiv.org/abs/2302.14252v1
# Compressed Decentralized Proximal Stochastic Gradient Method for Nonconvex Composite Problems with Heterogeneous Data

###### Abstract

We first propose a decentralized proximal stochastic gradient tracking method (DProxSGT) for nonconvex stochastic composite problems, with data heterogeneously distributed on multiple workers in a decentralized connected network. To save communication cost, we then extend DProxSGT to a compressed method by compressing the communicated information. Both methods need only \(\mathcal{O}(1)\) samples per worker for each proximal update, which is important to achieve good generalization performance on training deep neural networks. With a smoothness condition on the expected loss function (but not on each sample function), the proposed methods can achieve an optimal sample complexity result to produce a near-stationary point. Numerical experiments on training neural networks demonstrate the significantly better generalization performance of our methods over large-batch training methods and momentum variance-reduction methods, and also the ability of handling heterogeneous data by the gradient tracking scheme.

## 1 Introduction

In this paper, we consider solving nonconvex stochastic composite problems in a decentralized setting: \[\begin{split}&\min_{\mathbf{x}\in\mathbb{R}^{d}}\phi(\mathbf{x})=f(\mathbf{x})+r(\mathbf{x}),\\ &\text{with }f(\mathbf{x})=\frac{1}{n}\sum_{i=1}^{n}f_{i}(\mathbf{x}),\ f_{i}(\mathbf{x})=\mathbb{E}_{\xi_{i}\sim\mathcal{D}_{i}}[F_{i}(\mathbf{x},\xi_{i})].\end{split} \tag{1}\] Here, \(\{\mathcal{D}_{i}\}_{i=1}^{n}\) are possibly _non-i.i.d._ data distributions on \(n\) machines/workers that can be viewed as nodes of a connected graph \(\mathcal{G}\), and each \(F_{i}(\cdot,\xi_{i})\) can only be accessed by the \(i\)-th worker. We are interested in problems that satisfy the following structural assumption.

**Assumption 1** (Problem structure).: We assume that

* \(r\) is closed convex and possibly nondifferentiable.
* Each \(f_{i}\) is \(L\)-smooth in \(\mathrm{dom}(r)\), i.e., \(\|\nabla f_{i}(\mathbf{x})-\nabla f_{i}(\mathbf{y})\|\leq L\|\mathbf{x}-\mathbf{y}\|\) for any \(\mathbf{x},\mathbf{y}\in\mathrm{dom}(r)\).
* \(\phi\) is lower bounded, i.e., \(\phi^{*}\triangleq\min_{\mathbf{x}}\phi(\mathbf{x})>-\infty\).

Let \(\mathcal{N}=\{1,2,\ldots,n\}\) be the set of nodes of \(\mathcal{G}\) and \(\mathcal{E}\) the set of edges. For each \(i\in\mathcal{N}\), denote \(\mathcal{N}_{i}\) as the set containing the neighbors of worker \(i\) and itself, i.e., \(\mathcal{N}_{i}=\{j:(i,j)\in\mathcal{E}\}\cup\{i\}\). Every worker can only communicate with its neighbors. To solve (1) collaboratively, each worker \(i\) maintains a copy, denoted as \(\mathbf{x}_{i}\), of the variable \(\mathbf{x}\). With these notations, (1) can be formulated equivalently as \[\begin{split}&\min_{\mathbf{X}\in\mathbb{R}^{d\times n}}\frac{1}{n}\sum_{i=1}^{n}\phi_{i}(\mathbf{x}_{i}),\ \text{with }\phi_{i}(\mathbf{x}_{i})\triangleq f_{i}(\mathbf{x}_{i})+r(\mathbf{x}_{i}),\\ &\text{s.t. }\quad\mathbf{x}_{i}=\mathbf{x}_{j},\ \forall\,j\in\mathcal{N}_{i},\ \forall\,i=1,\ldots,n,\end{split} \tag{2}\] where \(\mathbf{X}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{n}]\). Problems with a _nonsmooth_ regularizer, i.e., in the form of (1), appear in many applications such as \(\ell_{1}\)-regularized signal recovery (Eldar & Mendelson, 2014; Duchi & Ruan, 2019), online nonnegative matrix factorization (Guan et al., 2012), and training sparse neural networks (Scardapane et al., 2017; Yang et al., 2020).
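Throughout, the nonsmooth term is handled through its proximal mapping. As a concrete example, for the \(\ell_{1}\) regularizer \(r(\mathbf{x})=\mu\|\mathbf{x}\|_{1}\) used in the sparse-training experiments later in the paper, the proximal mapping has the closed-form soft-thresholding solution, sketched below (a generic illustration, not tied to any particular implementation):

```
import numpy as np

def prox_l1(x, t):
    # prox_{t * ||.||_1}(x): elementwise soft-thresholding, the exact
    # solution of min_y t*||y||_1 + 0.5*||y - x||^2. In the algorithms
    # below it is invoked with t = eta * mu.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```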
When the data involved in these applications are distributed onto (or collected by workers on) a decentralized network, it necessitates the design of decentralized algorithms. Although decentralized optimization has attracted a lot of research interest in recent years, most existing works focus on strongly convex problems (Scaman et al., 2017; Koloskova et al., 2019), convex problems (Tsianos et al., 2012; Taheri et al., 2020), or smooth nonconvex problems (Bianchi & Jakubowicz, 2012; Di Lorenzo & Scutari, 2016; Wai et al., 2017; Lian et al., 2017; Zeng & Yin, 2018). Few works have studied _nonsmooth nonconvex_ decentralized _stochastic_ optimization like (2), which we consider here; (Chen et al., 2021; Xin et al., 2021; Mancino-Ball et al., 2022) are among the exceptions. However, they either require taking many data samples for each update or assume a so-called mean-squared smoothness condition, which is stronger than the smoothness condition in Assumption 1(ii), in order to perform a momentum-based variance-reduction step. Though these methods have convergence (rate) guarantees, they often yield poor generalization performance on training deep neural networks, as demonstrated in (LeCun et al., 2012; Keskar et al., 2016) for large-batch training methods and in our numerical experiments for momentum variance-reduction methods.

On the other side, many distributed optimization methods (Shamir and Srebro, 2014; Lian et al., 2017; Wang and Joshi, 2018) often assume that the data are i.i.d. across the workers. However, this assumption does not hold in many real-world scenarios, for instance, due to data privacy issues that require local data to stay on-premise. Data heterogeneity can result in significant degradation of the performance of these methods. Though some papers do not assume i.i.d. data, they require certain data similarity, such as bounded stochastic gradients (Koloskova et al., 2019; Taheri et al., 2020) and bounded gradient dissimilarity (Tang et al., 2018; Assran et al., 2019; Tang et al., 2019; Vogels et al., 2020).

To address the critical practical issues mentioned above, we propose a decentralized proximal stochastic gradient tracking method that needs only a single or \(\mathcal{O}(1)\) data samples (per worker) for each update. With no assumption on data similarity, it can still achieve the optimal convergence rate on solving problems satisfying the conditions in Assumption 1 and yield good generalization performance. In addition, to reduce communication cost, we give a compressed version of the proposed algorithm, by performing compression on the communicated information. The compressed algorithm can inherit the benefits of its non-compressed counterpart.

### Our Contributions

Our contributions are three-fold. First, we propose two decentralized algorithms, one without compression (named DProxSGT) and the other with compression (named CDProxSGT), for solving _decentralized nonconvex nonsmooth stochastic_ problems. Different from existing methods, e.g., (Xin et al., 2021; Wang et al., 2021; Mancino-Ball et al., 2022), which need a very large batchsize and/or perform momentum-based variance reduction to handle the challenge from the nonsmooth term, DProxSGT needs only \(\mathcal{O}(1)\) data samples for each update, without performing variance reduction. The use of a small batch and a standard proximal gradient update enables our method to achieve significantly better generalization performance over the existing methods, as we demonstrate on training neural networks.
To the best of our knowledge, CDProxSGT is the first decentralized algorithm that applies a compression scheme for solving nonconvex nonsmooth stochastic problems, and it inherits the advantages of the non-compressed method DProxSGT. Even applied to the special class of smooth nonconvex problems, CDProxSGT can perform significantly better than state-of-the-art methods, in terms of generalization and handling data heterogeneity.

Second, we establish an optimal sample complexity result for DProxSGT, which matches the lower bound result in (Arjevani et al., 2022) in terms of the dependence on a target tolerance \(\epsilon\), to produce an \(\epsilon\)-stationary solution. Due to the coexistence of nonconvexity, nonsmoothness, large stochastic variance (due to the small batch and no use of variance reduction, for better generalization), and decentralization, the analysis is highly non-trivial. We employ the tool of the Moreau envelope and construct a decreasing Lyapunov function by carefully controlling the errors introduced by stochasticity and decentralization.

Third, we establish the iteration complexity result of the proposed compressed method CDProxSGT, which is of the same order as that for DProxSGT and thus also optimal in terms of the dependence on a target tolerance. The analysis builds on that of DProxSGT but is more challenging due to the additional compression error and the use of gradient tracking. Nevertheless, we obtain our results by making the same (or even weaker) assumptions as those assumed by state-of-the-art methods (Koloskova et al., 2019; Zhao et al., 2022).

### Notation

For any vector \(\mathbf{x}\in\mathbb{R}^{d}\), we use \(\|\mathbf{x}\|\) for the \(\ell_{2}\) norm. For any matrix \(\mathbf{A}\), \(\|\mathbf{A}\|\) denotes the Frobenius norm and \(\|\mathbf{A}\|_{2}\) the spectral norm. \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}]\in\mathbb{R}^{d\times n}\) concatenates all local variables. The superscript \(t\) will be used for the iteration or communication index. \(\nabla F_{i}(\mathbf{x}_{i}^{t},\xi_{i}^{t})\) denotes a local stochastic gradient of \(F_{i}\) at \(\mathbf{x}_{i}^{t}\) with a random sample \(\xi_{i}^{t}\). The column concatenation of \(\{\nabla F_{i}(\mathbf{x}_{i}^{t},\xi_{i}^{t})\}\) is denoted as \[\nabla\mathbf{F}^{t}=\nabla\mathbf{F}(\mathbf{X}^{t},\Xi^{t})=[\nabla F_{1}(\mathbf{x}_{1}^{t},\xi_{1}^{t}),\ldots,\nabla F_{n}(\mathbf{x}_{n}^{t},\xi_{n}^{t})],\] where \(\Xi^{t}=[\xi_{1}^{t},\xi_{2}^{t},\ldots,\xi_{n}^{t}]\). Similarly, we denote \[\nabla\mathbf{f}^{t}=[\nabla f_{1}(\mathbf{x}_{1}^{t}),\ldots,\nabla f_{n}(\mathbf{x}_{n}^{t})].\] For any \(\mathbf{X}\in\mathbb{R}^{d\times n}\), we define \[\bar{\mathbf{x}}=\tfrac{1}{n}\mathbf{X}\mathbf{1},\quad\overline{\mathbf{X}}=\mathbf{X}\mathbf{J}=\bar{\mathbf{x}}\mathbf{1}^{\top},\quad\mathbf{X}_{\perp}=\mathbf{X}(\mathbf{I}-\mathbf{J}),\] where \(\mathbf{1}\) is the all-one vector, and \(\mathbf{J}=\tfrac{\mathbf{1}\mathbf{1}^{\top}}{n}\) is the averaging matrix. Similarly, we define the mean vectors \[\overline{\nabla}\mathbf{F}^{t}=\tfrac{1}{n}\nabla\mathbf{F}^{t}\mathbf{1},\quad\overline{\nabla}\mathbf{f}^{t}=\tfrac{1}{n}\nabla\mathbf{f}^{t}\mathbf{1}.\] We will use \(\mathbb{E}_{t}\) for the expectation over the random samples \(\Xi^{t}\) at the \(t\)th iteration and \(\mathbb{E}\) for the full expectation. \(\mathbb{E}_{Q}\) denotes the expectation with respect to a stochastic compressor \(Q\).

## 2 Related Works

The literature on decentralized optimization has been growing rapidly.
It is impossible to exhaust it here. Below we review existing works on decentralized algorithms for solving nonconvex problems, with or without using a compression technique. For ease of understanding the difference of our methods from existing ones, we compare to a few relevant methods in Table 1.

### Non-compressed Decentralized Methods

For nonconvex decentralized problems with a nonsmooth regularizer, a lot of deterministic decentralized methods have been studied, e.g., (Di Lorenzo & Scutari, 2016; Wai et al., 2017; Zeng & Yin, 2018; Chen et al., 2021; Scutari & Sun, 2019). When only stochastic gradients are available, a majority of existing works focus on smooth cases without a regularizer or a hard constraint, such as (Lian et al., 2017; Assran et al., 2019; Tang et al., 2018), gradient tracking based methods (Lu et al., 2019; Zhang & You, 2019; Koloskova et al., 2021), and momentum-based variance reduction methods (Xin et al., 2021; Zhang et al., 2021). Several works such as (Bianchi & Jakubowicz, 2012; Wang et al., 2021; Xin et al., 2021; Mancino-Ball et al., 2022) have studied stochastic decentralized methods for problems with a nonsmooth term \(r\). However, they either consider some special \(r\) or require a large batch size. (Bianchi & Jakubowicz, 2012) considers the case where \(r\) is an indicator function of a compact convex set. Also, it requires bounded stochastic gradients. (Wang et al., 2021) focuses on problems with a polyhedral \(r\), and it requires a large batch size of \(\mathcal{O}(\frac{1}{\epsilon})\) to produce an (expected) \(\epsilon\)-stationary point. (Xin et al., 2021; Mancino-Ball et al., 2022) are the most closely related to our methods. To produce an (expected) \(\epsilon\)-stationary point, the methods in (Xin et al., 2021) require a large batch size, either \(\mathcal{O}(\frac{1}{\epsilon^{2}})\) or \(\mathcal{O}(\frac{1}{\epsilon})\) if variance reduction is applied. The method in (Mancino-Ball et al., 2022) requires only \(\mathcal{O}(1)\) samples for each update by taking a momentum-type variance reduction scheme. However, in order to reduce the variance, it needs a stronger mean-squared smoothness assumption. In addition, the momentum variance reduction step can often hurt the generalization performance on training complex neural networks, as we will demonstrate in our numerical experiments.

### Compressed Distributed Methods

Communication efficiency is a crucial factor when designing a distributed optimization strategy. The current machine learning paradigm oftentimes resorts to models with a large number of parameters, which implies a high communication cost when the models or gradients are transferred from workers to the parameter server or among workers. This may incur significant latency in training. Hence, communication-efficient algorithms by model or gradient compression have been actively sought. Two major groups of compression operators are quantization and sparsification. The quantization approaches include 1-bit SGD (Seide et al., 2014), SignSGD (Bernstein et al., 2018), QSGD (Alistarh et al., 2017), and TernGrad (Wen et al., 2017). The sparsification approaches include Random-\(k\) (Stich et al., 2018), Top-\(k\) (Aji & Heafield, 2017), Threshold-\(v\) (Dutta et al., 2019), and ScaleCom (Chen et al., 2020). Direct compression may slow down the convergence, especially when the compression ratio is high.
Error compensation or error-feedback can mitigate this effect by saving the compression error in one communication step and compensating for it in the next communication step before another compression (Seide et al., 2014). These compression operators were first designed to compress the gradients in the centralized setting (Tang et al., 2019; Karimireddy et al., 2019). Compression can also be applied in the decentralized setting for smooth problems, i.e., (2) with \(r=0\). (Tang et al., 2019) applies compression with error compensation to the communication of model parameters in the decentralized setting. Choco-Gossip (Koloskova et al., 2019) is another communication scheme to mitigate the slowdown effect from compression. It does not compress the model parameters but rather a residue between the model parameters and their estimation. Choco-SGD uses Choco-Gossip to solve (2). BEER (Zhao et al., 2022) includes gradient tracking and compresses both the tracked stochastic gradients and the model parameters in each iteration via Choco-Gossip. BEER needs a large batchsize of \(\mathcal{O}(\frac{1}{\epsilon^{2}})\) in order to produce an \(\epsilon\)-stationary solution. DoCoM-SGT (Yau & Wai, 2022) performs similar updates as BEER but with a momentum term for the update of the tracked gradients, and it only needs an \(\mathcal{O}(1)\) batchsize. Our proposed CDProxSGT is for solving decentralized problems in the form of (2) with a nonsmooth \(r(\mathbf{x})\). To the best of our knowledge, CDProxSGT is the first compressed decentralized method for nonsmooth nonconvex problems without the use of a large batchsize, and it can achieve an optimal sample complexity without the assumption of data similarity or gradient boundedness.

## 3 Decentralized Algorithms

In this section, we give our decentralized algorithms for solving (2) or equivalently (1). To perform neighbor communications, we introduce a mixing (or gossip) matrix \(\mathbf{W}\) that satisfies the following standard assumption.

**Assumption 2** (Mixing matrix).: We choose a mixing matrix \(\mathbf{W}\) such that

1. \(\mathbf{W}\) is doubly stochastic: \(\mathbf{W}\mathbf{1}=\mathbf{1}\) and \(\mathbf{1}^{\top}\mathbf{W}=\mathbf{1}^{\top}\);
2. \(\mathbf{W}_{ij}=0\) if \(i\) and \(j\) are not neighbors of each other;
3. \(\mathrm{Null}(\mathbf{W}-\mathbf{I})=\mathrm{span}\{\mathbf{1}\}\) and \(\rho\triangleq\|\mathbf{W}-\mathbf{J}\|_{2}<1\).

The condition in (ii) above is enforced so that _direct_ communications can be made only if two nodes (or workers) are immediate (or 1-hop) neighbors of each other. The condition in (iii) can hold if the graph \(\mathcal{G}\) is connected. The assumption \(\rho<1\) is critical to ensure contraction of the consensus error. The value of \(\rho\) depends on the graph topology. (Koloskova et al., 2019) gives three commonly used examples when uniform weights are used between nodes: \(\mathbf{W}=\mathbf{J}\) and \(\rho=0\) for a fully-connected graph (in which case our algorithms reduce to centralized methods), \(1-\rho=\Theta(\frac{1}{n})\) for a 2d torus grid graph where every node has 4 neighbors, and \(1-\rho=\Theta(\frac{1}{n^{2}})\) for a ring-structured graph. More examples can be found in (Nedic et al., 2018).

### Non-compressed Method

With the mixing matrix \(\mathbf{W}\), we propose a decentralized proximal stochastic gradient method with gradient tracking (DProxSGT) for (2). The pseudocode is shown in Algorithm 1.
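The listing of Algorithm 1 is not reproduced here. As an illustrative stand-in, the following matrix-form sketch gives one DProxSGT iteration, written as the identity-compressor, \(\gamma_{x}=\gamma_{y}=1\) special case of CDProxSGT introduced below (cf. Remark 1); columns of `X` and `Y` hold the local variables:

```
import numpy as np

def dproxsgt_step(X, Y, G_new, G_old, W, eta, prox):
    # One iteration in matrix form; column i belongs to worker i, and
    # right-multiplication by W mixes each column with its neighbors.
    Y_half = Y + G_new - G_old   # stochastic gradient tracking, cf. (3)
    Y = Y_half @ W               # communicate tracked gradients, cf. (4)
    X_half = prox(X - eta * Y)   # proximal stochastic gradient step, cf. (5)
    X = X_half @ W               # mix model parameters, cf. (6)
    return X, Y
```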
In every iteration \(t\), each node \(i\) first computes a local stochastic gradient \(\nabla F_{i}(\mathbf{x}_{i}^{t},\xi_{i}^{t})\) by taking a sample \(\xi_{i}^{t}\) from its local data distribution \(\mathcal{D}_{i}\), then performs gradient tracking in (3) and neighbor communication of the tracked gradient in (4), and finally takes a proximal gradient step in (5) and mixes the model parameter with its neighbors in (6). Note that for simplicity, we take only one random sample \(\xi_{i}^{t}\) in Algorithm 1, but in general, a mini-batch of random samples can be taken, and all theoretical results that we will establish in the next section still hold. We emphasize that we need only \(\mathcal{O}(1)\) samples for each update. This is different from ProxGT-SA in (Xin et al., 2021), which shares a similar update formula with our algorithm but needs a very big batch of samples, as many as \(\mathcal{O}(\frac{1}{\epsilon^{2}})\), where \(\epsilon\) is a target tolerance. Small-batch training can usually generalize better than big-batch training (LeCun et al., 2012; Keskar et al., 2016) on training large-scale deep learning models. Throughout the paper, we make the following standard assumption on the stochastic gradients.

**Assumption 3** (Stochastic gradients).: _We assume that_

1. _The random samples_ \(\{\xi_{i}^{t}\}_{i\in\mathcal{N},t\geq 0}\) _are independent._
2. _There exists a finite number_ \(\sigma\geq 0\) _such that for any_ \(i\in\mathcal{N}\) _and_ \(\mathbf{x}_{i}\in\mathrm{dom}(r)\)_,_ \[\mathbb{E}_{\xi_{i}}[\nabla F_{i}(\mathbf{x}_{i},\xi_{i})]=\nabla f_{i}(\mathbf{x}_{i}),\] \[\mathbb{E}_{\xi_{i}}[\|\nabla F_{i}(\mathbf{x}_{i},\xi_{i})-\nabla f_{i}(\mathbf{x}_{i})\|^{2}]\leq\sigma^{2}.\]

The gradient tracking step in (3) is critical to handle heterogeneous data (Di Lorenzo & Scutari, 2016; Nedic et al., 2017; Lu et al., 2019; Pu & Nedic, 2020; Sun et al., 2020; Xin et al., 2021; Song et al., 2021; Mancino-Ball et al., 2022; Zhao et al., 2022; Yau & Wai, 2022; Song et al., 2022). In a deterministic scenario where \(\nabla f_{i}(\cdot)\) is used instead of \(\nabla F_{i}(\cdot,\xi)\) for each \(i\), the tracked gradient \(\mathbf{y}_{i}^{t}\) can converge to the gradient of the global function \(\frac{1}{n}\sum_{i=1}^{n}f_{i}(\cdot)\) at \(\bar{\mathbf{x}}^{t}\), and thus all local updates move in a direction that minimizes the _global_ objective. When stochastic gradients are used, gradient tracking plays a similar role and makes \(\mathbf{y}_{i}^{t}\) approach the stochastic gradient of the global function.
With this nice property of gradient tracking, we can guarantee convergence without strong assumptions made in existing works, such as bounded gradients (Koloskova et al., 2019; 2020; Taheri et al., 2020; Singh et al., 2021) and bounded data similarity over nodes (Lian et al., 2017; Tang et al., 2018; 2019; Vogels et al., 2020; Wang et al., 2021).

\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
Methods & CMP & \(r\not\equiv 0\) & GRADIENTS & SMOOTHNESS & (BS, VR, MMT) \\
\hline
ProxGT-SA & No & Yes & No & \(f_{i}\) is smooth & (\(\mathcal{O}(\frac{1}{\epsilon^{2}})\), No, No) \\
ProxGT-SR-O & No & Yes & No & mean-squared & (\(\mathcal{O}(\frac{1}{\epsilon})\), Yes, No) \\
DEEPSTORM & No & Yes & No & mean-squared & (\(\mathcal{O}(1)\), Yes, Yes) \\
**DProxSGT (this paper)** & No & Yes & No & \(f_{i}\) is smooth & (\(\mathcal{O}(1)\), No, No) \\
\hline
ChocoSGD & Yes & No & \(\mathbb{E}_{\xi_{i}}[\|\nabla F_{i}(\mathbf{x},\xi_{i})\|^{2}]\leq G^{2}\) & \(f_{i}\) is smooth & (\(\mathcal{O}(1)\), No, No) \\
BEER & Yes & No & No & \(f\) is smooth & (\(\mathcal{O}(\frac{1}{\epsilon^{2}})\), No, No) \\
**CDProxSGT (this paper)** & Yes & Yes & No & \(f_{i}\) is smooth & (\(\mathcal{O}(1)\), No, No) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Comparison between our methods and some relevant methods: ProxGT-SA and ProxGT-SR-O in (Xin et al., 2021), DEEPSTORM (Mancino-Ball et al., 2022), ChocoSGD (Koloskova et al., 2019), and BEER (Zhao et al., 2022). We use “CMP” to represent whether compression is performed by a method. GRADIENTS represents additional assumptions on the stochastic gradients beyond those made in Assumption 3. SMOOTHNESS represents the smoothness condition, where “mean-squared” means \(\mathbb{E}_{\xi_{i}}[\|\nabla F_{i}(\mathbf{x};\xi_{i})-\nabla F_{i}(\mathbf{y};\xi_{i})\|^{2}]\leq L^{2}\|\mathbf{x}-\mathbf{y}\|^{2}\), which is stronger than the \(L\)-smoothness of \(f_{i}\). BS is the required batchsize to get an \(\epsilon\)-stationary solution. VR and MMT represent whether variance reduction or momentum is used. A large batchsize and/or momentum variance reduction can degrade the generalization performance, as we demonstrate in the numerical experiments.

### Compressed Method

In DProxSGT, each worker needs to communicate both the model parameter and the tracked stochastic gradient with its neighbors at every iteration. Communication has become a bottleneck for distributed training on GPUs. In order to save communication cost, we further propose a compressed version of DProxSGT, named CDProxSGT. The pseudocode is shown in Algorithm 2, where \(Q_{\mathbf{x}}\) and \(Q_{\mathbf{y}}\) are two compression operators.

```
Initialize \(\mathbf{x}_{i}^{0}\); set \(\mathbf{y}_{i}^{-1}=\underline{\mathbf{y}}_{i}^{-1}=\nabla F_{i}(\mathbf{x}_{i}^{-1},\xi_{i}^{-1})=\underline{\mathbf{x}}_{i}^{0}=\mathbf{0}\), \(\forall i\in\mathcal{N}\).
for \(t=0,1,2,\ldots,T-1\) do
  all nodes \(i=1,2,\ldots,n\) do the updates in parallel:
  \[\mathbf{y}_{i}^{t-\frac{1}{2}}=\mathbf{y}_{i}^{t-1}+\nabla F_{i}(\mathbf{x}_{i}^{t},\xi_{i}^{t})-\nabla F_{i}(\mathbf{x}_{i}^{t-1},\xi_{i}^{t-1}),\tag{7}\]
  \[\underline{\mathbf{y}}_{i}^{t}=\underline{\mathbf{y}}_{i}^{t-1}+Q_{\mathbf{y}}\big[\mathbf{y}_{i}^{t-\frac{1}{2}}-\underline{\mathbf{y}}_{i}^{t-1}\big],\tag{8}\]
  \[\mathbf{y}_{i}^{t}=\mathbf{y}_{i}^{t-\frac{1}{2}}+\gamma_{y}\Big(\sum_{j=1}^{n}\mathbf{W}_{ji}\underline{\mathbf{y}}_{j}^{t}-\underline{\mathbf{y}}_{i}^{t}\Big),\tag{9}\]
  \[\mathbf{x}_{i}^{t+\frac{1}{2}}=\mathbf{Prox}_{\eta r}\big(\mathbf{x}_{i}^{t}-\eta\mathbf{y}_{i}^{t}\big),\tag{10}\]
  \[\underline{\mathbf{x}}_{i}^{t+1}=\underline{\mathbf{x}}_{i}^{t}+Q_{\mathbf{x}}\big[\mathbf{x}_{i}^{t+\frac{1}{2}}-\underline{\mathbf{x}}_{i}^{t}\big],\tag{11}\]
  \[\mathbf{x}_{i}^{t+1}=\mathbf{x}_{i}^{t+\frac{1}{2}}+\gamma_{x}\Big(\sum_{j=1}^{n}\mathbf{W}_{ji}\underline{\mathbf{x}}_{j}^{t+1}-\underline{\mathbf{x}}_{i}^{t+1}\Big).\tag{12}\]
endfor
```
**Algorithm 2** CDProxSGT

In Algorithm 2, each node communicates the non-compressed vectors \(\underline{\mathbf{y}}_{i}^{t}\) and \(\underline{\mathbf{x}}_{i}^{t+1}\) with its neighbors in (9) and (12). We write it in this way for ease of reading and analysis. For an efficient and _equivalent_ implementation, we do not communicate \(\underline{\mathbf{y}}_{i}^{t}\) and \(\underline{\mathbf{x}}_{i}^{t+1}\) directly but rather the compressed residues \(Q_{\mathbf{y}}\big[\mathbf{y}_{i}^{t-\frac{1}{2}}-\underline{\mathbf{y}}_{i}^{t-1}\big]\) and \(Q_{\mathbf{x}}\big[\mathbf{x}_{i}^{t+\frac{1}{2}}-\underline{\mathbf{x}}_{i}^{t}\big]\), explained as follows. Besides \(\mathbf{y}_{i}^{t-1}\), \(\mathbf{x}_{i}^{t}\), \(\underline{\mathbf{y}}_{i}^{t-1}\) and \(\underline{\mathbf{x}}_{i}^{t}\), each node also stores \(\mathbf{z}_{i}^{t-1}\) and \(\mathbf{s}_{i}^{t}\), which record \(\sum_{j=1}^{n}\mathbf{W}_{ji}\underline{\mathbf{y}}_{j}^{t-1}\) and \(\sum_{j=1}^{n}\mathbf{W}_{ji}\underline{\mathbf{x}}_{j}^{t}\). For the gradient communication, each node \(i\) initializes \(\mathbf{z}_{i}^{-1}=\mathbf{0}\), and then at each iteration \(t\), after receiving \(Q_{\mathbf{y}}\big[\mathbf{y}_{j}^{t-\frac{1}{2}}-\underline{\mathbf{y}}_{j}^{t-1}\big]\) from its neighbors, it updates \(\underline{\mathbf{y}}_{i}^{t}\) by (8), and \(\mathbf{z}_{i}^{t}\) and \(\mathbf{y}_{i}^{t}\) by \[\mathbf{z}_{i}^{t}=\mathbf{z}_{i}^{t-1}+\sum_{j=1}^{n}\mathbf{W}_{ji}Q_{\mathbf{y}}\big[\mathbf{y}_{j}^{t-\frac{1}{2}}-\underline{\mathbf{y}}_{j}^{t-1}\big],\] \[\mathbf{y}_{i}^{t}=\mathbf{y}_{i}^{t-\frac{1}{2}}+\gamma_{y}\big(\mathbf{z}_{i}^{t}-\underline{\mathbf{y}}_{i}^{t}\big).\] From the initialization and the updates of \(\underline{\mathbf{y}}_{i}^{t}\) and \(\mathbf{z}_{i}^{t}\), it always holds that \(\mathbf{z}_{i}^{t}=\sum_{j=1}^{n}\mathbf{W}_{ji}\underline{\mathbf{y}}_{j}^{t}\). The model communication can be done efficiently in the same way. The compression operators \(Q_{\mathbf{x}}\) and \(Q_{\mathbf{y}}\) in Algorithm 2 can be different, but we assume that they both satisfy the following assumption.

**Assumption 4**.: There exists \(\alpha\in[0,1)\) such that \[\mathbb{E}[\|\mathbf{x}-Q[\mathbf{x}]\|^{2}]\leq\alpha^{2}\|\mathbf{x}\|^{2},\quad\forall\,\mathbf{x}\in\mathbb{R}^{d},\] for both \(Q=Q_{\mathbf{x}}\) and \(Q=Q_{\mathbf{y}}\). The assumption on compression operators is standard and is also made in (Koloskova et al., 2019; 2020; Zhao et al., 2022).
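For concreteness, two operators of the kind covered by Assumption 4 are sketched below (illustrative implementations of the examples discussed next):

```
import numpy as np

def top_k(x, ratio):
    # Top-k sparsification: keep the largest `ratio` fraction of the
    # entries in magnitude; satisfies Assumption 4 with
    # alpha^2 = 1 - k/d.
    k = max(1, int(ratio * x.size))
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def rescaled_qsgd(x, s, rng):
    # Rescaled QSGD with tau = 1 + min(d/s^2, sqrt(d)/s), so that
    # Assumption 4 holds with alpha^2 = 1 - 1/tau.
    d, norm = x.size, np.linalg.norm(x)
    if norm == 0.0:
        return x.copy()
    q = np.sign(x) * norm / s * np.floor(s * np.abs(x) / norm
                                         + rng.uniform(0.0, 1.0, d))
    return q / (1.0 + min(d / s**2, np.sqrt(d) / s))
```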
It is satisfied by sparsification operators such as Random-\(k\) (Stich et al., 2018) and Top-\(k\) (Aji and Heafield, 2017). It can also be satisfied by rescaled quantizations. For example, QSGD (Alistarh et al., 2017) compresses \(\mathbf{x}\in\mathbb{R}^{d}\) by \(Q_{qsgd}(\mathbf{x})=\frac{\mathbf{sign}(\mathbf{x})\|\mathbf{x}\|}{s}\left\lfloor s\frac{|\mathbf{x}|}{\|\mathbf{x}\|}+\xi\right\rfloor\), where \(\xi\) is uniformly distributed on \([0,1]^{d}\) and \(s\) is a parameter controlling the compression level. Then \(Q(\mathbf{x})=\frac{1}{\tau}Q_{qsgd}(\mathbf{x})\) with \(\tau=1+\min\{d/s^{2},\sqrt{d}/s\}\) satisfies Assumption 4 with \(\alpha^{2}=1-\frac{1}{\tau}\). More examples can be found in (Koloskova et al., 2019).

Below, we make a couple of remarks to discuss the relations between Algorithm 1 and Algorithm 2.

_Remark 1_.: When \(Q_{\mathbf{x}}\) and \(Q_{\mathbf{y}}\) are both identity operators, i.e., \(Q_{\mathbf{x}}[\mathbf{x}]=\mathbf{x}\) and \(Q_{\mathbf{y}}[\mathbf{y}]=\mathbf{y}\), and \(\gamma_{x}=\gamma_{y}=1\) in Algorithm 2, CDProxSGT reduces to DProxSGT. Hence, the latter can be viewed as a special case of the former. However, we will analyze them separately. Although the big-batch training method ProxGT-SA in (Xin et al., 2021) shares a similar update as the proposed DProxSGT, our analysis will be completely different and new, as we need only \(\mathcal{O}(1)\) samples in each iteration in order to achieve better generalization performance. The analysis of CDProxSGT will be built on that of DProxSGT by carefully controlling the variance error of the stochastic gradients and the consensus error, as well as the additional compression error.

_Remark 2_.: When \(Q_{\mathbf{y}}\) and \(Q_{\mathbf{x}}\) are identity operators, \(\underline{\mathbf{y}}_{i}^{t}=\mathbf{y}_{i}^{t-\frac{1}{2}}\) and \(\underline{\mathbf{x}}_{i}^{t+1}=\mathbf{x}_{i}^{t+\frac{1}{2}}\) for each \(i\in\mathcal{N}\). Hence, in the compression case, \(\underline{\mathbf{y}}_{i}^{t}\) and \(\underline{\mathbf{x}}_{i}^{t+1}\) can be viewed as estimates of \(\mathbf{y}_{i}^{t-\frac{1}{2}}\) and \(\mathbf{x}_{i}^{t+\frac{1}{2}}\). In addition, in a matrix format, we have from (9) and (12) that \[\mathbf{Y}^{t+1}=\mathbf{Y}^{t+\frac{1}{2}}\widehat{\mathbf{W}}_{y}+\gamma_{y}\big(\underline{\mathbf{Y}}^{t+1}-\mathbf{Y}^{t+\frac{1}{2}}\big)(\mathbf{W}-\mathbf{I}), \tag{13}\] \[\mathbf{X}^{t+1}=\mathbf{X}^{t+\frac{1}{2}}\widehat{\mathbf{W}}_{x}+\gamma_{x}\big(\underline{\mathbf{X}}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\big)(\mathbf{W}-\mathbf{I}), \tag{14}\] where \(\widehat{\mathbf{W}}_{y}=\mathbf{I}+\gamma_{y}(\mathbf{W}-\mathbf{I})\) and \(\widehat{\mathbf{W}}_{x}=\mathbf{I}+\gamma_{x}(\mathbf{W}-\mathbf{I})\) are two new mixing matrices, whose contraction factors we denote by \(\widehat{\rho}_{y}\triangleq\|\widehat{\mathbf{W}}_{y}-\mathbf{J}\|_{2}\) and \(\widehat{\rho}_{x}\triangleq\|\widehat{\mathbf{W}}_{x}-\mathbf{J}\|_{2}\). Hence, (13) and (14) can be viewed as inexact mixing steps with the mixing matrices \(\widehat{\mathbf{W}}_{y}\) and \(\widehat{\mathbf{W}}_{x}\), with the addition of the estimation errors \(\underline{\mathbf{Y}}^{t+1}-\mathbf{Y}^{t+\frac{1}{2}}\) and \(\underline{\mathbf{X}}^{t+1}-\mathbf{X}^{t+\frac{1}{2}}\) after one round of neighbor communication.

## 4 Convergence Analysis

In this section, we analyze the convergence of the algorithms proposed in section 3. Nonconvexity of the problem and stochasticity of the algorithms both raise difficulty in the analysis. In addition, the coexistence of the nonsmooth regularizer \(r(\cdot)\) causes more significant challenges. To address these challenges, we employ the tool of the so-called Moreau envelope (Moreau, 1965), which has been commonly used for analyzing methods for solving nonsmooth weakly-convex problems.

**Definition 1** (Moreau envelope).: Let \(\psi\) be an \(L\)-weakly convex function, i.e., \(\psi(\cdot)+\frac{L}{2}\|\cdot\|^{2}\) is convex.
For \(\lambda\in(0,\frac{1}{L})\), the Moreau envelope of \(\psi\) is defined as \[\psi_{\lambda}(\mathbf{x})=\min_{\mathbf{y}}\left\{\psi(\mathbf{y})+\frac{1}{ 2\lambda}\|\mathbf{y}-\mathbf{x}\|^{2}\right\},\] and the unique minimizer is denoted as \[\mathbf{Prox}_{\lambda\psi}(\mathbf{x})=\operatorname*{arg\,min}_{\mathbf{y} }\left\{\psi(\mathbf{y})+\frac{1}{2\lambda}\|\mathbf{y}-\mathbf{x}\|^{2} \right\}.\] The Moreau envelope \(\psi_{\lambda}\) has nice properties. The result below can be found in (Davis & Drusvyatskiy, 2019; Nazari et al., 2020; Xu et al., 2022). **Lemma 2**.: _For any function \(\psi\), if it is \(L\)-weakly convex, then for any \(\lambda\in(0,\frac{1}{L})\), the Moreau envelope \(\psi_{\lambda}\) is smooth with gradient given by \(\nabla\psi_{\lambda}(\mathbf{x})=\lambda^{-1}(\mathbf{x}-\widehat{\mathbf{x}}),\) where \(\widehat{\mathbf{x}}=\mathbf{Prox}_{\lambda\psi}(\mathbf{x})\). Moreover,_ \[\|\mathbf{x}-\widehat{\mathbf{x}}\|=\lambda\|\nabla\psi_{\lambda}(\mathbf{x}) \|,\quad\mathbf{dist}(\mathbf{0},\partial\psi(\widehat{\mathbf{x}}))\leq\| \nabla\psi_{\lambda}(\mathbf{x})\|.\] Lemma 2 implies that if \(\|\nabla\psi_{\lambda}(\mathbf{x})\|\) is small, then \(\widehat{\mathbf{x}}\) is a near-stationary point of \(\psi\) and \(\mathbf{x}\) is close to \(\widehat{\mathbf{x}}\). Hence, \(\|\nabla\psi_{\lambda}(\mathbf{x})\|\) can be used as a valid measure of stationarity violation at \(\mathbf{x}\) for \(\psi\). Based on this observation, we define the \(\epsilon\)-stationary solution below for the decentralized problem (2). **Definition 3** (Expected \(\epsilon\)-stationary solution).: Let \(\epsilon>0\). A point \(\mathbf{X}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{n}]\) is called an expected \(\epsilon\)-stationary solution of (2) if for a constant \(\lambda\in(0,\frac{1}{L})\), \[\tfrac{1}{n}\mathbb{E}\left[\sum_{i=1}^{n}\|\nabla\phi_{\lambda}(\mathbf{x}_{ i})\|^{2}+L^{2}\|\mathbf{X}_{\perp}\|^{2}\right]\leq\epsilon^{2}.\] In the definition above, \(L^{2}\) before the consensus error term \(\|\mathbf{X}_{\perp}\|^{2}\) is to balance the two terms. This scaling scheme has also been used in existing works such as (Xin et al., 2021a; Mancino-Ball et al., 2022; Yau & Wai, 2022). From the definition, we see that if \(\mathbf{X}\) is an expected \(\epsilon\)-stationary solution of (2), then each local solution \(\mathbf{x}_{i}\) will be a near-stationary solution of \(\phi\) and in addition, these local solutions are all close to each other, namely, they are near consensus. Below we first state the convergence results of the non-compressed method DProxSGT and then the compressed one CDProxSGT. All the proofs are given in the appendix. **Theorem 4** (Convergence rate of DProxSGT).: _Under Assumptions 1 - 3, let \(\{\mathbf{X}^{t}\}\) be generated from \(\mathrm{DProxSGT}\) in Algorithm 1 with \(\mathbf{x}_{i}^{0}=\mathbf{x}^{0},\forall\,i\in\mathcal{N}\). Let \(\lambda=\min\left\{\frac{1}{4L},\frac{1}{96\rho L}\right\}\) and \(\eta\leq\min\left\{\frac{1}{4L},\frac{(1-\rho^{2})^{4}}{96\rho L}\right\}\). Select \(\tau\) from \(\{0,1,\ldots,T-1\}\) uniformly at random. 
Then_ \[\tfrac{1}{n}\mathbb{E}\left[\sum_{i=1}^{n}\|\nabla\phi_{\lambda}(\mathbf{x}_{i}^{\tau})\|^{2}+\tfrac{4}{\lambda\eta}\|\mathbf{X}_{\perp}^{\tau}\|^{2}\right]\] \[\leq\tfrac{8\left(\phi_{\lambda}(\mathbf{x}^{0})-\phi_{\lambda}^{\star}\right)}{\eta T}+\tfrac{4616\eta}{\lambda(1-\rho^{2})^{3}}\sigma^{2}+\tfrac{768\eta\mathbb{E}\left[\|\nabla\mathbf{F}^{0}(\mathbf{I}-\mathbf{J})\|^{2}\right]}{n\lambda T(1-\rho^{2})^{3}},\] _where \(\phi_{\lambda}^{\star}=\min_{\mathbf{x}}\phi_{\lambda}(\mathbf{x})>-\infty\)._

By Theorem 4, we obtain a complexity result as follows.

**Corollary 5** (Iteration complexity).: _Under the assumptions of Theorem 4, for a given \(\epsilon>0\), take \(\eta=\min\{\frac{1}{4L},\frac{(1-\rho^{2})^{4}}{96\rho L},\frac{\lambda(1-\rho^{2})^{3}\epsilon^{2}}{9232\sigma^{2}}\}\). Then \(\mathrm{DProxSGT}\) can find an expected \(\epsilon\)-stationary point of (2) when \(T\geq T_{\epsilon}=\left\lceil\frac{16\left(\phi_{\lambda}(\mathbf{x}^{0})-\phi_{\lambda}^{\star}\right)}{\eta\epsilon^{2}}+\frac{1536\eta\mathbb{E}\left[\|\nabla\mathbf{F}^{0}(\mathbf{I}-\mathbf{J})\|^{2}\right]}{n\lambda(1-\rho^{2})^{3}\epsilon^{2}}\right\rceil\)._

_Remark 3_.: When \(\epsilon\) is small enough, \(\eta\) will take the value \(\frac{\lambda(1-\rho^{2})^{3}\epsilon^{2}}{9232\sigma^{2}}\), and \(T_{\epsilon}\) will be dominated by its first term. In this case, DProxSGT can find an expected \(\epsilon\)-stationary solution of (2) in \(O\left(\frac{\sigma^{2}\left(\phi_{\lambda}(\mathbf{x}^{0})-\phi_{\lambda}^{\star}\right)}{\lambda(1-\rho^{2})^{3}\epsilon^{4}}\right)\) iterations, leading to the same number of stochastic gradient samples and communication rounds. Our sample complexity is optimal in terms of the dependence on \(\epsilon\) under the smoothness condition in Assumption 1, as it matches the lower bound in (Arjevani et al., 2022). However, the dependence on \(1-\rho\) may not be optimal because of our possibly loose analysis, as the _deterministic_ method with a single communication per update in (Scutari & Sun, 2019) for nonconvex nonsmooth problems has a dependence of \((1-\rho)^{2}\) on the graph topology.

**Theorem 6** (Convergence rate of CDProxSGT).: _Under Assumptions 1 through 4, let \(\{\mathbf{X}^{t}\}\) be generated from \(\mathrm{CDProxSGT}\) in Algorithm 2 with \(\mathbf{x}_{i}^{0}=\mathbf{x}^{0},\forall\,i\in\mathcal{N}\). Let \(\lambda=\min\left\{\frac{1}{4L},\frac{(1-\alpha^{2})^{2}}{8L+41280}\right\}\), and suppose_ \[\eta\leq\min\left\{\lambda,\frac{(1-\alpha^{2})^{2}(1-\widehat{\rho}_{x}^{2})^{2}(1-\widehat{\rho}_{y}^{2})^{2}}{18830\max\{1,L\}}\right\},\] \[\gamma_{x}\leq\min\left\{\frac{1-\alpha^{2}}{25},\frac{\eta}{\alpha}\right\},\quad\gamma_{y}\leq\tfrac{(1-\alpha^{2})(1-\widehat{\rho}_{x}^{2})(1-\widehat{\rho}_{y}^{2})}{317},\] _where \(\widehat{\rho}_{x}\) and \(\widehat{\rho}_{y}\) are the contraction factors of the mixing matrices \(\widehat{\mathbf{W}}_{x}\) and \(\widehat{\mathbf{W}}_{y}\) defined in Remark 2. Select \(\tau\) from \(\{0,1,\ldots,T-1\}\) uniformly at random.
Then_ \[\tfrac{1}{n}\mathbb{E}\left[\sum_{i=1}^{n}\|\nabla\phi_{\lambda}(\mathbf{x}_{i}^{\tau})\|^{2}+\tfrac{4}{\lambda\eta}\|\mathbf{X}_{\perp}^{\tau}\|^{2}\right]\] \[\leq\tfrac{8\left(\phi_{\lambda}(\mathbf{x}^{0})-\phi_{\lambda}^{\star}\right)}{\eta T}+\tfrac{(50096n+48)\eta\sigma^{2}}{n\lambda(1-\widehat{\rho}_{x}^{2})^{2}(1-\widehat{\rho}_{y}^{2})}+\tfrac{4176\eta\mathbb{E}\left[\|\nabla\mathbf{F}^{0}(\mathbf{I}-\mathbf{J})\|^{2}\right]}{n\lambda T(1-\widehat{\rho}_{x}^{2})^{2}(1-\widehat{\rho}_{y}^{2})},\] _where \(\phi_{\lambda}^{\star}=\min_{\mathbf{x}}\phi_{\lambda}(\mathbf{x})>-\infty\)._

By Theorem 6, we obtain the following complexity result.

**Corollary 7** (Iteration complexity).: _Under the assumptions of Theorem 6, for a given \(\epsilon>0\), take_ \[\eta=\min\left\{\frac{1}{4L},\frac{(1-\alpha^{2})^{2}}{9L+41280},\frac{(1-\alpha^{2})^{2}(1-\widehat{\rho}_{x}^{2})^{2}(1-\widehat{\rho}_{y}^{2})^{2}}{18830\max\{1,L\}},\frac{n\lambda(1-\widehat{\rho}_{x}^{2})^{2}(1-\widehat{\rho}_{y}^{2})^{2}\epsilon^{2}}{2(50096n+48)\sigma^{2}}\right\},\] \[\gamma_{x}=\min\left\{\frac{1-\alpha^{2}}{25},\frac{\eta}{\alpha}\right\},\quad\gamma_{y}=\frac{(1-\alpha^{2})(1-\widehat{\rho}_{x}^{2})(1-\widehat{\rho}_{y}^{2})}{317}.\] _Then \(\mathrm{CDProxSGT}\) can find an expected \(\epsilon\)-stationary point of (2) when \(T\geq T_{\epsilon}^{c}\), where_ \[T_{\epsilon}^{c}=\left\lceil\frac{16\left(\phi_{\lambda}(\mathbf{x}^{0})-\phi_{\lambda}^{*}\right)}{\eta\epsilon^{2}}+\frac{8352\eta\,\mathbb{E}\left[\|\nabla\mathbf{F}^{0}(\mathbf{I}-\mathbf{J})\|^{2}\right]}{n\lambda(1-\widehat{\rho}_{x}^{2})^{2}(1-\widehat{\rho}_{y}^{2})\epsilon^{2}}\right\rceil.\]

_Remark 4_.: When the given tolerance \(\epsilon\) is small enough, \(\eta\) will take the value \(\frac{n\lambda(1-\widehat{\rho}_{x}^{2})^{2}(1-\widehat{\rho}_{y}^{2})^{2}\epsilon^{2}}{2(50096n+48)\sigma^{2}}\), and \(T_{\epsilon}^{c}\) will be dominated by its first term. In this case, similar to DProxSGT in Remark 3, CDProxSGT can find an expected \(\epsilon\)-stationary solution of (2) in \(O\Big(\frac{\sigma^{2}\left(\phi_{\lambda}(\mathbf{x}^{0})-\phi_{\lambda}^{*}\right)}{\lambda(1-\widehat{\rho}_{x}^{2})^{2}(1-\widehat{\rho}_{y}^{2})^{2}\epsilon^{4}}\Big)\) iterations.

## 5 Numerical Experiments

In this section, we test the proposed algorithms on training two neural network models, in order to demonstrate their better generalization over momentum variance-reduction methods and large-batch training methods and to demonstrate the success of handling heterogeneous data even when only compressed model parameter and gradient information are communicated among workers. One neural network that we test is LeNet5 (LeCun et al., 1989) on the FashionMNIST dataset (Xiao et al., 2017), and the other is FixupResNet20 (Zhang et al., 2019) on Cifar10 (Krizhevsky et al., 2009).

Our experiments are representative of the practical performance of our methods. Among several closely-related works, (Xin et al., 2021a) includes no experiments, and (Mancino-Ball et al., 2022; Zhao et al., 2022) only test on tabular data and MNIST. (Koloskova et al., 2019a) tests its method on Cifar10 but needs similar data distributions on all workers for good performance. FashionMNIST has a similar scale as MNIST but poses a more challenging classification task (Xiao et al., 2017). Cifar10 is more complex, and FixupResNet20 has more layers than LeNet5.

All the compared algorithms are implemented in Python with Pytorch and MPI4PY (for distributed computing). They run on a Dell workstation with two Quadro RTX 5000 GPUs. We use the 2 GPUs as 5 workers, which communicate over a ring-structured network (so each worker can only communicate with two neighbors).
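The mixing matrix for this 5-worker ring, with the uniform weights stated next, can be built and checked against Assumption 2 in a few lines (a sketch, not our training code):

```
import numpy as np

def ring_mixing_matrix(n):
    # Each worker averages itself and its two ring neighbors with
    # uniform weight 1/3; W is doubly stochastic by symmetry.
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i, i + 1):
            W[i, j % n] = 1.0 / 3.0
    return W

W = ring_mixing_matrix(5)
rho = np.linalg.norm(W - np.ones((5, 5)) / 5, 2)  # spectral norm of W - J
print(rho < 1)   # True: Assumption 2(iii) holds for the connected ring
```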
Uniform weights are used, i.e., \(W_{ji}=\frac{1}{3}\) for each pair of connected workers \(i\) and \(j\). Both FashionMNIST and Cifar10 have 10 classes. We distribute each dataset onto the 5 workers based on the class labels, namely, each worker holds 2 classes of data points, and thus the data are heterogeneous across the workers.

For all methods, we report their objective values on the training data, prediction accuracy on the testing data, and consensus errors at each epoch. To save time, the objective values are computed as the average of the losses that are evaluated during the training process (i.e., on the sampled data instead of the whole training data) plus the regularizer per epoch. For the testing accuracy, we first compute the accuracy on the whole testing data for each worker by using its own model parameter and then take the average. The consensus error is simply \(\lVert\mathbf{X}_{\perp}\rVert^{2}\).

### Sparse Neural Network Training

In this subsection, we test the non-compressed method DProxSGT and compare it with AllReduce (a centralized method used as a baseline), DEEPSTORM1 and ProxGT-SA (Xin et al., 2021a) on solving (2), where \(f\) is the loss on the whole training data and \(r(\mathbf{x})=\mu\lVert\mathbf{x}\rVert_{1}\) serves as a sparse regularizer that encourages a sparse model.

Footnote 1: For DEEPSTORM, we implement DEEPSTORM v2 in (Mancino-Ball et al., 2022).

For training LeNet5 on FashionMNIST, we set \(\mu=10^{-4}\) and run each method for 100 epochs. The learning rate \(\eta\) and batchsize are set to \(0.01\) and 8 for AllReduce and DProxSGT. DEEPSTORM uses the same \(\eta\) and batchsize but with a larger initial batchsize of 200, and its momentum parameter is tuned to \(\beta=0.8\) in order to yield the best performance. ProxGT-SA is a large-batch training method. We set its batchsize to 256 and accordingly apply a larger step size \(\eta=0.3\), which is the best among \(\{0.1,0.2,0.3,0.4\}\).

For training FixupResnet20 on Cifar10, we set \(\mu=5\times 10^{-5}\) and run each method for 500 epochs. The learning rate and batchsize are set to \(\eta=0.02\) and 64 for AllReduce, DProxSGT, and DEEPSTORM. The initial batchsize is set to 1600 for DEEPSTORM, and the momentum parameter is set to \(\beta=0.8\). ProxGT-SA uses a larger batchsize of 512 and a larger stepsize \(\eta=0.1\), which gives the best performance among \(\{0.05,0.1,0.2,0.3\}\).

The results for all methods are plotted in Figure 1. For LeNet5, DProxSGT produces almost the same curves as the centralized training method AllReduce, while on FixupResnet20, DProxSGT even outperforms AllReduce in terms of testing accuracy. This could be because AllReduce aggregates stochastic gradients from all the workers for each update and thus, equivalently, it actually uses a larger batchsize. DEEPSTORM performs equally well as our method DProxSGT on training LeNet5. However, it gives lower testing accuracy than DProxSGT and also oscillates much more severely on training the more complex neural network FixupResnet20. This appears to be caused by the momentum variance reduction scheme used in DEEPSTORM. In addition, we see that the large-batch training method ProxGT-SA performs much worse than DProxSGT within the same number of epochs (i.e., data passes), especially on training FixupResnet20.

Figure 1: Results of training sparse neural networks by non-compressed methods with \(r(\mathbf{x})=\mu\|\mathbf{x}\|_{1}\) for the same number of epochs. Left: LeNet5 on FashionMNIST with \(\mu=10^{-4}\). Right: FixupResnet20 on Cifar10 with \(\mu=5\times 10^{-5}\).

### Neural Network Training by Compressed Methods

In this subsection, we compare CDProxSGT with two state-of-the-art compressed training methods: Choco-SGD (Koloskova et al., 2019; 2020) and BEER (Zhao et al., 2022).
As Choco-SGD and BEER are studied only for problems without a regularizer, we set \(r(\mathbf{x})=0\) in (2) for these tests. Again, we compare their performance on training LeNet5 and FixupResnet20. The two non-compressed methods AllReduce and DProxSGT are included as baselines. The same compressors are used for CDProxSGT, Choco-SGD, and BEER whenever compression is applied.

We run each method for 100 epochs for training LeNet5 on FashionMNIST. The compressors \(Q_{y}\) and \(Q_{x}\) are set to top-\(k(0.3)\) (Aji and Heafield, 2017), i.e., taking the largest \(30\%\) of the elements of an input vector in absolute value and zeroing out all others. We set the batchsize to 8 and tune the learning rate \(\eta\) to \(0.01\) for AllReduce, DProxSGT, CDProxSGT and Choco-SGD; for CDProxSGT, we set \(\gamma_{x}=\gamma_{y}=0.5\). BEER is a large-batch training method. It uses a larger batchsize of 256 and accordingly a larger learning rate \(\eta=0.3\), which appears to be the best among \(\{0.1,0.2,0.3,0.4\}\).

For training FixupResnet20 on the Cifar10 dataset, we run each method for 500 epochs. We take top-\(k(0.4)\) (Aji and Heafield, 2017) as the compressors \(Q_{y}\) and \(Q_{x}\) and set \(\gamma_{x}=\gamma_{y}=0.8\). For AllReduce, DProxSGT, CDProxSGT and Choco-SGD, we set their batchsize to 64 and tune the learning rate \(\eta\) to \(0.02\). For BEER, we use a larger batchsize of 512 and a larger learning rate \(\eta=0.1\), which is the best among \(\{0.05,0.1,0.2,0.3\}\).

The results are shown in Figure 2. For both models, CDProxSGT yields almost the same curves of objective values and testing accuracy as its non-compressed counterpart DProxSGT and the centralized non-compressed method AllReduce. This indicates about a 70% saving of communication for the training of LeNet5 and a 60% saving for FixupResnet20 without sacrificing the testing accuracy. In comparison, BEER performs significantly worse than the proposed method CDProxSGT within the same number of epochs in terms of all three measures, especially on training the more complex neural network FixupResnet20, which should be attributed to the use of a larger batch by BEER. Choco-SGD can produce comparable objective values. However, its testing accuracy is much lower than that produced by our method CDProxSGT. This should be because of the data heterogeneity that Choco-SGD cannot handle, while CDProxSGT applies gradient tracking to successfully address the challenges of data heterogeneity.

Figure 2: Results of training neural network models by compressed methods for the same number of epochs. Left: LeNet5 on FashionMNIST. Right: FixupResnet20 on Cifar10.

## 6 Conclusion

We have proposed two decentralized proximal stochastic gradient methods, DProxSGT and CDProxSGT, for nonconvex composite problems with data heterogeneously distributed on the computing nodes of a connected graph. CDProxSGT is an extension of DProxSGT obtained by applying compression to the communicated model parameter and gradient information. Both methods need only a single or \(\mathcal{O}(1)\) samples for each update, which is important to yield good generalization performance on training deep neural networks. Gradient tracking is used in both methods to address data heterogeneity.
An \(\mathcal{O}\left(\frac{1}{\epsilon^{4}}\right)\) sample complexity and communication complexity is established for both methods to produce an expected \(\epsilon\)-stationary solution. Numerical experiments on training neural networks demonstrate the good generalization performance of the proposed methods and their ability to handle heterogeneous data.
2309.09617
The Potential of Subsampling and Inpainting for Fast Low-Dose Cryo FIB-SEM Imaging and Tomography
Traditional image acquisition for cryo focused ion-beam scanning electron microscopy tomography often sees thousands of images being captured over a period of many hours, with immense data sets being produced. When imaging beam sensitive materials, these images are often compromised by additional constraints related to beam damage and the devitrification of the material during imaging, which renders data acquisition both costly and unreliable. Subsampling and inpainting are proposed as solutions for both of these aspects, allowing fast and low-dose imaging to take place in the FIB-SEM without an appreciable loss in image quality. In this work, experimental data is presented which validates subsampling and inpainting as a useful tool for convenient and reliable data acquisition in a FIB-SEM, with new methods of handling 3-dimensional data being employed in the context of dictionary learning and inpainting algorithms using a newly developed microscope control software and data recovery algorithm.
Daniel Nicholls, Maryna Kobylysnka, Jack Wells, Zoe Broad, Alex W. Robinson, Damien McGrouther, Amirafshar Moshtaghpour, Angus I. Kirkland, Roland A. Fleck, Nigel D. Browning
2023-09-18T09:40:48Z
http://arxiv.org/abs/2309.09617v2
# The Potential of Subsampling and Inpainting for Fast Low-Dose Cryo FIB-SEM Imaging and Tomography

###### Abstract

Traditional image acquisition for cryo focused ion-beam scanning electron microscopy tomography often sees thousands of images being captured over a period of many hours, with immense data sets being produced. When imaging beam sensitive materials, these images are often compromised by additional constraints related to beam damage and the devitrification of the material during imaging, which renders data acquisition both costly and unreliable. Subsampling and inpainting are proposed as solutions for both of these aspects, allowing fast and low-dose imaging to take place in the FIB-SEM without an appreciable loss in image quality. In this work, experimental data is presented which validates subsampling and inpainting as a useful tool for convenient and reliable data acquisition in a FIB-SEM, with new methods of handling 3-dimensional data being employed in the context of dictionary learning and inpainting algorithms using a newly developed microscope control software and data recovery algorithm.

## 1 Introduction

Focused ion-beam scanning electron microscopy (FIB-SEM) tomography is a powerful technique for performing high-resolution volume imaging. This technique produces three/four-dimensional SEM image data cubes generated by sequential SEM imaging and FIB serial sectioning [11; Kizilyaprak et al. (2014)]. In the case of three-dimensional data, each FIB-section is followed by a single image. In the case of four-dimensional data, each FIB-section is followed by a set of images, referred to individually as frames. The dimensionality of this 4D data can be naturally reduced to 3D by integrating each stack of frames, and this is commonly done during the imaging of beam/charge sensitive materials such as those often encountered when employing cryo techniques for the imaging of biological materials. It is observed that acquiring multiple images with a reduced dwell time (_i.e.,_ the beam exposure time for each pixel in an image) produces higher quality images when compared to acquiring an equivalent electron-count image at a higher dwell time but with fewer frames. The mechanism behind this phenomenon is not entirely understood by the community and related work is ongoing, but it is generally attributed to sample charging.

For the aforementioned volume imaging of biological materials at cryo conditions [10, 11, 12], there is a constant struggle: the sample must be irradiated enough to produce sufficient signal for fine features to be resolved by the scanning electron microscope, yet not so heavily irradiated as to induce charging, which degrades the signal-to-noise ratio (SNR), or, worse yet, alters the structure of the material or devitrifies it. Greater understanding of sample preparation and more sensitive detectors have made significant strides in lowering the barrier of entry for cryo FIB-SEM imaging and tomography, but the process of acquiring sizeable volumes with sufficient quality for post-acquisition analysis remains difficult and requires a substantial commitment of time and expertise. It is not uncommon for FIB-SEM tomography data sets to span multiple days or weeks of acquisition, depending on the imaging conditions and experiment requirements, with regular monitoring. A majority of this time is spent imaging, wherein thousands of low-dose frames may be acquired, before being stitched together to form a full volume.
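The 4D-to-3D reduction described above is a plain integration over the frame axis; a minimal sketch (the axis layout is an illustrative assumption):

```
import numpy as np

def integrate_frames(stack_4d):
    # Collapse a 4D acquisition (slice, frame, y, x) into a 3D volume
    # (slice, y, x) by integrating the low-dose frames recorded after
    # each FIB section.
    return stack_4d.sum(axis=1)
```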
It has been previously proposed that compressive sensing fundamentals can be applied to this imaging domain through the application of subsampling and inpainting methods [13]. This work validated, via simulation, the use of subsampling: the deliberate acquisition of incomplete data sets, and inpainting, the recovery of missing sections of images. This work indicated that taking advantage of the unique dimensionality of the data provided by the FIB-SEM acquisition model is incredibly useful in increasing data acquisition and inpainting efficiency. Investigated here is an extension of this line of enquiry not only with experimental data, but also with a new method for efficiently inpainting data produced by cryo FIB-SEM imaging of a biological system, _Euglena gracilis_. ## 2 Methods ### Experimental Setup All of the data presented in this work was acquired using a JEOL JIB-4700F Z FIB-SEM (JEOL, Japan), equipped with a Leica microsystems EM VCT500 cryo stage and cryo transfer system (Leica microsystems, Austria). The data acquisition was performed using SenseAI's control software (SenseAI, UK) and Quantum Detectors' scan engine (Quantum Detectors, UK). The sample, _Euglena gracilis_ (4\(\mu\)L), Klebs CCAP 1224/5Z, was pipetted onto a gold TEM grid with a holey carbon support film (R 1.2/1.3) with 1.2\(\mu\)m hole diameter, 1.3\(\mu\)m spacing, and 2.5\(\mu\)m periodicity (Quantifoil, Germany). Samples were blotted (60s at 98% RH) and plunge frozen into liquid ethane (EM GP, plunge freezer, Leica microsystems). Vitrified grids were clipped onto a JEOL cryoARM transfer cartridge (JEOL, Japan) and loaded onto a EM VCT transfer block for transfer to the FIB-SEM. Loading was performed at 0.1% RH in a EM VCM cryo loading station (Leica microsystems) and carried to the FIB-SEM under vacuum in a cryogenically cooled EM VCT500 transfer shuttle. An intermediate evacuation of the EM VCT500 shuttle was applied by attaching the shuttle to a EM VCT500 compatible EM ACE 900 vacuum coating instrument (Leica microsystems). A single _E. gracilis_ was identified and an organoplatinum coating (5s) was applied to the surface of the sample to minimise curtaining and aid sample conductivity. A clean face of the specimen was prepared prior to imaging using FIB milling and polishing. This sample was chosen as it is a well understood sample that is well documented, and is a typical proxy for many relevant biological materials. To perform the subsampled data acquisition and inpainting, SenseAI's control software and Quantum Detectors' scan engine were utilised. SenseAI is a software suite which provides microscope control to perform regular and subsampled imaging with high levels of user control, as well as performing image reconstruction directly through the use of dictionary learning based image inpainting algorithms. Quantum Detectors' scan engine is used to interface directly with the SEM scanning system, circumventing the manufacturer's image acquisition software. Together, these tools provide the microscopist with powerful options for tailoring their image acquisition to their exact specifications. Fig. 2 shows an example of an SEM secondary electron image formed by integration of 100 frames acquired with a 1\(\mu\)s dwell time. A detailed description of the acquisition model can be found in previous work [Nicholls et al. (2023)]. To ensure hysteresis issues are minimised, as they are not the focus of this study, line hop sampling was utilised [Kovarik et al. (2016); Nicholls et al. (2021)]. 
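For illustration, a minimal sketch of a line hop sampling mask is given below; it assumes a fixed per-line sampling fraction with randomly chosen pixel positions in each scan line, which is a simplification of the schemes in the cited work.

```python
import numpy as np

def line_hop_mask(height: int, width: int, fraction: float, seed=None) -> np.ndarray:
    """Boolean sampling mask where each scan line contains a fixed fraction
    of randomly chosen pixels, visited in order along the fast-scan
    direction to limit scan hysteresis."""
    rng = np.random.default_rng(seed)
    per_line = max(1, int(round(fraction * width)))
    mask = np.zeros((height, width), dtype=bool)
    for row in range(height):
        # Sorted random positions: the probe hops forward within the line.
        cols = np.sort(rng.choice(width, size=per_line, replace=False))
        mask[row, cols] = True
    return mask

mask = line_hop_mask(512, 512, fraction=0.10)  # 10% sampling
```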
Line hop sampling has previously been implemented on scanning transmission electron microscopes with positive results, and as the optics and system are similar, it was proposed that line hop sampling would be adequate for use with scanning electron microscopes. A detailed study into scanning electron microscope hysteresis when subsampling is reserved for future work, though positive results have been published previously [1]. Figure 1: (Left) Operating principles of Cryo FIB-SEM. A focused ion-beam is used to remove a layer of material. A scanning electron microscope is then used to obtain surface information from that newly revealed surface. (Right) Diagram showing the 3-dimensional nature of the data cube and the structure of a patch in 3D for dictionary learning and inpainting. Each voxel in this diagram is equivalent to a pixel in each image, where the cube is constructed from a series of images. ### Data Recovery For a detailed description of the image recovery model, the reader is referred to previous work [21] which covers this in depth. The only change to the image recovery model for this work is that the data cube, rather than being partitioned into \(B\times B\) overlapping square patches, is instead partitioned into \(B\times B\times L\) patches, where \(L\) is the depth of the patch along the Z axis through the data cube, and as such the respective dictionary dimensions change to reflect this. As previously mentioned, the data was acquired and then inpainted using SenseAI, a microscope control and image recovery software. SenseAI employs the beta process factor analysis (BPFA) algorithm [14, 15], a dictionary learning algorithm, which has previously been validated for use in other electron microscopy applications [16, 17, 20, 21, 22]. BPFA can be used to learn features within a data set, and when paired with an appropriate sparse coding algorithm, can be used to inpaint data - fill in the missing gaps. By using a 3-dimensional patch shape within SenseAI, information can be learned and inpainted in all three dimensions. This allows dictionary elements to be formed which consider the whole data stack, allowing information from different layers within the stack to inform the learning process. For the examples provided in this work, no image registration or alignment was performed - the data is untreated and used as it is acquired from the microscope. Performance could theoretically be improved by pre-processing, but this is omitted to allow the proposed method to be studied in isolation. ## 3 Results Fig. 4 shows various conditions used to image a single face of an alga, _Euglena gracilis_, prepared by FIB-milling. For a series of sampling percentages, a set of 100 images (or frames) was acquired with a dwell time of 1\(\mu\)s. As can be seen in the first column of Fig. 4, as the sampling percentage decreases through the application of probe subsampling, the image becomes darker because a smaller portion of the data is acquired, with non-sampled data being represented by black pixels. This sampling percentage directly correlates to both the time to acquire the frame and the relative electron dose used to form it - a frame acquired at 10% sampling requires a tenth of the time and electron dose of the 100% sampling equivalent image. The second column of Fig. 4 shows this individual frame reconstructed using a 3-dimensional patch through SenseAI, where the whole stack of 100 frames was included in the dictionary learning process.
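To make the \(B\times B\times L\) patch model concrete, the following is a schematic NumPy sketch of dictionary-based 3D inpainting. BPFA itself is not reproduced here: a pre-learned dictionary `D` is assumed, and a plain least-squares fit on the observed voxels stands in for its sparse coding step. All names are illustrative and not part of SenseAI's interface.

```python
import numpy as np

def inpaint_cube(y, mask, D, B=8, L=4, stride=4):
    """Inpaint a 3D data cube from subsampled measurements.

    y    : (H, W, Z) cube with unobserved voxels set to 0.
    mask : boolean (H, W, Z), True where a voxel was actually sampled.
    D    : (B*B*L, K) dictionary whose columns are learned 3D patch atoms.
    """
    H, W, Z = y.shape
    recon = np.zeros_like(y, dtype=np.float64)
    weight = np.zeros_like(y, dtype=np.float64)
    for i in range(0, H - B + 1, stride):
        for j in range(0, W - B + 1, stride):
            for k in range(0, Z - L + 1, stride):
                patch = y[i:i+B, j:j+B, k:k+L].reshape(-1)
                obs = mask[i:i+B, j:j+B, k:k+L].reshape(-1)
                if obs.sum() < D.shape[1]:
                    continue  # too few samples to fit this patch
                # Fit coefficients on the observed voxels only
                # (least squares stands in for BPFA's sparse coding).
                w, *_ = np.linalg.lstsq(D[obs], patch[obs], rcond=None)
                est = (D @ w).reshape(B, B, L)
                recon[i:i+B, j:j+B, k:k+L] += est
                weight[i:i+B, j:j+B, k:k+L] += 1.0
    # Average the overlapping patch estimates; keep measured voxels as acquired.
    recon = np.where(weight > 0, recon / np.maximum(weight, 1), y)
    recon[mask] = y[mask]
    return recon
```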
For all cases, from high sampling to low sampling, this reconstruction visually shows a quality increase over the subsampled frame acquisition. Another option for treating subsampled data without requiring a reconstruction is to simply integrate the data, and this is shown in column three of Fig. 4, whereby each pixel value is determined by the non-zero mean of the pixel values in the integrating axis. For 100 frames and line hop sampling, this method is valid down to 15% sampling, where image artefacts begin to appear. At 10% and below, significant distortions are present. In all cases, however, a benefit is seen through inpainting, as seen in column four: images formed by integrating the entire stack of reconstructions, akin to the regular method of integrating the entire stack of fully sampled images. For each of these integrated images (columns three and four), the peak signal-to-noise ratio, an image quality metric, was calculated. Reconstruction by SenseAI showed an average increase in image quality of 5.5 dB when compared to the equivalent image at 100% sampling, as shown in Fig. 3. This can be interpreted as a significant increase in image quality due to inpainting, which enables subsampling as a valid imaging tool to perform fast, low-dose SEM imaging. ## 4 Conclusions and Future Work In this work, subsampling and inpainting methods have been used to image a cryo-fixed vitrified alga using scanning electron microscopy, enabled by software and hardware supplied by SenseAI and Quantum Detectors. The efficacy of these methods was significantly increased through the application of 3-dimensional dictionary learning and inpainting, wherein an entire data stack was used to inform the data recovery processes. This led to a significant increase in image quality for both single frames and integrated images, enabling both time resolved SEM imaging as well as high-resolution imaging of this beam sensitive material under low-dose conditions. The positive results of applying 3-dimensional dictionary learning and inpainting to this integrated-SEM imaging framework demonstrate an important fact: higher dimensionality data provides unique benefits when performing dictionary learning. This is naturally understandable as learning and inpainting efficacy generally increases as the amount of data (and processing power) available to the algorithm increases. A natural extension of these methods is to move towards 4-dimensional imaging, wherein FIB-enabled slice-and-view methods inform the fourth dimension, _i.e.,_ tomography. This theoretical experiment begins with a data cube of dimension (X, Y, frames, slices), where all four axes are subsampled in some manner. This extension would, in theory, further increase the efficiency of the learning algorithm, and as such increase reconstruction quality and enable even lower sampling rates to be considered. However, a major technological issue stands in the way, in that the size of the data that can be processed by SenseAI is currently limited to the amount of memory on the GPU available. Currently, meaningful 4-dimensional cryo FIB-SEM data volumes exceed SenseAI's capability, and only 3-dimensional data sets such as the one presented in this work can be handled for the time being, with work regarding 4-dimensional inpainting reserved for future work. Other ongoing work related to dictionary learning and inpainting of 4D-STEM data using SenseAI corroborates this concept, though the data volumes in 4D-STEM are significantly smaller [Robinson et al. (2023a)]. Figure 2: SEM secondary electron integrated image formed by SenseAI's microscope control software and Quantum Detectors' scan engine; 100% sampling, 1\(\mu\)s dwell time, probe 3, and 100 frames with no processing. Critical structures which are resolved within the image are labelled. Flyback distortions are present in the image (left hand edge) as no flyback compensation is performed. This is done to minimise dose exposure and minimise imaging time. Image dimensions are 21.50 \(\times\) 16.12 \(\mu\)m. ## 5 Acknowledgements The authors would like to acknowledge JEOL (UK) Ltd and Quantum Detectors for enabling and supporting this work. ## 6 Competing Interests Authors D. Nicholls, J. Wells, A. Robinson, and N. D. Browning are employed by SenseAI Innovations Ltd. Author D. McGrouther is employed by JEOL (UK) Ltd. All other authors declare no competing interests.
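For reference, the two operations reported in the Results, non-zero mean integration of a subsampled frame stack and the PSNR image quality metric, can be sketched as follows (a minimal version under our own assumptions about array layout):

```python
import numpy as np

def integrate_nonzero_mean(frames: np.ndarray) -> np.ndarray:
    """Integrate a (frames, H, W) stack, averaging only sampled
    (non-zero) pixel values along the frame axis."""
    sampled = frames != 0
    counts = sampled.sum(axis=0)
    totals = frames.sum(axis=0)
    # Pixels never sampled in any frame remain 0.
    return np.where(counts > 0, totals / np.maximum(counts, 1), 0.0)

def psnr(reference: np.ndarray, image: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((reference.astype(np.float64) - image) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```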
2309.04333
Encoding Multi-Domain Scientific Papers by Ensembling Multiple CLS Tokens
Many useful tasks on scientific documents, such as topic classification and citation prediction, involve corpora that span multiple scientific domains. Typically, such tasks are accomplished by representing the text with a vector embedding obtained from a Transformer's single CLS token. In this paper, we argue that using multiple CLS tokens could make a Transformer better specialize to multiple scientific domains. We present Multi2SPE: it encourages each of multiple CLS tokens to learn diverse ways of aggregating token embeddings, then sums them up together to create a single vector representation. We also propose our new multi-domain benchmark, Multi-SciDocs, to test scientific paper vector encoders under multi-domain settings. We show that Multi2SPE reduces error by up to 25 percent in multi-domain citation prediction, while requiring only a negligible amount of computation in addition to one BERT forward pass.
Ronald Seoh, Haw-Shiuan Chang, Andrew McCallum
2023-09-08T14:00:29Z
http://arxiv.org/abs/2309.04333v1
# Encoding Multi-Domain Scientific Papers ###### Abstract Many useful tasks on scientific documents, such as topic classification and citation prediction, involve corpora that span multiple scientific domains. Typically, such tasks are accomplished by representing the text with a vector embedding obtained from a Transformer's single CLS token. In this paper, we argue that using multiple CLS tokens could make a Transformer better specialize to multiple scientific domains. We present Multi\({}^{2}\)SPE: it encourages each of multiple CLS tokens to learn diverse ways of aggregating token embeddings, then sums them up together to create a single vector representation. We also propose our new multi-domain benchmark, Multi-SciDocs, to test scientific paper vector encoders under multi-domain settings. We show that Multi\({}^{2}\)SPE reduces error by up to 25% in multi-domain citation prediction, while requiring only a negligible amount of computation in addition to one BERT forward pass. ## 1 Introduction With an ever-increasing amount of research publications, it has become virtually essential to develop NLP methods that would allow researchers to efficiently process the wealth of scientific knowledge. Leveraging pretrained language models and citation graphs, SPECTER Cohan et al. (2020) brings sizeable improvement over the previous state-of-the-art paper encoders and similarity estimation models such as SciBERT Beltagy et al. (2019) and Citeomatic Bhagavatula et al. (2018). Recently, SciNCL Ostendorff et al. (2022) has introduced more sophisticated positive and negative sampling strategies to improve SPECTER further. Despite all the progress made so far, what is yet missing from the literature is an examination of whether existing encoders can effectively represent scientific papers across diverse subject areas. In previous work Bhagavatula et al. (2018); Beltagy et al. (2019); Cohan et al. (2020), the training and evaluation data primarily consist of scientific papers from specific subject areas such as computer science and medicine. While these choices might be due to non-technical reasons such as the lack of open access articles Piwowar et al. (2018) or an insufficient number of users from certain domains, it naturally makes us wonder whether we can represent papers from more diverse scientific domains using a single encoder, and whether we could improve state-of-the-art models under multi-domain settings. In this paper, we lay out our two-part solution to overcome this limitation: the first part is our scientific paper encoder, Multi\({}^{2}\)SPE. This is inspired by Multi-CLS BERT Chang et al. (2022) and built upon the intuition that extracting embeddings through just one CLS token is limiting, as more ideal ways of mixing contextualized word embeddings could differ for each subject area. Instead, we add multiple CLS tokens to obtain embeddings that pay attention to different words, and ensemble the embeddings together to form a single paper representation. For the second part, we introduce the Multi-SciDocs benchmark, to better understand the capabilities of scientific document representations in handling multi-domain settings. Figure 1: An overview of our two-part solution. 1) Multi\({}^{2}\)SPE better utilizes multi-domain citation data through multiple diversified CLS embeddings. 2) Multi-SciDocs is our new benchmark for testing scientific paper embeddings under multi-domain settings.
Comparing Multi\({}^{2}\)SPE and single CLS baselines on Multi-SciDocs suggests that training Multi\({}^{2}\)SPE on single domain-dominated citation graphs already boosts the scores on multi-domain tasks; with more balanced multi-domain training, Multi\({}^{2}\)SPE brings even bigger improvements. ## 2 Multi\({}^{2}\)SPE: Multi-Domain \(\times\) Multi-CLS Scientific Paper Encoder We describe the major components of Multi\({}^{2}\)SPE, our scientific paper encoder. Our key idea is to use multiple CLS tokens instead of just one: since one CLS embedding corresponds to merely a single scheme of aggregating word embeddings, it might be sufficient for the documents from one domain but may be far from ideal for the documents from other domains. We address this observation by prepending multiple CLS tokens to input documents and introducing small architectural additions that encourage each CLS embedding to learn distinctive ways of mixing word embeddings together for the final document representation. ### Multiple CLS Encoder With multiple CLS tokens ([CLS1],... [CLSK]), we insert linear layers \(L_{l,k}\) at the sequence positions of each CLS embedding as shown in Figure 2, to encourage the CLS embeddings to pay attention to different contextualized word embeddings. We use a re-parameterization trick to ensure that all the added linear transformations at each BERT layer are different and not similar to each other: \[L_{l,k}(\mathbf{h}_{l,k}^{c})=(\mathbf{W}_{l,k}-\frac{1}{K}\sum_{k^{\prime}}\mathbf{W}_{l,k^{\prime}})\mathbf{h}_{l,k}^{c}+\mathbf{b}_{l,k}, \tag{1}\] where \(L_{l,k}\) is the linear transformation for the \(k\)th CLS token at layer \(l\), \(\mathbf{W}_{l,k}-\frac{1}{K}\sum_{k^{\prime}}\mathbf{W}_{l,k^{\prime}}\) is the linear projection weight matrix and \(\mathbf{b}_{l,k}\) is the bias term. To prevent \(\mathbf{W}_{l,k}-\frac{1}{K}\sum_{k^{\prime}}\mathbf{W}_{l,k^{\prime}}=\mathbf{0}\), gradient descent tends to learn different \(\mathbf{W}_{l,k}\) for different \(k\). ### Contrastive Citation Prediction Loss Existing state-of-the-art scientific paper encoders such as SPECTER (Cohan et al., 2020) and SciNCL (Ostendorff et al., 2022) use training signals coming from a contrastive citation prediction task: their objective function encourages the embedding of each query paper to be close to those of the papers cited by them, and far away from the papers they did not cite, \(\mathcal{P}^{-}\). Similarly, we minimize the cross entropy loss of a given query paper \(\mathcal{P}^{Q}\), a cited paper \(\mathcal{P}^{+}\), and a paper not cited, \(\mathcal{P}^{-}\): \[L_{\mathcal{P}^{Q},\mathcal{P}^{+},\mathcal{P}^{-}}=-\log\left(\frac{\exp(\text{S}^{MC}_{\mathcal{P}^{Q},\mathcal{P}^{+}})}{\sum\limits_{\mathcal{P}\in\{\mathcal{P}^{+},\mathcal{P}^{-}\}}\exp(\text{S}^{MC}_{\mathcal{P}^{Q},\mathcal{P}})}\right), \tag{2}\] where \(\text{S}^{MC}_{\mathcal{P}^{Q},\mathcal{P}^{+}}\) is the similarity between the query paper \(\mathcal{P}^{Q}\) and the cited paper \(\mathcal{P}^{+}\) from the multiple CLS encoder. It is also the logit score for predicting the paper \(\mathcal{P}^{+}\) as the cited paper. ### Measuring Document Similarity with Multiple Embeddings One typical use of document embeddings is to perform a nearest neighbor search for retrieving candidates similar to the query document. While it would be possible to use each of the CLS embeddings separately, or concatenate them together to encode each document, doing so would significantly increase the computational costs of the retrieval process.
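To make Equations 1 and 2 concrete, a minimal PyTorch sketch is given below. The shapes and names are our own shorthand rather than the authors' released code, and the similarity used in the loss is the plain summed-embedding dot product; the blended similarity of Equation 3 is introduced in the next subsection.

```python
import torch
import torch.nn.functional as F

class MultiCLSProjection(torch.nn.Module):
    """Equation 1: per-CLS linear maps, re-parameterized by subtracting the
    mean weight over the K CLS tokens so the K projections stay distinct."""
    def __init__(self, K: int, dim: int):
        super().__init__()
        self.W = torch.nn.Parameter(torch.randn(K, dim, dim) * 0.02)
        self.b = torch.nn.Parameter(torch.zeros(K, dim))

    def forward(self, h):  # h: (batch, K, dim) hidden states at the CLS positions
        W_eff = self.W - self.W.mean(dim=0, keepdim=True)
        return torch.einsum("kij,bkj->bki", W_eff, h) + self.b

def citation_loss(cls_q, cls_pos, cls_neg):
    """Equation 2 for one (query, cited, not-cited) triple; each argument is
    (K, dim). Summed CLS embeddings give a single paper vector whose dot
    product serves as the logit S for each candidate."""
    q = cls_q.sum(dim=0)
    logits = torch.stack([q @ cls_pos.sum(dim=0), q @ cls_neg.sum(dim=0)])
    # Class 0 is the cited paper, so the loss is -log softmax(logits)[0].
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```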
Instead of these costlier options, during inference we simply take the summation of the CLS embeddings from paper \(A\) to be its final paper representation \(\mathbf{c}^{A}=\sum_{k}\mathbf{c}^{A}_{k}\), where \(\mathbf{c}^{A}_{k}=L_{12,k}(\mathbf{h}_{12,k}^{c,A})\). During the contrastive training (Section 2.2), we compute the similarity between two papers \(\text{S}_{\mathcal{P}^{A},\mathcal{P}^{B}}^{MC}\) using dot products between their paper embeddings \((\mathbf{c}^{A})^{T}(\mathbf{c}^{B})\) and the most similar CLS embeddings \(\max_{i,j}(\mathbf{c}^{A}_{i})^{T}\mathbf{c}^{B}_{j}\): \[\text{S}^{MC}_{\mathcal{P}^{A},\mathcal{P}^{B}}=\lambda\max_{i,j}(\mathbf{c}^{A}_{i})^{T}\mathbf{c}^{B}_{j}+(1-\lambda)(\mathbf{c}^{A})^{T}(\mathbf{c}^{B}), \tag{3}\] where \(\mathbf{c}^{A}=\sum_{k}\mathbf{c}^{A}_{k}\) and \(\lambda\) is the hyperparameter for controlling the dependency between the CLS embeddings. A smaller \(\lambda\) makes the similarity measurement in training and testing more consistent and encourages the CLS embeddings to collaborate with each other. A larger \(\lambda\) encourages each of the CLS embeddings to become more meaningful paper embeddings on their own. Figure 2: The architecture of Multi\({}^{2}\)SPE and its similarity measurement during training \(\text{S}_{\mathcal{P}^{A},\mathcal{P}^{B}}^{MC}\). ## 3 Multi-SciDocs Cohan et al. (2020) proposed SciDocs as a comprehensive benchmark for evaluating scientific paper embeddings. SciDocs introduces 12 metrics from 7 tasks, but we have discovered that the domain distributions of 5 tasks are heavily biased toward computer science (CS) papers.1 The only exceptions are MeSH (Medical Subject Headings) (Lipscomb, 2000), which covers the papers from the biomedical domain, and MAG (Microsoft Academic Graph) (Sinha et al., 2015), which is a document classification task into 19 subject areas. Footnote 1: Please see Appendix C.3 for detailed statistics. Thus, for a better measurement of multi-domain performance, we have created the multi-domain (co-)citation prediction tasks. We refer to the collection of 3 multi-domain tasks, multi. cite, multi. co-cite, and MAG as Multi-SciDocs. For the multi. (co-)cite datasets, we randomly sample the query papers from S2ORC (Lo et al., 2020), preventing a certain domain from being the majority of query papers, and follow the construction procedure of (co-)cite in SciDocs to get the positive and negative papers. For each query, we collect 500 negative papers and up to 5 positive papers. The task is to assign higher similarity scores to the positive papers and lower scores to the negative papers. In both datasets, the negative samples come from randomly sampled papers. In the multi. cite dataset, the positive samples are the papers cited by the query paper. In the multi. co-cite dataset, the positive samples and the query paper are both cited by another paper. ## 4 Experiments and Analyses In the experiments, we evaluate Multi\({}^{2}\)SPE and the corresponding baselines with Multi-SciDocs. SPECTER and SciNCL are our single [CLS] token baselines: both use identical neural architectures and loss functions, but differ in sampling methods used to create their contrastive triples.
Since we found training datasets in previous literature to be potentially limiting in handling papers from various scientific domains, we build our own multi-domain training datasets that follow the same sampling methods of SPECTER and SciNCL, but are more balanced in terms of the domain distribution.2 Footnote 2: Please see Appendix C.3 for the comparison of domain distribution of SPECTER/SciNCL single-domain datasets and our multi-domain datasets. ### Results Our main results are shown in Table 1. We can see that Multi\({}^{2}\)SPE has consistently outperformed the baselines in all training cases. The scores from MAG show that Multi\({}^{2}\)SPE is better capable of classifying the texts into diverse subject domains. In multi-domain citation prediction, its error reductions are up to 25%. We hypothesize that the large improvement partially comes from the prevalent cross-domain citations in both our training and evaluation data. We note that the overall gains are smaller for SciNCL. We suspect that SciNCL's sampling method reduces the number of cross-domain citations in the dataset, which would have helped increase the diversity in CLS embeddings. ### Ablation Studies In Table 2, we start by examining the effect of \(\lambda\), the hyperparameter for controlling dependencies between CLS embeddings. While the differences are relatively small for \(\lambda=0.0\), we observe noticeable performance drops as we increase \(\lambda\) to 0.5 and 1.0. Our intuition is that it is generally more beneficial to encourage all embeddings to become a meaningful whole together, rather than directing each of them to stand on their own. In the second set of our ablation studies, we quantify the performance benefits of each architectural change we introduced in Section 2.1. We can see that multiple CLS tokens are crucial, as having just one CLS leads to a clear performance drop. Increasing the number of CLS tokens from 3 to 5 leads to mixed results. Their overall similar performance suggests that the quality of our multi-domain paper embeddings is not sensitive to the number of CLS tokens. Lastly, we observe that both the linear layer injection to BERT and the re-parameterization trick have clear contributions to our models' better performance. ## 5 Related Work Many studies focus only on specific scientific NLP tasks such as citation recommendation Bhagavatula et al. (2018); Farber and Jatowt (2020); Farber and Sampath (2020); Ma et al. (2020) and paper recommendation Beel et al. (2016); Zhang et al. (2020). Instead, our goal is improving upon general-purpose scientific paper encoders such as SPECTER Cohan et al. (2020) and SciNCL Ostendorff et al. (2022). Another line of effort relies on pre-defined facets Chakraborty et al. (2016); Chan et al. (2018); Ostendorff et al. (2020) or topics Zhang et al. (2020) for specific domains of interest, and measures paper similarities based on those facets/topics. Recently, Mysore et al. (2022) suggests encoding a paper into multiple sentence embeddings to allow the users to search similar papers using partially constructed query papers. In contrast, Multi\({}^{2}\)SPE automatically learns to identify the facets that are helpful for the citation prediction task, and combines all the facets into a single embedding to improve the similarity measurement and nearest neighbor search over the single CLS baseline while maintaining similar computational costs. Chang et al.
(2022) proposes an efficient BERT ensemble model called Multi-CLS BERT, which inserts different linear layers for different CLS tokens, and re-parameterizes its top linear layer during fine-tuning on the tasks in GLUE Wang et al. (2019) and SuperGLUE Wang et al. (2019). As explained in Section 2, we also envisioned that multiple CLS tokens could be beneficial, especially for our case as all the scientific papers would come from a wide range of distinctive disciplines. However, there are key differences between Multi-CLS BERT and our encoder, such as using the re-parameterization trick in all the inserted linear layers during contrastive learning, and summing all the unnormalized CLS embeddings into a single paper embedding for the scientific paper similarity tasks. Moreover, we find that such efficient ensembling is especially beneficial when being trained and tested using multi-domain papers. Table 1: Results of our methods and baselines on Multi-SciDocs. All scores are averaged over four random seeds. We show standard errors as their confidence intervals. Percentages indicate relative error reduction over the baselines (SPECTER or SciNCL), which is an important metric for the models with high accuracy. Table 2: Ablation studies conducted on SPECTER and multiple domain training data. All scores are averaged over four random seeds. Percentages indicate relative error reduction over the baseline (3 CLS, \(\lambda=0.1\)). ## 6 Conclusion In this work, we identified insufficiencies in existing training datasets, evaluation benchmarks, and encoder architectures, when handling diverse subject domains in scientific literature. To overcome the current limitations, we introduced Multi\({}^{2}\)SPE, a modified BERT encoder that learns a diversified set of embeddings from multi-domain citation data, and Multi-SciDocs, our new benchmark for testing the embeddings of scientific papers using multi-domain tasks. Our experiments show that Multi\({}^{2}\)SPE provides consistent improvements over the SOTA baselines. The ablation studies confirm the effectiveness of our modifications to BERT. ### Limitations While we find that the training and evaluation datasets we have created allow us to better represent different scientific domains, how we could treat all subject domains more fairly in general is very much an open problem, especially with many real-life constraints on scientific literature such as the scarcity of open access articles in certain domains. The domain distribution3 of S2ORC Lo et al. (2020), our primary source of citation records and full texts, suggests that there is apparent imbalance in the number of papers available across different domains. This situation gets even more complicated when we consider the fact that certain areas are strongly related to each other due to the nature of their subjects (e.g., Mathematics/Computer Science, Medicine/Biology.) Footnote 3: [https://github.com/allenai/s2orc](https://github.com/allenai/s2orc) We believe that the three tasks chosen in Multi-SciDocs are discriminative and objective measurements of the encoder's potential ability to handle multi-domain settings. However, it is yet to be proven that a high correlation exists between these intrinsic evaluation metrics and the actual effectiveness of any real-life systems that utilize paper representation methods. Lastly, there is more progress to be made to achieve a complete understanding of how Multi\({}^{2}\)SPE brings performance improvements.
More specifically, we would like to perform more qualitative investigation on the specific roles of each CLS embeddings and how they collaborate with each other to create a single representation. ### Ethical and Broader Impact We believe that SOTA NLP techniques could deliver healthy boosts in the productivity and creativity of any researchers, regardless of their academic disciplines. For example, better similarity measurements between the papers across multiple domains could improve scientific paper retrieval systems: this would allow researchers to efficiently navigate through the wealth of scientific knowledge, and could eventually serve a significant role in encouraging new research activities. ## Acknowledgements We thank Purujit Goyal for discovering a critical bug in our code. This work was supported in part by the Center for Data Science and the Center for Intelligent Information Retrieval, in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction, in part by the IBM Research AI through the AI Horizons Network, in part using high-performance computing equipment obtained under a grant from the Collaborative R&D Fund managed by the Massachusetts Technology Collaborative, and in part by the National Science Foundation (NSF) grant numbers IIS-1922090 and IIS-1763618. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
2309.12551
Is it Possible to Modify Text to a Target Readability Level? An Initial Investigation Using Zero-Shot Large Language Models
Text simplification is a common task where the text is adapted to make it easier to understand. Similarly, text elaboration can make a passage more sophisticated, offering a method to control the complexity of reading comprehension tests. However, text simplification and elaboration tasks can only alter the readability of texts relative to the source text. It is useful to directly modify the readability of any text to an absolute target readability level to cater to a diverse audience. Ideally, the readability of readability-controlled generated text should be independent of the source text. Therefore, we propose a novel readability-controlled text modification task. The task requires the generation of 8 versions at various target readability levels for each input text. We introduce novel readability-controlled text modification metrics. The baselines for this task use ChatGPT and Llama-2, with an extension approach introducing a two-step process (generating paraphrases by passing through the language model twice). The zero-shot approaches are able to push the readability of the paraphrases in the desired direction but the final readability remains correlated with the original text's readability. We also find greater drops in semantic and lexical similarity between the source and target texts with greater shifts in readability.
Asma Farajidizaji, Vatsal Raina, Mark Gales
2023-09-22T00:47:18Z
http://arxiv.org/abs/2309.12551v2
Is it Possible to Modify Text to a Target Readability Level? An Initial Investigation Using Zero-Shot Large Language Models ###### Abstract Text simplification is a common task where the text is adapted to make it easier to understand. Similarly, text elaboration can make a passage more sophisticated, offering a method to control the complexity of reading comprehension tests. However, text simplification and elaboration tasks can only alter the readability of texts relative to the source text. It is useful to directly modify the readability of any text to an absolute target readability level to cater to a diverse audience. Ideally, the readability of readability-controlled generated text should be independent of the source text. Therefore, we propose a novel readability-controlled text modification task. The task requires the generation of 8 versions at various target readability levels for each input text. We introduce novel readability-controlled text modification metrics. The baselines for this task use ChatGPT and Llama-2, with an extension approach introducing a two-step process (generating paraphrases by passing through the language model twice). The zero-shot approaches are able to push the readability of the paraphrases in the desired direction but the final readability remains correlated with the original text's readability. We also find greater drops in semantic and lexical similarity between the source and target texts with greater shifts in readability. ## 1 Introduction Natural language consists of information that is conveyed for a targeted audience. In order to make the text appropriate for a diverse set of readers, the source text needs to be modified accordingly. Automatic text simplification is a popular natural language processing (NLP) task where the source text is adapted to make the content easier to understand by reducing its linguistic complexity Siddharthan (2014); Sikka and Mago (2020). Typically, such simplification solutions are valuable for various audiences including younger readers De Belder and Moens (2010), foreign language speakers Bingel et al. (2018), people with dyslexia Rello et al. (2013), sufferers of autism Evans et al. (2014) and aphasics Carroll et al. (1998). Similarly, text elaboration offers methods to make content more challenging for reading comprehension tasks and hence cater to higher-level students Ross et al. (1991). However, both text simplification and elaboration are able to only relatively control the readability of the text. This means that the generated text is simplified/elaborated relative to the original text document but it does not guarantee the text itself is at an appropriate readability level for the target audience. In an ideal setting, it should be possible to modify a text document to a precise and absolute readability level. Pertinently, the readability of the modified text should be _independent_ of and _uncorrelated_ with the source text's readability. Hence, regardless of the nature of the source text, it can be modified to any other readability level. To address the relative nature of current text modification approaches, we propose a novel text modification task to control text readability Harris and Hodges (1995). Figure 1: Example for the readability-controlled text modification task. The source text from CLEAR Crossley et al. (2023) is paraphrased at various target readability levels according to the Flesch reading ease score (FRES) Flesch (1948). Given a set of text documents
across the whole spectrum of readability levels, generate 8 versions for each document corresponding to different target readability levels. Precisely, the target readability scores range from being readable for a \(5^{\text{th}}\) grade student to understandable by university graduates (Flesch, 1948). Paraphrasing is a common NLP task where a source text is modified to convey the same meaning but using different words or sentence structures (Zhou and Bhat, 2021). Hence, automated paraphrasing solutions offer an opportunity to modify text to various target readability levels. However, standard solutions do not attempt to control the readability of the generated paraphrase and usually aim to maintain consistency with the source text (Kumar et al., 2020; Chen et al., 2020). Despite the lack of flexibility of these paraphrasing models, large-scale autoregressive foundation models (Zhou et al., 2023) have demonstrated remarkable capabilities across a broad range of NLP tasks with simple prompting (Sanh et al., 2021). Thus, the baseline solutions for the novel task in our work use zero-shot (Brown et al., 2020) prompting of such models as their backbones for readability-controlled paraphrasing. The text modification approaches that generate the eight adaptations of each source text document are assessed for their ability to control the readability. The proposed metrics assess both the readability control at an individual example level and the population level. At the individual scale, we assess, with various metrics, whether the readability of a text approaches the target value. At the population scale, we explore the extent to which the measured readability of a generated text document is conditional on the source text document's readability. Additionally, we explore the behaviour of the modified texts for each target readability level according to standard paraphrasing metrics. A good paraphrase can expect to be lexically divergent but semantically similar to the source. Our contributions can be summarized as follows: * Introduction of a novel task for readability-controlled text modification. * Definition of appropriate evaluation metrics for controlling readability. * In-depth analysis of zero-shot large language model solutions for controlling text readability with paraphrasing. ## 2 Related Work In this work we focus on controllability in text modification for readability. Previous works have explored similar approaches and tasks to control various attributes across a diverse range of natural language tasks. Here, we discuss the control of attributes in machine translation (Logeswaran et al., 2018), automatic summarization and text generation (Zhang et al., 2022). Machine translation is a natural language generation task that translates a source text into a different language. Kikuchi et al. (2016) investigates the ability to control the length of generated sentences such that translations can vary from brief summaries to longer texts. Beyond structural control, Yamagishi et al. (2016) controls the voice of the translation while Sennrich et al. (2016) controls the honorifics and politeness as the selected attributes. Summarization is a standard natural language generation task where a source text must be condensed whilst maintaining the core elements of the original passage.
Automatic summarization has seen the control of various attributes, including length, entity-centric and source-specific attributes, and attributes linked to a particular portion of the source document (Fan et al., 2018). In text generation, Zhang et al. (2022) states that the attributes to control are grouped into 3 distinct categories: semantic, structural and lexical. Semantic control involves the control of emotion (Chen et al., 2019; Dathathri et al., 2019) such as the sentiment of the generated text, as well as the choice of the topic (Khalifa et al., 2020) being discussed and the degree of toxicity (Krause et al., 2021; Liu et al., 2021) in the text. Structural control typically looks at defining the syntax in the generated text and the occurrence of graphs and tables (Puduppully et al., 2019; Ribeiro et al., 2021). Finally, lexical control in text generation focuses on attributes such as the inclusion of keywords or phrases (Carlsson et al., 2022; He, 2021). ## 3 Text Readability Text readability assesses how easy a piece of text is to read. Several standard measures exist for measuring the readability of text including the Flesch-Kincaid Grade Level (Kincaid et al., 1975), Dale Chall Readability (Dale and Chall, 1949), Automated Readability Index (ARI) (Senter and Smith, 1967), Coleman Liau Index (Coleman and Liau, 1975), Gunning Fog (Gunning et al., 1952), Spache (Spache, 1953) and Linsear Write (Klare, 1974). In this work, the Flesch reading-ease (Flesch, 1948) score (FRES) is used where higher scores indicate material that is easier to read while lower scores are reflective of more challenging passages. The score accounts for the ratio of the number of words to the number of sentences and the ratio of the number of syllables to the number of words to determine the overall readability as indicated in Equation 1.1 Footnote 1: Implementation available at: [https://pypi.org/project/py-readability-metrics/](https://pypi.org/project/py-readability-metrics/) \[\textsc{FRES}=206.835-1.015\left(\frac{n_{w}}{n_{se}}\right)-84.6\left(\frac{n_{sy}}{n_{w}}\right) \tag{1}\] where \(n_{w}\) denotes the total number of words, \(n_{se}\) denotes the total number of sentences and \(n_{sy}\) denotes the total number of syllables. FRES is selected as a simple measure for readability because it has highly interpretable ranges for the score as well as a high correlation with human comprehension as measured by reading tests (DuBay, 2007). For example, Table 1 shows that a FRES score below 10 indicates the text is readable by university graduates, FRES in the fifties is targeted for \(10-12^{\text{th}}\) grade while FRES above 90 is readable for \(5^{\text{th}}\) grade students. Such well defined ranges allow an exploration of the ability to control the readability of text. Note, FRES is not strictly constrained to be in the range of 0 to 100. ## 4 Readability-Controlled Text Modification ### Task definition The readability-controlled text modification task is defined as follows: "_Given a text paragraph, generate 8 versions with target readability scores of 5, 20, 40, 55, 65, 75, 85 and 95 accordingly_". This task is applied to every text in a dataset of text paragraphs. The target readability scores are selected as the halfway values for each range of FRES from Table 1. ### Evaluation The quality of the readability-controlled text modifications generated is assessed according to individual- and population-scale control of readability as well as additional analysis with standard paraphrasing metrics.
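Since all of the metrics below build on FRES, a small sketch of Equation 1 is given here for reference; the syllable counter is a crude vowel-group heuristic standing in for the py-readability-metrics implementation cited in the footnote, so its scores will only approximate the library's.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fres(text: str) -> float:
    """Flesch reading-ease score (Equation 1)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n_se, n_w = max(1, len(sentences)), max(1, len(words))
    n_sy = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_w / n_se) - 84.6 * (n_sy / n_w)

print(round(fres("The cat sat on the mat. It was happy."), 1))
```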
**Individual-scale readability control**: For each example in a test set, 8 paraphrases are generated. The individual-scale readability control metrics assess the ability to appropriately control the readability of these paraphrases for each individual example. Broadly, the ranking, regression and classification abilities of a readability-controlled paraphrase generator are assessed. Let \(x\) denote the original text sequence, \(y_{r}\) denote the generated paraphrase with target readability score of \(r\in\mathcal{R}=\{5,20,40,55,65,75,85,95\}\). Let \(\mathcal{F}(\cdot)\) represent the function for calculating FRES from Equation 1. The ranking ability is assessed by calculating the Spearman's rank correlation coefficient, \(\rho\), between the 8 values of \(\mathcal{F}\left(y_{r\in\mathcal{R}}\right)\) and \(\mathcal{R}\). Hence, here we only assess whether the order of the generated paraphrases aligns with their target readabilities. Given the target readability scores, the regression ability of the model is assessed by calculating the root mean square error (rmse) between the actual and target readability scores of the paraphrases. \[\text{rmse}=\left[\frac{1}{8}\sum_{r\in\mathcal{R}}(\mathcal{F}(y_{r})-r)^{2}\right]^{1/2} \tag{2}\] \begin{table} \begin{tabular}{c c l} \hline \hline Range & Level (US) & Description \\ \hline 0-10 & Professional & Extremely difficult to read. Best understood by university graduates. \\ 10-30 & College graduate & Very difficult to read. Best understood by university graduates. \\ 30-50 & College & Difficult to read. \\ 50-60 & 10-12th grade & Fairly difficult to read. \\ 60-70 & 8-9th grade & Plain English. Easily understood by 13- to 15-year-old students. \\ 70-80 & 7th grade & Fairly easy to read. \\ 80-90 & 6th grade & Easy to read. Conversational English for consumers. \\ 90-100 & 5th grade & Very easy to read. Easily understood by an average 11-year-old student. \\ \hline \hline \end{tabular} \end{table} Table 1: Interpretable meaning of FRES (Flesch, 1948). Finally, the classification ability checks the ability of the paraphrase generator to control the readability of the generated text into the target range as defined in Table 1. For example, a paraphrase with a target readability of 65 is deemed correct if the measured generated text readability is in the range of 60-70 and incorrect otherwise. Therefore, the classification accuracy can be calculated according to Equation 3. \[\text{accuracy}=\frac{1}{8}\sum_{r\in\mathcal{R}}\mathbf{1}_{\mathcal{A}_{r}}(\mathcal{F}(y_{r})) \tag{3}\] where \(\mathcal{A}_{5}=[0,10]\), \(\mathcal{A}_{20}=[10,30]\), \(\mathcal{A}_{40}=[30,50]\), \(\mathcal{A}_{55}=[50,60]\), \(\mathcal{A}_{65}=[60,70]\), \(\mathcal{A}_{75}=[70,80]\), \(\mathcal{A}_{85}=[80,90]\), \(\mathcal{A}_{95}=[90,100]\). For the ranking, regression and classification metrics, the mean is reported across the test set of examples. **Population-scale readability control**: These metrics assess the actual readability of each target readability across a whole population (test set) rather than considering each example individually. In particular, an important aspect of readability control requires the controlled readability of the generated text to be decorrelated from and independent of the source passage readability. In principle, the original text should not have any influence on the readability of the generated text if the control of the paraphrase generator is ideal.
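A minimal sketch of these individual-scale metrics, together with the population-scale correlation and regression fit discussed next, could look as follows (array shapes and names are our assumptions):

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

TARGETS = np.array([5, 20, 40, 55, 65, 75, 85, 95])
BINS = [(0, 10), (10, 30), (30, 50), (50, 60), (60, 70), (70, 80), (80, 90), (90, 100)]

def individual_metrics(scores):
    """scores: measured FRES of the 8 paraphrases of one source text,
    ordered by target readability (Equations 2 and 3 plus Spearman's rho)."""
    rho = spearmanr(scores, TARGETS).correlation
    rmse = np.sqrt(np.mean((scores - TARGETS) ** 2))
    acc = np.mean([lo <= s <= hi for s, (lo, hi) in zip(scores, BINS)])
    return rho, rmse, acc

def population_metrics(source_scores, generated_scores):
    """Per-target-class decorrelation check over the test set:
    Pearson's pcc and the linear fit y = a*x + b."""
    pcc = pearsonr(source_scores, generated_scores)[0]
    a, b = np.polyfit(source_scores, generated_scores, deg=1)
    return pcc, a, b
```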
First, we report the Pearson's correlation coefficient (pcc) between the source readability and the calculated generated text readability separately for each target readability class. Ideally, a decorrelated score should yield pcc=0. Additionally, a linear regression line of the form \(y=ax+b\) is calculated for each target readability class between the source and generated text readability scores. In an ideal setting, the regression line should approach a gradient \(a=0\). **Standard paraphrasing**: A good paraphrase should be lexically divergent but semantically similar to the original text (Gleitman and Gleitman, 1970; Chen and Dolan, 2011; Bhagat and Hovy, 2013). In line with Lin et al. (2021), we assess lexical divergence using self-WER (Och, 2003).2 Semantic similarity is assessed using BERTScore (Zhang et al., 2019).3 Footnote 2: Implementation available at: [https://github.com/belambert/asr-evaluation](https://github.com/belambert/asr-evaluation) Footnote 3: Implementation available at: [https://github.com/Tiiiger/bert_score](https://github.com/Tiiiger/bert_score) Self-WER calculates the word error rate (WER) (inspired by automatic speech recognition (Malik et al., 2021) and machine translation (Lee et al., 2023)) between the source and generated text respectively. A lexically divergent paraphrase can expect to have a high self-WER. BERTScore compares the semantic similarity of the source and paraphrase by calculating the pairwise cosine similarities between pre-computed BERT (Kenton and Toutanova, 2019) token embeddings of each of the texts. Hence, the F1 metric is reported as the harmonic mean of precision and recall. ## 5 Experiments ### Data CLEAR (Crossley et al., 2023, 2021) is a large-scale corpus for assessing text readability. Here, it is used as a test set of input passages on which readability-controlled text modification is performed. Table 2 outlines the main statistics. There are roughly 5000 different texts with a mean of 10 sentences, allowing the text modification task to be performed at the passage-level rather than at the sentence-level. \begin{table} \begin{tabular}{c c c c} \hline \hline \# examples & \# words & \# sentences & \# paragraphs \\ \hline 4,724 & \(179_{\pm 18}\) & \(9.6_{\pm 4.6}\) & \(2.5_{\pm 1.9}\) \\ \hline \hline \end{tabular} \end{table} Table 2: CLEAR dataset statistics. Figure 2: Distribution of text readability scores. Other standard datasets exist for text simplification but these are generally at the sentence-level Sun et al. (2021) while we focus on longer texts. Alternatively, various popular passage-level datasets exist in reading comprehension and paraphrasing literature. Figure 2 compares the distribution of the FRES for the passages within CLEAR, the SQuAD Rajpurkar et al. (2016) development set and the News-Commentary Lin et al. (2021) test set. Due to the presence of texts across the whole spectrum of FRES scores, CLEAR is an attractive choice for investigating readability-controlled text modification. Hence, the experiments here are conducted on the CLEAR dataset only. ### Zero-shot Large-scale generative foundation models Brown et al. (2020); Chowdhery et al. (2022); Scao et al. (2022), including the popularized ChatGPT, have demonstrated state-of-the-art performance across a large range of natural language tasks in zero-shot and few-shot settings. Despite not having been specifically trained on certain tasks, these models are capable of successfully performing novel tasks with natural language prompting.
Therefore, our baseline solutions for readability-controlled text modification involve zero-shot solutions using ChatGPT and Llama-2 Touvron et al. (2023). Specifically, we use gpt-3.5-turbo 4 and Llama-2-7b-chat-hf 5 respectively. Footnote 4: API access through [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5) Footnote 5: Available at: [https://huggingface.co/meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) Note, Llama-2 model weights are open-sourced, allowing future solutions to further finetune the zero-shot solution specifically for readability-controlled text modification. Inference with ChatGPT only requires API requests while for Llama-2, generating 8 paraphrases per example passage takes approximately 45 seconds on an Nvidia A100 GPU. All experiments conducted in this work are based on publicly accessible datasets and models for reproducibility 6. Footnote 6: Experiments available at: [https://github.com/asma-faraji/text-readability-control](https://github.com/asma-faraji/text-readability-control) **Vanilla** The zero-shot solutions using ChatGPT and Llama-2 require natural language prompts to control the generated paraphrases. As the models do not have an inherent understanding of the FRES, explicit prompts are required to control the readability appropriately. Table 3 summarizes the prompts corresponding to each target readability level as defined in Section 4.1. The prompts are selected in relation to the descriptions in Table 1. It is observed that the outputs from the Llama-2 zero-shot solution for certain input passages are often random, incoherent strings of tokens. Therefore, a simple garbage detector checks whether the generated paraphrase is coherent English and, in the situation where garbage is detected, the corresponding paraphrase is replaced by the original text 7. Footnote 7: This has only been observed to occur in under 1% of generated paraphrases. **Two-step** Unlike many other natural language generation tasks (such as question generation Lu and Lu (2021), summarization Widyassari et al. (2022) and question-answering Baradaran et al. (2022)), the nature of the output matches the input for text modification. Therefore, paraphrasing based zero-shot approaches to control readability using large language models can sequentially be applied multiple times on a source text. \begin{table} \begin{tabular}{p{42.7pt} p{341.4pt}} \hline \hline Target & Prompt \\ \hline 5 & Paraphrase this document for a professional. It should be extremely difficult to read and best understood by university graduates. \\ 20 & Paraphrase this document for college graduate level (US). It should be very difficult to read and best understood by university graduates. \\ 40 & Paraphrase this document for college level (US). It should be difficult to read. \\ 55 & Paraphrase this document for 10th-12th grade school level (US). It should be fairly difficult to read. \\ 65 & Paraphrase this document for 8th/9th grade school level (US). It should be plain English and easily understood by 13- to 15-year-old students. \\ 75 & Paraphrase this document for 7th grade school level (US). It should be fairly easy to read. \\ 85 & Paraphrase this document for 6th grade school level (US). It should be easy to read and conversational English for consumers. \\ 95 & Paraphrase this document for 5th grade school level (US). It should be very easy to read and easily understood by an average 11-year-old student. \\ \hline \hline \end{tabular} \end{table} Table 3: Model prompts for each target readability level. Here, the two-step process is as follows: 1. the selected large language model is prompted to generate a paraphrase at the target readability level according to Table 3 with the source text at the input; 2. the model is then again prompted (with the identical prompt) to generate a new text but instead with the output from the previous step at the input. The intuition for this approach is motivated by the concept that it is possible to shift closer to a target readability if the source readability is closer to the target value. Here, we explore the two-step process for ChatGPT as it is the higher-performing model (see Table 4). ## 6 Results and Discussion As described in Section 5.2, several baseline solutions are considered for controlling the readability of texts to target values. Table 4 presents the performance of these solutions for the individual-scale metrics averaged across all examples in the CLEAR test set (see Section 4.2). \begin{table} \begin{tabular}{l|c c c} \hline \hline Approach & \(\rho\) (\(\uparrow\)) & rmse (\(\downarrow\)) & accuracy (\(\uparrow\)) \\ \hline Copy & \(0.0\) & \(35.4\) & \(12.5\) \\ ChatGPT 1-step & \(\mathbf{87.5}_{\pm 9.2}\) & \(19.4_{\pm 4.9}\) & \(23.1_{\pm 12.9}\) \\ ChatGPT 2-step & \(86.0_{\pm 9.0}\) & \(\mathbf{19.2}_{\pm 4.5}\) & \(\mathbf{24.2}_{\pm 13.2}\) \\ Llama-2 & \(73.3_{\pm 24.1}\) & \(23.6_{\pm 8.0}\) & \(20.6_{\pm 12.6}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Individual-scale readability control with Spearman's rank correlation coefficient (as %), \(\rho\), regression ability with rmse and classification accuracy. The mean across all the examples is reported for each performance metric as well as one standard deviation. The _copy_ system represents the setup where the source text is simply copied for each of the target readability levels 5, 20, 40, 55, 65, 75, 85 and 95. Hence, the _copy_ system offers a lower bound on performance according to each of the metrics. According to the Spearman's rank correlation coefficient, all ChatGPT and Llama-2 implementations are effective at relatively controlling the readability of the text documents, with ChatGPT 1-step attaining the highest correlation of 87.5% while Llama-2 lags behind by about 15%. In contrast, the models struggle to directly map the readability of texts to absolute target readability levels, with rmse values typically spanning two readability ranges (see Table 1) and the classification accuracies below 25%. Given all approaches are zero-shot implementations where the language models do not have an exact understanding of FRES, it is understandable that the models are able to achieve a sensible readability ranking for the 8 generated texts but are incapable of matching the exact target readability values. Additionally, it can be noted that the 2-step process on ChatGPT observes incremental improvements (roughly 0.2 rmse and 1.0 classification accuracy) in achieving the absolute target readability values compared to the 1-step process. This perhaps is because there are two attempts to push the model towards the desired numeric readability score. Figure 3 presents the relationship between the source text readability and the generated text measured readability for each of the target readability classes 5 to 95. The relationship is plotted as a binned scatterplot where the average measured readability is plotted for each bin of source text readability.
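As an illustration, a zero-shot call using the Table 3 prompts might look as follows. This is a sketch only: the openai client usage follows the current chat-completions interface rather than the exact code used in the paper, PROMPTS is an assumed mapping from target score to Table 3 instruction (one entry shown), and the steps argument anticipates the two-step variant described next.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Assumed mapping from each target FRES score to its Table 3 instruction.
PROMPTS = {95: ("Paraphrase this document for 5th grade school level (US). "
                "It should be very easy to read and easily understood by an "
                "average 11-year-old student.")}

def paraphrase(text: str, target: int, steps: int = 1) -> str:
    """Zero-shot readability-controlled paraphrasing; steps=2 gives the
    two-step process (the model is re-prompted on its own output)."""
    for _ in range(steps):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": f"{PROMPTS[target]}\n\n{text}"}],
        )
        text = response.choices[0].message.content
    return text
```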
Figure 3: Generated text readability against source text readability as a binned scatterplot.

For all 3 models, it is observed that the readabilities of each class are generally in a sensible order, where the measured readabilities for target class 5 run along the bottom and the scores for target class 95 form the highest curve. Llama-2 also appears to be better than ChatGPT at disentangling the 5 and 20 classes but struggles at the higher classes. Additionally, at the lower target readabilities, the 2-step process is able to push down to lower measured readabilities than the 1-step process. However, it is also apparent from Figure 3 that the measured readability of the generated texts, albeit correctly ordered, is highly correlated with the source text readability. Table 5 further quantifies the ability of the models to decorrelate the measured readability of the generated text from the source text readability according to the population-scale metrics (see Section 4.2). An ideal system would have a Pearson's correlation coefficient of 0 and a regression line of best fit with gradient 0 and y-intercept corresponding to the absolute target readability level. It is observed that the relationship for all target readability classes remains highly dependent on the source text readability, with ChatGPT 1-step achieving the best results for the lower target readability classes and ChatGPT 2-step and Llama-2 performing better for the higher readability classes. It is also seen that the lower target readability classes are closer to the ideal performance for all models compared to the higher target readability classes. For further analysis, we look at the behaviour of the generated texts at the various target readability levels according to lexical divergence and semantic similarity metrics from the paraphrasing literature (see Section 4.2). We present the analysis here specifically for the best-performing model overall: ChatGPT 2-step. Figures 4(a), 4(b) and 4(c) display how the generated text readability, WER (a measure of lexical divergence) and BERTScore F1 (a measure of semantic similarity) of the generated texts respectively vary with shifts between the target and source readability score classes. In order to plot each heatmap, each source text has its readability classed into one of the 8 readability ranges defined by Table 1.

Figure 4: Heatmaps of select variables for each pair of source and target text readability classes.
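A minimal sketch of the population-scale metrics of Table 5 for a single target readability class, assuming paired arrays of source and generated FRES scores:

```python
# Sketch of the population-scale metrics for one target readability class:
# Pearson correlation and the line y = a*x + b between source and generated FRES.
import numpy as np
from scipy.stats import linregress

def population_scale_metrics(source_fres: np.ndarray, generated_fres: np.ndarray):
    """An ideal system gives pcc ~ 0, gradient a ~ 0 and intercept b ~ target."""
    fit = linregress(source_fres, generated_fres)
    pcc = fit.rvalue                 # Pearson's correlation coefficient
    a, b = fit.slope, fit.intercept  # regression line y = a*x + b
    r2 = fit.rvalue ** 2             # quality of the linear fit
    return pcc, a, b, r2
```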
| Target | 1-step pcc (↓) | 1-step a (↓) | 1-step b | 1-step r² | 2-step pcc (↓) | 2-step a (↓) | 2-step b | 2-step r² | Llama-2 pcc (↓) | Llama-2 a (↓) | Llama-2 b | Llama-2 r² |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Source | 100 | 1 | 0 | 1 | 100 | 1 | 0 | 1 | 100 | 1 | 0 | 1 |
| 5 | 36.9 | 0.29 | 14.8 | 0.14 | 52.3 | 0.41 | 8.7 | 0.27 | 47.7 | 0.41 | 16.4 | 0.23 |
| 20 | 39.9 | 0.32 | 15.3 | 0.16 | 50.8 | 0.40 | 8.8 | 0.26 | 51.1 | 0.61 | 9.9 | 0.26 |
| 40 | 46.0 | 0.38 | 18.2 | 0.21 | 58.6 | 0.52 | 8.2 | 0.34 | 70.0 | 0.73 | 6.9 | 0.49 |
| 55 | 56.0 | 0.41 | 24.7 | 0.31 | 62.2 | 0.47 | 20.4 | 0.39 | 65.1 | 0.51 | 33.0 | 0.42 |
| 65 | 69.7 | 0.42 | 41.7 | 0.49 | 65.8 | 0.38 | 45.9 | 0.43 | 67.2 | 0.48 | 32.9 | 0.45 |
| 75 | 68.8 | 0.44 | 39.4 | 0.47 | 65.0 | 0.38 | 44.2 | 0.42 | 62.6 | 0.47 | 44.3 | 0.39 |
| 85 | 67.8 | 0.39 | 44.9 | 0.46 | 63.4 | 0.33 | 49.6 | 0.40 | 63.5 | 0.47 | 44.4 | 0.40 |
| 95 | 66.3 | 0.36 | 50.3 | 0.44 | 61.3 | 0.31 | 54.8 | 0.38 | 61.3 | 0.42 | 48.1 | 0.38 |

Table 5: Population-scale readability control for ChatGPT 1-step, ChatGPT 2-step and Llama-2, with Pearson's correlation coefficient, pcc, and linear regression, \(y=ax+b\), between the source and generated text readability, with \(0<r^{2}<1\) denoting the quality of the fit of the regression line (0 as worst fit and 1 as best fit).

Hence, the source and target text readability classes each fall into one of the following classes: \(\{5,20,40,55,65,75,85,95\}\). Each heatmap depicts the mean of the selected variable (generated text readability, WER or BERTScore) for each pairing of the source readability class (several source texts fall into each class) and the corresponding target readability class. First, we see that Figure 4(a) reinforces the observations from Figure 3, as the lightest colours are observed in the top right while the darkest are in the bottom left. This means that the highest generated text readability scores are observed when the target readability class is high but also when the source text readability is high. From Figure 4(b) it is noticeable that the darkest shades lie along the leading diagonal, with lighter shades on the off-diagonal peripheries. Conversely, in Figure 4(c) we see the lighter shades along the leading diagonal. Hence, keeping a matched readability between the source and the target leads to lower lexical divergence and higher semantic similarity. It can further be noted that there is an asymmetry in the WER: changing from a very high source text readability to a very low target readability incurs a greater WER than a modification from a low source text readability to a high target readability. This suggests that it is more challenging to maintain the same lexical language for text elaboration than for text simplification. We can conclude from the variations in WER and BERTScore that the greater the change between the source and target texts, the lower their semantic similarity and the greater their lexical divergence.

## 7 Conclusions

This work introduces the readability-controlled text modification task.
Our task challenges controllable language models to generate eight versions of a text, each targeted at a specific readability level, in a manner independent of the source text readability. Novel metrics, inspired by paraphrasing, assess the quality of readability-controlled text modification. Zero-shot adaptations of ChatGPT and Llama-2 show potential in steering readability but retain some correlation with the source text readability. A two-step process of generating paraphrases sequentially offers modest gains over one-step approaches. Notably, more significant shifts in readability lead to reduced semantic and lexical similarity between source and target texts, highlighting the challenge of balancing readability control and content preservation.

## 8 Limitations

The main insights drawn from this work for controllable text modification are based upon a single dataset, CLEAR; some of the observations may not generalize to datasets in other domains.

## 9 Acknowledgements

This research is funded by the EPSRC (The Engineering and Physical Sciences Research Council) Doctoral Training Partnership (DTP) PhD studentship and supported by Cambridge Assessment, University of Cambridge and ALTA.
2309.11070
Decomposing the Spectrum of Ultra-Luminous X-ray Pulsar NGC 300 ULX-1
A phase-resolved analysis on the X-ray spectrum of Ultra-Luminous X-ray Pulsar (ULXP) NGC 300 ULX-1 is performed with data taken with XMM-Newton and NuSTAR on 2016 December 16th. In addition to the classical phase-restricting analysis, a method developed in active galactic nuclei studies is newly employed for ULXP. It has revealed that the pulsation cycle of the source can be divided into two intervals in terms of X-ray variability. This suggests the rotating flow consists of at least two representative emission regions. Furthermore, the new method successfully decomposed the spectrum into an independent pair in each interval. One is an unchanging-component spectrum that can be reproduced by a standard disk model with a $720^{+220}_{-120}$ km inner radius and a $0.25\pm0.03$ keV peak temperature. The other is the spectrum of the component that coincides with the pulsation. This was explained with a Comptonization of a $0.22^{+0.2}_{-0.1}$ keV blackbody and exhibited a harder photon index in the brighter phase interval of two. The results are consistent with a picture that the pulsating emission originates from a funnel-like flow formed within the magnetosphere, and the inner flow exhibiting a harder continuum is observed exclusively when the opening cone points to the observer.
Shogo B. Kobayashi, Hirofumi Noda, Teruaki Enoto, Tomohisa Kawashima, Akihiro Inoue, Ken Ohsuga
2023-09-20T05:34:15Z
http://arxiv.org/abs/2309.11070v1
# Decomposing the Spectrum of Ultra-Luminous X-ray Pulsar NGC 300 ULX-1

###### Abstract

A phase-resolved analysis on the X-ray spectrum of Ultra-Luminous X-ray Pulsar (ULXP) NGC 300 ULX-1 is performed with data taken with XMM-Newton and NuSTAR on 2016 December 16th. In addition to the classical phase-restricting analysis, a method developed in active galactic nuclei studies is newly employed for ULXP. It has revealed that the pulsation cycle of the source can be divided into two intervals in terms of X-ray variability. This suggests the rotating flow consists of at least two representative emission regions. Furthermore, the new method successfully decomposed the spectrum into an independent pair in each interval. One is an unchanging-component spectrum that can be reproduced by a standard disk model with a \(720^{+220}_{-120}\) km inner radius and a \(0.25\pm 0.03\) keV peak temperature. The other is the spectrum of the component that coincides with the pulsation. This was explained with a Comptonization of a \(0.22^{+0.2}_{-0.1}\) keV blackbody and exhibited a harder photon index in the brighter phase interval of two. The results are consistent with a picture that the pulsating emission originates from a funnel-like flow formed within the magnetosphere, and the inner flow exhibiting a harder continuum is observed exclusively when the opening cone points to the observer.

Ultraluminous x-ray sources (2164) -- Pulsars (1306) -- Accretion (14) -- X-ray binary stars (1811)

## 1 Introduction

Ultra-Luminous X-ray sources (ULXs) are X-ray bright point sources frequently found in the off-nucleus regions of galaxies with high star formation rates, such as (interacting) spirals, dwarfs, and starbursts (Kaaret et al., 2017; Walton et al., 2021; King et al., 2023). Since their X-ray luminosity, \(10^{39.5-41}\) erg sec\({}^{-1}\), well exceeds the Eddington limit of stellar-mass (\(10M_{\odot}\)) black holes, they are often regarded as possible candidates for intermediate-mass (\(10^{2-3}M_{\odot}\)) black holes (e.g., Makishima et al., 2000) or stellar-mass objects accreting matter at well above their Eddington rate (e.g., Mineshige and Ohsuga, 2007). Although the true nature of ULXs has been under discussion since their discovery in the 1980s (Fabbiano and Trinchieri, 1987), the epoch-making detection of a 1.4 sec X-ray pulsation from M82 X-2 (Bachetti et al., 2014) has revealed that at least some fraction of ULXs do harbor neutron stars as their central objects, accreting at \(\sim 100\) times the Eddington limit. At present, 9 (8 extra-Galactic and 1 Galactic) ULXs are confirmed to contain neutron stars as their accretors (Bachetti et al., 2014; Furst et al., 2016; Israel et al., 2017, 2017; Tsygankov, S. S. et al., 2017; Carpano et al., 2018; Wilson-Hodge et al., 2018; Sathyaprakash et al., 2019; Rodriguez Castillo et al., 2020; Chandra et al., 2020; Vasilopoulos et al., 2020). Since ULX Pulsars (ULXPs) are among the few systems firmly regarded as accreting well above their Eddington rate, they are intensively studied as ideal systems to unravel the poorly understood nature of super-critical accretion flows. X-ray spectral analysis often plays a significant role in studying the accretion physics and geometry of mass-accreting objects.
However, those of ULXs, including ULXPs, generally provide few clues. This is mainly because ULXPs tend to exhibit continuum-dominated spectra with few characteristic features, allowing multiple spectral models to explain the same spectrum with nearly identical statistics. Therefore, it is crucial in ULXP studies to somehow grasp the actual spectral shapes of the components forming the continuum, and an analysis method ideal for such an objective is available from studies of Active Galactic Nuclei (AGN). Following Churazov et al. (2001) and Taylor et al. (2003), Noda et al. (2014) (see also Noda et al., 2011; Noda et al., 2013) introduced a method that extracts the spectral shapes of the components that form the original X-ray spectrum of the source. It relies on the correlation between the count rates of two energy bands, one from a fixed energy band and the other from an arbitrary part of the rest. By evaluating how these two correlate as the X-ray intensity of the source varies, one can directly derive the exact count rate contributions of the two constituent components at that energy band: one that changes its intensity in coincidence with the X-ray variability and the other that does not. Repeating this procedure in various energy bands, the authors successfully decomposed a featureless spectrum of an AGN into two additional ones without relying on any physical models. Since these two newly-extracted spectra are additionally available for model fitting, the method provided the authors with more stringent restrictions on their spectral modeling, and the same asset can be expected in ULX studies. Especially for rotating neutron stars like ULXPs, the X-ray pulse periods, which are unavailable for black holes such as those in AGNs, enable us to distinguish emission components originating from pulsating regions bound to the dipole magnetic field of the neutron star from those of the unbound flow. Our objective in this work is to apply the method introduced by Noda et al. (2014) to the pulsation of a ULXP for the first time and untangle the model degeneracy that has been present in ULX studies for nearly a decade. NGC 300 ULX-1 (hereafter, ULX-1) is a ULXP residing in the nearby (1.9 Mpc; Gieren et al., 2005) spiral galaxy NGC 300. Its X-ray emission was first detected with Chandra as a weak (\(\sim 6\times 10^{35}\) erg sec\({}^{-1}\); Binder et al., 2011) source associated with the supernova impostor SN2010da (Monard, 2010), and it suddenly became bright as a ULX (\(>10^{39}\) erg sec\({}^{-1}\)) in 2016. The object turned out to be an X-ray pulsar rotating with a spin period of \(\sim 31.7\) sec and spinning up at a significant rate of \(5.6\times 10^{-7}\) sec sec\({}^{-1}\) (Carpano et al., 2018). Despite having the highest spin-up rate among ULXPs, ULX-1 is still the one that rotates with the longest period (\(\sim 17\) sec in the latest observation in 2018; Vasilopoulos et al., 2019). The method mentioned above gives higher resolution if one can divide the spin period into a large number of sections, each containing sufficient photon statistics. Hence, its relatively long spin period among ULXPs makes the source ideal for our study.

## 2 Observation and Data Reduction

In general, phase-resolved spectral analysis of neutron star X-ray binaries requires high photon statistics, high time resolution, and wide energy band coverage.
Hence, we utilize the large effective area and short read-out time of the XMM-Newton (Jansen et al., 2001) European Photon Imaging Camera (EPIC) and the high-energy capability of the NuSTAR (Harrison et al., 2013) Focal Plane Module (FPM). In the present study, we revisit the data sets utilized in several works (e.g., Carpano et al., 2018; Walton et al., 2018; Koliopanos, F. et al., 2019). They were taken with XMM-Newton and NuSTAR simultaneously from 2016 December 16th with the longest total duration among the ones currently available (\(\sim 320\) ks). Throughout the observation, the three EPIC instruments MOS1, MOS2, and pn were operating normally in full-window mode. Since this study requires a large effective area, we utilized only pn in the present analysis. All data screening processes are carried out with software included in the Science Analysis System version 19.0.0 and the current calibration file updated on 2021/12/3. The pipeline processes such as gain calibration and removal of bad-quality events are done by epchain with the default criteria established by the XMM-Newton instrumental team. The spectrum and light curve of ULX-1 are extracted from an on-source circular region with a \(30^{\prime\prime}\) radius, while those of the backgrounds are from a \(60^{\prime\prime}\) radius circle placed \(\sim 2.5^{\prime}\) off from ULX-1, wherein no apparent X-ray sources are detected. The two X-ray detectors on board NuSTAR, FPM-A and FPM-B, were also operating normally throughout the observation. All screening processes were carried out using software included in HEASoft version 6.28 and the calibration database updated on 2022/01/06. Basic pipeline processes such as bad-grade event reduction, mast and spacecraft attitude correction, and discarding events within the South Atlantic Anomaly are done via the nupipeline command. We set all of the parameters in this procedure to the recommended values set by the NuSTAR instrumental team. Secondary products generated from the pipeline-processed event data (such as the X-ray spectra, light curves, and response matrix files) are generated with the nuproducts command. The arrival time of each X-ray event in both the EPIC-pn and FPM data is corrected to the barycentric coordinate of the solar system with the JPL planetary ephemeris DE-200 (Standish, 1990). Here, we adopted a source coordinate of \((\mathrm{RA},\ \mathrm{DEC})=(3.77^{\circ},\ -37.70^{\circ})\), which is the direction toward ULX-1. According to Walton et al. (2018), the measured initial pulsation period at 57738.65732 MJD and its derivative are \(P=31.7183411308\) sec and \(\dot{P}=-5.563\times 10^{-7}\) sec/sec, respectively. We used this result to calculate the pulsation phase of each event in the data set.

## 3 Analysis and Results

### Light Curve and Pulse Profile

In Figure 1, we present background-subtracted light curves of ULX-1. Those in the larger panel span the overall observation with a bin width of 700 sec, and the one in the top left is a small portion (200 sec) of the XMM-Newton pn observation with a bin width of 4 sec. We can confirm the presence of coherent variability with the \(\sim 31\)-sec pulsation period in the 4 sec bin light curve, whereas nothing significant can be found in the others with the longer time bins.
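To make the phase assignment explicit, a minimal sketch of folding barycentred event times with the period and its derivative is given below. It uses the second-order expansion of the phase integral, which is adequate here since \(\dot{P}T\ll P\) over the \(\sim 320\) ks baseline; the variable names are illustrative.

```python
# Sketch of assigning a pulse phase to each barycentred event time.
import numpy as np

P0 = 31.7183411308          # pulse period [s] at the reference epoch
PDOT = -5.563e-7            # period derivative [s/s]
T0_MJD = 57738.65732        # reference epoch [MJD]

def pulse_phase(t_mjd: np.ndarray) -> np.ndarray:
    """Phase in [0, 1) from the 2nd-order expansion of phi = integral of dt/P(t)."""
    dt = (t_mjd - T0_MJD) * 86400.0             # seconds since the epoch
    phi = dt / P0 - 0.5 * PDOT * dt**2 / P0**2  # linear term + spin-up correction
    return phi % 1.0

# Events can then be binned into, e.g., 20 phase bins of width 0.05 cycle:
# counts, _ = np.histogram(pulse_phase(event_times_mjd), bins=np.linspace(0, 1, 21))
```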
Thus, the data set contains no apparent X-ray variability other than the pulsation, making these data sets ideal for studying the pure variability of the pulsating component. Figure 2 represents how the X-ray intensity of NGC 300 ULX-1 varies in terms of pulse phase and energy. As described in section 2, we folded the entire data set with the initial period and spin-up rate derived by Walton et al. (2018), which are \(31.7183411308\) sec (reference epoch 57738.65732 MJD) and \(-5.563\times 10^{-7}\) sec sec\({}^{-1}\), respectively. The pulse profile is relatively sinusoidal, which is consistent with the previous results (e.g., Carpano et al., 2018), and peaked at \(\phi=0.5-0.6\) throughout \(0.3-25\) keV, suggesting a "single-zone" hot spot. Figure 3 presents the energy dependency of the fractional Root Mean Square (RMS) amplitude, \(\sqrt{S^{2}}/\bar{x}\), where \(S^{2}\) and \(\bar{x}\) are the variance and average of the count rate over the pulsation cycle, respectively. The fractional RMS amplitude becomes larger as the energy increases, reaching \(\geq 70\%\) above 3 keV. This is consistent with the previous results that used the same data sets as the present study (e.g., Carpano et al., 2018; Vasilopoulos et al., 2019). Hence, the spectrum of NGC 300 ULX-1 is expected to be dominated by the pulsating emission component in the higher energy band.

### Count-Count Correlation with Positive Offset (C3PO) Method

Just like ULXPs, AGNs in Seyfert 1 galaxies also tend to exhibit featureless X-ray spectra, allowing multiple physical models to reproduce them without any significant statistical differences. Therefore, adding restrictions to models has been crucial, and spectral variability is one of the helpful tools to resolve this model degeneracy. Noda et al. (2014) introduced a method called Count-Count Correlation with Positive Offset (C3PO), which utilizes the characteristic X-ray variability of AGN to extract a pair of spectra that compose the original spectrum without relying on any physical models. One is from the component that accounts for the variability, and the other is from the unchanging one. The method is based on a correlation between the count rates of two energy bands. One is from a fixed energy range defined as a reference band, and the other is from a test band that is an arbitrary part of the rest. If we plot the test-band count rate against that of the reference band, the data points ought to exhibit a certain locus whose shape depends on how the spectral shape changes in time. The simplest example is when the variable component changes only its intensity. In this case, the data points will form a straight line on the count-rate versus count-rate plot (CCP) plane, and the product of its slope and the count rate on the abscissa represents the actual count rate of the variable component at that energy band.

Figure 1: Background-subtracted light curves of ULX-1 taken with EPIC-pn (\(0.3-10.0\) keV: black) and FPM-A (\(3-25\) keV: red). The zero point of the horizontal axis corresponds to the beginning of the NuSTAR observation (MJD 57738.6573). Those in the larger panel span the full length of the observation with 700 sec per bin, whereas the one in the other panel is a 200 sec fraction of a particular interval in the EPIC-pn observation with a 4 sec bin width.
Furthermore, if the locus shows any positive offset at the zero point of the reference band, then the source spectrum is likely to contain a component that does not correlate with the variability, with a count rate equal to that offset value. Noda et al. (2014) found that the CCPs in the bright state of a highly variable AGN, NGC 3227, form straight loci, and each of them shows an apparent positive offset if they extrapolate the data down to the count rate where that of the reference band reaches zero. It clearly suggests, as described above, that the X-ray spectral continuum of NGC 3227 consists of at least two components. One is a component uncorrelated with (or stable against) the variability, representing the positive offset in the CCP, and the other is the variable component that changes only its intensity in time. The authors fitted each CCP with a linear function expressed as \[y=ax+b, \tag{1}\] where \(x\), \(y\), \(a\), and \(b\) are the reference-band count rate, test-band count rate, slope, and y-intercept value, respectively. Here, \(a\) and \(b\) are set as free parameters, and the derived value \(ax_{0}\) (\(x_{0}\) is the average of \(x\) used in the fitting) and \(b\) represent the actual count rates of the variable and stable components at that energy band, respectively. By repeating this procedure in the other energy bands, Noda et al. (2014) successfully extracted the spectrum of the stable component with a thermal emission shape and a variable one with an absorbed power-law figure hidden under the featureless X-ray spectrum of NGC 3227. In the present study, we apply the same C3PO method as Noda et al. (2014) to NGC 300 ULX-1, but in a pulse phase-dependent manner.

Figure 2: X-ray pulse profile (bottom) and its energy dependency (top) obtained from the \(0.3-10\) keV band of XMM-Newton EPIC-pn (left) and the \(3-25\) keV band of NuSTAR FPM-A (right). For clarity, only FPM-A is presented for the NuSTAR data. The color scales of the top panels represent the fluctuation over the average count rate in percent. The count rates in the bottom panels are the averages over the respective energy bands: \(0.3-3\) keV (filled circles), \(3-10\) keV (open circles), and \(10-25\) keV (filled triangles). The data points of \(10-25\) keV are scaled by a factor of four for clarity.

Figure 4 presents the extracted CCP of ULX-1. Taking the high photon statistics of XMM-Newton and the highest pulse fraction (or variability) at \(>7\) keV (Figure 3) into account, we select \(7.0-10.0\) keV of XMM-Newton pn as the reference band (horizontal axis). To grasp the overall behavior of the CCP, we maximized the statistics by designating the rest of the entire energy band, \(0.3-7.0\) keV, as the test band (vertical axis). Instead of the raw count rates utilized in Noda et al. (2014), each data point in this CCP represents the average count rate within a divided portion (\(1/20=0.05\) cycle per data point in this case) of the \(\sim 31.7\) sec pulsation cycle. In short, the CCPs in the present paper reflect how the count rate in each energy band varies as a function of the pulsation phase. The CCP of ULX-1 forms a rather straight locus with a "break" at \(\sim 0.03\) count sec\({}^{-1}\), which corresponds to pulse phases of \(\phi\sim 0.4\) and \(\phi\sim 0.75\). It suggests that the variable component, the emission bound to the pulsating accretion flow, increases its X-ray intensity without changing its spectral shape within the phase intervals below or above this break.
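To illustrate the decomposition step concretely, a minimal numpy sketch is given below; it builds on equation (1) and reads off the variable and stable count rates in a test band from a straight-line fit, assuming phase-binned count rates are already available:

```python
# Sketch of the C3PO decomposition for one test band against the reference band.
import numpy as np

def decompose_ccp(ref_rate: np.ndarray, test_rate: np.ndarray):
    """ref_rate, test_rate: phase-binned count rates (e.g. 20 bins of 0.05 cycle).

    Fits y = a*x + b and returns the (variable, stable) count rates in the
    test band: variable = a * mean(x), stable = b.
    """
    a, b = np.polyfit(ref_rate, test_rate, 1)  # least-squares straight line
    return a * ref_rate.mean(), b
```

Repeating this over every test band assembles the two decomposed spectra, band by band.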
Since the data above the break form a locus with a shallower slope, the appearance of the spectra should differ between these two phases. A similar breaking CCP is also reported in NGC 3227 by Noda et al. (2014). Instead of explaining all the data points with a single straight line as Noda et al. (2011) and Noda et al. (2013) did, the authors tested two alternative functions that could fit such curving CCPs. One is a piecewise-segmented linear function that consists of a pair of linear functions individually explaining the data points below and above the break. The other is a single power-law function expressed as \(y=Mx^{N}\) (e.g., Uttley & McHardy, 2005), where \(M\) and \(N\) are left free to vary. Following Noda et al. (2014), we tested these two possible solutions as shown in Figure 4. The piecewise-segmented linear function is defined around the breaking point \(x_{\rm b}\) as

\[y=a_{1}x+b_{1}\ \ (x\leq x_{\rm b})\]
\[y=a_{2}x+b_{2}\ \ (x>x_{\rm b})\]
\[x_{\rm b}=(b_{2}-b_{1})/(a_{1}-a_{2}),\]

where \(a_{i}\) and \(b_{i}\) (\(i=1,2\)) are the slopes and intercepts of the individual linear functions, respectively. Hence, the best-fit slopes and intercepts automatically determine \(x_{\rm b}\). In the regression analysis, we utilized the ROOT analysis package developed by CERN. The errors in \(x\) are projected onto the \(y\)-axis direction to take contributions from both \(x\) and \(y\) into account, by calculating the chi-square at each data point as \(\chi^{2}=(y-f(x))^{2}/(\sigma_{y}^{2}+\sigma_{x}^{2}f^{\prime}(x)^{2})\), where \(\sigma_{x}\), \(\sigma_{y}\), \(f(x)\), and \(f^{\prime}(x)\) are the errors in \(x\) and \(y\), the fitting function, and the derivative of the function, respectively. Although the piecewise linear function gave a slightly better fit (\(\chi^{2}/\)degrees of freedom \(=29.2/16\)) than the single power-law function (\(\chi^{2}/\)degrees of freedom \(=34.2/18\)), the difference is rather marginal. If we calculate the Bayesian Information Criterion, which can be expressed using the likelihood function \(L\), the number of parameters \(k\), and the number of data points \(n\) as \(-2\ln L+k\ln n\), the difference between the models is \(<1\). According to the criteria of Kass and Raftery (1995), this is insufficient to claim that the piecewise linear function is statistically preferable to the power-law function. Unlike Noda et al. (2014), we were unsuccessful in statistically ruling out the power-law function due to the limited number of data points in the CCP. Since the difference between the two functions becomes significant around the breaking point, it may be possible to distinguish one from the other by observing ULX-1 with a long exposure or with instruments with larger effective areas.

Figure 3: Energy dependency of the fractional RMS amplitude over the pulsation cycle of ULX-1 calculated from the XMM-Newton PN detector (filled circles) and FPM (open circles) data. For clarity, only the data from FPM-A are shown for NuSTAR.

Figure 4: A CCP obtained from the \(0.3-7\) keV band of the XMM-Newton PN data (top) and its residuals from the respective model functions (bottom). The red and blue solid lines are the best-fit breaking-linear function and power-law function, respectively. The colors of the residual data points correspond to those of the function curves above. The data points with the highest and lowest count rates correspond to the peak and bottom of the pulse, respectively.
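For reference, the fit and model comparison just described can be sketched in scipy terms as follows; this is an illustrative minimization of the effective-variance chi-square defined above, not the ROOT implementation used in the paper, and it uses the chi-square-based approximation to the BIC (valid for Gaussian errors, where \(-2\ln L\) equals \(\chi^{2}\) up to a constant).

```python
# Sketch of the piecewise-linear vs power-law CCP fit with errors in both axes.
import numpy as np
from scipy.optimize import minimize

def chi2_eff(params, model, dmodel, x, y, sx, sy):
    """Effective-variance chi-square: residuals**2 / (sy**2 + sx**2 * f'(x)**2)."""
    r = y - model(x, *params)
    return np.sum(r**2 / (sy**2 + sx**2 * dmodel(x, *params) ** 2))

# Piecewise linear function, continuous at x_b = (b2 - b1) / (a1 - a2).
def broken(x, a1, b1, a2, b2):
    xb = (b2 - b1) / (a1 - a2)
    return np.where(x <= xb, a1 * x + b1, a2 * x + b2)

def dbroken(x, a1, b1, a2, b2):
    xb = (b2 - b1) / (a1 - a2)
    return np.where(x <= xb, a1, a2)

def power(x, M, N):
    return M * x**N

def dpower(x, M, N):
    return M * N * x ** (N - 1)

def fit_and_bic(model, dmodel, p0, x, y, sx, sy):
    res = minimize(chi2_eff, p0, args=(model, dmodel, x, y, sx, sy),
                   method="Nelder-Mead")
    n, k = len(x), len(p0)
    bic = res.fun + k * np.log(n)  # chi-square-based BIC approximation
    return res.x, res.fun, bic
```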
We leave this as future work and adopt the piecewise linear function as a working hypothesis for the following analysis. The best-fit value of the breaking count rate is \(0.028\) count sec\({}^{-1}\), and hence we hereafter refer to the phases below this count rate (\(0.0\leq\phi\leq 0.4\) and \(0.75\leq\phi\leq 1.0\)) as the "faint phase" and those above it (\(0.4<\phi<0.75\)) as the "bright phase". To extract the spectra of the stable and variable components, we divided the \(0.3-7.0\) keV band of XMM-Newton pn and the \(3.0-25.0\) keV band of NuSTAR FPM into 15 and 5 sub-bands, respectively, and created CCPs for each using the \(7.0-10.0\) keV count rate of XMM-Newton pn as a reference (Figures 5, 6 and A1). The sub-bands are defined as those consisting of data points that contain at least 20 counts and have logarithmically equal width within the individual instruments, except for the \(10-25\) keV bands of the NuSTAR FPMs. Thus, the same breaking feature as in Figure 4 is present in the other energy bands. The change in slope is apparent in the lower energy bands and becomes more ambiguous as the energy of the test band reaches 10 keV or higher. Following Noda et al. (2014), we fit the data points with count rates below and above the break in each CCP separately with the respective linear functions shown as blue and red solid lines in Figures 5, 6, and A1. The goodness of fit and obtained parameters are summarized in Table 1. The pairs of linear functions successfully reproduced the individual CCPs, and some exhibited non-zero y-intercept values. Hence, following the procedure described above, we can generate two pairs of spectra from the best-fit values for each phase. Combining XMM-Newton and NuSTAR, we obtained a stable-component spectrum with 7 bins in \(0.30-1.61\) keV via data points in the faint phase and another with 20 bins in \(0.3-10\) keV from those on the other side. As for the variable component, we obtained a spectrum with 25 bins between \(0.3-25\) keV from the faint phase, whereas those from the bright phase produced one with 22 valid bins ranging from 0.56 keV to 25 keV.

### Phase-resolved Spectra and Model Fitting

In a typical phase-resolved spectral analysis, one compares spectra extracted from limited time intervals around the peak and bottom of the pulsation. For example, Koliopanos, F. et al. (2019), who analyzed the same data set as the present study, utilized 60% (30% each for the on and off-pulse spectra) of a rotation cycle and left the rest out of the phase-resolved analysis. Although this may clarify the difference between the on-pulse and off-pulse spectra, restricting the time interval can also discard photon statistics excessively and leave several models degenerate. In contrast to the ordinary phase-resolved analysis, our study enables us to add further information to untangle the model degeneracy and utilize the data more efficiently. As the CCPs clarified in the previous section, the entire pulsation cycle of NGC 300 ULX-1 can be categorized into two phase intervals, namely the "faint phase" and the "bright phase". Since we now visually know from the CCPs that the spectral shape is constant within each phase, we do not have to limit the time interval further to see the change in spectrum. Hence, we hereafter divide the entire observational data into these two phases and extract spectra from each using all of the data therein, to study what spectral model composition can explain the spectrum from each epoch with as high photon statistics as possible.
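This phase-interval selection can be written directly in terms of the pulse phase; a minimal sketch, reusing the hypothetical pulse_phase helper and event_times_mjd array from the earlier sketch:

```python
# Sketch: split barycentred events into the faint and bright phase intervals.
import numpy as np

phi = pulse_phase(event_times_mjd)       # from the earlier sketch (illustrative)
bright = (phi > 0.4) & (phi < 0.75)      # 0.4 < phi < 0.75: bright phase
faint = ~bright                          # the complementary faint-phase interval

bright_events = event_times_mjd[bright]  # feed these into spectrum extraction
faint_events = event_times_mjd[faint]
```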
We present the bright phase (black) and the faint phase (red) spectra in the top panel of Figure 7. To tentatively unfold the spectra from the instrumental response, they are shown as ratios over a power-law model with a photon index of \(\Gamma=2\). Thus, ULX-1 exhibits a hard (\(\Gamma\sim 1.4\)) and continuum-dominated spectrum throughout the pulsation phase. As the pulsating component decreases its intensity, an additional thermal component peaking at 1 keV becomes more apparent above the continuum. The ratio between the two spectra (the bottom panel of Figure 7) indicates a spectral hardening in the \(0.3-5\) keV band, which is not as apparent in the \(>5\) keV band. This is consistent with the behavior we saw in the "breaking" feature of the CCPs presented in section 3.2. In the following section, we study the underlying emission compositions of these two spectra along with those newly extracted via the C3PO method.

#### 3.3.1 Spectral Decomposition With the C3PO Method

In Figures 8 (a) and (c), we present the identical phase-resolved spectra as in Figure 7 (black) together with those newly decomposed via the C3PO method (colored). Thus, we successfully extracted two spectral pairs of the stable and variable components. The former in the faint phase has a convex shape peaking at sub-keV energies (red in Figure 8 a), accounting for the noticeable "soft excess" mentioned in Figure 7. In addition, it lacks any significant emission in the hard energy band. The FPM-B data showed a finite offset and a slope differing from FPM-A by \(\sim 2\sigma\) at \(5.28-7.00\) keV. This could be interesting if it were real, as the energy band includes the peak energy of the Fe K\(\alpha\) line. However, XMM-Newton pn and FPM-A, in contrast, gave only upper limits to the stable component at this energy. Considering the five times higher effective area of XMM-Newton and the fact that FPM-A is consistent with XMM-Newton, we conclude that the offset and inconsistent slope in FPM-B are an artifact induced by statistical fluctuation.

Figure 5: XMM-Newton pn phase-resolved Count-Count Plots of ULX-1. The red and blue solid lines represent the best-fit polynomial functions. The data points with the highest and lowest count rates correspond to the peak and bottom of the pulse, respectively.

The spectrum of the variable component (or the pulsating component, in other words) extends through a wide energy band of \(0.4-25\) keV with a characteristic rollover at \(\sim 7\) keV (blue in Figure 8 a). In the bright phase (Figure 8 c), the stable component (red) exhibits a similar convex spectrum to the faint-phase one, except for an extended distribution that drops off sharply at \(\sim 6\) keV. Meanwhile, the variable component (blue) yields a similarly hard, bending continuum to that in the faint phase. The decomposition done above assumes that the count rate of the reference band reaches zero at minimum, which is not necessarily guaranteed. In fact, the pulse fraction in \(7.0-10\) keV is relatively high (\(\sim 80\%\)) but does not reach \(100\%\). Since the CCPs in section 3.2 reflect the variability of the pulsating component, we thus may have to treat a particular count rate of the \(7.0-10.0\) keV band as the net zero point of the analysis in order to decompose the spectrum of the pulsating accretion flow component from that of the non-pulsating one.

Figure 6: Same as Figure 5, but the vertical axis represents the count rate of NuSTAR FPM-A.

Figure 7: Upper panel: phase-resolved spectra (on phase: black, off phase: red). The instrumental response is tentatively removed by taking ratios over a power-law model with a photon index of 2. Bottom panel: spectral ratio between the two phases.
Figure 8: Phase-resolved (black) and decomposed stable (red)/variable (blue) component spectra. They are shown as ratios over a power-law model for the same reason as Figure 7. The difference between the spectra in the left-hand and right-hand panels is whether zero (left) or certain count rates (right) are assumed as the intensity floor (see text for details). Filled and open markers represent the data from XMM-Newton and NuSTAR, respectively. Data from NuSTAR FPM-A are shown as circle markers and those from FPM-B as squares.

| Energy band | Slope (below break) | Offset (below break) | Reduced χ² (ν) | Slope (above break) | Offset (above break) | Reduced χ² (ν) |
| --- | --- | --- | --- | --- | --- | --- |
| **XMM-Newton EPIC-PN** | | | | | | |
| 0.30–0.37 keV | 0.7 ± 0.1 | 0.017 ± 0.002 | 1.706 (11) | 0.3 ± 0.1 | 0.028 ± 0.005 | 1.731 (5) |
| 0.37–0.46 keV | 1.2 ± 0.2 | 0.026 ± 0.002 | 1.330 (11) | 0.6 ± 0.1 | 0.041 ± 0.006 | 0.891 (5) |
| 0.46–0.56 keV | 1.9 ± 0.2 | 0.030 ± 0.003 | 1.591 (11) | 0.6 ± 0.2 | 0.064 ± 0.008 | 0.882 (5) |
| 0.56–0.69 keV | 2.6 ± 0.3 | 0.034 ± 0.003 | 2.373 (11) | 1.4 ± 0.2 | 0.06 ± 0.01 | 2.092 (5) |
| 0.69–0.86 keV | 3.8 ± 0.4 | 0.037 ± 0.005 | 2.354 (11) | 1.8 ± 0.3 | 0.08 ± 0.01 | 1.135 (5) |
| 0.86–1.06 keV | 4.2 ± 0.4 | 0.039 ± 0.005 | 1.554 (11) | 1.5 ± 0.3 | 0.12 ± 0.01 | 0.470 (5) |
| 1.06–1.30 keV | 4.7 ± 0.4 | 0.024 ± 0.005 | 1.480 (11) | 2.0 ± 0.3 | 0.11 ± 0.01 | 0.723 (5) |
| 1.30–1.61 keV | 4.7 ± 0.04 | 0.011 ± 0.005 | 1.457 (11) | 2.1 ± 0.3 | 0.09 ± 0.01 | 2.178 (5) |
| 1.61–1.99 keV | 4.2 ± 0.4 | 0.006 ± 0.004 | 1.218 (11) | 2.1 ± 0.3 | 0.09 ± 0.01 | 2.996 (5) |
| 1.99–2.45 keV | 3.1 ± 0.3 | 0.001 ± 0.003 | 0.979 (11) | 1.8 ± 0.2 | 0.05 ± 0.01 | 1.620 (5) |
| 2.45–3.02 keV | 2.4 ± 0.2 | 0.001 ± 0.003 | 1.199 (11) | 1.1 ± 0.2 | 0.05 ± 0.01 | 2.718 (5) |
| 3.02–3.73 keV | 2.7 ± 0.2 | −0.003 ± 0.003 | 1.382 (11) | 1.6 ± 0.2 | 0.04 ± 0.01 | 1.759 (5) |
| 3.73–4.60 keV | 2.4 ± 0.2 | −0.002 ± 0.003 | 1.271 (11) | 1.6 ± 0.2 | 0.02 ± 0.01 | 3.267 (5) |
| 4.60–5.67 keV | 1.9 ± 0.2 | 0.001 ± 0.002 | 1.538 (11) | 1.3 ± 0.2 | 0.03 ± 0.01 | 1.152 (5) |
| 5.67–7.00 keV | 1.6 ± 0.2 | (4.9×10⁻⁵) ± 0.002 | 0.672 (11) | 1.3 ± 0.2 | 0.010 ± 0.008 | 1.178 (5) |
| **NuSTAR FPM-A** | | | | | | |
| 3.0–4.0 keV | 0.34 ± 0.04 | −0.0008 ± 0.0005 | 1.723 (11) | 0.17 ± 0.05 | 0.005 ± 0.002 | 1.612 (5) |
| 4.0–5.3 keV | 0.43 ± 0.05 | −0.0005 ± 0.0006 | 1.612 (11) | 0.39 ± 0.07 | 0.0016 ± 0.003 | 0.790 (5) |
| 5.3–7.0 keV | 0.57 ± 0.06 | −0.0007 ± 0.0008 | 0.534 (11) | 0.38 ± 0.07 | 0.004 ± 0.003 | 0.622 (5) |
| 7.0–10.0 keV | 0.42 ± 0.05 | −0.0003 ± 0.0006 | 1.076 (11) | 0.46 ± 0.07 | −0.002 ± 0.03 | 1.198 (5) |
| 10.0–25.0 keV | 0.31 ± 0.04 | −0.0011 ± 0.0005 | 0.945 (11) | 0.32 ± 0.06 | −0.002 ± 0.003 | 1.095 (5) |
| **NuSTAR FPM-B** | | | | | | |
| 3.0–4.0 keV | 0.27 ± 0.04 | 0.0003 ± 0.0005 | 0.869 (11) | 0.16 ± 0.05 | 0.005 ± 0.002 | 0.693 (5) |
| 4.0–5.3 keV | 0.47 ± 0.06 | −0.0005 ± 0.0007 | 1.134 (11) | 0.30 ± 0.07 | 0.005 ± 0.003 | 2.620 (5) |
| 5.3–7.0 keV | 0.38 ± 0.06 | 0.0018 ± 0.0007 | 1.238 (11) | 0.39 ± 0.07 | 0.003 ± 0.003 | 0.765 (5) |
| 7.0–10.0 keV | 0.48 ± 0.06 | −0.0011 ± 0.0007 | 1.012 (11) | 0.40 ± 0.07 | 0.0004 ± 0.003 | 1.324 (5) |
| 10.0–25.0 keV | 0.25 ± 0.04 | −0.0003 ± 0.0005 | 1.068 (11) | 0.29 ± 0.05 | −0.002 ± 0.002 | 0.135 (5) |

Note: errors represent the 68% confidence level.

Table 1: Results of the linear-function fits to the CCPs.

Therefore, we consider the non-zero case by employing an "intensity floor" as Noda et al. (2014) did. To account for the intensity floor, we shift equation 1 in the \(+x\) direction by the floor count rate \(c\). Hence equation 1 can be rewritten as \[y=a(x-c)+b^{\prime}, \tag{2}\] where \[b^{\prime}=b+ac. \tag{3}\] Here, we employed floor count rates equivalent to the minimum count rate in each phase, namely \(c=0.0063\) count sec\({}^{-1}\) for the faint phase and \(c=0.030\) count sec\({}^{-1}\) for the bright phase (e.g., see Figure 5). The revised spectra of both phases are shown in Figures 8 (b) and (d). As can be followed from equations 2 and 3, the revision changes the shape and normalization of the stable-component spectrum. As for the variable one, however, it only scales the normalization by a factor of \(1-c/x_{0}\) and does not affect its spectral shape. The revised stable-component spectrum is generally a summation of that in the \(c=0.0\) case and a fraction of the variable component. In fact, the spectral shapes of the two revised components in the faint phase are nearly identical at \(>2\) keV (Figure 8 b). We presume that the pulsating component is still present in the spectrum at the pulse minimum, and that it accounts for the deficient 20% of the count rate needed for the pulse fraction to reach 100%. Hence, to separately discuss the pulsating-component spectrum and that of the non-pulsating one, we hereafter assume the \(c=0.0\) count sec\({}^{-1}\) case in the analysis of the faint phase spectra. Generally, the floor intensity in the C3PO method is the lower limit down to which we can safely extrapolate the variability. Although we assumed \(c=0.0\) count sec\({}^{-1}\) in the faint phase, this may not be appropriate in the other phase from this perspective, because the spectral mode shifts rather continuously from one to the other, which is apparent in Figure 4, and employing \(c=0.0\) count sec\({}^{-1}\) in the bright phase would drop this information. As a matter of fact, the spectral shape of the stable component in Figure 8 (c) is inconsistent with that of the faint phase. This is due to assuming that the variable component can decrease its intensity to zero, which is not the case in this particular phase. Since the bright phase mode contains a solid floor intensity, namely the break count rate, we thus employ the \(c=0.030\) count sec\({}^{-1}\) case in the following analysis.

#### 3.3.2 Model Fitting of the Faint-phase Spectra

We begin by fitting the spectrum extracted from events in the faint phase. The original (black), stable component (red), and variable component spectra (blue) are presented in the uppermost panel of Figure 9.
As we thus have successfully decomposed the spectrum of NGC 300 ULX-1 into the pulsating and stable components, we fit these three spectra simultaneously with a pair of emission models, each representing one of the two components. Since the emission mechanism of super-critical accretion flow is still poorly understood, physically reasonable spectral modeling is yet to be established for the pulsating emission component of ULXPs. Hence, studies of the component usually rely on relatively empirical models. Reflecting its hard, extended continuum with a characteristic rollover at \(\sim 7\) keV, the most frequently seen modeling utilizes a power law with an exponential cutoff (cutoffpl in the XSPEC expression). In fact, Brightman et al. (2016) and Walton et al. (2018) successfully reproduced the hard continua of M82 X-2 and NGC 7793 P-13, respectively, with this model.

Figure 9: Top panel: spectra and the best-fit model unfolded with the instrumental response. The faint phase, stable component, and variable component spectra are shown in black, red, and blue, respectively. Panels (a), (b), and (c): residuals from diskbb\({}_{\tt st}\)+cutoffpl\({}_{\tt v}\), diskbb\({}_{\tt st}\)+gauss\({}_{\tt st}\)+diskbb\({}_{\tt v}\)+power\({}_{\tt v}\), and diskbb\({}_{\tt st}\)+gauss\({}_{\tt st}\)+simpl\({}_{\tt v}\)*nthcomp\({}_{\tt v}\), respectively.

On the other hand, the parameters in the cutoffpl model have no physical meaning. This has motivated some authors (e.g., Koliopanos, F. et al., 2019) to alternatively use a multi-color disk blackbody model (diskbb in the XSPEC expression; Mitsuda et al., 1984) or a Comptonization model (nthcomp in the XSPEC expression; Zdziarski et al., 1996) to approximate the emission physics expected from several theoretical studies of super-critical accretion flows (e.g., Mushtukov et al., 2017). In the present paper, we test these three patterns of modeling to explain the pulsating component that dominates the hard energy band. As for the stable component, we employ either a single-temperature blackbody or a multi-color disk blackbody model. This is because the emission has a characteristic convex shape that is likely to be optically thick thermal emission. In addition, it clearly originates from a region formed somewhere around the neutron star. Even if the accretion disk is the origin of the stable component, it is still unclear whether the disk is in the standard accretion state derived by Shakura & Sunyaev (1973). Therefore, we test two patterns of disk blackbody models, one assuming the disk to be in the standard accretion regime (diskbb in XSPEC; Mitsuda et al., 1984) and the other having the radial temperature dependence as an additional free parameter (diskpbb in XSPEC). Finally, we multiply both components by a photoelectric absorption model, tbabs, and a constant factor. The former is to take the absorption below 1 keV into account and has the column density as a free parameter. It represents the hydrogen-equivalent number density of matter within the line of sight, which includes absorption by the Galactic interstellar medium and that intrinsic to the NGC 300 galaxy. We assumed the solar abundance and used the material table by Wilms et al. (2001). The latter is to account for the systematic errors in absolute count rate due to using spectra from different instruments. Since models with the same names are used to reproduce distinct components, we hereafter clarify to which component a model belongs by denoting the one explaining the stable component with an st subscript and the other with a v subscript.
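As an illustration of how such a composite model could be set up, a schematic sketch is given below, assuming PyXspec (the Python interface to XSPEC) is available and using illustrative file names. It only shows the mechanics of defining the model expression and running a joint fit; the paper's actual setup ties each decomposed spectrum to its own subset of components, which requires per-group parameter handling not shown here.

```python
# Schematic PyXspec setup for a joint fit of the three spectra (illustrative).
from xspec import AllData, Fit, Model, Xset

Xset.abund = "wilm"  # Wilms et al. (2001) abundance table, as in the text

# Load the phase-resolved, stable and variable component spectra, each into
# its own data group so that normalizations can differ between them.
AllData("1:1 faint.pha 2:2 stable.pha 3:3 variable.pha")

# Stable component: absorbed disk blackbody plus a ~1 keV Gaussian line;
# variable component: simpl-convolved nthcomp Comptonization continuum.
model = Model("constant*tbabs(diskbb + gaussian + simpl(nthcomp))")

Fit.statMethod = "chi"
Fit.perform()
print(Fit.statistic, Fit.dof)  # chi-squared statistic and degrees of freedom
```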
Despite utilizing models that were successful in several previous ULX studies, none of the patterns gave acceptable fits to the spectra. As a representative, we present the residuals between the diskbb\({}_{\tt st}\)+cutoffpl\({}_{\tt v}\) model and the data in panel (a) of Figure 9. We can confirm a significant enhancement at \(\sim 1\) keV and a slight wiggling feature that goes downwards in \(7-10\) keV and then upwards at \(>10\) keV. Furthermore, the entire stable component model overestimates the data, which we discuss later. The residual at \(\sim 1\) keV is likely to be an emission-line-like feature occasionally reported from several ULXs, including ULXPs (for non-pulsating ULXs e.g., NGC 5408 X-1, NGC 6946 X-1 Middleton et al., 2014, NGC 1313 X-1 Pinto et al., 2016, and NGC 247 ULX-1 Pinto et al., 2021; for ULXPs e.g., SMC X-3 Koliopanos, F. et al., 2019). The feature is also reported in a different observation of ULX-1 (Ng et al., 2022), and is considered to be Fe L, Ne X, or Fe XVIII emission lines from the surrounding gas. Since the enhancement is also present in the stable component spectrum, we therefore modify the model for that component by adding a Gaussian line at \(\sim 1.0\) keV. The wiggling residual is also widely seen in the hard X-ray band of ULXs (e.g., Bachetti et al., 2013, Walton et al., 2018), indicating that the spectra require an extra component that extends up to \(\sim 25\) keV. A simple solution for this is to introduce another power law (power in XSPEC) into the variable component model. In fact, Koliopanos, F. et al. (2019) have successfully reproduced the \(>2\) keV spectrum of this ULXP, ULX-1, with a combination of a multi-color disk and a power-law model. As another option, it is also effective to multiply by a model that redistributes part of the current model shape into an extended power law. For example, Walton et al. (2018) and Walton et al. (2014) multiplied the simpl model to a cutoff power law and a Comptonization model, respectively. The model redistributes a fraction of the photons from the multiplied model to form an additional power-law emission at higher energy, which technically gives a similar effect as adding a power-law model. The difference from adding a power law is that the simpl model extends only toward higher energy, whereas the power law does so to infinity in both energy directions. Therefore, simpl avoids the photon numbers diverging in the lower energy band. In the following analysis, we refit the variable component with these modified models, namely power\({}_{\tt v}\)+diskbb\({}_{\tt v}\), simpl\({}_{\tt v}\)*cutoffpl\({}_{\tt v}\), and simpl\({}_{\tt v}\)*nthcomp\({}_{\tt v}\). The parameters and the goodness of fit are summarized in Table 2, and the best-fit model and examples of the residuals are shown in Figures 9 (b) and (c), respectively. Let us begin by comparing the results in terms of the difference in variable component models. Although the discrepancies at \(\sim 1\) keV and \(>10\) keV are significantly improved, the models including cutoffpl\({}_{\tt v}\) for the variable component failed to simultaneously reproduce the stable component, just like the one previously shown in Figure 9 (a). This is mainly because the cutoffpl\({}_{\tt v}\) model causes a clash with the stable component model due to its extending characteristic mentioned in the previous paragraph. The same applies to the model that includes diskbb\({}_{\tt v}\)+power\({}_{\tt v}\).
Since the photons of the diskbb\({}_{\tt v}\) model quickly drop off at lower energies due to the Rayleigh-Jeans regime, the model gave a better fit than those using cutoffpl\({}_{\tt v}\) (Table 2). Still, the power\({}_{\tt v}\) model, introduced to account for the \(>10\) keV band, causes a conflict with the stable component at \(0.5-1.3\) keV, as shown in Figure 9 (b). In contrast to the present work, the diskbb\({}_{\tt v}\)+power\({}_{\tt v}\) model was successful in explaining the hard X-ray spectrum in the previous work using the same data set (Koliopanos, F. et al., 2019). In that analysis, the authors relied only on the phase-resolved spectra, as in ordinary phase-resolved spectral analysis, and no other data were available to restrict the shapes of the models forming the lower energy continuum in particular. Such a lack of restriction can allow tbabs and the cooler diskbb\({}_{\tt st}\) (or bb\({}_{\tt st}\)) model to absorb the overshooting power-law component by increasing the column density and/or decreasing its normalization. Koliopanos, F. et al. (2019), in fact, pointed out that a strong correlation was present between the column density and the power-law photon index. Hence, the models were degenerate in the Koliopanos, F. et al. (2019) case, and thanks to the C3PO method, we have successfully resolved this by providing additional spectra for the models to reproduce simultaneously with the phase-resolved one. The radial surface temperature of the accretion disk model, \(T(r)\), is proportional to a negative power of the disk radius \(r\), namely \(T(r)\propto r^{-p}\), where \(p\) is the power index. In the diskbb case, \(p\) is fixed to 0.75, with which Koliopanos, F. et al. (2019) approximated the temperature gradient in the pulsating accretion flow. To test whether the fit improves by letting this \(p\) vary, we replaced the diskbb\({}_{\tt v}\) model in the variable component with the diskpbb\({}_{\tt v}\) model that has the power index \(p\) as an additional free parameter.
However, letting \(p\) vary did not improve the fit; it gave only a 3.5 smaller chi-squared value, for one fewer degree of freedom, than the diskbb\({}_{\tt v}\) case, with a best-fit \(p=0.95\).

| Parameter | bb: diskbb+power | bb: simpl*cutoffpl | bb: simpl*nthcomp | diskbb: diskbb+power | diskbb: simpl*cutoffpl | diskbb: simpl*nthcomp |
| --- | --- | --- | --- | --- | --- | --- |
| **Stable component** | | | | | | |
| \(T_{\rm in/bb}\) (a) (keV) | 0.19 ± 0.01 | 0.190 ± 0.003 | 0.18 ± 0.01 | 0.25 ± 0.02 | \(0.24^{+0.04}_{-0.02}\) | \(0.25^{+0.02}_{-0.03}\) |
| norm\({}_{\rm disk}\) (b) | — | — | — | \(12^{+6}_{-4}\) | 12.8 ± 0.4 | \(10^{+7}_{-3}\) |
| norm\({}_{\rm bb}\) (c) (×10⁻⁶) | 6.9 ± 0.6 | \(6.40^{+0.1}_{-0.5}\) | \(5.7^{+0.5}_{-0.4}\) | — | — | — |
| \(E_{\rm line}\) (d) (keV) | 1.00 ± 0.04 | 1.00 ± 0.03 | 1.00 ± 0.03 | \(0.94^{+0.05}_{-0.07}\) | 0.94 ± 0.02 | \(0.97^{+0.04}_{-0.05}\) |
| \(\sigma\) (e) (keV) | <0.1 | <0.1 | <0.1 | \(0.15^{+0.08}_{-0.06}\) | 0.29 ± 0.02 | \(0.12^{+0.07}_{-0.05}\) |
| **Variable component** | | | | | | |
| \(\Gamma_{\rm simpl}\) (f) | — | \(1.00^{+0.17}_{-0.02}\) | \(1.0^{+0.5}_{-0.2}\) | — | \(1.03^{+0.25}_{-0.02}\) | \(1.2^{+0.4}_{-0.2}\) |
| \(F\) (g) | — | \(0.40^{+0.4}_{-0.02}\) | \(0.20^{+0.18}_{-0.06}\) | — | 0.40 ± 0.02 | \(0.22^{+0.19}_{-0.06}\) |
| \(T_{\rm in/bb}\) (a) (keV) | 2.34 ± 0.08 | — | 0.21 ± 0.02 | 2.42 ± 0.09 | — | 0.14 ± 0.04 |
| norm\({}_{\rm disk}\) (b) (×10⁻³) | 4.2 ± 0.5 | — | — | 3.6 ± 0.5 | — | — |
| \(T_{\rm cut}\) (h) (keV) | — | \(4.44^{+0.03}_{-0.05}\) | 1.77 ± 0.09 | — | \(4.32^{+0.2}_{-0.05}\) | 1.73 ± 0.09 |
| \(\Gamma_{\rm pl/nthcomp}\) (i) | 1.7 ± 0.1 | 0.70 ± 0.05 | 1.57 ± 0.03 | 1.9 ± 0.1 | 0.81 ± 0.05 | 1.56 ± 0.04 |
| norm\({}_{\rm pl/nthcomp}\) (j) (×10⁻⁴) | 1.1 ± 0.3 | \(6.24^{+0.31}_{-0.06}\) | \(3.9^{+1.2}_{-0.4}\) | 1.6 ± 0.3 | 6.46 ± 0.06 | \(4.1^{+1.0}_{-0.5}\) |
| **Common component** | | | | | | |
| \(N_{\rm H}\) (k) (×10²⁰ cm⁻²) | 3 ± 1 | 1.7 ± 0.8 | 1 ± 1 | 6 ± 1 | 4.6 ± 0.2 | 5 ± 1 |
| \(\chi^{2}/\nu\) (l) | 265.38/205 | 287.24/206 | 247.60/203 | 248.80/205 | 287.7/206 | 219.87/203 |

Note. The first three columns use gauss+bb and the last three use gauss+diskbb for the stable component. a: Temperature at the inner-disk radius or of the blackbody surface. b: Normalization parameter of the diskbb model. c: Normalization parameter of the bb model. d: Center energy of the Gaussian line. e: Standard deviation of the Gaussian line. f: Photon index of the power-law continuum that simpl creates. g: Fraction of the photons devoted to creating the power-law continuum by simpl. h: Cutoff energy of cutoffpl or temperature of the Comptonizing electron cloud. i: Photon index of the cutoffpl model or that of nthcomp. j: Normalization parameter of cutoffpl or nthcomp; represents the photon flux at 1 keV. k: Hydrogen-equivalent column density. l: Chi-squared statistic and degrees of freedom.

Table 2: The best-fit parameters obtained from the faint phase spectra.
As a result, the models utilizing nthcomp\({}_{\tt v}\) and simpl\({}_{\tt v}\), which yield fewer photons at lower energies due to the Rayleigh-Jeans break of nthcomp\({}_{\tt v}\) and the non-diverging characteristic of simpl\({}_{\tt v}\), gave the best fit among the variable component model patterns, as shown in Figure 9 (c). Instead of creating an extended power-law component, Walton et al. (2018b) argued that they successfully reproduced the residuals at \(>10\) keV with a wide absorption line possibly originating from a cyclotron resonance scattering feature. We also tested this alternative solution by multiplying an absorption line model (gabs in the XSPEC expression) into the continuum models. Since the photon statistics at \(>10\) keV are too poor to determine the line central energy, we fixed the value to that of Walton et al. (2018b): 12.8 keV. The model showed the same tendency as those described above. Only the model utilizing nthcomp\({}_{\tt v}\) gave an acceptable fit, which is rather natural because gabs affects only the spectral shape above 10 keV. The best fit was slightly better (\(\chi^{2}/\nu=247.26/217\)) but insufficient to rule out the model using simpl\({}_{\tt v}\). The obtained line width of \(4.2^{+1.1}_{-0.6}\) keV is consistent with that in Walton et al. (2018b), \(3.1^{+0.8}_{-0.7}\) keV. The model exhibited the same parameter values as those of the model using simpl\({}_{\tt v}\) within the errors, except for a higher electron temperature (\(2.7\pm 0.1\) keV) and a slightly harder photon index (\(1.51^{+0.05}_{-0.07}\)). Hence, we conclude that the variable component at least requires a model that breaks sharply at the lower energy end, and we cannot distinguish whether an extended power law or an absorption line is the best model to account for the spectral feature above 10 keV. We next compare the models used to explain the stable component. All the models utilizing diskbb\({}_{\tt st}\) gave a slightly better fit than those including bb\({}_{\tt st}\). This is because diskbb\({}_{\tt st}\) yields a softer spectrum than bb\({}_{\tt st}\) due to the contribution of cooler blackbody emission from the outer disk region. The residuals indicate that the stable component spectrum is too wide to be explained with the latter model. Furthermore, its hard spectral nature forces the bb\({}_{\tt st}\) model to compensate for its lack of photons in the lower energy band by making the absorption column density smaller (\(N_{\rm H}<3.6\times 10^{20}\) cm\({}^{-2}\)) than those of diskbb\({}_{\tt st}\) (Table 2). In particular, that of the best-fit pattern is comparable to or even smaller than the Galactic value (\(\sim 2\times 10^{20}\) cm\({}^{-2}\); Dickey & Lockman 1990), which we consider unreasonable. Gathering these results together, we conclude that the data favor the diskbb\({}_{\tt st}\) model for the stable component spectrum. To test whether the multi-color disk blackbody emission differs from the standard disk, we also allowed the radial disk temperature dependency to vary by replacing diskbb\({}_{\tt st}\) with diskpbb\({}_{\tt st}\), as we did in the variable component modeling. While the temperature profile has \(p=0.75\) in the standard accretion disk (Shakura & Sunyaev 1973), it is expected to be flatter if the disk deviates from the standard regime as the accretion rate increases. At a near-Eddington accretion rate, the disk is expected to reach a state called a slim disk, in which the disk has a radial temperature profile with \(p=0.5\) (Watarai et al. 2000, 2001).
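To make the role of the temperature index \(p\) concrete, a small numpy sketch of a multicolor-disk spectrum is given below, summing blackbody annuli with \(T(r)=T_{\rm in}(r/r_{\rm in})^{-p}\); this is a generic illustration of the diskbb/diskpbb idea, not the XSPEC implementation, and all parameter values are illustrative.

```python
# Sketch: multicolor disk spectrum as a sum of blackbody annuli, T(r) ~ r^-p.
import numpy as np

def disk_spectrum(energies_kev, t_in_kev=0.25, p=0.75,
                  r_out_over_r_in=100.0, n_r=500):
    """Relative photon spectrum of a disk with T(r) = T_in * (r/r_in)**(-p)."""
    r = np.logspace(0.0, np.log10(r_out_over_r_in), n_r)  # radii in units of r_in
    t = t_in_kev * r ** (-p)                              # local temperature [keV]
    e = energies_kev[:, None]
    # Planck-like photon spectrum per annulus: n(E) ~ E^2 / (exp(E/kT) - 1),
    # weighted by the annulus area element 2*pi*r*dr (np.gradient gives local dr).
    planck = e**2 / np.expm1(e / t)
    area = 2.0 * np.pi * r * np.gradient(r)
    return (planck * area).sum(axis=1)

e_grid = np.geomspace(0.3, 10.0, 200)
standard = disk_spectrum(e_grid, p=0.75)  # standard disk (diskbb-like)
slim = disk_spectrum(e_grid, p=0.5)       # flatter profile: slim-disk-like, softer
```

The flatter \(p=0.5\) profile keeps the outer annuli hotter, producing the softer spectrum that in turn drives the higher fitted column density noted below.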
The best fit for \(p\) is 0.63, which lies between the slim-disk and the standard-disk values. However, allowing \(p\) to vary did not improve the fit, giving only \(\Delta\chi^{2}=0.5\) for one fewer degree of freedom relative to the model assuming the standard disk (\(p=0.75\)). Fixing \(p\) to 0.5, namely assuming the slim disk, still gave a nearly identical goodness of fit (\(\Delta\chi^{2}=1.0\)), with a column density of \(\sim 6\times 10^{20}\) cm\({}^{-2}\), higher than in the standard disk case. This is because the slim disk has a softer spectrum than the standard disk due to its flatter temperature gradient, forcing the absorption model to give a higher column density. Hence, we cannot statistically distinguish whether the stable component favors the slim disk state or the standard accretion regime.

Although we have thus successfully reproduced the entire continuum of ULX-1, a narrow positive excess is still present at \(\sim 6.7\) keV, which is consistent with the energy of the He-like Fe K\(\alpha\) line. In fact, adding a Gaussian line improved the fit by \(\Delta\chi^{2}=20.6\) with a decrease of 3 degrees of freedom. Since the upper limits on the count rate of the stable component are well below the expected strength of a line at this energy, we here assumed that the possible emission line feature belongs to the variable component. The best-fit center energy and width of the Gaussian line were \(6.7^{+0.1}_{-0.4}\) keV and \(<0.6\) keV, respectively. To evaluate the statistical significance of this line-like feature, we generated 5000 simulated spectra based on the previous best-fit model, which consists only of a continuum around that energy, and tested how much the fit improves by adding a Gaussian line at 6.7 keV to each spectrum. The simulated spectra were created with the HEASOFT fakeit command, which generates response-folded spectra of a given spectral model with the expected Poisson noise. The exposure of each spectrum is set to be equivalent to that of the actual observation. To prevent the fit from diverging, we limited the central energy and the width of the line to vary within \(6.0-7.0\) keV and \(<1.0\) keV, respectively. Figure 10 presents the distribution of the simulated spectral fit improvements in terms of the difference in \(\chi^{2}\) statistics (\(\Delta\chi^{2}\)) between the two spectral model fits: one is our best-fit model (diskbb\({}_{\tt st}\)+gauss\({}_{\tt st}\)+simpl\({}_{\tt v}\)*nthcomp\({}_{\tt v}\)), and the other is that with an additional Gaussian line at \(\sim 6.7\) keV. The simulated spectra that gave a larger improvement than the observational result (\(\Delta\chi^{2}>20.6\)) amount to \(\sim 40\%\) of the total simulations (grey histogram). Furthermore, if we restrict them to those that exhibited parameters consistent with the observation within the 90% errors (black histogram), the fraction decreases to 9.6%. Therefore, we conclude that the statistical significance of the possible line feature is \(\sim 90\%\), which is rather promising but insufficient to claim a detection. Considering that an Fe K\(\alpha\) line is present in the spectrum of the ULXP SMC X-3 (Koliopanos & Vasilopoulos 2018), it is natural to expect the same for NGC 300 ULX-1, and the possible feature we observed is a good candidate.
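The logic of this significance test can be illustrated with a simplified, detector-independent sketch: simulate Poisson realizations of a continuum-only model, refit each with and without a Gaussian line, and record the \(\Delta\chi^{2}\) distribution. Unlike the actual analysis, which uses response-folded fakeit spectra, this toy version works directly in count space, and the continuum shape and normalization below are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
E = np.linspace(5.0, 8.0, 60)                  # keV grid around the candidate line

def continuum(E, a, g):                        # toy power-law continuum (counts)
    return a * E**(-g)

def with_line(E, a, g, k, e0, s):              # continuum plus a Gaussian line
    return continuum(E, a, g) + k * np.exp(-0.5 * ((E - e0) / s)**2)

def chi2(y, mu):                               # chi-squared against the model
    return np.sum((y - mu)**2 / mu)

truth = continuum(E, 200.0, 1.6)               # continuum-only null hypothesis
dchi2 = []
for _ in range(1000):                          # 5000 in the actual analysis
    y = rng.poisson(truth).astype(float)
    p0, _ = curve_fit(continuum, E, y, p0=[200.0, 1.6])
    # Line energy and width bounded to 6.0-7.0 keV and <1.0 keV, as in the text.
    p1, _ = curve_fit(with_line, E, y, p0=[200.0, 1.6, 5.0, 6.7, 0.3],
                      bounds=([0, 0, 0, 6.0, 0.01], [np.inf, 5, np.inf, 7.0, 1.0]))
    dchi2.append(chi2(y, continuum(E, *p0)) - chi2(y, with_line(E, *p1)))
print(np.mean(np.array(dchi2) > 20.6))         # chance fraction exceeding the observed gain
```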
In addition to the insufficient photon statistics of the total spectrum, the present detectors do not have enough effective area to divide the spectra obtained from the C3PO method into finer bins, which makes it hard to resolve the feature in the variable component spectrum. To strengthen the statistical significance and confirm the origin of this feature, we strongly advise observing this source with a long exposure, or with observatories having a larger effective area such as NICER, and conducting the same analysis on those data. Finally, to test how the assumption made in section 3.3.1 affects our results, we performed model fittings using the spectral set that assumes a non-zero floor intensity of \(C=0.0063\) count sec\({}^{-1}\) (the spectra in Figure 8 b). As described in section 3.3.1, the floor intensity does not change the shape of the variable component spectrum. In addition, the stable component spectrum becomes the summation of the \(C=0.0\) case and some fraction of the variable component. Accordingly, we used the same models as those employed so far, except that the variable component model is added to the stable component one. The results are essentially the same as those confirmed in our previous analysis. The best-fit model is diskbb\({}_{\tt st}\)+gauss\({}_{\tt st}\)+simpl\({}_{\tt v}\)*nthcomp\({}_{\tt v}\), and the obtained parameters are consistent with those in Table 2 within the errors. Models using cutoffpl\({}_{\tt v}\) or power\({}_{\tt v}\) caused conflicts with the stable component, and those using bb\({}_{\tt st}\) to explain the soft excess exhibited a worse fit or a column density smaller than the Galactic value. Hence, we conclude that the assumption made on the floor intensity does not affect our results.

#### 3.3.3 Model Fitting of the Bright Phase Spectra

The spectra of the bright phase, the stable component, and the variable component are shown in the top panel of Figure 11 in black, red, and blue, respectively. Although the errors are relatively large due to the limited number of data points in the CCPs (Figure 5), the stable component exhibits the characteristic "two-humped" spectrum similar to that seen in the faint phase. As for the variable component spectrum, it shows a slightly harder continuum than the stable component, peaking at \(\sim 7\) keV. Here, we take the results from the faint phase into account. Since the CCPs in section 3.2 were continuous at the breaking feature, the spectral shift from the faint phase to the bright phase should also be smoothly connected. Therefore, we adopt the model that gave the best fit to the faint phase spectra (diskbb\({}_{\tt st}\)+gauss\({}_{\tt st}\)+simpl\({}_{\tt v}\)*nthcomp\({}_{\tt v}\)) for the stable component of this phase. Since the shape of the variable component spectrum does not change within the faint phase, we fixed all of its parameters to the best-fit values shown in Table 2, except for the normalization of simpl\({}_{\tt v}\)*nthcomp\({}_{\tt v}\). The variable component has a continuum extending in a power-law manner with a cutoff around \(\sim 7\) keV, as in the faint phase. Hence, we tested the same models as those used for the variable component in section 3.3.2. The model fitting results are presented in Table 3, and examples of the residuals are shown in Figures 11 (a), (b), and (c).
Since the variable component gently bends at \(\sim 7\) keV, neither diskbb\({}_{\tt v}\) nor cutoffpl\({}_{\tt v}\) reproduced the spectrum at \(>10\) keV, as shown in Figure 11 (a). Especially diskbb\({}_{\tt v}\), whose spectrum sharply drops off following Wien's law at higher energies, gave the worst fit among the models. To reproduce the gradual bend above 10 keV, we again tested the same three modified model combinations for the variable component as in section 3.3.2: diskbb\({}_{\tt v}\)+power\({}_{\tt v}\), simpl\({}_{\tt v}\)*cutoffpl\({}_{\tt v}\), and simpl\({}_{\tt v}\)*nthcomp\({}_{\tt v}\). Despite adding an extra component at higher energies, diskbb\({}_{\tt v}\)+power\({}_{\tt v}\) gave a similar goodness of fit, \(\chi^{2}/\nu=300.50/290\), to cutoffpl\({}_{\tt v}\) (\(\chi^{2}/\nu=299.21/291\)). This is due to the steep \(1-7\) keV continuum, which is too hard to reproduce with the temperature gradient of diskbb\({}_{\tt v}\). Hence, we again let the gradient, namely the spectral hardness, vary by replacing diskbb\({}_{\tt v}\) with diskpbb\({}_{\tt v}\). The fit significantly improved to \(\chi^{2}/\nu=280.70/289\) by steepening the temperature gradient \(p\) (see Table 3). The fit similarly improved for the rest of the patterns. In particular, the model using nthcomp\({}_{\tt v}\) gave the best fit among them for the same reason as in the faint phase: the dropping-off characteristic of nthcomp\({}_{\tt v}\) avoids conflict with the stable component spectrum. Although we cannot statistically rule out either diskpbb\({}_{\tt v}\)+power\({}_{\tt v}\) or simpl\({}_{\tt v}\)*cutoffpl\({}_{\tt v}\) for the variable component modeling, we hereafter adopt simpl\({}_{\tt v}\)*nthcomp\({}_{\tt v}\) to compare the parameters with those of the faint phase. It is noticeable that the best-fit values of the two phases are equivalent within the errors (see Table 2 and Table 3), except for the photon index of nthcomp\({}_{\tt v}\). The model suggests that the continuum of the variable component is slightly harder in the bright phase. The difference is rather marginal, and we must note that the Comptonization model often exhibits a strong correlation between its electron temperature and photon index.

Figure 10: The \(\Delta\chi^{2}\) distribution obtained from adding a Gaussian line to the 5000 simulated spectral fits. The grey-hatched histogram represents the distribution of the total simulations, and the filled-black one is for those that gave values consistent with the observational results within the statistical errors.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline variable component model & diskbb & cutoffpl & diskpbb+power & simpl*cutoffpl & simpl*nthcomp \\ \hline norm\({}_{\rm nthcomp}\) (\(\times 10^{-4}\)) & \(8.5\pm 0.2\) & \(8.7\pm 0.2\) & \(8.6\pm 0.2\) & \(8.5\pm 0.2\) & \(8.5\pm 0.1\) \\ \(\Gamma_{\rm simpl}\) & - & - & - & \(1.2^{+0.9}_{-0.2}\) & \(1.7^{+0.3}_{-0.2}\) \\ \(F\) & - & - & - & \(0.4\pm 0.2\) & \(0.38\pm 0.04\) \\ \(T_{\rm in/bb}\) (keV) & \(3.2\pm 0.1\) & - & \(2.4\pm 0.3\) & - & \(0.22\pm 0.02\) \\ \(p^{a}\) & - & - & \(>0.8\) & - & - \\ \(T_{\rm cut/e}\) (keV) & - & \(4.8^{+0.5}_{-0.4}\) & - & \(3.3^{+0.4}_{-0.5}\) & \(1.5\pm 0.2\) \\ \(\Gamma_{\rm pl/nthcomp}\) & - & \(0.40\pm 0.08\) & \(0.9^{+0.3}_{-1.6}\) & \(0.2\pm 0.1\) & \(1.36\pm 0.01\) \\ norm\({}_{\rm pl/nthcomp}\) (\(\times 10^{-4}\)) & - & \(2.1\pm 0.2\) & \(0.2^{+0.5}_{-0.2}\) & \(3.3^{+1}_{-0.7}\) & \(2.3\pm 0.1\) \\ norm\({}_{\rm disk}\) (\(\times 10^{-4}\)) & \(16\pm 3\) & - & \(60^{+50}_{-20}\) & - & - \\ \(\chi^{2}/\nu\) & 316.11/292 & 299.21/291 & 280.70/289 & 280.88/289 & 269.23/288 \\ \hline \end{tabular} Note. – a: The power-law index of the radial temperature profile of the disk. The rest are the same as in Table 2. \end{table} Table 3: The best-fit parameters obtained from the spectra in the bright phase.

Figure 11: Top panel: spectra and the best-fit model unfolded with the instrumental response. The bright phase, stable component, and variable component spectra are shown in black, red, and blue, respectively. Panels (a), (b), and (c): residuals from the best-fit faint-phase model plus cutoffpl\({}_{\tt v}\), simpl\({}_{\tt v}\)*cutoffpl\({}_{\tt v}\), and simpl\({}_{\tt v}\)*nthcomp\({}_{\tt v}\), respectively.
Hence, the errors can be underestimated to some extent. To check whether the difference in the photon index is statistically significant, we created a confidence contour of these two parameters (Figure 12). The confidence contours do not overlap in the photon index direction at more than the 99.7% confidence level, from which we confirm that the continuum of the variable component in this phase is actually harder than in the other. The result is consistent with the hint of hardening that was present in Figure 7. On the other hand, the previous study by Koliopanos et al. (2019) did not find such a hardening. We consider the reason to be the considerably narrow phase intervals they applied to extract the on-pulse and off-pulse spectra. The spectral statistics in the previous study were sacrificed to clarify the difference between the two and were insufficient to spot such a marginal hardening. Thus, the C3PO method enabled us to maximize the usable spectral statistics and find new features that were not apparent before the present study.

## 4 Discussion

The effective area of XMM-Newton and the high-energy capability of NuSTAR have revealed that some parts of the spectrum of the ULXP NGC 300 ULX-1 vary in response to the neutron star pulsation, and that the variability can be categorized into two distinct phases. Since one covers the pulsation peak and the other the rest, we named them the bright phase and the faint phase, respectively. By applying the C3PO method developed by Noda et al. (2014) to a ULXP for the first time, we successfully extracted two pairs of stable and variable component spectra that form the X-ray continuum of NGC 300 ULX-1, one pair from each phase. Each spectral group, namely the stable, variable, and original spectra, was successfully reproduced with a combination of a disk blackbody (+Gaussian) and a Comptonization continuum with an extra high-energy power-law tail or a wide absorption line. In the following paragraphs, we discuss what kind of physical origin may explain these observational results.

### Interpretation of the Stable Component in the Faint Phase

In this section, we give a possible interpretation of the origin of the convex non-pulsating spectral component extracted from the faint phase. According to theoretical studies of super-critically accreting objects, the entire accretion flow, from the mass-donating star to the accretor, can be roughly separated into three regions in terms of distance from the center. One is the outermost region of the accretion flow, where the radiative cooling is efficient enough to form an optically thick and geometrically thin standard accretion disk. In this region, the disk emits a multi-color disk blackbody spectrum with a radial surface temperature dependence of \(\propto r^{-0.75}\) (Shakura and Sunyaev, 1973). The closer to the center, the stronger the emission becomes, and at a certain radius, the radiation pressure eventually overwhelms the self-gravitational pull of the disk.
This forms the second flow region, in which the accretion disk starts to puff up and deviate from the ordinary standard-disk regime. Within this radius, the accretion flow is advection-dominated, in which a significant fraction of the photons generated inside the disk is swallowed by the flow instead of being radiated from the disk surface. Although this region also emits a multi-temperature blackbody like the outer "standard disk" region, the radial temperature dependence is expected to be flatter, as briefly described in section 3.3, namely \(\propto r^{-0.5}\) (e.g., Watarai et al., 2000, 2001). In addition, a cool, optically thick wind is expected to be launched from this region due to the intense radiation pressure (e.g., Ohsuga et al., 2003; Kawashima et al., 2012). The edge of this outflow can form a photosphere that emits additional optically thick emission with a characteristic temperature.

Figure 12: Significance contours of the electron temperature vs photon index of nthcomp\({}_{\tt v}\). The confidence levels from the faint phase are shown in blue, while those from the bright phase are in red. Inner, middle, and outer solid lines represent the 68%, 90%, and 99.7% confidence levels, respectively. The crosses indicate the best-fit values.

If the central object is a strongly magnetized neutron star, its magnetic pressure eventually overcomes the gas pressure of the flow at a certain point closer to the central object. Hence, within this radius, the magnetic force restricts the accreting matter to move only along the magnetic field lines. This is the third characteristic region of the accretion flow, and it is called an accretion column (e.g., Basko & Sunyaev, 1975; Basko & Sunyaev, 1976). As the neutron star rotates, the entire column rotates with it, accounting for the pulsating emission component. Since the stable component is unrelated to the pulsation, the emission clearly originates from some region outside the accretion column, where the dipole magnetic field of the neutron star is too weak to capture the infalling matter. Therefore, it should be emission from either the outer or the inner region of the accretion disk. According to the spectral analysis in section 3.3.2, two types of thermal emission models, diskbb and diskpbb, gave acceptable fits. Since the single-temperature blackbody model failed to reproduce the spectra, we consider it unlikely that a photosphere of the outflow is the origin of the stable component. Although allowing the radial temperature profile to vary (using diskpbb) did not improve the fit, and we could not statistically distinguish whether the data favor the disk to be in the standard or the advection-dominated regime, we hereafter assume the former as a working hypothesis and proceed with further discussion. Since diskbb is an approximation of the standard accretion disk, its normalization gives an apparent inner-disk radius. The realistic radius \(R_{\rm in}\) can be calculated from the model normalization parameter \(N\) as \[R_{\rm in}=\left(\frac{ND_{10}^{2}\xi^{2}\kappa^{4}}{\cos\theta_{\rm i}}\right)^{1/2} \tag{4}\] (e.g., Makishima et al., 2000). Here, \(D_{10}\), \(\xi\), \(\kappa\), and \(\theta_{\rm i}\) represent the distance to the object in units of 10 kpc, a correction factor, the color-hardening factor, and the inclination angle of the disk, respectively. The factor \(\xi\) corrects for the difference in the inner-edge boundary condition between diskbb and the standard accretion disk.
The value is known to be \(\xi=0.412\) (Kubota et al., 1998) for a realistic standard accretion disk model that takes general-relativistic effects around a black hole into account. Due to the presence of the interrupting magnetic field, it is rather complex and challenging to estimate \(\xi\) for the disk around an accreting neutron star. Some theoretical works suggest that the accretion disk may resemble the \(\xi=0.412\) case; still, the value strongly depends on the details of the accretion flow (e.g., Nixon & Pringle, 2021). Since \(\xi\) for accreting neutron stars is thus unclear, we here assume the most popular value, \(\xi=0.412\) (Kubota et al., 1998), as in other studies of accreting neutron star low-mass X-ray binaries (e.g., Sakurai et al., 2014). The color-hardening factor \(\kappa\) is the ratio of the color temperature to the effective temperature. The value is known to lie in the range \(\kappa=1.5-2.0\) (Shimura & Takahara, 1995) in the sub-Eddington regime, and a recent numerical study suggests that the factor has only a weak dependence on the mass accretion rate up to the Eddington rate (e.g., Davis & El-Abd, 2019). Although the total mass accretion rate is in the super-Eddington regime for a neutron star, we here assume that the local mass accretion density is still at a sub-Eddington level for the accretion flow at the radii where the disk is formed. Hence, we employ the most commonly used value of \(\kappa=1.7\) (Shimura & Takahara, 1995). If we assume \(D\) to be the distance to NGC 300 (1.9 Mpc; Gieren et al., 2005), then the present results in Table 2 and Equation 4 give a radius of \(R_{\rm in}=720^{+220}_{-120}/\sqrt{\cos\theta_{\rm i}}\) km. Although the inclination angle of this system is still unknown, considering the detection of pulsations and the fact that no evidence of an eclipse is present, we may assume that the system is viewed nearly face-on (\(\theta_{\rm i}\sim 0\)). As discussed above, \(R_{\rm in}\) can be identified with the radius where the accretion flow shifts its behavior from a standard accretion disk to an accretion column, i.e., the magnetospheric radius \(R_{\rm M}\). In addition to the inner-disk radius, we can calculate the mass accretion rate at that radius by utilizing the theory of the standard accretion disk (Shakura & Sunyaev, 1973) as \[\dot{M}=\frac{2R_{\rm in}}{GM}L_{\rm disk}, \tag{5}\] where \(G\), \(M\), and \(L_{\rm disk}\) are the gravitational constant, the central object mass, and the disk luminosity, respectively. Assuming a typical neutron star mass of \(M=1.4M_{\odot}\) and substituting the values of \(L_{\rm disk}\) and \(R_{\rm in}\) from the present result, Equation 5 gives a mass accretion rate of \(\dot{M}\sim 1.3\times 10^{20}\) g sec\({}^{-1}\), which is \(\sim 50\) times the Eddington rate of a \(1.4M_{\odot}\) neutron star (\(\sim 2.8\times 10^{18}\) g sec\({}^{-1}\)). In the following sections, we derive several physical values from these \(R_{\rm in}\) and \(\dot{M}\) and give a plausible explanation for the observed characteristics of NGC 300 ULX-1.

#### 4.1.1 Comparison With the Photon-trapping and Spherization Radii

In super-critical accretion, two characteristic radii define where the accretion flow starts to deviate from the ordinary standard disk regime. One is the photon-trapping radius, at which photons generated inside the flow start failing to escape from the surface of the accretion disk due to the strong advection resulting from the high mass accretion rate.
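As a concrete illustration, the short script below evaluates Equations 4 and 5 with the numbers adopted here. The diskbb normalization is taken from Table 2, while the disk luminosity is our own assumed input, chosen to reproduce the quoted accretion rate; the script is a sketch of the arithmetic rather than a substitute for the spectral fit.

```python
import numpy as np

G, Msun = 6.674e-8, 1.989e33          # cgs units
M = 1.4 * Msun                        # typical neutron star mass

# Equation 4: true inner radius from the diskbb normalization.
N = 10.0                              # diskbb norm (Table 2, simpl*nthcomp column)
D10 = 190.0                           # 1.9 Mpc in units of 10 kpc
xi, kappa, cos_i = 0.412, 1.7, 1.0    # boundary correction, hardening, face-on
R_in_km = xi * kappa**2 * np.sqrt(N * D10**2 / cos_i)
print(f"R_in ~ {R_in_km:.0f} km")     # ~715 km, i.e. ~720 km as quoted

# Equation 5: mass accretion rate at R_in from the disk luminosity.
L_disk = 1.7e38                       # erg/s; assumed value reproducing Mdot below
Mdot = 2.0 * (R_in_km * 1e5) * L_disk / (G * M)
print(f"Mdot ~ {Mdot:.2e} g/s")       # ~1.3e20 g/s, ~50x Eddington for 1.4 Msun
```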
It is defined as the radius where the photon diffusion time scale becomes equivalent to the advection time scale (e.g., Kato et al., 2008; Ohsuga et al., 2003). The other is called the spherization radius, where the radiation pressure inside the flow overcomes the self-gravitational pull of the disk. According to several numerical calculations (e.g., Shakura and Sunyaev, 1973; Poutanen et al., 2007), we can approximate the latter as \(R_{\rm sp}\sim 3\dot{m}R_{\rm S}\), where \(R_{\rm S}\) and \(\dot{m}\) are the Schwarzschild radius and the ratio of the mass accretion rate to the Eddington rate (i.e., \(\dot{m}=\dot{M}/\dot{M}_{\rm edd}=GM\dot{M}/(6R_{\rm S}L_{\rm edd})\)), respectively. Some studies also show that \(R_{\rm sp}\) becomes nearly equivalent to the photon-trapping radius (e.g., Poutanen et al., 2007). Under the present accretion rate (\(\dot{M}\sim 1.3\times 10^{20}\) g sec\({}^{-1}\)), \(R_{\rm sp}\sim R_{\rm trap}\) becomes \(\sim 600\) km. Thus, these radii are smaller than \(R_{\rm in}\), which indicates that the dipole magnetic field of the neutron star truncates the accretion flow before it reaches the critical radius at which the disk would "puff up" due to the radiation pressure. This may explain why the spectrum of NGC 300 ULX-1 exhibits fewer emission lines than sources that are considered to form a large-scale-height disk at their center, such as Swift J0243.6+6124 (Bykov et al., 2022). A recent NuSTAR observation of the Galactic ULXP Swift J0243.6+6124 revealed that the source exhibits a significant Fe K\(\alpha\) emission line in its spectrum at a luminosity above the Eddington regime (\(10.1\times 10^{38}\) erg sec\({}^{-1}\); Bykov et al., 2022). Since the Fe line component was reproduced with a reflection model that varies only weakly over the pulse period, the authors proposed that the neutron star is embedded in a "well" formed by the inner edge of an inflated super-Eddington accretion disk. The central X-ray emission sweeps the inner wall as the neutron star rotates and generates the varying Fe fluorescence line. In contrast to Swift J0243.6+6124, we have found only a hint of He-like Fe K\(\alpha\) emission in the spectrum of NGC 300 ULX-1, and this can be due to the absence of such a geometrically thick disk. Since we expect the strong dipole magnetic field to collimate the wall-illuminating emission toward the magnetic pole to some extent, the solid angle of the irradiated surface may decrease drastically if the disk is geometrically thin. In fact, the reflection fraction of Swift J0243.6+6124 decreased simultaneously with the X-ray luminosity as the disk shifted toward the geometrically thin sub-Eddington regime (Bykov et al., 2022). Hence, the relatively large \(R_{\rm in}\) obtained from the assumptions made in section 4.1 seems convincing in terms of explaining the characteristics of its featureless spectrum.

#### 4.1.2 The Co-rotation Radius and the Observed X-ray Luminosity

Since the magnetic field also rotates as the neutron star spins, we can define a radius where its rotation speed becomes equivalent to the Kepler velocity at that distance from the center. This is called the co-rotation radius, which can be derived by balancing the spin period of the neutron star \(P\) against the Keplerian orbital period at radius \(r\) as \[R_{\rm c}=\left(\frac{GMP^{2}}{4\pi^{2}}\right)^{1/3}. \tag{6}\]
If the system has a magnetospheric radius larger than its co-rotation radius, the centrifugal force halts the accretion flow, and the entire system becomes X-ray dim. This phenomenon is called the propeller effect. Since ULX-1 is in an ultra-luminous phase, the effect is apparently not taking place in this system. Therefore, the system must satisfy \(R_{\rm M}<R_{\rm c}\) (in this case, \(R_{\rm in}<R_{\rm c}\)). For the current pulse period of ULX-1, \(P\sim 31\) sec, Equation 6 gives a co-rotation radius of \(R_{\rm c}=1.8\times 10^{4}\) km, which is significantly larger than \(R_{\rm in}=720\) km. Hence, ULX-1 is clearly not suffering from the propeller effect, which is consistent with its \(>10^{39}\) erg sec\({}^{-1}\) luminosity. Although the magnetospheric radius may vary depending on the mass accretion rate, the neutron star would have to spin up to a period of \(P=0.32\) sec or shorter to achieve \(R_{\rm c}<R_{\rm M}\) under these conditions. According to a recent observation, the source has kept spinning up, reaching \(P\sim 17\) sec while accreting matter at a rate similar to that of this observation (Vasilopoulos et al., 2019).

#### 4.1.3 An Estimation of the Magnetic Torque and Calculation of the Expected Spin-up Rate

The matter infalling through the accretion disk can transfer its angular momentum to the central neutron star by applying a torque onto the star through the magnetic "arm" that couples one to the other. This forces the neutron star to spin up or down in time, and NGC 300 ULX-1 was, in fact, spinning up at a rate of \(\dot{P}=-5.56\times 10^{-7}\) sec sec\({}^{-1}\) during the present observation (Carpano et al., 2018). The torque applied to a neutron star with a moment of inertia \(I\) and a spin angular frequency \(\omega\) can be written as \[I\dot{\omega}=-2\pi I\frac{\dot{P}}{P^{2}}=\dot{M}\sqrt{GMR_{\rm M}}\,n(\omega_{\rm fast}) \tag{7}\] (e.g., Ghosh and Lamb, 1979; Parfrey et al., 2016; Vasilopoulos et al., 2018), where \(\dot{P}\) is the spin-up/down rate and \(n(\omega_{\rm fast})\) is a function of the dimensionless variable \(\omega_{\rm fast}=(R_{\rm M}/R_{\rm c})^{3/2}\), known as the fastness parameter. For a slow rotator like NGC 300 ULX-1, namely \(\omega_{\rm fast}\ll 1\), \(n(\omega_{\rm fast})\) yields \(\sim 7/6\) (Wang, 1995). If we assume a typical neutron star (\(M=1.4M_{\odot}\) and a radius of \(\sim 10\) km) and a moment of inertia estimated from a representative equation of state and recent observational results (e.g., \(I=1.6\times 10^{38}\) kg m\({}^{2}\); Silva et al., 2021), Equation 7 can be rewritten in terms of \(\dot{P}\) as \[\dot{P}=-2.18\times 10^{-8}\times P_{30}^{2}\dot{M}_{\rm edd}\sqrt{R_{720}}\ {\rm sec}\ {\rm sec}^{-1}, \tag{8}\] where \(P_{30}\), \(\dot{M}_{\rm edd}\), and \(R_{720}\) are scaled parameters defined as \(P_{30}=P/(30\ {\rm sec})\), \(\dot{M}_{\rm edd}=\dot{M}/(1.8\times 10^{18}\ {\rm g}\ {\rm sec}^{-1})\), and \(R_{720}=R_{\rm M}/(7.20\times 10^{7}\ {\rm cm})\). Substituting the values obtained from the present analysis, Equation 8 gives a spin-up rate of \(\dot{P}=-1.9^{+1.4}_{-6.1}\times 10^{-6}\ {\rm sec}\ {\rm sec}^{-1}\). Considering the mass distribution of neutron stars and the error obtained by Silva et al. (2021), we here assumed that a \(\sim 30\%\) systematic error is present in \(M\) and \(I\). The derived value is consistent with the actually observed value of \(-5.56\times 10^{-7}\ {\rm sec}\ {\rm sec}^{-1}\) within the error.
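The order-of-magnitude estimates in sections 4.1.1-4.1.3 can be verified with a few lines of arithmetic. The sketch below uses the values adopted in the text, keeping the two different Eddington-rate normalizations exactly as quoted (\(2.8\times 10^{18}\) g sec\({}^{-1}\) for \(\dot{m}\) in section 4.1.1 and \(1.8\times 10^{18}\) g sec\({}^{-1}\) in Equation 8).

```python
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33       # cgs units
M = 1.4 * Msun
Mdot, R_M = 1.3e20, 7.2e7                       # g/s and cm, from section 4.1

# Section 4.1.1: spherization ~ photon-trapping radius, R_sp ~ 3 * mdot * R_S.
R_S = 2.0 * G * M / c**2
mdot = Mdot / 2.8e18                            # Eddington rate quoted for 1.4 Msun
print(f"R_sp ~ {3.0 * mdot * R_S / 1e5:.0f} km")  # ~570 km, i.e. ~600 km as quoted

# Equation 6: co-rotation radius for the P ~ 31 s pulse period.
P = 31.0
R_c = (G * M * P**2 / (4.0 * np.pi**2))**(1.0 / 3.0)
print(f"R_c ~ {R_c / 1e5:.1e} km")              # ~1.7e4 km >> R_in: no propeller effect

# Equation 8: expected spin-up rate in the slow-rotator limit (n ~ 7/6).
Pdot = -2.18e-8 * (P / 30.0)**2 * (Mdot / 1.8e18) * np.sqrt(R_M / 7.2e7)
print(f"Pdot ~ {Pdot:.1e} s/s")                 # ~ -1.7e-6, within the quoted errors
```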
#### 4.1.4 Comparison of the Energy Budget within the Magnetosphere and the Observed X-ray Luminosity

Let us test whether the observed X-ray luminosity is consistent with the value expected from the obtained mass accretion rate. As the intense magnetic field truncates the accretion disk at \(R_{\rm M}\), the total energy budget for the emission from the inner precessing flow should be equivalent to (or less than) the gravitational potential energy released between \(R_{\rm M}\) and the neutron star surface. If we assume free-fall accretion and a mass accretion rate that is constant at all radii, the observed luminosity is \[L_{\rm obs}=\frac{GM\dot{M}_{\rm in}}{\gamma}\left(\frac{1}{R_{\rm NS}}-\frac{1}{R_{\rm M}}\right) \tag{9}\] where \(R_{\rm NS}\), \(\dot{M}_{\rm in}\), and \(\gamma\) are the neutron star radius, the mass accretion rate at the inner edge of the truncated disk, and the beaming factor, respectively. Since the dipole magnetic field may collimate the emission pattern within \(R_{\rm M}\) to some extent, a calculation assuming isotropic radiation can overestimate the apparent luminosity. We took this effect into account by scaling the value with a dimensionless factor \(\gamma\) (\(0<\gamma\leq 1\)). Substituting the values derived from the present observation (\(\dot{M}_{\rm in}=1.3\times 10^{20}\ {\rm g}\ {\rm sec}^{-1}\), \(R_{\rm M}=7.20\times 10^{7}\ {\rm cm}\)) and assuming a typical neutron star (\(M=2.8\times 10^{33}\ {\rm g}\), \(R_{\rm NS}=10^{6}\ {\rm cm}\)), Equation 9 gives a luminosity of \((2.6/\gamma)\times 10^{40}\ {\rm erg}\ {\rm sec}^{-1}\), whereas the unabsorbed bolometric luminosity of the observed pulsating component is \(1.1\times 10^{40}\ {\rm erg}\ {\rm sec}^{-1}\). Here we assumed the distance to the ULXP to be the same as that to NGC 300 (1.9 Mpc; Gieren et al., 2005) and isotropic radiation. Although the true beaming factor is unknown, we may assume \(\gamma\) to be close to unity because the pulse profile is rather sinusoidal (Figure 2) and an observation showed that the intensity of the He II emission line in this system is consistent with emission with a minimal beaming effect (Binder et al., 2018). Accordingly, the observed luminosity is roughly consistent with that expected from the accretion rate derived under the assumption that the stable component originates from an accretion disk.

### Magnetic Field Estimation

Utilizing the obtained observational values, we estimate the strength of the dipole magnetic field of ULX-1, following a discussion similar to that of Walton et al. (2018). The magnetic field of a mass-accreting neutron star with a mass accretion rate \(\dot{M}\) can be expressed as \[B=\left(\frac{R_{\rm M}}{2.6\times 10^{6}\ {\rm cm}}\right)^{7/4}\dot{M}^{1/2} \tag{10}\] (Ghosh & Lamb, 1979; Lai, 2014; Fürst et al., 2017). Since \(R_{\rm in}\) is now observable and \(\dot{M}\) can be calculated via Equation 5 from the observed \(L_{\rm disk}\), we are able to derive the magnetic field \(B\) from Equation 10 by assuming a typical neutron star mass of \(M=1.4M_{\odot}\). In Figure 13, we present the relation between the mass accretion rate and the magnetic field derived from Equations 10 (black) and 5 (red). Each curve is drawn using the results obtained in the spectral analysis.

Figure 13: Estimated mass accretion rate and magnetic field of ULX-1. Dashed lines are the best estimates, derived from Equations 10 and 5 and the best-fit values of the spectral analysis. Solid lines indicate the width of the 90% confidence level. The best-estimate region is highlighted as the red-hatched region.
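As a numerical cross-check of Equations 9 and 10, the short sketch below evaluates both with the values quoted above. We take Equation 10 at face value (\(\dot{M}\) in g sec\({}^{-1}\), \(R_{\rm M}\) in cm, \(B\) in G, with the normalization constants absorbed into the \(2.6\times 10^{6}\) cm factor); the inputs are the text's adopted numbers, not independent measurements.

```python
import numpy as np

G = 6.674e-8                          # cgs
M, Mdot = 2.8e33, 1.3e20              # neutron star mass (g), accretion rate (g/s)
R_NS, R_M = 1.0e6, 7.2e7              # cm
gamma = 1.0                           # beaming factor, assumed close to unity

# Equation 9: potential energy released between R_M and the stellar surface.
L_obs = G * M * Mdot / gamma * (1.0 / R_NS - 1.0 / R_M)
print(f"L_obs ~ {L_obs:.1e} erg/s")   # ~2.4e40, i.e. (2.6/gamma)e40 within rounding

# Equation 10: dipole field implied by disk truncation at R_M.
B = (R_M / 2.6e6)**1.75 * np.sqrt(Mdot)
print(f"B ~ {B:.1e} G")               # ~3.8e12 G, inside the quoted 2-7e12 G range
```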
According to this figure, our estimate of the magnetic field strength of ULX-1 lies in the range \(2-7\times 10^{12}\) G (from the lowest to the highest tip of the red-hatched region). The estimated value is consistent with estimates made by other methods. For example, Vasilopoulos et al. (2018) estimated the field strength to be \(5\times 10^{12}\) G by utilizing the observed spin period evolution and a theory based on the torque induced on the neutron star by the accreting matter. Walton et al. (2018) found a possible absorption line feature that may be interpreted as a cyclotron resonance scattering feature. Assuming the feature originates from electron scattering, the authors concluded that the magnetic field is \(\sim 10^{12}\) G.

### Possible Picture of the Pulsating Accretion Flow

Finally, let us discuss the possible structure of the pulsating accretion flow in NGC 300 ULX-1 based on the variability seen in the present analysis. As described in sections 3.2 and 3.3, we have revealed that the variability of the pulsating component of this ULXP can be divided into at least two phases, the faint phase (\(0.0<\phi<0.2\) and \(0.5<\phi<1.0\)) and the bright phase (\(0.2\leq\phi\leq 0.5\)). In the faint phase, the pulsating component exhibited a hard continuum ranging from 0.5 keV to 25 keV with a rollover at \(\sim 7\) keV. Although we could roughly explain the spectrum with models that emit photons over a wide energy band with a characteristic cutoff temperature (multi-color disk blackbody, cutoff power law, and Comptonization), an extra component was required to explain the extended continuum at \(>10\) keV. The result is consistent with the previous studies of ULXs, including this ULXP (e.g., Carpano et al., 2018; Walton et al., 2018; Koliopanos et al., 2019), and we employed an extra model to resolve this discrepancy. Simply adding a power-law model could not remove the residuals at \(>10\) keV because the model extends without bound in both energy directions and conflicts with the stable component that dominates the flux in the lower energy band. Thus, we concluded that the data favor a model that sharply drops off at the lower energy end (e.g., the Rayleigh-Jeans side of a blackbody), and a model consisting of Comptonization plus its power-law scattering tail fits the data best so far. As discussed in the previous section, strongly magnetized neutron stars can form a magnetosphere, which forces the accreting matter to fall along the magnetic field. The magnetosphere creates a cylindrically shaped accretion flow in a region close to the magnetic pole, the so-called accretion column (e.g., Basko & Sunyaev, 1975; Basko & Sunyaev, 1976). Within this column, free-falling matter is shock-heated at a particular height above the stellar surface and emits high-energy X-ray photons. If the accretion rate gets close to the Eddington limit, these generated photons begin to escape from the sidewalls of the column rather than along the direction of the magnetic field, creating an emission pattern perpendicular to the magnetic field called a "fan beam" (Basko and Sunyaev, 1976). Some of these fan-beam photons can irradiate an optically thick region trapped around the boundary of the magnetosphere.
The incident photons will experience photoelectric absorption and multiple Compton scatterings within this region, and some eventually escape the system as reprocessed thermal emission (e.g., Mushtukov et al., 2017). Other fractions of the fan-beam photons may instead irradiate the neutron star surface close to the magnetic pole and be reflected as a secondary emission pattern called a "polar beam" (e.g., Trumper et al., 2013; Poutanen et al., 2013). In this case, photons are re-emitted parallel to the magnetic field with a spectrum harder than the original fan beam. Thus, the accretion flows around highly magnetized mass-accreting neutron stars can form multiple emission regions, and several observations of other ULXPs (e.g., Koliopanos and Vasilopoulos, 2018 for SMC X-3, and Bykov et al., 2022 for Swift J0243.6+6124) have supported such a complicated emission geometry. Although the sources were not in the ultra-luminous state, recent X-ray polarization observations of accreting neutron star binaries (Tsygankov et al., 2022 for Cen X-3, and Marshall et al., 2022 for 4U 1626-67) have also hinted at the presence of multi-zone pulsed emission. Some numerical studies (e.g., Mushtukov et al., 2017, 2019) suggest that, at rates well above the Eddington limit, the reprocessing region can extend further down toward the neutron star and obscure the central hard X-ray emitting region. As almost the entire region within the magnetosphere shifts to the reprocessing area, the accretion flow at this rate is no longer regarded as a column but rather as a "curtain" (for example, see Figure 1 in Mushtukov et al., 2017). It shields and reprocesses most of the hard X-ray photons from the central fan beam into thermal, blackbody-like emission. Since this reprocessing, optically thick curtain has a particular temperature gradient along the magnetic field, one can approximate its overall spectrum with a summation of blackbodies from the respective temperature regions. A numerical study estimates the lowest and highest temperatures of the blackbodies to be around a sub-keV level and a few keV for a neutron star accreting at \(5\times 10^{39}\) erg sec\({}^{-1}\) with a magnetic field of \(10^{13}\) G (Mushtukov et al., 2017). The estimated values are roughly consistent with the present observational result. The variable component spectrum exhibited breaks at both ends in energy, and we successfully explained the lower one with the Rayleigh-Jeans side of a 0.16 keV blackbody. Although the Comptonization model, not the multi-color blackbody expected in the theoretical study, gave the best fit to the data, we consider that the model simply happened to be the one that approximates the actual temperature gradient of this accretion flow. The smooth single-peaked pulse profile (Figure 3) suggests that the emission from the opposite pole is not reaching the observer, which also supports the idea that the reprocessing region covers most of the magnetosphere. According to several theoretical studies, including numerical simulations (e.g., Ohsuga et al., 2005; Ohsuga, 2007; Ohsuga and Mineshige, 2011; Kawashima et al., 2012; Jiang et al., 2014; Sadowski et al., 2014), an extreme accretion rate such as in ULXPs may drive the surrounding disk to launch optically thick and cool outflows.
Since this creates a tall funnel-like structure around the central source, it can easily interfere with the pulsating emission and change its spectral shape by Compton down-scattering or absorbing photons as the entire accretion curtain precesses around the spin axis. Kosec et al. (2018) reported the presence of blue-shifted absorption lines in the same data set with a \(\sim 3\sigma\) significance. The detection is evidence of highly ionized matter outflowing at \(\sim 20\%\) of the speed of light. However, the spectrum of the pulsating component does not change its shape as the intensity varies within each phase interval (see the CCPs in section 3.2). This behavior indicates that the emission appearance of the individual accretion curtain regions, which accounts for the intensity change in each phase interval, is somehow uniform and does not depend on the rotation phase, suggesting that only the solid angle of the emission region toward the observer changes as the curtain rotates with the neutron star. Furthermore, Kosec et al. (2018) estimated the column density of the outflow to be \(1.2^{+1.9}_{-0.6}\times 10^{23}\) cm\({}^{-2}\), which is relatively transparent to Thomson scattering (\(<10^{24}\) cm\({}^{-2}\)). According to several numerical simulations of super-critical accretion, such Thomson-thin gas can leak out from the accretion curtain (Abolmasov and Lipunova, 2022) and be accelerated up to \(10-40\%\) of the speed of light via intense radiation pressure and magnetic reconnection near the magnetic pole (Takahashi and Ohsuga, 2017). Considering the observational and simulation results above, we propose that, instead of the disk, the intense radiation of the rotating accretion curtain is blowing away the thin matter that leaks out near the neutron star. Since the strong magnetic field likely truncates the disk before it drives the geometrically and optically thick super-critical winds (see section 4.1.1), the central curtain emission is visible without being interfered with by the matter within the disk. Although the theoretical studies thus qualitatively describe most of the spectral behavior of the pulsating component below 10 keV, we still do not have a good explanation for the tail component above that energy. Due to the limited signal-to-noise ratio above 25 keV, it is rather challenging to determine whether the feature above 10 keV originates from the absorption line argued for by Walton et al. (2018) or from an extending power-law continuum. We expect future hard X-ray missions to resolve this problem. As the source reaches the bright phase (or the pulse peak), the component that used to be variable in the faint phase (blue solid line in Figure 14) stops increasing its intensity, and an alternative variable component (magenta solid line in Figure 14) with a similar but slightly harder continuum emerges. This strongly indicates that the accretion flow accounting for the pulsation consists of at least two representative emission regions. Furthermore, considering that the harder component is observable only in the bright phase (\(\sim 30\%\) of the pulsation period), the emission can be slightly collimated toward the magnetic pole axis. Such a multi-region structure has never been reported in the rotating accretion component of ULXPs. In the top half of Figure 14, we present a schematic cross-section drawing of a possible accretion geometry that explains our observational results.
As described above, an optically thick curtain is likely formed along the magnetic field due to the super-Eddington accretion. Since the magnetic field confines the accretion flow into a small region near the magnetic pole, the entire curtain creates a funnel structure, which obscures most of the neutron star and the central hard X-ray (fan beam) emitting region from the observer. Such curtains around magnetized neutron stars are also suggested by theoretical studies, including numerical simulations (e.g., Takahashi and Ohsuga, 2017; Mushtukov et al., 2019; Abarca et al., 2021; Inoue et al., 2023). The entire curtain exhibits a reprocessed multi-color blackbody spectrum, shown as the blue lines. The area near the stellar surface (the magenta region) is observable only when the opening cone faces toward the observer as the flow precesses. Thus, the hard variable component shown by the magenta line emerges in a limited phase range (\(0.2\leq\phi\leq 0.5\)), whereas the blue reprocessed component is present in the spectrum at all times. Since this emerging continuum is harder than that in the faint phase, it can be interpreted as emission from the polar beam. This is because the polar beam is a scattered-off fan beam component, in which the lower-energy part of the original emission experiences photoelectric absorption at the neutron star surface. The emission of a polar beam typically extends up to \(>10\) keV (e.g., Koliopanos et al., 2019; Trumper et al., 2013), whereas that of NGC 300 ULX-1 rolls over at a lower energy of \(\sim 7\) keV. Since the spectral shape is relatively similar to that of the variable component in the faint phase, the polar beam can also be reprocessed by the optically thick curtain, and we might have observed the remnant of its original hard spectrum. A recent two-dimensional simulation (Kawashima & Ohsuga, 2020) indicates that the super-critical accretion flow onto a highly magnetized neutron star can form a complex structure inside its opening funnel. Also, the pulse shape and pulse fraction depend on the structure of the funnel and the observer's viewing angle (Inoue et al., 2020). Although the expected spectrum from such a complex structure is yet to be available, the present result might support such simulations. Finally, we discuss how the present accretion scenario relates to the longer time-scale spectral evolution. According to the recent Swift/XRT and NICER monitoring campaigns on ULX-1, the source has continuously decreased its X-ray flux by an order of magnitude since this observation in 2016 (e.g., Ray et al., 2019; Vasilopoulos et al., 2019; Ng et al., 2022). Although the cause of the dimming is still unclear, Vasilopoulos et al. (2019) suggested that obscuration by a radiation-pressure-dominated disk and its outflow, rather than a decrease in the intrinsic mass accretion rate, can explain the phenomenon. This is because the source kept spinning up at the same rate as in the brightest era, meaning that the central neutron star continuously gained an equivalent amount of accreting matter, i.e., spin-up torque, in the dimming phase. Ng et al. (2022) also supported this scenario by reporting spectral softening and occasional detections of possible partially covered disk emission during this X-ray flux decrease. As we calculated in section 4.1.1, the estimated \(R_{\rm M}\) of ULX-1 is comparable to or marginally larger than \(R_{\rm sp}\) (or \(R_{\rm trap}\)) in this particular observation.
Therefore, a slight increase in the mass accretion rate may result in the formation of a geometrically thick radiation-pressure-dominated disk region that launches thick outflows. Given that the intrinsic mass accretion rate has remained comparable to, or possibly higher than, that of the present data, we suggest that a radiation-pressure-dominated disk formed during the dimming phase due to a further decrease of \(R_{\rm M}\) (or an increase of \(R_{\rm sp}\)) relative to the 2016 observation, and that such a thick disk has obscured the central neutron star region as it precesses across the line of sight due to a physical mechanism such as Lense-Thirring precession (e.g., Middleton et al., 2017), as Vasilopoulos et al. (2019) and Ng et al. (2022) proposed.

## 5 Summary and Conclusions

We reanalyzed X-ray data sets of NGC 300 ULX-1 taken with XMM-Newton and NuSTAR on 2016/12/16. In addition to the classical phase-resolved spectral analysis, we newly employed a method developed in AGN studies, called the C3PO method. As previous studies have reported, the pulse profile of NGC 300 ULX-1 is relatively sinusoidal and peaked at one particular phase throughout the observed energy band, suggesting a single-zone emission region from its appearance. However, the C3PO method has revealed that the pulsating emission varies differently within \(\pm 15\%\) of the pulsation peak. The result suggests that, instead of being a single hot zone, the pulsating accretion flow consists of at least two representative emission regions. Accordingly, we divided the entire data set into these two phase intervals and performed further C3PO analysis for each separately. For each phase interval, the C3PO method provided an extra pair of spectra that represent the actual shapes of the components composing the overall spectrum of NGC 300 ULX-1: one coincides with the pulsation and the other does not. Thanks to this extra spectral information, which was not available in the previous studies, we have successfully put more stringent constraints on the spectral modeling and resolved several model degeneracies that have been reported for the same data set. The stable component is explained with a geometrically thin standard disk model with a \(0.25\pm 0.03\) keV peak temperature and a \(720^{+220}_{-120}\) km inner radius. As for the variable component, the best-fit model is a combination of Comptonization of a \(0.14\pm 0.04\) keV blackbody and its extra up-scattered power law.

Figure 14: Schematic pictures of the possible accretion geometry (top) and spectral variability (bottom) of NGC 300 ULX-1.

It disfavors some models that were successful in the previous studies, especially those extending infinitely toward the lower energy band. Furthermore, its continuum is found to be slightly harder in the brighter phase. The obtained disk parameters gave a mass accretion rate of \(1.3\times 10^{20}\) g sec\({}^{-1}\), which is \(\sim 50\) times the Eddington rate of a \(1.4M_{\odot}\) neutron star. Using this accretion rate and assuming the disk inner radius to be equivalent to the magnetospheric radius, we estimated the spin-up rate, the X-ray luminosity of the rotating flow, the spherization radius, and the dipole magnetic field strength of this system. All of these values are consistent with the observed values, numerical simulations, and estimates made by other independent methods.
Considering the results, we have proposed that the system consists of a geometrically thin accretion disk truncated by a \(2-7\times 10^{12}\) G magnetic field and an inner precessing accretion flow exhibiting a hard continuum with a \(1.1\times 10^{40}\) erg sec\({}^{-1}\) luminosity. The latter flow likely forms a funnel-like geometry with two representative temperature regions, and its inner hot part is observed only when the opening cone points toward the observer.

The authors would like to thank all of the members of the XMM-Newton and NuSTAR teams for their devotion to instrumental calibration and spacecraft operation. This research has been supported by JSPS KAKENHI grant numbers JP19K21054, JP19K21884, JP20H01941, and JP20H01947, in part by JSPS Grant-in-Aid for Scientific Research (A) JP21H04488 and the Multidisciplinary Cooperative Research Program in CCS, University of Tsukuba. This work was also supported by MEXT as "Program for Promoting Researches on the Supercomputer Fugaku" (Toward a unified view of the universe: from large-scale structures to planets, JPMXP1020200109), and by the Joint Institute for Computational Fundamental Science (JICFuS).

Facilities: XMM, NuSTAR.

Software: NUMPY (Harris et al., 2020), ASTROPY (Astropy Collaboration et al., 2018), VEUSZ ([https://veusz.github.io/](https://veusz.github.io/)), MATPLOTLIB (Hunter, 2007), ROOT ([https://root.cern/](https://root.cern/)), HEASOFT ([https://heasarc.gsfc.nasa.gov/docs/software/heasoft/](https://heasarc.gsfc.nasa.gov/docs/software/heasoft/)), and SAS ([https://www.cosmos.esa.int/web/xmm-newton/download-and-install-sas](https://www.cosmos.esa.int/web/xmm-newton/download-and-install-sas))
2309.08521
Uncertainties too large to predict tipping times of major Earth system components from historical data
One way to warn of forthcoming critical transitions in Earth system components is using observations to detect declining system stability. It has also been suggested to extrapolate such stability changes into the future and predict tipping times. Here, we argue that the involved uncertainties are too high to robustly predict tipping times. We raise concerns regarding (i) the modeling assumptions underlying any extrapolation of historical results into the future, (ii) the representativeness of individual Earth system component time series, and (iii) the impact of uncertainties and preprocessing of used observational datasets, with focus on nonstationary observational coverage and gap filling. We explore these uncertainties in general and specifically for the example of the Atlantic Meridional Overturning Circulation. We argue that even under the assumption that a given Earth system component has an approaching tipping point, the uncertainties are too large to reliably estimate tipping times by extrapolating historical information.
Maya Ben-Yami, Andreas Morr, Sebastian Bathiany, Niklas Boers
2023-09-15T16:31:46Z
http://arxiv.org/abs/2309.08521v2
# Uncertainties too large to predict tipping times of major Earth system components from historical data

###### Abstract

Observations are increasingly used to detect critical slowing down (CSD) in potentially multistable components of the Earth system in order to warn of forthcoming critical transitions in these components. In addition, it has been suggested to use the statistical changes in these historical observations to extrapolate into the future and predict the tipping time. Here we argue that this extrapolation is too sensitive to uncertainties to give robust results. In particular, we raise concerns regarding (1) the modelling assumptions underlying the approaches to extrapolate results obtained from analyzing historical data into the future, (2) the representativeness of individual time series for the variability of the respective Earth system components, and (3) the effect of uncertainties and preprocessing of the employed observational datasets, with focus on non-stationary observational coverage and the way gaps are filled. We explore these uncertainties both qualitatively and quantitatively for the Atlantic Meridional Overturning Circulation (AMOC). We argue that even under the assumption that these natural systems have a tipping point that they are getting closer to, the different uncertainties are too large to allow estimating the time of tipping based on extrapolation from historical data.

## Introduction

In recent years there has been increasing focus on Earth system components that can potentially undergo abrupt transitions in response to anthropogenic forcing. In particular, research has focused on so-called "tipping elements", which are systems that have been suggested to exhibit bistability, implying that they could abruptly transition between multiple stable equilibrium states when a critical forcing threshold is passed (McKay et al., 2022; Boers et al., 2022). Such systems are, for example, the Amazon rainforest, the Antarctic ice sheets, the Greenland ice sheet (GIS), and the Atlantic Meridional Overturning Circulation (AMOC). The collapse of these tipping elements would have severe impacts on the climate from local to regional scales, and their research is thus of high priority. However, both the probability of tipping and the degree of warming under which it might happen remain highly uncertain for these tipping elements (IPCC, 2022; Wang et al., 2023). This is in part due to the lack of such abrupt transitions in the recent observational records, and in part due to the difficulty of modelling such non-linear systems using comprehensive coupled climate models. However, paleoclimate evidence suggests that abrupt transitions in the climate system have occurred in the longer-term past (Boers et al., 2022). Despite the lack of critical transitions in the observational record, historical observations can still be used to inform us about changes in the stability of Earth system components. When changes in forcing lead multistable systems to approach a transition to a different state, they typically exhibit so-called critical slowing down (CSD), in which their response to perturbations changes in a characteristic manner (Dakos et al., 2008). Statistical changes indicating CSD have been identified in many systems, including the GIS (Boers and Rypdal, 2021), the AMOC (Boers, 2021; Michel et al., 2022), the Amazon rainforest (Boulton et al., 2022), as well as other parts of global vegetation (Smith et al., 2022a,b).
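For concreteness, the CSD indicators referred to throughout are typically estimated as sliding-window variance and lag-1 autocorrelation; a minimal sketch of such an estimator (our own illustration, not the exact pipeline of the cited studies, which additionally detrend and deseasonalize the data) is:

```python
import numpy as np

def csd_indicators(x, window):
    """Sliding-window variance and lag-1 autocorrelation (AC1).
    Increasing trends in both are the classical signatures of
    critical slowing down (Dakos et al., 2008)."""
    var, ac1 = [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window] - np.mean(x[i:i + window])
        var.append(np.var(w))
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)
```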
As CSD occurs when a system is becoming less stable and approaching a critical transition, the identification of these changes can be seen as a warning of approaching transitions, and they are therefore often called early warning signals (EWS) (Scheffer et al., 2009). It may seem natural to take an extra step and use the statistical changes in historical data not only to show an ongoing destabilization, but also to extrapolate into the future and predict a time of tipping. For example, Ditlevsen and Ditlevsen (2023, hereafter DD23) recently predicted that the AMOC would tip around mid-century. They introduced a novel and sophisticated maximum likelihood estimator (MLE) based approach to predict the tipping time and applied it to a sea-surface temperature (SST) based fingerprint of the AMOC. Whilst such approaches are theoretically interesting, their robustness for extrapolating tipping times has come under question. Though the utility of such predictions, if robust, would be undeniable, the problem lies in the multiple levels of uncertainty inherent to such extrapolations from historical data. In this work, we focus on the example of DD23's prediction of the AMOC tipping time to demonstrate the effects of three types of uncertainties: (1) the modelling assumptions underlying DD23's MLE method, (2) the representativeness of the SST fingerprint for the AMOC, and (3) the effect of uncertainties and preprocessing in SST datasets, with focus on non-stationary observational coverage and the way gaps are filled (Ben-Yami et al., 2023). Whilst it is impossible to fully quantify these uncertainties, we use DD23's MLE-based method to give quantitative examples of how the different factors influence the predicted tipping time. Although the quantitative results of this work are specific to the AMOC, these types of uncertainties will be present in any attempt to extrapolate a future tipping time from historical data. We note that Boers and Rypdal (2021, hereafter BR21) found that parts of the central-western Greenland ice sheet may already have passed their critical threshold. BR21 reconstructed the regional height changes from ice-core-derived melt rate data from the Jakobshavn Isbrae glacier (Trusel et al., 2018) and derived the potential landscape of the ice-sheet height by fitting a previously introduced non-linear model (Levermann and Winkelmann, 2016) to the reconstructions. While the critical slowing down detected in the fluctuations around the fixed point of the fitted model suggests that the stability of the central-western Greenland ice sheet has been declining, one might be tempted to interpret the position of the bifurcation point of the fitted model (red vertical dashed line in Fig. 3 of BR21) as a sign that the critical threshold in terms of regional air temperatures may already have been crossed in the last years. However, this latter point is subject to large uncertainties stemming from the simplifying model assumptions and the height reconstructions based on annual ice-core-derived melt rates. The estimated bifurcation point of the simple model should thus not be understood as an estimate of the actual critical threshold in regional air temperatures, and should certainly not be translated into a time of tipping (Boers and Rypdal, 2021).

## 3 Modelling assumptions

DD23 present a novel, innovative and statistically optimal approach for the extrapolation of the time of AMOC tipping via maximum-likelihood methods (for details, see Ditlevsen and Ditlevsen (2023)).
This approach explicitly builds on the assumption that the AMOC is well-modelled as a one-dimensional system undergoing a fold bifurcation in normal form, forced by white Gaussian noise. DD23 give a confidence interval of their assessment of the AMOC tipping time using parametric bootstrapping on the aforementioned model. However, the uncertainties associated with the mechanistic simplification of the system have not been investigated. We find that for a selection of established conceptual AMOC models, the estimation of the time of tipping is biased toward earlier times and, in some cases, constitutes a false alarm.

### Suitability of the normal-form fold bifurcation model

There has long been a discussion about whether the AMOC, when investigated as a complex system under external forcing of e.g. global mean temperature (though regional freshwater forcing would be a more appropriate control), exhibits multiple stable states (Stommel, 1961; Rahmstorf et al., 2005; Lohmann et al., 2023). Transitions between such stable states could be bifurcation-induced, and thus abrupt and irreversible. The so-called fold bifurcation constitutes a minimal example of such behaviour. For instance, the conceptual Stommel-Cessi model of the AMOC features a fold bifurcation (Cessi, 1994). Taking this reasoning another step further, DD23 propose that the one-dimensional observable of AMOC strength is well represented by the following model:

\[\frac{\mathrm{d}X_{t}}{\mathrm{d}t}=-A(X_{t}-m)^{2}-\lambda,\]

where \(\lambda\) is the external control parameter. They argue this by remarking that all fold bifurcations are topologically equivalent to this model. This, however, is only true locally, in potentially very close proximity to the bifurcation point, while the proposed estimation method relies on the assumption that it would hold at arbitrary distance from the bifurcation point. Based on the arguments brought forth, we do not see a direct constraint on the dynamics away from the tipping point. DD23 further motivate the model choice by arguing for a good fit of model data from Rahmstorf et al. (2005) to the posited square-root behaviour \(X\sim\sqrt{|\lambda|}\). Given that the fitting constraints on the square-root curves are weak, allowing for "noise-induced tipping" at any point along the curve, this good fit, which is not quantified, might be expected. We also note that these visual fits are based on a comparably old collection of simulations, and the square-root fit does not seem to hold for more recent model data (see Figure 4 in Lohmann et al. (2023)). When performing the same estimation of tipping time according to DD23 on data of the closely related Stommel-Cessi AMOC model, one obtains a considerable bias in tipping time estimates towards earlier times (Figure 1(c)). This should be seen as an indication that even models with an underlying fold bifurcation structure, yet not following the very specific normal form model equation above, produce time series which, when applying the method of DD23, yield biased tipping time estimates.

### Linearity of forcing

DD23 introduce the notion of a multistable AMOC forced by a single linearly increasing external control parameter \(\lambda\). We argue that this does not give a complete picture of anthropogenic forcings and their effects. Human-induced global warming is not the only significant factor altering the conditions supporting the AMOC.
Several studies show that radiative anomalies due to aerosol pollution likely attenuated the AMOC weakening of the past decades (Hassan et al., 2021; Menary et al., 2020). Such time-varying influences cannot be represented by a linearly evolving control parameter. Moreover, the GMT forcing itself influences the AMOC through many different, nonlinear mechanisms, e.g. via thermal expansion, a strengthening hydrological cycle, sea-ice and ice sheet melt (with the influence of the latter in the historical period still under debate) (Swingedouw et al., 2022; Devilliers et al., 2021). It is, therefore, not sufficient to observe an approximately linear ramping in the logarithm of CO\({}_{2}\) emissions to argue for linear forcing. As DD23 say, the effective freshwater flux would serve as a better forcing parameter (Rahmstorf et al., 2005; Hofmann and Rahmstorf, 2009), and there is no evidence that it would linearly depend on GMT; e.g., Greenland runoff increases nonlinearly over time (Bamber et al., 2018; Trusel et al., 2018). While the conceptual view of only one external control parameter evolving linearly towards a critical value is sufficient to describe the sign of trends in AMOC strength and associated changes in stability, we argue that it is too simplistic to allow for an estimate of the time of collapse. Even if the AMOC is indeed approaching a fold bifurcation, the possibility of the forcing being non-linear means any extent of bias is possible in the estimation of tipping time, as extrapolation would be unwarranted. This is particularly relevant with respect to the future evolution of the control parameter in the case of the AMOC, since the only scenario under which log(CO\({}_{2}\)) would continue to increase linearly is the extreme SSP5-8.5 (Figure S1); a scenario in which there is not only no climate mitigation but also a rapidly growing fossil fuel-based economy (Riahi et al., 2017).

### Time scales, internal variability, and assumptions on the noise

The AMOC is known to exhibit pronounced internal variability (Latif et al., 2022) that is, in contrast to DD23's assumption, not well-represented by white noise. To address the issue of non-stationary, non-white forcing for CSD analysis, the restoring rate \(\lambda\) was estimated in Boers (2021) in a way that is robust against changes in the correlation strength of the driving noise. Moreover, the AMOC exhibits decadal internal variability that is also not represented by DD23's assumption of white noise forcing.

Figure 1: Tipping time calculation for data stemming from different conceptual models. The histograms depict the estimations of the time of tipping \(t_{e}\) for 1000 model runs (see Methods for model equations). The leftmost column relates to data obtained from the fold normal form model with linear forcing parameter introduced in equations (1) and (2) of DD23. Forced by white noise, this represents the intended estimation setting. The non-stationary red noise case is added in the second column as an alternative model setting with practical relevance. In the third column, data from the Stommel-Cessi conceptual AMOC model forced by white noise is analysed. In the fourth column, a linear model, without bifurcation but with a trend in the mean state, is considered. This could represent centennial internal variability or alternatively anthropogenic forcing. Non-stationary red noise causes slowing down, leading to false alarms in the estimated tipping times.
The estimations in (b)-(d) are biased because the data did not stem from the exact intended model expected by the MLE method of DD23. In contrast, the large estimator spread on data obtained from the intended model with late tipping time \(t_{e}=2150\) (teal histogram in (a)) seems to be inherent to the statistical procedure and further complicates a reliable assessment of the time of tipping via this method. The QQ plots beneath each histogram give the model fit of the derived maximum likelihood model to the data of one sample. A comparison of the QQ plots suggests that time series stemming from alternative models, including the linear one, are similarly well modelled by the proposed fold normal form model with white noise forcing as the AMOC time series of DD23 (cf. the corresponding figure in Ditlevsen and Ditlevsen (2023)). Therefore, the QQ plot introduced by DD23 carries little information on whether biases due to alternative generating models are present in the respective estimation at hand.

Before the commencement of the destabilisation at time \(t_{0}\), DD23 assume the AMOC to resemble paths of a stationary stochastic process \(\mathbf{X}\) defined by

\[\mathrm{d}X_{t}=-2\sqrt{|\lambda|A}(X_{t}-m)\mathrm{d}t+\sigma\mathrm{d}B_{t},\]

where \(\lambda\) is the external control parameter and \(A\) is a time scale parameter. They estimate a value of \(2\sqrt{|\lambda|A}\approx 3.1\,\mathrm{year}^{-1}\), corresponding to a characteristic correlation time of \(0.32\) years. In contrast, frequency spectra of AMOC evolutions in General Circulation Models (GCMs) show strongest variability around \(5-100\) years (Figure 6 in Medhaug and Furevik (2011)). Such pronounced additional variability on long time scales is not captured by the above Ornstein-Uhlenbeck model. Internal variability independent from the model noise will thus cause large excursions from the transient mean. The proposed method is not equipped to incorporate the impact of these excursions on the estimated tipping time, since they may be misinterpreted as trends towards a tipping point (Figure 1(d)). Moreover, for quantitative extrapolations as attempted by DD23, any simplifying assumptions on the driving noise would need to be carefully checked. Since disturbances to the equilibrium state are themselves of atmospheric and oceanic origin, time-correlation of the noise should be taken into consideration, e.g. via a red noise model (Boers, 2021). Non-stationary red noise present in the system can incur substantial biases in the estimation of the tipping time and even result in false alarms of an approaching bifurcation (Figure 1(b) and (d)).

## Representativeness of the SPG index for the AMOC

The above modelling uncertainties arise when one assumes that the time series used by DD23 is a direct representation of the AMOC. DD23 use an AMOC fingerprint based on the sea-surface temperatures (SSTs) in the sub-polar gyre (SPG). This fingerprint is based on the assumption that the so-called "warming hole" in the North Atlantic, an area which is cooling as opposed to the global warming trend detected essentially everywhere else, is caused by a weakening of the AMOC (Rahmstorf et al., 2015; Drijfhout et al., 2012; Menary and Wood, 2018; Liu et al., 2020).
The fingerprint itself (defined as the SSTs averaged over the SPG area minus the global SST mean, hereafter SPG index) has been supported by two lines of evidence: first, across CMIP6 models the historical trends in the SPG index correlate with the trends in the AMOC streamfunction (at various latitudes) (Caesar et al., 2018; Menary et al., 2020); second, the SPG index time series itself is correlated with the AMOC streamfunction time series at various lag times (depending on the study either the maximum of the streamfunction is taken, or its value at different latitudes) (Rahmstorf et al., 2015; Jackson and Wood, 2018; Zhu et al., 2023; Little et al., 2020). However, both these correlations have been shown to be highly non-stationary, and are sensitive to the time period, to the forcing scenario and to the underlying processes (Little et al., 2020; Jackson and Wood, 2018). This is likely due to the fact that the warming hole is not driven solely by the AMOC, but is a result of both changes in ocean heat transport and changes in atmospheric forcing (Li et al., 2022; He et al., 2022; Ferster et al., 2022; Ghosh et al., 2022). This partial connection of the SPG to the AMOC is supported by recent studies using the Overturning in the Subpolar North Atlantic Program (OSNAP), which have shown that the Labrador Sea and the SPG play a smaller role in North-Atlantic deep water formation than previously thought (Lozier et al., 2019; Chafik et al., 2022). The non-stationarity of the correlation between AMOC streamfunction and the SPG index does not imply that the SPG index is not useful for studying the stability of the AMOC, as the SPG still plays a crucial role in the AMOC and would thus be sensitive to its stability changes (Swingedouw et al., 2022; Menary et al., 2015; Sun et al., 2021). Signs of CSD in the SPG thus still likely indicate a destabilization of the AMOC. However, the non-stationarity of the fingerprint does reduce its usefulness for exact predictions such as those done by DD23. We think that for predictive purposes, including those based on extrapolation, it is problematic to fit a simple bifurcation model representing the AMOC to a fingerprint whose correlation with the AMOC changes over the time period under consideration. To obtain a better representation of the AMOC, different proposed fingerprints should be compared. The uncertainty in fitting a model to the SPG index alone can then be inferred by comparing the results of the CSD analysis and extrapolation for the different fingerprints. There is a long list of identified AMOC fingerprints in the literature, many of them as robust and commonly used as the SPG index (Jackson and Wood, 2020; Zhu et al., 2023). When one applies the same analysis as DD23 to one of these other fingerprints, the so-called dipole fingerprint (Roberts et al., 2013), the estimated tipping time varies considerably, and sometimes even goes to infinity (Figure 3 and Tables S2 and S1). As there is currently no reason to believe that one of these fingerprints represents the AMOC better than any other, this spread in the estimated tipping time shows that there is substantial uncertainty in such an estimation. It should also be noted that there has been increasing evidence for the SPG as a potential tipping point separate from the AMOC (Sgubin et al., 2017; Swingedouw et al., 2021; McKay et al., 2022).
Although this SPG collapse occurs only in some coupled climate models under future warming scenarios, these models are in fact amongst the best in representing the stratification in the SPG (Sgubin et al., 2017; Swingedouw et al., 2021). We cannot, therefore, disregard the possibility that CSD in the SPG index is in reality an indication of an approaching SPG tipping point and not an AMOC tipping point. The only way to avoid this uncertainty is to include additional AMOC fingerprints which do not rely on SPG SSTs (Boers, 2021).

## Effect of dataset preprocessing and underlying uncertainties / non-stationary coverage

In addition to the uncertainties arising from the modelling approach and the choice of AMOC fingerprint, there are also substantial uncertainties in CSD indicators that originate in the dataset preprocessing steps and in the non-stationarity of SST data coverage (Ben-Yami et al., 2023).

Figure 2: **Variance and autocorrelation for different SST datasets and AMOC fingerprints.** The rows from top to bottom show the monthly AMOC fingerprints (a-c), variance (d-f) and autocorrelation (g-i). The columns from left to right show the values for the fingerprint from DD23 (left), the classical fingerprint from Caesar et al. (2018) (middle), and the AMOC dipole fingerprint (Jackson and Wood, 2020) (right). The dipole is defined as averaged SSTs in 45-80\({}^{\circ}\)N, 70\({}^{\circ}\)W-30\({}^{\circ}\)E minus SSTs in 0-45\({}^{\circ}\)S, 70\({}^{\circ}\)W-30\({}^{\circ}\)E. The time series are shown for four different datasets: HadISST1 (turquoise), ERSSTv5 (orange), HadCRUT5 (blue) and HadSST4 (pink). In figures a-c the AMOC fingerprints are offset by 3K from each other for better visibility. All CSD indicators are computed using a window size of 50 years. Note that the variance shows overall decreases in most cases, partly due to the non-stationary data coverage (Ben-Yami et al., 2023). In addition, note that the calculation of the SPG-2xGMT fingerprint in this work is slightly different than in DD23 (see Methods).

The number of SST observations has increased exponentially since 1850, and in earlier times the spatial coverage was highly inhomogeneous. This is dealt with in observational datasets in one of two ways: either the monthly grid cells without observations are left empty, or some method is used to fill the gaps. Both of these approaches can affect the statistics of the data: in the former, averaging more datapoints in later times artificially reduces the variance, and in the latter the infilling method can affect the statistics by e.g. artificially smoothing earlier times. DD23 use the HadISST1 dataset, which has been infilled using reduced space optimal interpolation (RSOI) (Rayner et al., 2003). RSOI uses a set of global empirical orthogonal functions (EOFs), and includes regularizing terms when fitting the EOFs to the data. This is done to avoid spurious large amplitudes in data-scarce regions and times, but means that the fit tends to the zero anomaly where there is no information. Although non-interpolated in-situ data is subsequently added to the RSOI reconstruction, this only improves the variance where there is enough data - in data-scarce times and regions the variability is damped by RSOI. Together with other steps of the preprocessing, this causes the variance in HadISST1 to artificially increase (see Rayner et al. (2003); Ben-Yami et al. (2023)).
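As a concrete illustration of how such SST-based fingerprints are computed, the following sketch shows the generic recipe (our simplified stand-in, not the exact code of DD23 or Ben-Yami et al. (2023); the SPG mask construction and the sea-ice masking are only indicated):

```python
import numpy as np

def area_mean(sst, lat):
    """Latitude-weighted spatial mean of a gridded SST field.

    sst : array (time, lat, lon) of monthly SSTs; NaN marks missing or
          ice-masked grid cells
    lat : 1D array of grid-cell latitudes in degrees
    """
    w = np.cos(np.deg2rad(lat))[None, :, None] * np.ones_like(sst)
    w = np.where(np.isnan(sst), 0.0, w)          # ignore missing cells
    return np.nansum(sst * w, axis=(1, 2)) / w.sum(axis=(1, 2))

def spg_minus_2gmt(sst, lat, spg_mask):
    """DD23-style fingerprint: SPG-mean SST minus twice the global-mean SST.

    spg_mask : boolean (lat, lon) mask of the subpolar-gyre region
               (Caesar et al. (2018) definition; construction not shown).
    Subtracting the global mean only once gives the classical index instead.
    """
    sst_spg = np.where(spg_mask[None, :, :], sst, np.nan)
    return area_mean(sst_spg, lat) - 2.0 * area_mean(sst, lat)

# Toy demo with random numbers standing in for a real dataset such as HadSST4
rng = np.random.default_rng(0)
sst = rng.standard_normal((120, 36, 72))         # 10 years monthly, 5-degree grid
lat = np.linspace(-87.5, 87.5, 36)
lon = np.linspace(-177.5, 177.5, 72)
spg = (lat[:, None] > 45) & (lat[:, None] < 62) & (np.abs(lon[None, :] + 35.0) < 20.0)
print(spg_minus_2gmt(sst, lat, spg).shape)       # -> (120,)
```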
We first calculate the same AMOC fingerprint that DD23 use from four SST datasets: the infilled HadISST1 (Rayner et al., 2003), the non-infilled HadSST4 (Kennedy et al., 2019), HadCRUT5 (Morice et al., 2021), which uses a Gaussian-process-based statistical method for infilling, and ERSSTv5 (Huang et al., 2017), which uses empirical orthogonal teleconnections for infilling. All of these methods result in different variance and autocorrelation time series (see Figure 2). The variance is especially affected by the different preprocessing methods of the different datasets - only in HadISST1 does the variance increase over the whole time period - and as noted above, this increase is at least partly artificial. It is therefore not possible to determine the actual variance trend of North Atlantic SSTs before the 1970s. Whilst the autocorrelation and the restoring rate are arguably still functional indicators given the dataset properties (see Ben-Yami et al. (2023)), DD23's analysis relies on the variance, and does not take these uncertainties and especially the non-stationary observational coverage and different gap filling procedures into account.

Figure 3: **Range of tipping times.** Tipping times estimated using DD23's MLE method, both with and without optimized penalization (see Section S2 of DD23). The best estimate of the tipping time is calculated for the classical SPG index (plus), the fingerprint used by DD23 (circle) and the dipole index (star). We use three different observational SST datasets for this analysis: HadISST1 (turquoise), ERSSTv5 (orange), and HadCRUT5 (blue). In addition, the blue violins show the tipping times for each member of the 200-member uncertainty ensemble of HadCRUT5, both with (unc pen opt) and without (unc pen=0) penalization. The plotted values can be found in Tables S2 and S1. Note that out of the 200 members for the dipole fingerprint, in 47 members the tipping time went to infinity when the optimization was attempted, and they are not included here.

If one applies DD23's MLE method (their "method 2") to their AMOC fingerprint calculated from other SST datasets, using the code provided by DD23, one gets tipping times ranging from the 2000s for HadISST1 to the 3000s for ERSSTv5. If this analysis is extended to different AMOC fingerprints (see above), the tipping times range from the 2000s to beyond the year 4700 for ERSSTv5 (Table S2). Finally, if we apply the method to HadCRUT5's 200-member uncertainty ensemble, we get multi-millennial uncertainty ranges with, in some cases, almost a quarter of the tipping times going to infinity (Table S1). This shows that the fingerprint definition and the dataset choice can cause huge uncertainties.

## Conclusions

With the AMOC as an example, we have discussed multiple sources of uncertainty in the prediction of future tipping times. In particular, for DD23:

1. The modelling assumptions underlying their approach for tipping time predictions are too simple and do not necessarily hold for the AMOC. We have shown that breaking these assumptions by e.g. changing the dynamical model for the AMOC or the model for the forcing introduces large biases in the tipping time estimation (Figure 1).
2. Extrapolating statistical results obtained from historical data to predict the time of tipping, DD23 make too strong assumptions on the stationarity of past trends.
3. The connection of the SST-based SPG fingerprint to the AMOC is non-stationary, and therefore is problematic for exact predictions of tipping times.
Using different SST fingerprints with the HadISST1 dataset can change the predicted tipping time by 50 years with optimal penalization, and 70 years with no penalization (Table S2).
4. The inherent uncertainties of SST datasets and the preprocessing methods used to fill in missing data can be non-stationary, and thus affect higher-order statistics such as the variance or autocorrelation. In particular, the HadISST1 dataset used by DD23 is known to have an artificial variance increase. Using different SST datasets and their uncertainty ensembles, the tipping time varies by thousands of years both with and without penalization, and with penalization it often goes to infinity (Tables S2 and S1).

Whilst point 1 above could possibly be addressed by improvements to the prediction method, points 2, 3 and 4 are essentially impassable barriers to predicting the time of a future AMOC collapse from historical data. The available historical data is simply not accurate or precise enough to allow us to make such an extrapolation. Unfortunately, it is inevitable that these types of uncertainties will arise when attempting to extrapolate from past historical data, more or less regardless of the system under consideration. First, simplified modelling assumptions will almost always be necessary for extrapolation, since the future behaviour of the system will be different depending on the governing dynamics. Past data can inform us about the relevant model, but typically many different models can match the data, as seen above. This is also true for the case of fitting a simplified model to reconstructed central-western Greenland ice sheet height changes in BR21. Second, the problem of finding a time series that accurately represents the dynamics of the system is also not exclusive to the AMOC. All tipping elements are extended three-dimensional systems, and we usually only have observations for a specific part of them. In BR21 the change in ice sheet height is calculated from average annual melt rates obtained from three ice cores in central-western Greenland, and in addition to uncertainties in the underlying data that are difficult to quantify, it is highly uncertain how well this location represents the whole ice sheet (Trusel et al., 2018). Vegetation systems can only be observed in full through satellites, and vegetation indices such as the normalized difference vegetation index (NDVI) or vegetation optical depth (VOD) may not capture nuances in vegetation dynamics that would be important for robust extrapolation, alongside known issues with data pre-processing and merging procedures (Smith et al., 2023). Finally, the problems caused by non-stationary data coverage and data-processing methods described above are not unique to SST datasets. Data-processing is always necessary to assimilate and calibrate observations and proxies, especially for longer records, and these methods usually focus on the accuracy of the long-term trend and not on the higher-order statistics. For vegetation systems it has recently been shown that using multi-instrument time series can cause biases in CSD indicators, especially for aggregated time series (Smith et al., 2022b). However, it is important to emphasize that the criticisms in this work apply to attempts to predict the exact tipping time of tipping elements such as the AMOC based on extrapolating from uncertain data.
CSD detection in terms of trends in robust indicators such as the autocorrelation or the restoring rate (Boers, 2021) is much less sensitive to the discussed uncertainties. For example, CSD is applicable to any sort of dynamical system that is approaching an abrupt transition under the assumption that it is associated with a codimension-one bifurcation. In addition, fingerprints that are not an exact representation of a dynamical system will still show CSD as long as the stability of the subsystem they represent is connected to the stability of the overall system. Crucially, the uncertainties presented in this work can be taken into account by using multiple different fingerprints and propagating the dataset uncertainties to the CSD analysis. To understand the difference in the effect of these uncertainties for CSD detection and tipping time predictions, we can look at the AMOC. Ben-Yami et al. (2023) took into account all the dataset uncertainties described in this work, and more, and found that CSD in AMOC fingerprints, in terms of a restoring rate tending toward zero, is still significant even though the trends in the CSD indicators have a large spread (for example, 40% and 14% spread in the slope of the restoring rate in the HadCRUT5 and ERSSTv5 SPG index, respectively). But taking into account the same observational uncertainty spread for the tipping time gives times ranging from 2050 to infinity, practically making this prediction non-informative. This is because the tipping time prediction is not only more sensitive to uncertainties, but it also needs to be a lot more specific to be reliable - the assumptions made in the calculation already presume that there will be a tipping time in the future, so the predicted time needs to be precise to provide additional information. Detection of CSD does not make statements about future tipping, only about the fact that the system is currently less stable than it was in the past. In this sense, the dashed lines in Fig. 3 of Boers (2021) were only shown to demonstrate that there has been a positive linear trend in the restoring rate \(\lambda\), interpreted as a stability decline, during the historical period investigated. It should be emphasized that these dashed lines do not represent any sort of extrapolation to the future, and the time when these dashed lines cross the \(\lambda=0\) line should not be mistaken for an estimate of the time of tipping. In conclusion, we showed that the uncertainties discussed in this work are too large to allow for reliable estimates of the time of tipping of major Earth system tipping elements, including the AMOC, the polar ice sheets or tropical rainforests, based on extrapolating results from historical data. We emphasize that these uncertainties, originating from underlying modelling or mechanistic assumptions as well as from the employed empirical data, need to be taken into account and propagated thoroughly before attempting to estimate a future tipping time of any potential Earth system tipping element.

**Data availability** The HadISST1, HadSST4, and HadCRUT5 datasets are all available at https://www.metoffice.gov.uk/hadobs/. The ERSSTv5 operational data is available at https://psl.noaa.gov/data/gridded/data.noaa.ersst.v5.html.
CO2 emissions from the SSP scenarios can be found at https://tntcat.iiasa.ac.at/SspDb/, and historical CO2 emission data at https://ourworldindata.org/co2-emissions. CO2 emissions expected from current policies and targets can be found at https://climateactiontracker.org/.

**Code availability** All code used to analyse the data and generate figures will be uploaded to https://github.com/TUM-PIK-ESM/DD23_matter_arising.

**Author contributions**: MBY, AM, SB and NB conceived and designed the study. AM carried out the analysis for the section "Modelling assumptions" and MBY carried out the analysis for the sections "Representativeness of the SPG index for the AMOC" and "Effect of dataset preprocessing and underlying uncertainties / non-stationary coverage". All authors contributed to writing the manuscript.

**Competing interests**: The authors declare that they have no competing interests.

**Acknowledgments**: MBY and NB acknowledge funding by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 956170. NB and SB acknowledge funding by the Volkswagen foundation. This is TIPES contribution #X; the TIPES ('Tipping Points in the Earth System') project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 820970. NB acknowledges further funding by the German Federal Ministry of Education and Research under grant No. 01LS2001A.

## Methods

### AMOC fingerprints

For each of the four employed SST datasets we compute three different SST-based fingerprints of the AMOC. First, the index introduced by DD23, which is obtained by averaging SST over the SPG region and then subtracting twice the global mean SSTs. Here, the SPG region is defined as in Caesar et al. (2018). Second, the original version of this index, introduced by Caesar et al. (2018), where the global mean SSTs are only subtracted once. Third, we employ the so-called dipole fingerprint, which is obtained by subtracting average SSTs of a large region in the southern-hemisphere Atlantic ocean (0-45\({}^{\circ}\)S, 70\({}^{\circ}\)W-30\({}^{\circ}\)E) from average SSTs in a large region in the northern-hemisphere Atlantic (45-80\({}^{\circ}\)N, 70\({}^{\circ}\)W-30\({}^{\circ}\)E). For computing the spatial averages for the HadISST data, we mask out all values of grid cells covered by sea ice following Caesar et al. (2018) and Boers (2021), and also use a weighted mean to account for the dependence of the grid cell size on the latitude.

### Formulae for toy models

As we have laid out in detail in the section on modelling assumptions, a thorough investigation of the robustness of estimations should also include their application to data stemming from other plausible models. Here we give their formulae, which have been integrated to obtain time series data for the subsequent analysis in Figure 1. The original fold bifurcation normal form model with linear forcing and white noise as discussed by DD23 reads

\[\mathrm{d}X_{t} =-\left(A(X_{t}-m)^{2}+\lambda^{\mathrm{lin}}(t)\right)\mathrm{d}t+\sigma\mathrm{d}B_{t} \tag{1}\]
\[\lambda^{\mathrm{lin}}(t) =\lambda_{0}(1-\Theta[t-t_{0}](t-t_{0})/\tau_{r}). \tag{2}\]

As for all of the following models, it was integrated using the Euler-Maruyama scheme to obtain time series data.
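A minimal sketch of this integration (ours; the parameter values below are illustrative placeholders, not DD23's fitted values) reads:

```python
import numpy as np

# Euler-Maruyama integration of the fold normal form (1)-(2).
# All parameter values are illustrative placeholders, not DD23's estimates.
rng = np.random.default_rng(42)
A, m, sigma = 1.0, 0.0, 0.3
lam0, t0, tau_r = -1.0, 20.0, 100.0     # lambda ramps from lam0 < 0 toward 0
dt, T = 0.01, 130.0
n = int(T / dt)

def lam_lin(t):
    # eq. (2): lambda^lin(t) = lam0 * (1 - Heaviside(t - t0) * (t - t0) / tau_r)
    return lam0 * (1.0 - np.heaviside(t - t0, 1.0) * (t - t0) / tau_r)

x = np.empty(n)
x[0] = m + np.sqrt(-lam0 / A)           # start on the stable branch
for i in range(n - 1):
    t = i * dt
    drift = -(A * (x[i] - m) ** 2 + lam_lin(t))           # eq. (1)
    x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if x[i + 1] < m - 3.0:              # crude escape criterion: trajectory tipped
        print(f"trajectory tipped near t = {t:.1f}")
        x = x[: i + 2]
        break
```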
Instead of the fold bifurcation normal form, a specialised model might be more suitable to represent AMOC dynamics. To this end, we include the reduced form of the Stommel-Cessi model given by

\[\mathrm{d}X_{t}=\left(-aX_{t}(1-\mu^{2}(1-aX_{t})^{2})+\lambda^{\mathrm{SC}}(t)\right)\mathrm{d}t+0.8\,\sigma\mathrm{d}B_{t}, \tag{3}\]

where \(\mu^{2}=7.5\) and the parameter \(a\) was chosen such that the absolute decrease of AMOC strength in advance of the bifurcation at \(\lambda^{\mathrm{SC}}_{c}\) is the same as for the fold normal form. \(\lambda^{\mathrm{SC}}(t)\) decreases linearly from \(\lambda^{\mathrm{SC}}_{0}=0\) to \(\lambda^{\mathrm{SC}}_{c}\). The time series is also inverted vertically and shifted appropriately. We also explore time series data from a model with no inherent bifurcation. Instead, a square root trend is introduced, again taking all parameters from DD23:

\[\mathrm{d}X_{t}=-2\sqrt{A\lambda_{0}}\left(X_{t}-m-\sqrt{\lambda^{\mathrm{lin}}(t)/A}\right)\mathrm{d}t+\sigma\mathrm{d}B_{t} \tag{4}\]

Instead of the white noise term \(\mathrm{d}B_{t}\), we also investigated non-stationary red noise in the form of \(U_{t}\mathrm{d}t\), where the Ornstein-Uhlenbeck process \(U\) is generated by

\[\mathrm{d}U_{t}=-\frac{1}{\tau^{\mathrm{noise}}(t)}U_{t}\mathrm{d}t+\mathrm{d}B_{t}. \tag{5}\]

The characteristic correlation time of the noise, \(\tau^{\mathrm{noise}}\), increases linearly:

\[\tau^{\mathrm{noise}}(t)=\tau_{0}^{\mathrm{noise}}[\mathrm{year}](1-\Theta[t-t_{0}](t-t_{0})/\tau_{r})+\tau_{T}^{\mathrm{noise}}[\mathrm{year}]\Theta[t-t_{0}](t-t_{0})/\tau_{r} \tag{6}\]
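A sketch of how the red-noise driving in (5)-(6) can be generated (our illustration; the values of \(\tau_{0}^{\mathrm{noise}}\) and \(\tau_{T}^{\mathrm{noise}}\) are placeholders):

```python
import numpy as np

# Non-stationary red noise, eqs. (5)-(6): an Ornstein-Uhlenbeck process U whose
# correlation time tau^noise(t) grows linearly during the ramp. The values of
# tau_0 and tau_T below are illustrative placeholders.
rng = np.random.default_rng(7)
t0, tau_r = 20.0, 100.0
tau_0, tau_T = 0.5, 5.0                 # correlation times in "years"
dt, T = 0.01, 130.0
n = int(T / dt)

def tau_noise(t):
    # eq. (6): linear interpolation from tau_0 to tau_T after t0
    s = np.heaviside(t - t0, 1.0) * (t - t0) / tau_r
    return tau_0 * (1.0 - s) + tau_T * s

u = np.zeros(n)
for i in range(n - 1):
    t = i * dt
    u[i + 1] = u[i] - u[i] / tau_noise(t) * dt + np.sqrt(dt) * rng.standard_normal()

# To force one of the models above with this red noise instead of white noise,
# the increment sigma*sqrt(dt)*xi in the Euler-Maruyama loop is replaced by
# sigma * u[i] * dt, following the substitution dB_t -> U_t dt.
```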
2309.14108
A Common Approach to Singular Perturbation and Homogenization II: Semilinear Elliptic Systems
We consider periodic homogenization of boundary value problems for second-order semilinear elliptic systems in 2D of the type $$ \partial_{x_i}\left(a_{ij}^{\alpha \beta}(x/\varepsilon)\partial_{x_j}u(x)+b_i^\alpha(x,u(x))\right)=b^\alpha(x,u(x)) \mbox{ for } x \in \Omega. $$ For small $\varepsilon>0$ we prove existence of weak solutions $u=u_\varepsilon$ as well as their local uniqueness for $\|u-u_0\|_\infty \approx 0$, where $u_0$ is a given non-degenerate weak solution to the homogenized boundary value problem, and we estimate the rate of convergence to zero of $\|u_\varepsilon-u_0\|_\infty$ for $\varepsilon \to 0$. Our assumptions are, roughly speaking, as follows: The functions $a_{ij}^{\alpha \beta}$ are bounded, measurable and $\mathbb{Z}^2$-periodic, the functions $b_i^\alpha(\cdot,u)$ and $b^\alpha(\cdot,u)$ are bounded and measurable, the functions $b_i^\alpha(x,\cdot)$ and $b^\alpha(x,\cdot)$ are $C^1$-smooth, and $\Omega$ is a bounded Lipschitz domain in $\mathbb{R}^2$. Neither global solution uniqueness is supposed nor growth restrictions of $b_i^\alpha(x,\cdot)$ or $b^\alpha(x,\cdot)$ nor higher regularity of $u_0$, and cross-diffusion is allowed. The main tool of the proofs is an abstract result of implicit function theorem type which in the past has been applied to singularly perturbed nonlinear ODEs and elliptic and parabolic PDEs and, hence, which permits a common approach to existence, local uniqueness and error estimates for singularly perturbed problems and for homogenization problems.
Nikolai N. Nefedov, Lutz Recke
2023-09-25T13:02:52Z
http://arxiv.org/abs/2309.14108v6
###### Abstract

We consider periodic homogenization of boundary value problems for second-order semilinear elliptic systems in 2D of the type

\[\operatorname{div}\left(A(x/\varepsilon)\nabla u(x)\right)=b(x,u(x))\text{ for }x\in\Omega.\]

For small \(\varepsilon>0\) we prove existence of weak solutions \(u=u_{\varepsilon}\) as well as their local uniqueness for \(\|u-u_{0}\|_{\infty}\approx 0\), where \(u_{0}\) is a given non-degenerate weak solution to the homogenized boundary value problem, and we estimate the rate of convergence to zero of \(\|u_{\varepsilon}-u_{0}\|_{\infty}\) for \(\varepsilon\to 0\). Our assumptions are, roughly speaking, as follows: The map \(y\mapsto A(y)\) is bounded, measurable and \(\mathbb{Z}^{2}\)-periodic, the maps \(b(\cdot,u)\) are bounded and measurable, the maps \(b(x,\cdot)\) are \(C^{1}\)-smooth, and \(\Omega\) is a bounded Lipschitz domain in \(\mathbb{R}^{2}\). Neither global solution uniqueness is supposed nor growth restriction of \(b(x,\cdot)\) nor \(W^{2,2}\)-regularity of \(u_{0}\), and cross-diffusion is allowed. The main tool of the proofs is an abstract result of implicit function theorem type which in the past has been applied to singularly perturbed nonlinear ODEs and elliptic and parabolic PDEs and, hence, which permits a common approach to existence, local uniqueness and error estimates for singularly perturbed problems and for homogenization problems.

**A Common Approach to Singular Perturbation and Homogenization II: Semilinear Elliptic Systems**

Nikolay N. Nefedov (Moscow) and Lutz Recke (Berlin)

## 1 Introduction

In this paper we present an abstract result of implicit function theorem type (see Section 2), which in the past has been applied to singularly perturbed nonlinear ODEs and PDEs in [5, 6, 7, 18, 20, 21, 22] and, in Part I [19], to periodic homogenization of quasilinear ODE systems. In the present paper we apply it to periodic homogenization of Dirichlet problems for 2D semilinear elliptic PDE systems of the type

\[\left.\begin{array}{l}\partial_{x_{i}}\Big{(}a_{ij}^{\alpha\beta}(x/\varepsilon)\partial_{x_{j}}u^{\beta}(x)\Big{)}=b^{\alpha}(x,u(x))\text{ for }x\in\Omega,\\ u^{\alpha}(x)=0\text{ for }x\in\partial\Omega,\end{array}\right\}\alpha=1,\ldots,n \tag{1.1}\]

as well as of other boundary value problems for those systems (see Section 4). Here and in the following repeated indices are to be summed over \(\alpha,\beta,\gamma,\ldots=1,\ldots,n\) and \(i,j,k,\ldots=1,2\), and \(\varepsilon>0\) is the small homogenization parameter. We assume that

\[\Omega\text{ is a bounded Lipschitz domain in }\mathbb{R}^{2}, \tag{1.2}\]

and

\[a_{ij}^{\alpha\beta}\in L^{\infty}(\mathbb{R}^{2})\text{ and }a_{ij}^{\alpha\beta}(\cdot+z)=a_{ij}^{\alpha\beta}\text{ for all }z\in\mathbb{Z}^{2}, \tag{1.3}\]

and

\[u\in\mathbb{R}^{n}\mapsto b^{\alpha}(\cdot,u)\in L^{\infty}(\Omega)\text{ is }C^{1}\text{-smooth}, \tag{1.4}\]

and that there exists \(a>0\) such that for all \(\varphi\in C^{\infty}(\mathbb{R}^{2};\mathbb{R}^{n})\) with compact support we have

\[\int_{\mathbb{R}^{2}}a_{ij}^{\alpha\beta}(y)\partial_{y_{i}}\varphi^{\alpha}(y)\partial_{y_{j}}\varphi^{\beta}(y)dy\geq a\int_{\mathbb{R}^{2}}\partial_{y_{i}}\varphi^{\alpha}(y)\partial_{y_{i}}\varphi^{\alpha}(y)dy. \tag{1.5}\]
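Though not used anywhere in the argument below, a minimal numerical sketch (all choices are purely illustrative and ours, not the authors') of the simplest scalar 1D analogue may help fix ideas about what assumptions of this kind lead to: for \(-(a(x/\varepsilon)u'(x))'=1\) on \((0,1)\) with \(u(0)=u(1)=0\) and a \(1\)-periodic coefficient \(a\), the solutions \(u_{\varepsilon}\) converge for \(\varepsilon\to 0\) to the solution of the constant-coefficient problem whose coefficient is the harmonic mean of \(a\), not its arithmetic mean.

```python
import numpy as np

def trapz(f, x):
    """Composite trapezoidal rule."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# 1-periodic coefficient; harmonic mean sqrt(3), arithmetic mean 2
a = lambda y: 2.0 + np.sin(2.0 * np.pi * y)

def solve(eps, nx=200001):
    # Exact solution formula for -(a(x/eps) u')' = 1, u(0) = u(1) = 0:
    # a(x/eps) u'(x) = c - x, with c fixed by the condition u(1) = 0.
    x = np.linspace(0.0, 1.0, nx)
    inv_a = 1.0 / a(x / eps)
    c = trapz(x * inv_a, x) / trapz(inv_a, x)
    g = (c - x) * inv_a
    u = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2.0) * (x[1] - x[0])))
    return x, u

y = np.linspace(0.0, 1.0, 100001)
a_hat = 1.0 / trapz(1.0 / a(y), y)       # homogenized coefficient (harmonic mean)
x, u_eps = solve(eps=0.01)
u_hom = x * (1.0 - x) / (2.0 * a_hat)    # solution of -a_hat u'' = 1
print(f"a_hat = {a_hat:.4f} (arithmetic mean of a = 2)")
print(f"max |u_eps - u_hom| = {np.abs(u_eps - u_hom).max():.2e}")
```

In the 2D tensor case treated next, the role of the harmonic mean is played by the homogenized tensor (1.6), computed via the cell problems (1.7).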
The components of the homogenized diffusion tensor are

\[\hat{a}^{\alpha\beta}_{ij}:=\int_{[0,1]^{2}}\left(a^{\alpha\beta}_{ij}(y)+a^{\alpha\gamma}_{ik}(y)\partial_{y_{k}}v^{\gamma\beta}_{j}(y)\right)dy, \tag{1.6}\]

where the correctors \(v^{\alpha\beta}_{j}\) are defined by the cell problems

\[\left.\begin{array}{l}\partial_{y_{i}}\left(a^{\alpha\beta}_{ij}(y)+a^{\alpha\gamma}_{ik}(y)\partial_{y_{k}}v^{\gamma\beta}_{j}(y)\right)=0\mbox{ for }y\in\mathbb{R}^{2},\\ v^{\alpha\beta}_{j}(\cdot+z)=v^{\alpha\beta}_{j}\mbox{ for }z\in\mathbb{Z}^{2},\;\int_{[0,1]^{2}}v^{\alpha\beta}_{j}(y)dy=0,\end{array}\right\}\alpha,\beta=1,\ldots,n;\;j=1,2. \tag{1.7}\]

It is well-known (as a consequence of assumption (1.5) and the Lax-Milgram lemma, cf. [23, Section 2.2 and Lemma 2.2.4]) that the problem (1.7) is uniquely weakly solvable and that the homogenized diffusion coefficients \(\hat{a}^{\alpha\beta}_{ij}\) satisfy the coercivity condition (1.5) as well, i.e.

\[\int_{\mathbb{R}^{2}}\hat{a}^{\alpha\beta}_{ij}\partial_{y_{i}}\varphi^{\alpha}(y)\partial_{y_{j}}\varphi^{\beta}(y)dy\geq a\int_{\mathbb{R}^{2}}\partial_{y_{i}}\varphi^{\alpha}(y)\partial_{y_{i}}\varphi^{\alpha}(y)dy \tag{1.8}\]

for all \(\varphi\in C^{\infty}(\mathbb{R}^{2};\mathbb{R}^{n})\) with compact support. Let us formulate our main result. It concerns existence and local uniqueness of weak solutions \(u=u_{\varepsilon}\) to (1.1) with \(\varepsilon\approx 0\), which are close to a given non-degenerate weak solution \(u=u_{0}\) to the homogenized problem

\[\left.\begin{array}{l}\hat{a}^{\alpha\beta}_{ij}\partial_{x_{i}}\partial_{x_{j}}u^{\beta}(x)=b^{\alpha}(x,u(x))\mbox{ for }x\in\Omega,\\ u^{\alpha}(x)=0\mbox{ for }x\in\partial\Omega,\end{array}\right\}\alpha=1,\ldots,n \tag{1.9}\]

as well as the rate of convergence to zero for \(\varepsilon\to 0\) of the homogenization error \(\|u_{\varepsilon}-u_{0}\|_{\infty}\). Here and in what follows we denote

\[\|u\|_{\infty}:=\max_{\alpha=1,\ldots,n}\mbox{ess sup}\{|u^{\alpha}(x)|:x\in\Omega\} \tag{1.10}\]

for \(u\in L^{\infty}(\Omega;\mathbb{R}^{n})\). As usual, a vector function \(u\in W^{1,2}_{0}(\Omega;\mathbb{R}^{n})\cap L^{\infty}(\Omega;\mathbb{R}^{n})\) is called a weak solution to the boundary value problem (1.1) if it satisfies the variational equation

\[\int_{\Omega}\Big{(}a^{\alpha\beta}_{ij}(x/\varepsilon)\partial_{x_{j}}u^{\beta}(x)\partial_{x_{i}}\varphi^{\alpha}(x)+b^{\alpha}(x,u(x))\varphi^{\alpha}(x)\Big{)}dx=0\mbox{ for all }\varphi\in W^{1,2}_{0}(\Omega;\mathbb{R}^{n}),\]

and similarly for (1.9) and for its linearization (1.11) and for the cell problem (1.7).

**Theorem 1.1**: _Suppose (1.2)-(1.5), and let \(u=u_{0}\) be a weak solution to (1.9) such that the linearized homogenized boundary value problem_

\[\left.\begin{array}{l}\hat{a}^{\alpha\beta}_{ij}\partial_{x_{i}}\partial_{x_{j}}u^{\beta}(x)=\partial_{u^{\gamma}}b^{\alpha}(x,u_{0}(x))u^{\gamma}(x)\mbox{ for }x\in\Omega,\\ u^{\alpha}(x)=0\mbox{ for }x\in\partial\Omega,\end{array}\right\}\alpha=1,\ldots,n \tag{1.11}\]

_does not have weak solutions \(u\neq 0\). Then the following is true:_

_(i) There exist \(\varepsilon_{0}>0\) and \(\delta>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0}]\) there exists exactly one weak solution \(u=u_{\varepsilon}\) to (1.1) with \(\|u-u_{0}\|_{\infty}\leq\delta\)._
_Moreover,_

\[\|u_{\varepsilon}-u_{0}\|_{\infty}\to 0\mbox{ for }\varepsilon\to 0.\]

_(ii) If \(u_{0}\in W^{2,p_{0}}(\Omega;\mathbb{R}^{n})\) with certain \(p_{0}>2\), then for any \(p>2\) we have_

\[\|u_{\varepsilon}-u_{0}\|_{\infty}=O(\varepsilon^{1/p})\mbox{ for }\varepsilon\to 0. \tag{1.12}\]

**Remark 1.2**: _It is well-known that the assumptions (1.2)-(1.5) do not imply that \(u_{0}\in W^{2,p_{0}}(\Omega;\mathbb{R}^{n})\) with certain \(p_{0}>2\), in general. But several sufficient conditions for that are known, for example if the boundary \(\partial\Omega\) is \(C^{1,1}\)-smooth and if \(\hat{a}^{\alpha\beta}_{ij}=0\) for \(\alpha>\beta\) (no cross-diffusion). We conjecture that (1.12) is not true anymore, in general, if \(u_{0}\notin W^{2,2}(\Omega;\mathbb{R}^{n})\)._

**Remark 1.3**: _We conjecture that Theorem 1.1 remains true for any space dimension if the elliptic system is triangular, i.e. \(\hat{a}^{\alpha\beta}_{ij}=0\) for \(\alpha>\beta\) (in particular, for scalar elliptic equations). The reason for that is the following: In the present paper, which concerns space dimension two, we use K. Gröger's result [10] about maximal regularity of boundary value problems for elliptic systems with non-smooth data in the pair of Sobolev spaces \(W^{1,p}_{0}(\Omega)\) and \(W^{-1,p}(\Omega)\) with \(p\approx 2\) as well as the continuous embedding \(W^{1,p}(\Omega)\hookrightarrow L^{\infty}(\Omega)\) for \(p>2\) in the case of space dimension two. But there exists a replacement of these results for triangular systems with any space dimension (cf. [9, 11]), where the Sobolev spaces are replaced by appropriate Sobolev-Campanato spaces which for any space dimension are continuously embedded into \(L^{\infty}(\Omega)\). Note that in [9, 10, 11] more general types of boundary conditions are allowed, for example mixed Dirichlet-Robin boundary conditions. Hence, we expect that Theorem 1.1 (as well as its version with any space dimension if \(\hat{a}^{\alpha\beta}_{ij}=0\) for \(\alpha>\beta\)) is true also for those boundary conditions._

**Remark 1.4**: _In many applications the reaction functions \(b^{\alpha}\) are of the type_

\[b^{\alpha}(x,u)=\sum_{l=1}^{m}c^{\alpha}_{l}(x)d^{\alpha}_{l}(u)\mbox{ with }c^{\alpha}_{l}\in L^{\infty}(\Omega),\;d^{\alpha}_{l}\in C^{1}(\mathbb{R}^{n}),\]

_and those satisfy assumption (1.4)._

**Remark 1.5**: _The integral condition (1.5) is often referred to as \(V\)-ellipticity or \(V\)-coercivity, and it follows from the Legendre condition_

\[a^{\alpha\beta}_{ij}(y)\xi^{\alpha}_{i}\xi^{\beta}_{j}\geq\mbox{const}\;\xi^{\alpha}_{i}\xi^{\alpha}_{i}\mbox{ for almost all }y\in\mathbb{R}^{2}\mbox{ and all }\xi\in\mathbb{R}^{2n},\]

_and it implies the Legendre-Hadamard condition_

\[a^{\alpha\beta}_{ij}(y)\xi_{i}\xi_{j}\eta^{\alpha}\eta^{\beta}\geq\mbox{const}\;\xi_{i}\xi_{i}\eta^{\alpha}\eta^{\alpha}\mbox{ for almost all }y\in\mathbb{R}^{2}\mbox{ and all }\xi\in\mathbb{R}^{2},\;\eta\in\mathbb{R}^{n}.\]

_If \(n=1\) or if the coefficients \(a^{\alpha\beta}_{ij}\) are constant (as in (1.9)), then (1.5) is equivalent to the Legendre-Hadamard condition._

**Remark 1.6**: _For \(L^{\infty}\) estimates of \(u_{\varepsilon}-u_{0}\) for scalar linear elliptic PDEs see, e.g. [2, Chapter 2.4] and [17], and for linear elliptic systems [13, Theorem 3.4], [23, Theorem 7.5.1], [25, Theorem 1.7] and [27, Theorem 1.5]. For \(L^{p}\) homogenization error estimates for scalar linear elliptic PDEs see, e.g.
[17] and [26, Theorem 1.1] and for linear elliptic systems [23, Theorem 7.5.1]._

_Concerning existence and local uniqueness for nonlinear elliptic homogenization problems (without assumption of global uniqueness), we know only the result [4] for scalar semilinear elliptic PDEs of the type \(\mbox{\rm div}\,a(x/\varepsilon)\nabla u(x)=f(x)g(u(x))\), where the nonlinearity \(g\) is supposed to have a sufficiently small local Lipschitz constant (on an appropriate bounded interval). Let us mention also [14, 15], where existence and local uniqueness for a homogenization problem for the linear Poisson equation with nonlinear Robin boundary conditions in a periodically perforated domain is shown. There the specific structure of the problem (no highly oscillating diffusion coefficients) allows the application of the classical implicit function theorem._

_For periodic homogenization of linear elliptic PDEs (with small homogenization parameter \(\varepsilon\)), which are singularly perturbed (with small singular perturbation parameter \(\delta\)), see [24]._

Our paper is organized as follows: In Section 2 we consider abstract nonlinear parameter-dependent equations of the type

\[F_{\varepsilon}(u)=0. \tag{1.13}\]

Here \(\varepsilon>0\) is the parameter. We prove a result on existence and local uniqueness of a family of solutions \(u=u_{\varepsilon}\approx\bar{u}_{\varepsilon}\) to (1.13) with \(\varepsilon\approx 0\), where \(\bar{u}_{\varepsilon}\) is a family of approximate solutions to (1.13), i.e. a family with \(F_{\varepsilon}(\bar{u}_{\varepsilon})\to 0\) for \(\varepsilon\to 0\), and we estimate the norm of the error \(u_{\varepsilon}-\bar{u}_{\varepsilon}\) by the norm of the discrepancy \(F_{\varepsilon}(\bar{u}_{\varepsilon})\). This type of generalized implicit function theorem has been successfully applied to singularly perturbed nonlinear ODEs and PDEs and to homogenization of nonlinear ODEs. Contrary to the classical implicit function theorem, it is not supposed that the linearized operators \(F^{\prime}_{\varepsilon}(u)\) converge for \(\varepsilon\to 0\) in the uniform operator norm. And, indeed, in the applications to singularly perturbed problems as well as to homogenization problems they do not converge for \(\varepsilon\to 0\) in the uniform operator norm (cf. Remark 3.1 below). Hence, the present paper introduces an application to semilinear elliptic PDE systems of a common approach to existence, local uniqueness and error estimates for singularly perturbed problems and for homogenization problems. Another application, to periodic homogenization of quasilinear ODE systems of the type

\[a(x,x/\varepsilon,u(x),u^{\prime}(x))^{\prime}=f(x,x/\varepsilon,u(x)),\]

has been presented in Part I [19]. In Section 3 we prove Theorem 1.1 by means of the results of Section 2. Here the main work is to construct appropriate families of approximate solutions to (1.1) with \(\varepsilon\approx 0\) with small discrepancies in appropriate function space norms. In order to apply implicit function theorems, one mainly needs isomorphism properties of the linearized operators. In the setting of Section 3 they follow from K. Gröger's result [10] about maximal regularity of boundary value problems for elliptic systems with non-smooth data in the pair of Sobolev spaces \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) and \(W^{-1,p}(\Omega;\mathbb{R}^{n})\) with \(p\approx 2\). In order to apply implicit function theorems one also needs \(C^{1}\)-smoothness of the appearing nonlinear superposition operators.
In the setting of Section 3 these operators have to be well-defined and \(C^{1}\)-smooth on the Sobolev spaces \(W^{1,p}(\Omega;\mathbb{R}^{n})\) with \(p>2\), but \(p\approx 2\), and therefore we have to suppose that the space dimension is two.

## 2 An abstract result of implicit function theorem type

Let \(U\) and \(V\) be Banach spaces with norms \(\|\cdot\|_{U}\) and \(\|\cdot\|_{V}\), respectively. For \(\varepsilon>0\) let be given

\[\bar{u}_{\varepsilon}\in U\text{ and }F_{\varepsilon}\in C^{1}(U;V).\]

We consider the abstract equation

\[F_{\varepsilon}(u)=0. \tag{2.1}\]

Roughly speaking, we will show the following: If the elements \(\bar{u}_{\varepsilon}\) satisfy (2.1) approximately for \(\varepsilon\approx 0\), i.e. if \(\|F_{\varepsilon}(\bar{u}_{\varepsilon})\|_{V}\to 0\) for \(\varepsilon\to 0\), and if they are non-degenerate solutions (cf. assumption (2.2) below), then for \(\varepsilon\approx 0\) there exists exactly one solution \(u=u_{\varepsilon}\) to (2.1) with \(\|u-\bar{u}_{\varepsilon}\|_{U}\approx 0\), and \(\|u_{\varepsilon}-\bar{u}_{\varepsilon}\|_{U}=O(\|F_{\varepsilon}(\bar{u}_{\varepsilon})\|_{V})\) for \(\varepsilon\to 0\). For that we do not suppose any convergence of the operators \(F_{\varepsilon}\) or \(F^{\prime}_{\varepsilon}(u)\) or of the elements \(\bar{u}_{\varepsilon}\) for \(\varepsilon\to 0\). Remark that in the classical implicit function theorem one cannot omit, in general, the assumption that \(F^{\prime}_{\varepsilon}(u)\) converges for \(\varepsilon\to 0\) with respect to the uniform operator norm (cf. [12, Section 3.6]).

**Theorem 2.1**: _Suppose that_

\[\begin{array}{l}\mbox{there exist }\varepsilon_{0}>0\mbox{ and }c>0\mbox{ such that for all }\varepsilon\in(0,\varepsilon_{0}]\mbox{ the operators }F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})\mbox{ are Fredholm}\\ \mbox{of index zero from }U\mbox{ into }V,\mbox{ and }\|F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})u\|_{V}\geq c\|u\|_{U}\mbox{ for all }u\in U,\end{array} \tag{2.2}\]

_and_

\[\sup_{\|v\|_{U}\leq 1}\|(F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon}+u)-F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon}))v\|_{V}\to 0\mbox{ for }\varepsilon+\|u\|_{U}\to 0 \tag{2.3}\]

_and_

\[\|F_{\varepsilon}(\bar{u}_{\varepsilon})\|_{V}\to 0\mbox{ for }\varepsilon\to 0. \tag{2.4}\]

_Then there exist \(\varepsilon_{1}\in(0,\varepsilon_{0}]\) and \(\delta>0\) such that for all \(\varepsilon\in(0,\varepsilon_{1}]\) there exists exactly one solution \(u=u_{\varepsilon}\) to (2.1) with \(\|u_{\varepsilon}-\bar{u}_{\varepsilon}\|_{U}\leq\delta\). Moreover, for all \(\varepsilon\in(0,\varepsilon_{1}]\) we have_

\[\|u_{\varepsilon}-\bar{u}_{\varepsilon}\|_{U}\leq\frac{2}{c}\|F_{\varepsilon}(\bar{u}_{\varepsilon})\|_{V}. \tag{2.5}\]

**Proof** Take \(\varepsilon\in(0,\varepsilon_{0}]\). Because of assumption (2.2) the linear operator \(F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})\) is an isomorphism from \(U\) onto \(V\), and, hence, equation (2.1) is equivalent to the fixed point problem

\[u=G_{\varepsilon}(u):=u-F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})^{-1}F_{\varepsilon}(u).\]

Take \(u_{1},u_{2}\in U\).
Then

\[\|G_{\varepsilon}(u_{1})-G_{\varepsilon}(u_{2})\|_{U} = \left\|F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})^{-1}\int_{0}^{1}\left(F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})-F^{\prime}_{\varepsilon}(su_{1}+(1-s)u_{2})\right)ds\,(u_{1}-u_{2})\right\|_{U}\]
\[\leq \frac{1}{c}\,\max_{0\leq s\leq 1}\|\left(F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})-F^{\prime}_{\varepsilon}(su_{1}+(1-s)u_{2})\right)(u_{1}-u_{2})\|_{V}. \tag{2.6}\]

Here we used that (2.2) yields that \(c\|F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})^{-1}v\|_{U}\leq\|v\|_{V}\) for all \(v\in V\). Denote \(\mathcal{B}^{r}_{\varepsilon}:=\{u\in U:\ \|u-\bar{u}_{\varepsilon}\|_{U}\leq r\}\). If \(u_{1},u_{2}\in\mathcal{B}^{r}_{\varepsilon}\), then also \(su_{1}+(1-s)u_{2}\in\mathcal{B}^{r}_{\varepsilon}\) for all \(s\in[0,1]\). Therefore it follows from (2.3) and (2.6) that there exist \(r_{0}>0\) and \(\varepsilon_{1}\in(0,\varepsilon_{0}]\) such that for all \(\varepsilon\in(0,\varepsilon_{1}]\) the maps \(G_{\varepsilon}\) are strictly contractive with contraction constant \(1/2\) on the closed balls \(\mathcal{B}^{r_{0}}_{\varepsilon}\). Moreover, for all \(\varepsilon\in(0,\varepsilon_{1}]\) and \(u\in\mathcal{B}^{r_{0}}_{\varepsilon}\) we have

\[\|G_{\varepsilon}(u)-\bar{u}_{\varepsilon}\|_{U} \leq \left\|G_{\varepsilon}(u)-G_{\varepsilon}(\bar{u}_{\varepsilon})\right\|_{U}+\left\|G_{\varepsilon}(\bar{u}_{\varepsilon})-\bar{u}_{\varepsilon}\right\|_{U}\leq \frac{r_{0}}{2}+\left\|F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})^{-1}F_{\varepsilon}(\bar{u}_{\varepsilon})\right\|_{U}\leq\frac{r_{0}}{2}+\frac{1}{c}\left\|F_{\varepsilon}(\bar{u}_{\varepsilon})\right\|_{V},\]

and (2.4) yields that \(G_{\varepsilon}\) maps \(\mathcal{B}^{r_{0}}_{\varepsilon}\) into itself if \(\varepsilon_{1}\) is taken sufficiently small. Now, Banach's fixed point principle yields the existence and uniqueness assertions of Theorem 2.1, and the estimate (2.5) follows as above:

\[\|u_{\varepsilon}-\bar{u}_{\varepsilon}\|_{U}\leq\|G_{\varepsilon}(u_{\varepsilon})-G_{\varepsilon}(\bar{u}_{\varepsilon})\|_{U}+\|G_{\varepsilon}(\bar{u}_{\varepsilon})-\bar{u}_{\varepsilon}\|_{U}\leq\frac{1}{2}\|u_{\varepsilon}-\bar{u}_{\varepsilon}\|_{U}+\frac{1}{c}\left\|F_{\varepsilon}(\bar{u}_{\varepsilon})\right\|_{V}.\]

**Remark 2.2**: _In [5, 6, 7, 18, 20, 21, 22] various versions of Theorem 2.1 are presented. They differ slightly according to which problems they are applied to (ODEs or elliptic or parabolic PDEs, stationary or time-periodic solutions, semilinear or quasilinear problems, smooth or nonsmooth data, one- or multi-dimensional perturbation parameter \(\varepsilon\))._

_For another result of the type of Theorem 2.1 and its applications to semilinear elliptic PDE systems with numerically determined approximate solutions see [3, Theorem 2.1]._

If one applies Theorem 2.1, for example to boundary value problems for elliptic PDEs, then different choices of the function spaces \(U\) and \(V\), of their norms \(\|\cdot\|_{U}\) and \(\|\cdot\|_{V}\), and of the family \(\bar{u}_{\varepsilon}\) of approximate solutions are appropriate.
Criteria for these choices often are the following: The family \(\bar{u}_{\varepsilon}\) should be "simple" (for example, \(\bar{u}_{\varepsilon}\) should be \(\varepsilon\)-independent, or given more or less explicitly by closed formulas, or much cheaper to determine numerically than the exact solution \(u_{\varepsilon}\)), and the rate of convergence to zero of \(\|F_{\varepsilon}(\bar{u}_{\varepsilon})\|_{V}\) for \(\varepsilon\to 0\) should be high. The norm \(\|\cdot\|_{V}\) should be weak and the norm \(\|\cdot\|_{U}\) should be strong such that the error estimate (2.5) is strong. But at the same time the norm \(\|\cdot\|_{U}\) should be weak such that the domain of local uniqueness, which contains all \(u\in U\) with \(\|u-\bar{u}_{\varepsilon}\|_{U}\leq\delta\), is large. These criteria are contradictory, of course. Hence, in any application of Theorem 2.1 the choices of \(U\), \(V\), \(\|\cdot\|_{U}\), \(\|\cdot\|_{V}\) and \(\bar{u}_{\varepsilon}\) are compromises according to the requirements of the application. One way to find such compromises is described in Corollary 2.3 below. It delivers existence and local uniqueness of solutions \(u=u_{\varepsilon}\) to the equation \(F_{\varepsilon}(u)=0\) with \(\varepsilon\approx 0\) and \(\|u-u_{0}\|\approx 0\), where \(\|F_{\varepsilon}(u_{0})\|_{V}\) does not converge to zero for \(\varepsilon\to 0\), in general, and where the space \(U\) is not complete with respect to the norm \(\|\cdot\|\), in general. The price for that is that the estimate (2.11) of the error \(u_{\varepsilon}-u_{0}\) is with respect to the weaker norm \(\|\cdot\|\) only.

**Corollary 2.3**: _Suppose (2.4). Further, let be given \(u_{0}\in U\) and a norm \(\|\cdot\|\) in \(U\) such that_

\[\begin{array}{l}\mbox{there exist }\varepsilon_{0}>0\mbox{ and }c>0\mbox{ such that for all }\varepsilon\in(0,\varepsilon_{0}]\mbox{ the operators }F^{\prime}_{\varepsilon}(u_{0})\mbox{ are Fredholm}\\ \mbox{of index zero from }U\mbox{ into }V,\mbox{ and }\|F^{\prime}_{\varepsilon}(u_{0})u\|_{V}\geq c\|u\|_{U}\mbox{ for all }u\in U,\end{array} \tag{2.7}\]

_and_

\[d:=\sup\{\|u\|:\;u\in U,\|u\|_{U}\leq 1\}<\infty, \tag{2.8}\]
\[\|\bar{u}_{\varepsilon}-u_{0}\|\to 0\mbox{ for }\varepsilon\to 0, \tag{2.9}\]
\[\sup_{\|v\|_{U}\leq 1}\|(F^{\prime}_{\varepsilon}(u_{0}+u)-F^{\prime}_{\varepsilon}(u_{0}))v\|_{V}\to 0\mbox{ for }\varepsilon+\|u\|\to 0. \tag{2.10}\]

_Then there exist \(\varepsilon_{1}\in(0,\varepsilon_{0}]\) and \(\delta>0\) such that for all \(\varepsilon\in(0,\varepsilon_{1}]\) there exists exactly one solution \(u=u_{\varepsilon}\) to (2.1) with \(\|u-u_{0}\|\leq\delta\), and_

\[\|u_{\varepsilon}-u_{0}\|\leq\|\bar{u}_{\varepsilon}-u_{0}\|+\frac{4d}{c}\|F_{\varepsilon}(\bar{u}_{\varepsilon})\|_{V}. \tag{2.11}\]

**Proof** Because of assumption (2.9) the condition (2.10) is equivalent to

\[\sup_{\|v\|_{U}\leq 1}\|(F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon}+u)-F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon}))v\|_{V}\to 0\mbox{ for }\varepsilon+\|u\|\to 0,\]

and because of assumption (2.8) these two equivalent conditions are stronger than condition (2.3). And similarly, because of assumptions (2.9) and (2.10) the condition (2.7) is equivalent to condition (2.2) (with a possibly different \(\varepsilon_{0}\) in (2.2) than in (2.7), and with a possibly smaller \(c\) in (2.2) than in (2.7), for example \(c/2\)).
Hence, Theorem 2.1 yields the existence assertion of Corollary 2.3 and the error estimate
\[\|u_{\varepsilon}-u_{0}\|\leq\|\bar{u}_{\varepsilon}-u_{0}\|+d\|u_{\varepsilon}-\bar{u}_{\varepsilon}\|_{U}\leq\|\bar{u}_{\varepsilon}-u_{0}\|+\frac{4d}{c}\|F_{\varepsilon}(\bar{u}_{\varepsilon})\|_{V}.\]
Now let us prove the local uniqueness assertion of Corollary 2.3. Take \(\varepsilon\in(0,\varepsilon_{1}]\) and a solution \(u\in U\) to (2.1). Then
\[0=F_{\varepsilon}(u)=F_{\varepsilon}(\bar{u}_{\varepsilon})+F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})(u-\bar{u}_{\varepsilon})+\int_{0}^{1}\left(F^{\prime}_{\varepsilon}(su+(1-s)\bar{u}_{\varepsilon})-F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})\right)(u-\bar{u}_{\varepsilon})ds,\]
i.e.
\[u-\bar{u}_{\varepsilon}=-F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})^{-1}\left(F_{\varepsilon}(\bar{u}_{\varepsilon})+\int_{0}^{1}\left(F^{\prime}_{\varepsilon}(su+(1-s)\bar{u}_{\varepsilon})-F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})\right)(u-\bar{u}_{\varepsilon})ds\right),\]
i.e.
\[\|u-\bar{u}_{\varepsilon}\|_{U}\leq\frac{1}{c}\left(\|F_{\varepsilon}(\bar{u}_{\varepsilon})\|_{V}+\max_{0\leq s\leq 1}\|\left(F^{\prime}_{\varepsilon}(su+(1-s)\bar{u}_{\varepsilon})-F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})\right)(u-\bar{u}_{\varepsilon})\|_{V}\right). \tag{2.12}\]
But (2.10) yields that
\[\max_{0\leq s\leq 1}\|\left(F^{\prime}_{\varepsilon}(su+(1-s)\bar{u}_{\varepsilon})-F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})\right)(u-\bar{u}_{\varepsilon})\|_{V}=o(\|u-\bar{u}_{\varepsilon}\|_{U})\text{ for }\varepsilon+\|u-\bar{u}_{\varepsilon}\|\to 0.\]
Therefore (2.9) implies that
\[\max_{0\leq s\leq 1}\|\left(F^{\prime}_{\varepsilon}(su+(1-s)\bar{u}_{\varepsilon})-F^{\prime}_{\varepsilon}(\bar{u}_{\varepsilon})\right)(u-\bar{u}_{\varepsilon})\|_{V}=o(\|u-\bar{u}_{\varepsilon}\|_{U})\text{ for }\varepsilon+\|u-u_{0}\|\to 0.\]
Hence, if \(\varepsilon\) and \(\|u-u_{0}\|\) are sufficiently small, then (2.12) yields that \(\|u-\bar{u}_{\varepsilon}\|_{U}\leq\delta\), and the local uniqueness assertion of Theorem 2.1 implies that \(u=u_{\varepsilon}\).
**Remark 2.4**: _In most of the applications of Corollary 2.3 to PDEs the element \(u_{0}\) and the norm \(\|\cdot\|\) are a priori given, and one has to choose Banach spaces \(U\) and \(V\) such that the PDE problem is equivalent to an abstract equation \(F_{\varepsilon}(u)=0\) with Fredholm maps \(F_{\varepsilon}\in C^{1}(U;V)\), and one has to construct a family \(\bar{u}_{\varepsilon}\) with the properties (2.2), (2.4) and (2.8)-(2.10). But at the beginning one does not know whether existence and local uniqueness for \(\varepsilon\approx 0\) and \(\|u-u_{0}\|\approx 0\) are true or not for the given PDE problem. If not, then one is trying to choose and construct something which does not exist._
_For example, in Theorem 1.1, which is the result of an application of Corollary 2.3 to problem (1.1), the spaces \(U\) and \(V\) and the family \(\bar{u}_{\varepsilon}\) remain hidden: they do not appear in the formulation of Theorem 1.1._
## 3 Proof of Theorem 1.1
In this section we will prove Theorem 1.1 by means of Corollary 2.3.
For that we use the objects of Theorem 1.1: The bounded Lipschitz domain \(\Omega\subset\mathbb{R}^{2}\), the diffusion coefficient functions \(a^{\alpha\beta}_{ij}\in L^{\infty}(\mathbb{R}^{2})\) with (1.3) and (1.5), the reaction functions \(b^{\alpha}:\Omega\times\mathbb{R}^{n}\to\mathbb{R}\) with (1.4), the correctors \(v^{\alpha\beta}_{i}\in W^{1,2}_{\text{loc}}(\mathbb{R}^{2};\mathbb{R}^{n})\), which are defined by the cell problems (1.7), the homogenized diffusion coefficients \(\hat{a}^{\alpha\beta}_{ij}\in\mathbb{R}\), which are defined in (1.6) and which satisfy (1.8), and the weak solution \(u_{0}\in W^{1,2}_{0}(\Omega;\mathbb{R}^{n})\) to the homogenized boundary value problem (1.9).
As usual, for \(p\geq 2\) we denote by \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) the closure with respect to the norm
\[\|u\|_{1,p}:=\left(\sum_{\alpha=1}^{n}\sum_{i=1}^{2}\int_{\Omega}|\partial_{x_{i}}u^{\alpha}(x)|^{p}dx\right)^{1/p}\]
of the set of all \(C^{\infty}\)-maps \(u:\Omega\to\mathbb{R}^{n}\) with compact support. Moreover, \(W^{-1,p}(\Omega;\mathbb{R}^{n}):=W^{1,q}_{0}(\Omega;\mathbb{R}^{n})^{*}\) is the dual space to \(W^{1,q}_{0}(\Omega;\mathbb{R}^{n})\), where \(1/p+1/q=1\), equipped with the dual space norm
\[\|f\|_{-1,p}:=\sup\{\langle f,\varphi\rangle_{1,q}:\ \varphi\in W^{1,q}_{0}(\Omega;\mathbb{R}^{n}),\|\varphi\|_{1,q}\leq 1\},\]
where \(\langle\cdot,\cdot\rangle_{1,q}:W^{-1,p}(\Omega;\mathbb{R}^{n})\times W^{1,q}_{0}(\Omega;\mathbb{R}^{n})\to\mathbb{R}\) is the dual pairing. Further, we introduce linear bounded operators \(A_{0}:W^{1,2}_{0}(\Omega;\mathbb{R}^{n})\to W^{-1,2}(\Omega;\mathbb{R}^{n})\) and, for \(\varepsilon>0\), \(A_{\varepsilon}:W^{1,2}_{0}(\Omega;\mathbb{R}^{n})\to W^{-1,2}(\Omega;\mathbb{R}^{n})\) by
\[\left.\begin{array}{l}\langle A_{0}u,\varphi\rangle_{1,2}:=\int_{\Omega}\hat{a}_{ij}^{\alpha\beta}\partial_{x_{j}}u^{\beta}(x)\partial_{x_{i}}\varphi^{\alpha}(x)dx,\\ \langle A_{\varepsilon}u,\varphi\rangle_{1,2}:=\int_{\Omega}a_{ij}^{\alpha\beta}(x/\varepsilon)\partial_{x_{j}}u^{\beta}(x)\partial_{x_{i}}\varphi^{\alpha}(x)dx,\end{array}\right\}\mbox{ for all }u,\varphi\in W^{1,2}_{0}(\Omega;\mathbb{R}^{n}). \tag{3.1}\]
Because of assumption (1.3) and the Hölder inequality we have the following: For any \(p\geq 2\) the restrictions of \(A_{0}\) and \(A_{\varepsilon}\) to \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) map \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) into \(W^{-1,p}(\Omega;\mathbb{R}^{n})\), and
\[\|A_{0}u\|_{-1,p}+\|A_{\varepsilon}u\|_{-1,p}\leq\mbox{const}\;\|u\|_{1,p}\mbox{ for all }u\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n}),\]
where the constant does not depend on \(\varepsilon\), \(p\) and \(u\). For \(\varepsilon>0\) and \(u\in W^{1,2}_{0}(\Omega;\mathbb{R}^{n})\) define \(\tilde{u}_{\varepsilon}(y):=u(\varepsilon y)\), if \(\varepsilon y\in\Omega\), and \(\tilde{u}_{\varepsilon}(y):=0\), if \(\varepsilon y\notin\Omega\). Then (1.5) implies that
\[\langle A_{\varepsilon}u,u\rangle_{1,2} = \int_{\Omega}a_{ij}^{\alpha\beta}(x/\varepsilon)\partial_{x_{j}}u^{\beta}(x)\partial_{x_{i}}u^{\alpha}(x)dx=\int_{\mathbb{R}^{2}}a_{ij}^{\alpha\beta}(y)\partial_{y_{j}}\tilde{u}_{\varepsilon}^{\beta}(y)\partial_{y_{i}}\tilde{u}_{\varepsilon}^{\alpha}(y)dy\]
\[\geq a\int_{\mathbb{R}^{2}}\partial_{y_{i}}\tilde{u}_{\varepsilon}^{\alpha}(y)\partial_{y_{i}}\tilde{u}_{\varepsilon}^{\alpha}(y)dy=a\|u\|_{1,2}^{2}\]
and similarly for \(A_{0}\). Therefore K.
Gröger's results [10, Theorems 1 and 2] imply that there exists \(p_{1}>2\) such that for all \(\varepsilon>0\) and all \(p\in[2,p_{1}]\) the linear operators \(A_{0}\) and \(A_{\varepsilon}\) are bijective from \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) onto \(W^{-1,p}(\Omega;\mathbb{R}^{n})\) and that
\[\|A_{0}^{-1}f\|_{1,p}+\|A_{\varepsilon}^{-1}f\|_{1,p}\leq\mbox{const}\;\|f\|_{-1,p}\mbox{ for all }f\in W^{-1,p}(\Omega;\mathbb{R}^{n}), \tag{3.2}\]
where the constant does not depend on \(\varepsilon\), \(p\) and \(f\). In particular, we have
\[u_{0}\in W^{1,p_{1}}_{0}(\Omega;\mathbb{R}^{n}) \tag{3.3}\]
because \(u_{0}\) is a weak solution to the homogenized boundary value problem (1.9) with diffusion coefficients satisfying (1.8).
**Remark 3.1**: _(i) Estimates of the type (3.2) are often called Meyers' estimates (or estimates of Gröger-Meyers type) because of the initiating paper [16] of N.G. Meyers._
_(ii) It is easy to verify that the linear operators \(A_{\varepsilon}\) do not converge for \(\varepsilon\to 0\) in the uniform operator norm in \({\cal L}(W^{1,p}_{0}(\Omega;\mathbb{R}^{n});W^{-1,p}(\Omega;\mathbb{R}^{n}))\) for certain \(p\geq 2\), in general (see [8, Remark 8.4] and Lemma 3.3 below)._
Finally, we introduce a superposition operator \(B:L^{\infty}(\Omega;\mathbb{R}^{n})\to L^{\infty}(\Omega;\mathbb{R}^{n})\) by
\[[B(u)](x):=\left(b^{1}(x,u(x)),\ldots,b^{n}(x,u(x))\right)\mbox{ for almost all }x\in\Omega. \tag{3.4}\]
Here and in what follows we consider the function space \(L^{\infty}(\Omega;\mathbb{R}^{n})\) with the norm \(\|\cdot\|_{\infty}\) (defined in (1.10)), as usual. Remark that for any \(p>2\) the operator \(B\) can be considered as a map from \(W^{1,p}(\Omega;\mathbb{R}^{n})\) into \(W^{-1,p}(\Omega;\mathbb{R}^{n})\) because the space \(W^{1,p}(\Omega;\mathbb{R}^{n})\) is continuously embedded into the space \(L^{\infty}(\Omega;\mathbb{R}^{n})\) (because the dimension of \(\Omega\) is two), and the space \(L^{\infty}(\Omega;\mathbb{R}^{n})\) is continuously embedded into the space \(W^{-1,p}(\Omega;\mathbb{R}^{n})=W^{1,q}_{0}(\Omega;\mathbb{R}^{n})^{*}\) via
\[\langle u,\varphi\rangle_{1,q}:=\int_{\Omega}u(x)\varphi(x)dx\mbox{ for }u\in L^{\infty}(\Omega;\mathbb{R}^{n})\mbox{ and }\varphi\in W^{1,q}_{0}(\Omega;\mathbb{R}^{n}).\]
Because of assumption (1.4) the nonlinear operator \(B\) is \(C^{1}\)-smooth from \(L^{\infty}(\Omega;\mathbb{R}^{n})\) into \(L^{\infty}(\Omega;\mathbb{R}^{n})\) and, hence, from \(W^{1,p}(\Omega;\mathbb{R}^{n})\) into \(W^{-1,p}(\Omega;\mathbb{R}^{n})\), with
\[[B^{\prime}(u)v](x):=\left(\partial_{u^{\gamma}}b^{1}(x,u(x))v^{\gamma}(x),\ldots,\partial_{u^{\gamma}}b^{n}(x,u(x))v^{\gamma}(x)\right)\text{ for almost all }x\in\Omega,\]
and for all \(u\in L^{\infty}(\Omega;\mathbb{R}^{n})\) we have
\[\left.\begin{array}{l}\lim_{\|w\|_{\infty}\to 0}\|B(v+w)-B(v)\|_{-1,p}=0\text{ uniformly with respect to }\|v-u\|_{\infty}\leq 1,\\ \lim_{\|v\|_{\infty}\to 0}\|(B^{\prime}(u+v)-B^{\prime}(u))w\|_{-1,p}=0\text{ uniformly with respect to }\|w\|_{1,p}\leq 1.\end{array}\right\} \tag{3.5}\]
Moreover, using the notation (3.1) and (3.4) we get
\[A_{0}u_{0}+B(u_{0})=0. \tag{3.6}\]
Now we introduce the abstract setting of Corollary 2.3 for the boundary value problem (1.1). We take the \(p_{0}\) from assertion (ii) in Theorem 1.1 and the \(p_{1}\) from above and fix \(p\) and \(q\) as follows:
\[2<p\leq\min\{p_{0},p_{1}\},\;q:=\frac{p}{p-1}.
\tag{3.7}\]
The Banach spaces \(U\) and \(V\) and their norms are defined by
\[U:=W^{1,p}_{0}(\Omega;\mathbb{R}^{n}),\;V:=W^{-1,p}(\Omega;\mathbb{R}^{n})=W^{1,q}_{0}(\Omega;\mathbb{R}^{n})^{*},\]
\[\|\cdot\|_{U}:=\|\cdot\|_{1,p},\;\|\cdot\|:=\|\cdot\|_{\infty},\;\|\cdot\|_{V}:=\|\cdot\|_{-1,p}.\]
Because the space dimension is two, the assumption (2.8) of Corollary 2.3 is satisfied in this setting. Further, the \(C^{1}\)-smooth operators \(F_{\varepsilon}:U\to V\) of Theorem 2.1 are defined by
\[F_{\varepsilon}(u):=A_{\varepsilon}u+B(u).\]
With these choices a vector function \(u\) is a weak solution to the boundary value problem (1.1) if and only if \(u\) belongs to the function space \(U\) and satisfies the operator equation \(F_{\varepsilon}(u)=0\). Here we used [10] again. Finally, we define the element \(u_{0}\in U\) of Corollary 2.3 to be the weak solution \(u_{0}\) to the homogenized boundary value problem (1.9), which is given in Theorem 1.1.
In order to prove Theorem 1.1 we have to choose the family \(\bar{u}_{\varepsilon}\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) such that assumption (2.2) is satisfied in the setting introduced above, i.e. that there exist \(\varepsilon_{0}>0\) and \(c>0\) such that
\[\left.\begin{array}{l}\text{for all }\varepsilon\in(0,\varepsilon_{0}]\text{ the operators }A_{\varepsilon}+B^{\prime}(u_{0})\text{ are Fredholm}\\ \text{of index zero from }W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\text{ into }W^{-1,p}(\Omega;\mathbb{R}^{n}),\text{ and }\\ \|(A_{\varepsilon}+B^{\prime}(u_{0}))u\|_{-1,p}\geq c\|u\|_{1,p}\text{ for all }u\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n}),\end{array}\right\} \tag{3.8}\]
that assumption (2.9) is satisfied in the setting introduced above, i.e.
\[\|\bar{u}_{\varepsilon}-u_{0}\|_{\infty}\to 0\text{ for }\varepsilon\to 0, \tag{3.9}\]
that assumption (2.10) is satisfied in the setting introduced above, i.e.
\[\lim_{\varepsilon+\|u\|_{\infty}\to 0}\|(B^{\prime}(u_{0}+u)-B^{\prime}(u_{0}))v\|_{-1,p}=0\text{ uniformly with respect to }\|v\|_{1,p}\leq 1, \tag{3.10}\]
and, for proving assertion (i) of Theorem 1.1, that
\[\|A_{\varepsilon}\bar{u}_{\varepsilon}+B(\bar{u}_{\varepsilon})\|_{-1,p}=o(1)\text{ for }\varepsilon\to 0, \tag{3.11}\]
and, for proving assertion (ii) of Theorem 1.1, that
\[\|\bar{u}_{\varepsilon}-u_{0}\|_{\infty}+\|A_{\varepsilon}\bar{u}_{\varepsilon}+B(\bar{u}_{\varepsilon})\|_{-1,p}=O(\varepsilon^{1/p})\ \text{for}\ \varepsilon\to 0. \tag{3.12}\]
Because \(B^{\prime}\) does not depend on \(\varepsilon\) and because of the \(C^{1}\)-smoothness of \(B\) and (3.9), for (3.10)-(3.12) it suffices to verify that
\[\lim_{\|u\|_{\infty}\to 0}\|(B^{\prime}(u_{0}+u)-B^{\prime}(u_{0}))v\|_{-1,p}=0\ \text{uniformly with respect to}\ \|v\|_{1,p}\leq 1 \tag{3.13}\]
and
\[\|A_{\varepsilon}\bar{u}_{\varepsilon}+B(u_{0})\|_{-1,p}=o(1)\ \text{for}\ \varepsilon\to 0, \tag{3.14}\]
and
\[\|\bar{u}_{\varepsilon}-u_{0}\|_{\infty}+\|A_{\varepsilon}\bar{u}_{\varepsilon}+B(u_{0})\|_{-1,p}=O(\varepsilon^{1/p})\ \text{for}\ \varepsilon\to 0. \tag{3.15}\]
Moreover, (3.13) is true because of (3.5), and because of (3.6) the conditions (3.14) and (3.15) are equivalent to
\[\|A_{\varepsilon}\bar{u}_{\varepsilon}-A_{0}u_{0}\|_{-1,p}=o(1)\ \text{for}\ \varepsilon\to 0, \tag{3.16}\]
and
\[\|\bar{u}_{\varepsilon}-u_{0}\|_{\infty}+\|A_{\varepsilon}\bar{u}_{\varepsilon}-A_{0}u_{0}\|_{-1,p}=O(\varepsilon^{1/p})\ \text{for}\ \varepsilon\to 0.
\tag{3.17}\]
Hence, we have to verify (3.8), and for proving assertion (i) of Theorem 1.1 we have to construct a family \(\bar{u}_{\varepsilon}\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) with (3.9) and (3.16), and for proving assertion (ii) of Theorem 1.1 we have to construct a family \(\bar{u}_{\varepsilon}\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) with (3.17).
### Construction of approximate solutions with (3.9) and (3.16)
In this subsection we will construct a family \(\bar{u}_{\varepsilon}\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) with (3.9) and (3.16). For that we will do some calculations which are well-known in homogenization theory (cf., e.g. [23, Chapter 3.2]); we present them for the convenience of the reader.
For \(\varepsilon>0\) we set
\[\Omega_{\varepsilon}:=\left\{x\in\Omega:\ \inf_{y\in\partial\Omega}\left((x_{1}-y_{1})^{2}+(x_{2}-y_{2})^{2}\right)<\varepsilon^{2}\right\}.\]
It follows that
\[\text{meas}\ \Omega_{\varepsilon}=O(\varepsilon)\ \text{for}\ \varepsilon\to 0, \tag{3.18}\]
where \(\text{meas}\ \Omega_{\varepsilon}\) is the two-dimensional Lebesgue measure of \(\Omega_{\varepsilon}\). Further, we take a family \(\eta_{\varepsilon}\) of cut-off functions of size \(\varepsilon\), i.e. of \(C^{\infty}\)-functions \(\Omega\to\mathbb{R}\) such that
\[\left.\begin{array}{l}0\leq\eta_{\varepsilon}(x)\leq 1\ \text{for}\ x\in\Omega,\\ \eta_{\varepsilon}(x)=1\ \text{for}\ x\in\Omega\setminus\Omega_{2\varepsilon},\\ \eta_{\varepsilon}(x)=0\ \text{for}\ x\in\Omega_{\varepsilon},\\ \sup\left\{\varepsilon\left|\partial_{x_{i}}\eta_{\varepsilon}(x)\right|:\ \varepsilon>0,\ x\in\Omega,\ i=1,2\right\}<\infty.\end{array}\right\} \tag{3.19}\]
Finally, we take a mollifier function, i.e. a \(C^{\infty}\)-function \(\rho:\mathbb{R}^{2}\to\mathbb{R}\) such that
\[\rho(x)\geq 0\ \text{and}\ \rho(x)=\rho(-x)\ \text{for all}\ x\in\mathbb{R}^{2},\ \rho(x)=0\ \text{for}\ x_{1}^{2}+x_{2}^{2}\geq 1,\ \int_{\mathbb{R}^{2}}\rho(x)dx=1,\]
and for \(\delta>0\) we define linear smoothing operators \(S_{\delta}:L^{1}(\Omega)\to C^{\infty}(\mathbb{R}^{2})\) by
\[[S_{\delta}u](x):=\int_{\Omega}\rho_{\delta}(x-y)u(y)dy\ \text{with}\ \rho_{\delta}(x):=\rho(x/\delta)/\delta^{2}.\]
**Lemma 3.2**: _(i) For all \(r\geq 1\) and \(u\in L^{r}(\Omega)\) we have_
\[\lim_{\delta\to 0}\int_{\Omega}|u(x)-[S_{\delta}u](x)|^{r}dx=0. \tag{3.20}\]
_(ii) For all \(r\geq 1\) there exists \(c_{r}>0\) such that for all \(\delta>0\), \(i=1,2\) and \(u\in L^{r}(\Omega)\) we have_
\[\int_{\Omega}|[S_{\delta}u](x)|^{r}\,dx \leq \int_{\Omega}|u(x)|^{r}\,dx, \tag{3.21}\]
\[\int_{\Omega}|[\partial_{x_{i}}S_{\delta}u](x)|^{r}\,dx \leq \frac{c_{r}}{\delta^{r}}\int_{\Omega}|u(x)|^{r}\,dx, \tag{3.22}\]
\[\sup_{x\in\Omega}|[S_{\delta}u](x)|^{r} \leq \frac{c_{r}}{\delta^{2}}\int_{\Omega}|u(x)|^{r}\,dx. \tag{3.23}\]
**Proof** Assertion (i) is proved e.g. in [1, Lemma 1.1.1]. In order to prove assertion (ii) take \(\delta>0\), \(r,s>1\) with \(1/r+1/s=1\), and take \(u\in L^{r}(\Omega)\). Then the Hölder inequality implies that for all \(x\in\Omega\) we have
\[|[S_{\delta}u](x)|=\left|\int_{\Omega}u(y)\rho_{\delta}(x-y)^{1/r}\rho_{\delta}(x-y)^{1/s}dy\right|\leq\left(\int_{\Omega}|u(y)|^{r}\rho_{\delta}(x-y)dy\right)^{1/r}.\]
Here we used that \(\int_{\mathbb{R}^{2}}\rho_{\delta}(x-y)dy=\int_{\mathbb{R}^{2}}\rho(z)dz=1.\) It follows that
\[|[S_{\delta}u](x)|^{r}\leq\frac{1}{\delta^{2}}\int_{\Omega}|u(y)|^{r}\rho((x-y)/\delta)dy\leq\mbox{const }\frac{1}{\delta^{2}}\int_{\Omega}|u(y)|^{r}dy,\]
i.e. (3.23) is proved.
Further, we have
\[\int_{\Omega}|[S_{\delta}u](x)|^{r}dx\leq\int_{\Omega}\int_{\Omega}|u(y)|^{r}\rho_{\delta}(x-y)dydx=\int_{\Omega}\int_{\Omega}\rho_{\delta}(x-y)dx\,|u(y)|^{r}dy\leq\int_{\Omega}|u(y)|^{r}dy,\]
i.e. (3.21) is proved. Finally, because of \(\int_{\mathbb{R}^{2}}|\partial_{x_{i}}\rho_{\delta}(x-y)|dx=\frac{1}{\delta^{3}}\int_{\mathbb{R}^{2}}|\partial_{x_{i}}\rho((x-y)/\delta)|dx=\frac{1}{\delta}\int_{\mathbb{R}^{2}}|\partial_{x_{i}}\rho(z)|dz\) one gets similarly
\[|[\partial_{x_{i}}S_{\delta}u](x)| \leq \left(\int_{\Omega}|u(y)|^{r}|\partial_{x_{i}}\rho_{\delta}(x-y)|dy\right)^{1/r}\left(\int_{\Omega}|\partial_{x_{i}}\rho_{\delta}(x-y)|dy\right)^{1/s}\]
\[\leq \mbox{const }\frac{1}{\delta^{1/s}}\left(\int_{\Omega}|u(y)|^{r}|\partial_{x_{i}}\rho_{\delta}(x-y)|dy\right)^{1/r}\]
and, hence,
\[\int_{\Omega}|[\partial_{x_{i}}S_{\delta}u](x)|^{r}\,dx \leq \mbox{const }\frac{1}{\delta^{r/s}}\int_{\Omega}|u(y)|^{r}\int_{\Omega}|\partial_{x_{i}}\rho_{\delta}(x-y)|dx\;dy \leq \mbox{const }\frac{1}{\delta^{r}}\int_{\Omega}|u(y)|^{r}dy,\]
i.e. (3.22) is proved.
It is well-known (cf., e.g. [23, Chapter 2.2]) that, if the exponent \(p_{1}>2\) is taken sufficiently small, we have
\[v_{i}^{\alpha\beta}\in W^{1,p_{1}}_{\rm loc}(\mathbb{R}^{2}). \tag{3.24}\]
Using this and (3.3) and (3.7), we define, for \(\varepsilon>0\), linear operators \(K_{\varepsilon}:W^{1,p}(\Omega;\mathbb{R}^{n})\to W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) by
\[[K_{\varepsilon}u]^{\alpha}(x):=\varepsilon\eta_{\varepsilon}(x)[S_{\delta_{\varepsilon}}\partial_{x_{k}}u^{\gamma}](x)v_{k}^{\alpha\gamma}(x/\varepsilon)\mbox{ with }\delta_{\varepsilon}:=\varepsilon^{1/4}. \tag{3.25}\]
**Lemma 3.3**: _For all \(u\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) we have_
\[\lim_{\varepsilon\to 0}\langle A_{\varepsilon}(u+K_{\varepsilon}u)-A_{0}u,\varphi\rangle_{1,q}=0\mbox{ uniformly with respect to }\|\varphi\|_{1,q}\leq 1, \tag{3.26}\]
_and for all \(\varphi\in W^{1,q}_{0}(\Omega;\mathbb{R}^{n})\) we have_
\[\lim_{\varepsilon\to 0}\langle A_{\varepsilon}(u+K_{\varepsilon}u)-A_{0}u,\varphi\rangle_{1,q}=0\mbox{ uniformly with respect to }\|u\|_{1,p}\leq 1. \tag{3.27}\]
**Proof** For \(\alpha,\beta=1,\ldots,n\) and \(i,j,k=1,2\) we define \(\mathbb{Z}^{2}\)-periodic functions \(b^{\alpha\beta}_{ij}\in L^{p}_{\rm loc}(\mathbb{R}^{2})\) and \(c^{\alpha\beta}_{ij}\in W^{2,p}_{\rm loc}(\mathbb{R}^{2})\) and \(\phi^{\alpha\beta}_{ijk}\in W^{1,p}_{\rm loc}(\mathbb{R}^{2})\) (the functions \(\phi^{\alpha\beta}_{ijk}\) are sometimes called dual or flux correctors) by
\[b^{\alpha\beta}_{ij}(y):=a^{\alpha\beta}_{ij}(y)+a^{\alpha\gamma}_{ik}(y)\partial_{y_{k}}v^{\gamma\beta}_{j}(y)-\hat{a}^{\alpha\beta}_{ij} \tag{3.28}\]
and
\[\Delta c^{\alpha\beta}_{ij}(y)=b^{\alpha\beta}_{ij}(y),\;\int_{[0,1]^{2}}c^{\alpha\beta}_{ij}(y)dy=0 \tag{3.29}\]
and
\[\phi^{\alpha\beta}_{ijk}(y):=\partial_{y_{i}}c^{\alpha\beta}_{jk}(y)-\partial_{y_{j}}c^{\alpha\beta}_{ik}(y). \tag{3.30}\]
From (1.6) and (3.28) it follows that \(\int_{[0,1]^{2}}b^{\alpha\beta}_{ij}(y)dy=0\); therefore problem (3.29) is uniquely strongly solvable with respect to \(c^{\alpha\beta}_{ij}\). Further, from (1.7) it follows that \(\partial_{y_{i}}b^{\alpha\beta}_{ij}=0\). Hence, (3.29) implies that \(\partial_{y_{i}}c^{\alpha\beta}_{ij}=0\). Therefore (3.29) and (3.30) yield that
\[\partial_{y_{i}}\phi^{\alpha\beta}_{ijk}=b^{\alpha\beta}_{jk}\mbox{ and }\phi^{\alpha\beta}_{ijk}=-\phi^{\alpha\beta}_{jik}.
\tag{3.31}\]
Take a test function \(\varphi\in C^{\infty}(\Omega;\mathbb{R}^{n})\). Using (3.31) we get
\[\varepsilon\partial_{x_{k}}\left(\phi^{\alpha\beta}_{kij}(x/\varepsilon)\partial_{x_{i}}\varphi^{\beta}(x)\right)=b^{\alpha\beta}_{ij}(x/\varepsilon)\partial_{x_{i}}\varphi^{\beta}(x) \tag{3.32}\]
(this is [23, formula (3.1.5)]). Now, we take \(u\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\), insert (3.25) into \(\langle A_{\varepsilon}(u+K_{\varepsilon}u)-A_{0}u,\varphi\rangle_{1,q}\) and calculate as follows:
\[\langle A_{\varepsilon}(u+K_{\varepsilon}u)-A_{0}u,\varphi\rangle_{1,q}\]
\[=\int_{\Omega}\left(a^{\alpha\beta}_{ij}(x/\varepsilon)\partial_{x_{j}}\left(u^{\beta}+\varepsilon\eta_{\varepsilon}[S_{\delta_{\varepsilon}}\partial_{x_{k}}u^{\gamma}]v^{\beta\gamma}_{k}(x/\varepsilon)\right)-\hat{a}^{\alpha\beta}_{ij}\partial_{x_{j}}u^{\beta}\right)\partial_{x_{i}}\varphi^{\alpha}dx\]
\[=\int_{\Omega}\left(\left(a^{\alpha\beta}_{ij}(x/\varepsilon)-\hat{a}^{\alpha\beta}_{ij}\right)\partial_{x_{j}}u^{\beta}+a^{\alpha\beta}_{ij}(x/\varepsilon)\eta_{\varepsilon}[S_{\delta_{\varepsilon}}\partial_{x_{k}}u^{\gamma}]\partial_{y_{j}}v^{\beta\gamma}_{k}(x/\varepsilon)\right)\partial_{x_{i}}\varphi^{\alpha}dx\]
\[\qquad+\varepsilon\int_{\Omega}a^{\alpha\beta}_{ij}(x/\varepsilon)\partial_{x_{j}}(\eta_{\varepsilon}(x)[S_{\delta_{\varepsilon}}\partial_{x_{k}}u^{\gamma}](x))v^{\beta\gamma}_{k}(x/\varepsilon)\partial_{x_{i}}\varphi^{\alpha}(x)dx\]
\[=\int_{\Omega}\left(a^{\alpha\beta}_{ij}(x/\varepsilon)-\hat{a}^{\alpha\beta}_{ij}+a^{\alpha\gamma}_{ik}(x/\varepsilon)\partial_{y_{k}}v^{\gamma\beta}_{j}(x/\varepsilon)\right)\eta_{\varepsilon}(x)[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x)\partial_{x_{i}}\varphi^{\alpha}(x)dx\]
\[\qquad+\int_{\Omega}\left(a^{\alpha\beta}_{ij}(x/\varepsilon)-\hat{a}^{\alpha\beta}_{ij}\right)\left(\partial_{x_{j}}u^{\beta}(x)-\eta_{\varepsilon}(x)[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x)\right)\partial_{x_{i}}\varphi^{\alpha}(x)dx\]
\[\qquad+\varepsilon\int_{\Omega}a^{\alpha\beta}_{ij}(x/\varepsilon)\partial_{x_{j}}(\eta_{\varepsilon}(x)[S_{\delta_{\varepsilon}}\partial_{x_{k}}u^{\gamma}](x))v^{\beta\gamma}_{k}(x/\varepsilon)\partial_{x_{i}}\varphi^{\alpha}(x)dx. \tag{3.33}\]
We insert (3.28) and (3.32) into (3.33), integrate by parts and use that \(\phi^{\alpha\beta}_{kij}(x/\varepsilon)\partial_{x_{k}}\partial_{x_{i}}\varphi^{\alpha}(x)=0\) (cf.
(3.31)) and get
\[\langle A_{\varepsilon}(u+K_{\varepsilon}u)-A_{0}u,\varphi\rangle_{1,q}\]
\[=\int_{\Omega}b^{\alpha\beta}_{ij}(x/\varepsilon)\eta_{\varepsilon}(x)[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x)\partial_{x_{i}}\varphi^{\alpha}(x)dx\]
\[\qquad+\int_{\Omega}\left(a^{\alpha\beta}_{ij}(x/\varepsilon)-\hat{a}^{\alpha\beta}_{ij}\right)\left(\partial_{x_{j}}u^{\beta}(x)-\eta_{\varepsilon}(x)[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x)\right)\partial_{x_{i}}\varphi^{\alpha}(x)dx\]
\[\qquad+\varepsilon\int_{\Omega}a^{\alpha\beta}_{ij}(x/\varepsilon)\partial_{x_{j}}(\eta_{\varepsilon}(x)[S_{\delta_{\varepsilon}}\partial_{x_{k}}u^{\gamma}](x))v^{\beta\gamma}_{k}(x/\varepsilon)\partial_{x_{i}}\varphi^{\alpha}(x)dx\]
\[=\varepsilon\int_{\Omega}\left(-\phi^{\alpha\beta}_{kij}(x/\varepsilon)+a^{\alpha\gamma}_{ik}(x/\varepsilon)v^{\gamma\beta}_{j}(x/\varepsilon)\right)\partial_{x_{k}}(\eta_{\varepsilon}(x)[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x))\partial_{x_{i}}\varphi^{\alpha}(x)dx\]
\[\qquad+\int_{\Omega}\left(a^{\alpha\beta}_{ij}(x/\varepsilon)-\hat{a}^{\alpha\beta}_{ij}\right)\left(\partial_{x_{j}}u^{\beta}(x)-\eta_{\varepsilon}(x)[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x)\right)\partial_{x_{i}}\varphi^{\alpha}(x)dx. \tag{3.34}\]
Remark that no boundary integrals appeared after the integration by parts because of the cut-off functions \(\eta_{\varepsilon}\) (no matter whether the test function \(\varphi\) vanishes on \(\partial\Omega\) or not). Let us estimate the right-hand side of (3.34). Because of (3.3) and (3.24) and the Hölder inequality we have
\[\left|\varepsilon\int_{\Omega}\left(-\phi^{\alpha\beta}_{kij}(x/\varepsilon)+a^{\alpha\gamma}_{ik}(x/\varepsilon)v^{\gamma\beta}_{j}(x/\varepsilon)\right)\partial_{x_{k}}\eta_{\varepsilon}(x)[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x)\partial_{x_{i}}\varphi^{\alpha}(x)dx\right|\]
\[\leq\mbox{const }\varepsilon\left(\sum_{i=1}^{2}\int_{\Omega_{\varepsilon}}|\partial_{x_{i}}\eta_{\varepsilon}(x)|^{p}dx\right)^{1/p}\sum_{\beta=1}^{n}\sum_{j=1}^{2}\|[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}]\|_{\infty}\|\varphi\|_{1,q}\]
\[\leq\mbox{const }\frac{\varepsilon^{1/p}}{\delta_{\varepsilon}^{2/p}}\|u\|_{1,p}\|\varphi\|_{1,q}=\mbox{const }\varepsilon^{1/2p}\|u\|_{1,p}\|\varphi\|_{1,q} \tag{3.35}\]
(here we used (3.18), (3.19) and (3.23)) and
\[\left|\varepsilon\int_{\Omega}\left(-\phi^{\alpha\beta}_{kij}(x/\varepsilon)+a^{\alpha\gamma}_{ik}(x/\varepsilon)v^{\gamma\beta}_{j}(x/\varepsilon)\right)\eta_{\varepsilon}(x)[\partial_{x_{k}}S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x)\partial_{x_{i}}\varphi^{\alpha}(x)dx\right|\]
\[\leq\mbox{const }\frac{\varepsilon}{\delta_{\varepsilon}}\|u\|_{1,p}\|\varphi\|_{1,q}=\mbox{const }\varepsilon^{3/4}\|u\|_{1,p}\|\varphi\|_{1,q} \tag{3.36}\]
(here we used (3.22)) and
\[\left|\int_{\Omega}\left(a^{\alpha\beta}_{ij}(x/\varepsilon)-\hat{a}^{\alpha\beta}_{ij}\right)(1-\eta_{\varepsilon}(x))[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x)\partial_{x_{i}}\varphi^{\alpha}(x)dx\right|\]
\[\leq\mbox{const }\left(\int_{\Omega_{\varepsilon}}|1-\eta_{\varepsilon}(x)|^{p}dx\right)^{1/p}\sum_{\beta=1}^{n}\sum_{j=1}^{2}\|[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}]\|_{\infty}\|\varphi\|_{1,q}\]
\[\leq\mbox{const }\frac{\varepsilon^{1/p}}{\delta_{\varepsilon}^{2/p}}\|u\|_{1,p}\|\varphi\|_{1,q}=\mbox{const }\varepsilon^{1/2p}\|u\|_{1,p}\|\varphi\|_{1,q} \tag{3.37}\]
(here we used (3.18) and (3.23)),
where the constants do not depend on \(\varepsilon\), \(u\) and \(\varphi\). Further, we have
\[\left|\int_{\Omega}\left(a_{ij}^{\alpha\beta}(x/\varepsilon)-\hat{a}_{ij}^{\alpha\beta}\right)\left(\partial_{x_{j}}u^{\beta}(x)-[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x)\right)\partial_{x_{i}}\varphi^{\alpha}(x)dx\right|\]
\[\leq\text{const}\left(\sum_{\beta=1}^{n}\sum_{j=1}^{2}\int_{\Omega}\left|\partial_{x_{j}}u^{\beta}(x)-[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x)\right|^{p}dx\right)^{1/p}\|\varphi\|_{1,q}, \tag{3.38}\]
where the constant does not depend on \(\varepsilon\), \(u\) and \(\varphi\) again. But the right-hand side of (3.38) is \(o(1)\) for \(\varepsilon\to 0\) uniformly with respect to \(\|\varphi\|_{1,q}\leq 1\) (cf. (3.20)). Hence, (3.26) is proved.
In order to prove (3.27) we change the estimate (3.38) as follows:
\[\left|\int_{\Omega}\left(a_{ij}^{\alpha\beta}(x/\varepsilon)-\hat{a}_{ij}^{\alpha\beta}\right)\left(\partial_{x_{j}}u^{\beta}(x)-[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x)\right)\partial_{x_{i}}\varphi^{\alpha}(x)dx\right|\]
\[\leq\text{const}\int_{\Omega}\left|\left(\partial_{x_{j}}u^{\beta}(x)-[S_{\delta_{\varepsilon}}\partial_{x_{j}}u^{\beta}](x)\right)\partial_{x_{i}}\varphi^{\alpha}(x)\right|dx\]
\[=\text{const}\int_{\Omega}\left|\partial_{x_{j}}u^{\beta}(x)\left(\partial_{x_{i}}\varphi^{\alpha}(x)-[S_{\delta_{\varepsilon}}\partial_{x_{i}}\varphi^{\alpha}](x)\right)\right|dx\]
\[\leq\text{const}\left(\sum_{\alpha=1}^{n}\sum_{i=1}^{2}\int_{\Omega}\left|\partial_{x_{i}}\varphi^{\alpha}(x)-[S_{\delta_{\varepsilon}}\partial_{x_{i}}\varphi^{\alpha}](x)\right|^{q}dx\right)^{1/q}\|u\|_{1,p}, \tag{3.39}\]
where the constants do not depend on \(\varepsilon\), \(u\) and \(\varphi\) again. This time the right-hand side of (3.39) is \(o(1)\) for \(\varepsilon\to 0\) uniformly with respect to \(\|u\|_{1,p}\leq 1\). Hence, (3.27) is proved.
Now we are prepared to define the needed family \(\bar{u}_{\varepsilon}\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\), which satisfies (3.9) and (3.16), as follows:
\[\bar{u}_{\varepsilon}^{\alpha}(x):=u_{0}^{\alpha}(x)+[K_{\varepsilon}u_{0}]^{\alpha}(x)=u_{0}^{\alpha}(x)+\varepsilon\eta_{\varepsilon}(x)[S_{\delta_{\varepsilon}}\partial_{x_{k}}u_{0}^{\gamma}](x)v_{k}^{\alpha\gamma}(x/\varepsilon)\text{ with }\delta_{\varepsilon}:=\varepsilon^{1/4}. \tag{3.40}\]
Because of (3.3), (3.24), (3.7) and (3.23) we have that
\[\|\bar{u}_{\varepsilon}-u_{0}\|_{\infty}\leq\text{const}\ \frac{\varepsilon}{\delta_{\varepsilon}^{2/p}}\|u_{0}\|_{1,p}\leq\text{const}\ \varepsilon^{(2p-1)/2p},\]
where the constants do not depend on \(\varepsilon\). Hence, (3.9) is satisfied. Further, condition (3.16) follows from (3.26).
### Construction of approximate solutions with (3.17)
In this subsection we will construct a family \(\bar{u}_{\varepsilon}\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) with (3.17) under the assumption that
\[u_{0}\in W^{2,p_{0}}(\Omega;\mathbb{R}^{n}). \tag{3.41}\]
Because of (3.41), in the definition of the family \(\bar{u}_{\varepsilon}\) we do not need the smoothing operators \(S_{\delta}\), i.e.
this time we define
\[\bar{u}_{\varepsilon}^{\alpha}(x):=u_{0}^{\alpha}(x)+\varepsilon\eta_{\varepsilon}(x)\partial_{x_{k}}u_{0}^{\gamma}(x)v_{k}^{\alpha\gamma}(x/\varepsilon), \tag{3.42}\]
and because of (3.24) and (3.41) we have that
\[\|\bar{u}_{\varepsilon}-u_{0}\|_{\infty}=O(\varepsilon)\text{ for }\varepsilon\to 0.\]
In order to verify (3.17) we proceed as in (3.34):
\[\int_{\Omega}\Big{(}a^{\alpha\beta}_{ij}(x/\varepsilon)\partial_{x_{j}}\bar{u}^{\beta}_{\varepsilon}(x)-\hat{a}^{\alpha\beta}_{ij}\partial_{x_{j}}u^{\beta}_{0}(x)\Big{)}\partial_{x_{i}}\varphi^{\alpha}(x)dx\]
\[=\int_{\Omega}\Big{(}a^{\alpha\beta}_{ij}(x/\varepsilon)\partial_{x_{j}}\left(u^{\beta}_{0}+\varepsilon\eta_{\varepsilon}\partial_{x_{k}}u^{\gamma}_{0}v^{\beta\gamma}_{k}(x/\varepsilon)\right)-\hat{a}^{\alpha\beta}_{ij}\partial_{x_{j}}u^{\beta}_{0}\Big{)}\,\partial_{x_{i}}\varphi^{\alpha}dx\]
\[=\varepsilon\int_{\Omega}\left(-\phi^{\alpha\beta}_{kij}(x/\varepsilon)+a^{\alpha\gamma}_{ik}(x/\varepsilon)v^{\gamma\beta}_{j}(x/\varepsilon)\right)\partial_{x_{k}}(\eta_{\varepsilon}(x)\partial_{x_{j}}u^{\beta}_{0}(x))\partial_{x_{i}}\varphi^{\alpha}(x)dx\]
\[\qquad+\int_{\Omega}\left(a^{\alpha\beta}_{ij}(x/\varepsilon)-\hat{a}^{\alpha\beta}_{ij}\right)\left(\partial_{x_{j}}u^{\beta}_{0}(x)-\eta_{\varepsilon}(x)\partial_{x_{j}}u^{\beta}_{0}(x)\right)\partial_{x_{i}}\varphi^{\alpha}(x)dx.\]
And as in (3.35)-(3.37) one estimates as follows:
\[\left|\varepsilon\int_{\Omega}\Big{(}-\phi^{\alpha\beta}_{kij}(x/\varepsilon)+a^{\alpha\gamma}_{ik}(x/\varepsilon)v^{\gamma\beta}_{j}(x/\varepsilon)\Big{)}\,\partial_{x_{k}}\eta_{\varepsilon}(x)\partial_{x_{j}}u^{\beta}_{0}(x)\partial_{x_{i}}\varphi^{\alpha}(x)dx\right|\]
\[\leq\text{const }\varepsilon\left(\sum_{i=1}^{2}\int_{\Omega_{\varepsilon}}|\partial_{x_{i}}\eta_{\varepsilon}(x)|^{p}dx\right)^{1/p}\sum_{\beta=1}^{n}\sum_{j=1}^{2}\|\partial_{x_{j}}u^{\beta}_{0}\|_{\infty}\|\varphi\|_{1,q}\leq\text{const }\varepsilon^{1/p}\|\varphi\|_{1,q}\]
and
\[\left|\varepsilon\int_{\Omega}\left(-\phi^{\alpha\beta}_{kij}(x/\varepsilon)+a^{\alpha\gamma}_{ik}(x/\varepsilon)v^{\gamma\beta}_{j}(x/\varepsilon)\right)\eta_{\varepsilon}(x)\partial_{x_{k}}\partial_{x_{j}}u^{\beta}_{0}(x)\partial_{x_{i}}\varphi^{\alpha}(x)dx\right|\leq\text{const }\varepsilon\|\varphi\|_{1,q}\]
and
\[\left|\int_{\Omega}\left(a^{\alpha\beta}_{ij}(x/\varepsilon)-\hat{a}^{\alpha\beta}_{ij}\right)\left(1-\eta_{\varepsilon}(x)\right)\partial_{x_{j}}u^{\beta}_{0}(x)\partial_{x_{i}}\varphi^{\alpha}(x)dx\right|\]
\[\leq\text{const}\left(\int_{\Omega_{\varepsilon}}|1-\eta_{\varepsilon}(x)|^{p}dx\right)^{1/p}\|\varphi\|_{1,q}\leq\text{const }\varepsilon^{1/p}\|\varphi\|_{1,q},\]
where the constants do not depend on \(\varepsilon\) and \(\varphi\). Hence, (3.17) is proved.
### Verification of (3.8)
The linear operators \(A_{\varepsilon}\) are isomorphisms from \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) onto \(W^{-1,p}(\Omega;\mathbb{R}^{n})\) (cf. (3.2)), and the linear operators \(B^{\prime}(u_{0})\) are bounded from \(L^{\infty}(\Omega;\mathbb{R}^{n})\) into \(L^{\infty}(\Omega;\mathbb{R}^{n})\) and, hence, compact from \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) into \(W^{-1,p}(\Omega;\mathbb{R}^{n})\). Hence, condition (3.8) is satisfied if there exists \(\varepsilon_{0}>0\) such that
\[\inf\left\{\|(A_{\varepsilon}+B^{\prime}(u_{0}))u\|_{-1,p}:\;\varepsilon\in(0,\varepsilon_{0}],u\in U,\|u\|_{1,p}=1\right\}>0.\]
Suppose that this is not true.
Then there exist sequences \(\varepsilon_{1},\varepsilon_{2},\ldots>0\) and \(u_{1},u_{2},\ldots\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) such that \(\varepsilon_{l}\to 0\) for \(l\to\infty\) and
\[\lim_{l\to\infty}\|(A_{\varepsilon_{l}}+B^{\prime}(u_{0}))u_{l}\|_{-1,p}=0, \tag{3.43}\]
but
\[\|u_{l}\|_{1,p}=1\text{ for all }l. \tag{3.44}\]
Because \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) is reflexive and because it is compactly embedded into \(L^{\infty}(\Omega;\mathbb{R}^{n})\), without loss of generality we may assume that there exists \(u_{*}\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) such that the sequence \(u_{1},u_{2},\ldots\) converges to \(u_{*}\) weakly in \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) and
\[\lim_{l\to\infty}\|u_{l}-u_{*}\|_{\infty}=0. \tag{3.45}\]
From (3.45) it follows that \(\|B^{\prime}(u_{0})(u_{l}-u_{*})\|_{\infty}\to 0\) and, hence, that \(\|B^{\prime}(u_{0})(u_{l}-u_{*})\|_{-1,p}\to 0\) for \(l\to\infty\). Therefore (3.43) implies that
\[\lim_{l\to\infty}\|A_{\varepsilon_{l}}u_{l}+B^{\prime}(u_{0})u_{*}\|_{-1,p}=0. \tag{3.46}\]
**Lemma 3.4**: _There exists \(l_{0}\in\mathbb{N}\) such that for any \(l\in\mathbb{N}\) with \(l\geq l_{0}\) there exists \(w_{l}\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) such that_
\[u_{l}=w_{l}+K_{\varepsilon_{l}}w_{l} \tag{3.47}\]
_and_
\[\lim_{l\to\infty}\langle f,w_{l}-u_{*}\rangle_{1,p}=0\mbox{ for all }f\in W^{-1,q}(\Omega;\mathbb{R}^{n}). \tag{3.48}\]
**Proof** The operators \(K_{\varepsilon_{l}}\) are bounded from \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) into \(W^{2,p}(\Omega;\mathbb{R}^{n})\cap W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) (cf. (3.21), (3.22) and (3.25)) and, hence, compact from \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) into \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\). Therefore the operators \(I+K_{\varepsilon_{l}}\) are Fredholm of index zero from \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) into \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\). Let us show that for large \(l\) the operators \(I+K_{\varepsilon_{l}}\) are injective (and, hence, bijective from \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) onto \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\)). Suppose the contrary. Then without loss of generality we may assume that there exists a sequence \(\bar{w}_{1},\bar{w}_{2},\ldots\in W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) such that
\[\bar{w}_{l}+K_{\varepsilon_{l}}\bar{w}_{l}=0\mbox{ and }\|\bar{w}_{l}\|_{\infty}=1\mbox{ for all }l. \tag{3.49}\]
It follows that \(A_{\varepsilon_{l}}(\bar{w}_{l}+K_{\varepsilon_{l}}\bar{w}_{l})=0\) for all \(l\), and because of (3.27) we get that for any \(\varphi\in W^{1,q}_{0}(\Omega;\mathbb{R}^{n})\) we have
\[0=\lim_{l\to\infty}\langle A_{\varepsilon_{l}}(\bar{w}_{l}+K_{\varepsilon_{l}}\bar{w}_{l})-A_{0}\bar{w}_{l},\varphi\rangle_{1,q}=-\lim_{l\to\infty}\langle A_{0}\bar{w}_{l},\varphi\rangle_{1,q},\]
i.e. the sequence \(A_{0}\bar{w}_{1},A_{0}\bar{w}_{2},\ldots\) converges to zero weakly in \(W^{-1,p}(\Omega;\mathbb{R}^{n})\). But (1.8) yields that \(A_{0}\) is an isomorphism from \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) onto \(W^{-1,p}(\Omega;\mathbb{R}^{n})\), hence the sequence \(\bar{w}_{1},\bar{w}_{2},\ldots\) converges to zero weakly in \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\) and, therefore, strongly in \(L^{\infty}(\Omega;\mathbb{R}^{n})\). But this contradicts (3.49). Finally, let us prove (3.48). Take a test function \(\varphi\in W^{1,q}_{0}(\Omega;\mathbb{R}^{n})\).
Because of (3.27), (3.46) and (3.47) we have
\[-\langle B^{\prime}(u_{0})u_{*},\varphi\rangle_{1,q}=\lim_{l\to\infty}\langle A_{\varepsilon_{l}}u_{l},\varphi\rangle_{1,q}=\lim_{l\to\infty}\langle A_{\varepsilon_{l}}(w_{l}+K_{\varepsilon_{l}}w_{l}),\varphi\rangle_{1,q}=\lim_{l\to\infty}\langle A_{0}w_{l},\varphi\rangle_{1,q}. \tag{3.50}\]
Hence, the sequence \(A_{0}w_{1},A_{0}w_{2},\ldots\) converges weakly in \(W^{-1,p}(\Omega;\mathbb{R}^{n})\), and therefore the sequence \(w_{1},w_{2},\ldots\) converges weakly in \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\). In particular, the sequence \(w_{1},w_{2},\ldots\) is bounded in \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\). Hence, (3.23), (3.25) and (3.45) yield that
\[\|w_{l}-u_{*}\|_{\infty} \leq \|u_{l}-u_{*}\|_{\infty}+\|K_{\varepsilon_{l}}w_{l}\|_{\infty}\]
\[\leq \mbox{const }\left(\|u_{l}-u_{*}\|_{\infty}+\varepsilon_{l}\sum_{\gamma=1}^{n}\sum_{i=1}^{2}\|S_{\delta_{\varepsilon_{l}}}\partial_{x_{i}}w_{l}^{\gamma}\|_{\infty}\right)\]
\[\leq \mbox{const }\left(\|u_{l}-u_{*}\|_{\infty}+\varepsilon_{l}^{(2p-1)/2p}\|w_{l}\|_{1,p}\right)\to 0\mbox{ for }l\to\infty.\]
Therefore the weak \(W^{1,p}_{0}(\Omega;\mathbb{R}^{n})\)-limit of the sequence \(w_{1},w_{2},\ldots\) equals its strong \(L^{\infty}(\Omega;\mathbb{R}^{n})\)-limit, that is, \(u_{*}\).
Now, because of (3.48) and (3.50) we have that \(\langle(A_{0}+B^{\prime}(u_{0}))u_{*},\varphi\rangle_{1,q}=0\) for all \(\varphi\in W^{1,q}_{0}(\Omega;\mathbb{R}^{n})\), i.e. that \(u_{*}\) is a weak solution to the linearized boundary value problem (1.11). Hence, by the assumption of Theorem 1.1, we get that \(u_{*}=0\). Therefore (3.46) implies that \(\|A_{\varepsilon_{l}}u_{l}\|_{-1,p}\to 0\) for \(l\to\infty\). But this contradicts (3.2) and (3.44).
## 4 Nonlinear natural boundary conditions
In this section we show that results similar to Theorem 1.1 are true also for nonlinear natural boundary conditions, i.e. we consider the boundary value problem
\[\left.\begin{array}{l}\partial_{x_{i}}\Big{(}a^{\alpha\beta}_{ij}(x/\varepsilon)\partial_{x_{j}}u^{\beta}(x)\Big{)}=b^{\alpha}(x,u(x))\mbox{ for }x\in\Omega,\\ a^{\alpha\beta}_{ij}(x/\varepsilon)\partial_{x_{j}}u^{\beta}(x)\nu_{i}(x)=b^{\alpha}_{0}(x,u(x))\mbox{ for }x\in\partial\Omega,\end{array}\right\}\alpha=1,\ldots,n, \tag{4.1}\]
where \(\nu=(\nu_{1},\nu_{2}):\partial\Omega\to\mathbb{R}^{2}\) is the outer unit normal vector field on the boundary \(\partial\Omega\), and \(u\in\mathbb{R}^{n}\mapsto b_{0}(\cdot,u)\in L^{\infty}(\partial\Omega;\mathbb{R}^{n})\) is \(C^{1}\)-smooth. The reason why results similar to Theorem 1.1 are true also for the boundary value problem (4.1) is easy to explain: in (3.34) we did not need the test functions \(\varphi\) to satisfy zero boundary conditions. Our assumptions on the domain \(\Omega\), the diffusion coefficients \(a^{\alpha\beta}_{ij}\) and the reaction terms \(b^{\alpha}\) are as in Section 1. Also the homogenized diffusion coefficients are as in Section 1, i.e. defined in (1.6) via the cell problems (1.7). Hence, the homogenized boundary value problem corresponding to (4.1) is
\[\left.\begin{array}{l}\hat{a}^{\alpha\beta}_{ij}\partial_{x_{i}}\partial_{x_{j}}u^{\beta}(x)=b^{\alpha}(x,u(x))\mbox{ for }x\in\Omega,\\ \hat{a}^{\alpha\beta}_{ij}\partial_{x_{j}}u^{\beta}(x)\nu_{i}(x)=b^{\alpha}_{0}(x,u(x))\mbox{ for }x\in\partial\Omega,\end{array}\right\}\alpha=1,\ldots,n.
\tag{4.2}\]
A vector function \(u\in C^{1}(\overline{\Omega};\mathbb{R}^{n})\) is called a weak solution to the boundary value problem (4.1) if it satisfies the variational equation
\[\int_{\Omega}\Big{(}a^{\alpha\beta}_{ij}(x/\varepsilon)\partial_{x_{j}}u^{\beta}(x)\partial_{x_{i}}\varphi^{\alpha}(x)+b^{\alpha}(x,u(x))\varphi^{\alpha}(x)\Big{)}dx\]
\[=\int_{\partial\Omega}b^{\alpha}_{0}(x,u(x))\varphi^{\alpha}(x)d\Gamma(x)\mbox{ for all }\varphi\in W^{1,2}(\Omega;\mathbb{R}^{n}),\]
where \(\Gamma\) is the Lebesgue measure on \(\partial\Omega\), and similarly for (4.2) and for its linearization
\[\left.\begin{array}{l}\hat{a}^{\alpha\beta}_{ij}\partial_{x_{i}}\partial_{x_{j}}u^{\beta}(x)=\partial_{u^{\gamma}}b^{\alpha}(x,u_{0}(x))u^{\gamma}(x)\mbox{ for }x\in\Omega,\\ \hat{a}^{\alpha\beta}_{ij}\partial_{x_{j}}u^{\beta}(x)\nu_{i}(x)=\partial_{u^{\gamma}}b^{\alpha}_{0}(x,u_{0}(x))u^{\gamma}(x)\mbox{ for }x\in\partial\Omega,\end{array}\right\}\alpha=1,\ldots,n. \tag{4.3}\]
For these problems we get, similarly to Theorem 1.1, the following
**Theorem 4.1**: _Suppose (1.2)-(1.5), and let \(u=u_{0}\) be a weak solution to (4.2) such that (4.3) does not have weak solutions \(u\neq 0\). Then the following is true:_
_(i) There exist \(\varepsilon_{0}>0\) and \(\delta>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0}]\) there exists exactly one weak solution \(u=u_{\varepsilon}\) to (4.1) with \(\|u-u_{0}\|_{\infty}\leq\delta\). Moreover,_
\[\|u_{\varepsilon}-u_{0}\|_{\infty}\to 0\mbox{ for }\varepsilon\to 0. \tag{4.4}\]
_(ii) If \(u_{0}\in W^{2,p_{0}}(\Omega;\mathbb{R}^{n})\) with certain \(p_{0}>2\), then for all \(p>2\) we have_
\[\|u_{\varepsilon}-u_{0}\|_{\infty}=O(\varepsilon^{1/p})\mbox{ for }\varepsilon\to 0. \tag{4.5}\]
The proof of Theorem 4.1 is similar to that of Theorem 1.1. We indicate only the few small differences. One has to apply Corollary 2.3 again, but now in the following setting:
\[U:=W^{1,p}(\Omega;\mathbb{R}^{n}),\;V:=W^{1,q}(\Omega;\mathbb{R}^{n})^{*}\mbox{ with }1/p+1/q=1,\]
where
\[\|u\|_{U}:=\|u\|_{1,p}:=\left(\sum_{\alpha=1}^{n}\int_{\Omega}\left(|u^{\alpha}(x)|^{p}+\sum_{i=1}^{2}|\partial_{x_{i}}u^{\alpha}(x)|^{p}\right)dx\right)^{1/p},\ \|\cdot\|:=\|\cdot\|_{\infty},\]
\[\|f\|_{V}:=\sup\{\langle f,\varphi\rangle_{1,q}:\ \varphi\in W^{1,q}(\Omega;\mathbb{R}^{n}),\ \|\varphi\|_{1,q}\leq 1\},\]
and \(\langle\cdot,\cdot\rangle_{1,q}:W^{1,q}(\Omega;\mathbb{R}^{n})^{*}\times W^{1,q}(\Omega;\mathbb{R}^{n})\to\mathbb{R}\) is the dual pairing, again. The \(C^{1}\)-smooth operators \(F_{\varepsilon}:U\to V\) of Theorem 2.1 are defined by
\[F_{\varepsilon}(u):=A_{\varepsilon}u+B(u),\]
again, where the linear operators \(A_{\varepsilon}:W^{1,p}(\Omega;\mathbb{R}^{n})\to W^{1,q}(\Omega;\mathbb{R}^{n})^{*}\) and the nonlinearity \(B:C(\overline{\Omega};\mathbb{R}^{n})\to W^{1,q}(\Omega;\mathbb{R}^{n})^{*}\) are defined by
\[\langle A_{\varepsilon}u,\varphi\rangle_{1,q} := \int_{\Omega}\left(a_{ij}^{\alpha\beta}(x/\varepsilon)\partial_{x_{j}}u^{\beta}(x)\partial_{x_{i}}\varphi^{\alpha}(x)+u^{\alpha}(x)\varphi^{\alpha}(x)\right)dx,\]
\[\langle B(u),\varphi\rangle_{1,q} := \int_{\Omega}\left(b^{\alpha}(x,u(x))-u^{\alpha}(x)\right)\varphi^{\alpha}(x)dx-\int_{\partial\Omega}b_{0}^{\alpha}(x,u(x))\varphi^{\alpha}(x)d\Gamma(x)\]
for all \(\varphi\in W^{1,q}(\Omega;\mathbb{R}^{n})\), and similarly \(A_{0}:W^{1,p}(\Omega;\mathbb{R}^{n})\to W^{1,q}(\Omega;\mathbb{R}^{n})^{*}\).
For proving the error estimates (4.4) and (4.5) we use again the families of approximate solutions (3.40) and (3.42), respectively.
**Remark 4.2**: _Let us draw attention to the following technical detail:_
_In Section 3, i.e. in the case of Dirichlet boundary conditions, the reason for using the cut-off functions \(\eta_{\varepsilon}\) in (3.40) and (3.42) was that the approximate solutions \(\bar{u}_{\varepsilon}\) should satisfy the Dirichlet boundary conditions. There the cut-off functions \(\eta_{\varepsilon}\) were not needed to avoid boundary integrals in (3.34) after partial integration, because there the test function \(\varphi\) could be taken with compact support._
_But now, in Section 4, i.e. in the case of Robin boundary conditions, the reason for using the cut-off functions \(\eta_{\varepsilon}\) in (3.40) and (3.42) is to avoid boundary integrals in (3.34) after partial integration, because now the test function \(\varphi\) cannot be assumed to have compact support. Here the cut-off functions \(\eta_{\varepsilon}\) are not needed for any boundary condition to be satisfied (because the approximate solutions \(\bar{u}_{\varepsilon}\) are not obliged to satisfy any boundary conditions)._
_This technical detail is important for the following reason: If it were possible to improve the choices (3.40) and (3.42) of the approximate solutions appropriately (for example, to avoid the use of cut-off functions), then, perhaps, it would be possible to prove error estimates better than (4.4) and (4.5)._
2309.12579
From Text to Trends: A Unique Garden Analytics Perspective on the Future of Modern Agriculture
Data-driven insights are essential for modern agriculture. This research paper introduces a machine learning framework designed to improve how we educate and reach out to people in the field of horticulture. The framework relies on data from the Horticulture Online Help Desk (HOHD), which is like a big collection of questions from people who love gardening and are part of the Extension Master Gardener Program (EMGP). This framework has two main parts. First, it uses special computer programs (machine learning models) to sort questions into categories. This helps us quickly send each question to the right expert, so we can answer it faster. Second, it looks at when questions are asked and uses that information to guess how many questions we might get in the future and what they will be about. This helps us plan for topics that will be really important. It's like knowing what questions will be popular in the coming months. We also take into account where the questions come from by looking at the Zip Code. This helps us tailor research to the challenges faced by gardeners in different places. In this paper, we demonstrate the potential of machine learning techniques to predict trends in horticulture by analyzing textual queries from homeowners. We show that NLP, classification, and time series analysis can be used to identify patterns in homeowners' queries and predict future trends in horticulture. Our results suggest that machine learning could be used to predict trends in other agricultural sectors as well. If large-scale agriculture industries curate and maintain a comparable repository of textual data, the potential for trend prediction and strategic agricultural planning could be revolutionized. This convergence of technology and agriculture offers a promising pathway for the future of sustainable farming and data-informed agricultural practices.
Parag Saxena
2023-09-22T02:15:12Z
http://arxiv.org/abs/2309.12579v1
# From Text to Trends: A Unique Garden Analytics Perspective on the Future of Modern Agriculture
###### Abstract
Data-driven insights are essential for modern agriculture. This research paper introduces a machine learning framework designed to improve how we educate and reach out to people in the field of horticulture. The framework relies on data from the Horticulture Online Help Desk (HOHD), which is like a big collection of questions from people who love gardening and are part of the Extension Master Gardener Program (EMGP). This framework has two main parts. First, it uses special computer programs (machine learning models) to sort questions into categories. This helps us quickly send each question to the right expert, so we can answer it faster. Second, it looks at when questions are asked and uses that information to guess how many questions we might get in the future and what they will be about. This helps us plan for topics that will be really important. It's like knowing what questions will be popular in the coming months. We also take into account where the questions come from by looking at the Zip Code. This helps us tailor research to the challenges faced by gardeners in different places. In this paper, we demonstrate the potential of machine learning techniques to predict trends in horticulture by analyzing textual queries from homeowners. We show that NLP, classification, and time series analysis can be used to identify patterns in homeowners' queries and predict future trends in horticulture. Our results suggest that machine learning could be used to predict trends in other agricultural sectors as well. If large-scale agriculture industries curate and maintain a comparable repository of textual data, the potential for trend prediction and strategic agricultural planning could be revolutionized. This convergence of technology and agriculture offers a promising pathway for the future of sustainable farming and data-informed agricultural practices.
**Keywords**: Extension Master Gardener Program, Framework, Horticulture, Horticulture Online Help Desk, Machine learning
## I Introduction
In the ever-evolving world of gardening and plant cultivation, the importance of reaching out to people effectively and educating them is more crucial than ever [1, 2]. We face a significant question: How can we improve the way we share important gardening knowledge with both beginners and experienced gardeners? Answering this question is vital for progressing in the field and helping people who are passionate about gardening at home. In this research paper, we're introducing an innovative machine learning framework designed specifically to tackle these challenges. This framework marks a new era in how we can optimize our efforts to reach and educate people about gardening. The significance of gardening in our daily lives cannot be overstated. Gardening practices are deeply connected to our well-being, from growing healthy fruits and vegetables to creating beautiful gardens that enhance our quality of life. As gardening continues to grow and evolve, people seek guidance, answers to their questions, and access to expertise to help them with their green endeavours. This is where the Extension Master Gardener Program's1 (EMGP) Horticulture Online Help Desk2 (HOHD) comes into play. It serves as a platform where gardeners can seek advice and information. However, with the increasing number of inquiries, we urgently need to streamline this process to make sure questions are directed to the right experts promptly.
The "why" behind this research effort stems from recognizing that improving how we share gardening knowledge has enormous potential to advance the field. By making information sharing more efficient, we can empower individuals to make informed decisions, grow healthier plants, and contribute to the sustainability of our environment. Moreover, the question of "when" to share information is crucial. Gardening knowledge often depends on the season, and being able to predict when specific information is most relevant can significantly increase its impact. The "how" part of the equation involves harnessing the power of machine learning, a revolutionary technology that has transformed many fields, to effectively address these challenges.
Footnote 1: [https://mastergardener.extension.org/](https://mastergardener.extension.org/)
Footnote 2: [https://www.mastergardenersmecklenburg.org/question.html](https://www.mastergardenersmecklenburg.org/question.html)
Machine learning, a part of artificial intelligence, helps computers learn from information and make predictions or choices without being specifically instructed [3, 4]. By using machine learning, we can create a smart framework that automates important tasks within the EMGP's HOHD. This framework has two main tasks: The first task is text classification. Using machine learning, the framework can automatically sort questions into different categories. This helps make sure that questions are quickly and accurately sent to the experts who can provide the best answers. This way, we can avoid delays that can happen when we manually handle questions. The second task is time-series forecasting. We know that the number and types of questions can change depending on the time of year. So, the framework will predict what kinds of questions we can expect in the coming months. With this information, the EMGP can plan workshops and educational programs on topics that are likely to be important in the future. This proactive approach is a big help in making sure that we share knowledge when it's needed the most. Additionally, the framework takes into account the specific regions where questions come from, based on Zip Codes. It understands that gardening challenges can be very different in various places due to factors like climate, soil, and local practices. This regional aspect helps the EMGP customize its workshops and resources to address the unique needs and problems faced by gardeners in different parts of the country.
The paper is organized as follows: related works are reviewed in the next section. The proposed methodology and data collection are presented in Section III. The experimental analysis, which covers the implementations and multiple evaluation metrics, is described in Section IV. The experimental results are reported in Section V, and a discussion of the findings is presented in Section VI. Finally, the conclusion and suggested future research are presented in Section VII.
## II Related Works
In the field of horticultural research, several studies have explored the integration of machine learning and artificial intelligence techniques to address various challenges and opportunities. Nturambiriwe and Opara [5] review recent advances in machine learning methods and their integration with sensing devices for non-destructive defect detection in horticultural products, emphasizing the potential of deep learning techniques, particularly Convolutional Neural Networks (CNN), in improving defect detection systems. Similarly, Ferrao et al.
[6] discuss the role of artificial intelligence in predicting flavour preferences and enhancing breeding initiatives to improve the flavour and nutritional content of horticultural crops. Haselbeck et al. [7] empirically compare machine learning and classical forecasting algorithms for horticultural sales predictions, highlighting the superiority of machine learning methods, especially XGBoost. Tripathi and Maktedar [8] explore the role of computer vision in fruit and vegetable grading, proposing a generalized framework and highlighting the potential of Support Vector Machines (SVM) for classification. Thirumagal et al. [9] introduce an IoT-based framework for smart farming in horticulture, emphasizing the prediction of water requirements using machine learning and the importance of monitoring variables like wetness and temperature. Sinha et al. [10] demonstrate digital plant species identification using neural networks, providing applications in ecology, horticulture, and medicinal fields. Kanuru et al. [11] stress the need for technology adoption in Indian agriculture, proposing GPS and IoT-based solutions for soil assessment and optimized pesticide and fertilizer usage. Banerjee et al. [12] focus on long-term and short-term price forecasting of horticultural products by introducing a Long Short-Term Memory (LSTM) model for accurate predictions. Chachar et al. [13] explore epigenetic modifications for horticultural plant improvement, emphasizing the role of machine learning in enhancing our understanding of epigenetic regulation. Melesse et al. [14] introduce a machine learning-based digital twin for monitoring fruit quality evolution in the food supply chain, highlighting the potential of thermal imaging techniques in minimizing fruit waste.
This research stands out from existing research in the field by introducing a novel machine learning framework specifically designed to enhance horticultural education and outreach. Unlike many existing studies that primarily focus on technical aspects such as defect detection, flavor prediction, or sales forecasting, this paper addresses a critical yet often overlooked aspect of horticulture: communication and support for gardening enthusiasts. The framework leverages machine learning models to categorize questions efficiently, ensuring that individuals seeking assistance receive timely and accurate responses. Furthermore, it incorporates predictive capabilities to anticipate future queries and topics, enabling proactive planning and resource allocation for educational content. What sets this research apart is its practical relevance and potential to directly benefit gardening communities by improving the accessibility of expert guidance. Moreover, its consideration of geographic variations through Zip Code analysis demonstrates a commitment to tailoring responses to the unique challenges faced by gardeners in different regions, showcasing a holistic approach to horticultural support.
## III Methodology
In this section, we will explore the methodology used to create and execute a machine learning framework tailored for enhancing outreach and educational initiatives in the field of horticulture. The methodology encompasses several key components, including data gathering, text categorization, time-series forecasting, and incorporation of regional specificity. The flowchart of the complete methodology is presented in Fig. 1.
### _Data Collection_
In our research methodology, the foundation lies in collecting data carefully.
We obtained this data from the HOHD, which is a valuable resource. This help desk receives questions from passionate volunteers who are actively involved in gardening at home. This dataset contains a wide range of questions, reflecting the many challenges that both gardening enthusiasts and experts face. It includes written descriptions of the problems, the time when the questions were asked, and where they came from, including their Zip Codes. This mix of information gives us both qualitative (descriptive) and quantitative (number-based) data. The written descriptions are vital because they tell us precisely what the questions are about. The time data helps us understand when these questions arise during the year, which is useful for spotting trends. And knowing where the questions come from helps us consider the local conditions that might affect gardening. Before we could start analyzing this data, we had to prepare it carefully. This included cleaning the data, making sure it was all in the same format, and fixing any mistakes. By doing this, we ensured that our dataset was reliable and ready for in-depth analysis. This step is crucial because it forms the strong base upon which we build our machine learning models and other analysis tools. Now that our data is ready, we can move on to the next steps. We'll be using advanced techniques like text classification, time-series forecasting, and considering specific regions as we go forward. These techniques will help us tackle the complex challenges we set out to address. To ensure our machine learning models and forecasting are reliable, we divided the dataset into three parts: a training set, a validation set, and a test set. We did this carefully to make sure we had a good mix of different types of questions and where they came from. This division helps us train our models, fine-tune their settings, and check how well they perform, all while avoiding a common problem called overfitting. ### _Text Classification_ In our quest to improve how we educate and engage with the field of horticulture, we have a crucial element in our methodology: the automation of sorting questions into specific categories [15]. This step is essential as it ensures that we can swiftly and effectively direct questions to the right experts for assistance. This process of sorting questions is known as "query classification". However, classifying text accurately is a complex task. It involves correctly matching the words in a question to specific categories related to horticulture. To tackle this challenge precisely, we've employed machine learning, a powerful technology that combines natural language understanding with smart algorithms. This dynamic combination helps our system understand the subtleties of how questions are asked, considering the language used and the context in which it's used. One crucial step in our approach is extracting meaningful information from the questions. We use techniques like Term Frequency-Inverse Document Frequency (TF-IDF) [16] and word embeddings [17]. These methods capture the essence and hidden meanings within the questions. These captured features then serve as the foundation for our sorting models. They enable the system to identify patterns and relationships in the questions, which is key to making accurate classifications. Selecting the right machine learning algorithms is another vital aspect. 
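To make the feature-extraction and classification steps just described concrete, the following is a minimal sketch of such a pipeline. The example questions and category labels are hypothetical stand-ins for the real help-desk data, and the TF-IDF plus Logistic Regression pairing shown here is only one of the combinations discussed in this paper.

```python
# Minimal sketch of the query-classification step. The questions and
# category labels below are hypothetical; the real help-desk data and
# label set are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = [
    "Why are my tomato leaves curling and turning yellow?",
    "When should I prune my apple tree in the fall?",
    "What grass seed works best in shaded areas?",
]
labels = ["vegetables", "fruit trees", "lawns"]  # hypothetical categories

# TF-IDF turns each question into a sparse weighted-term vector, which
# the classifier then maps to a horticultural category.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(questions, labels)

print(clf.predict(["My lawn has brown patches after the summer heat"]))
```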
For the classifiers themselves, we've chosen several techniques: traditional Decision Trees (DT) [18], Naive Bayes (NB) [19], and Logistic Regression (LR) [20]. This diversity of techniques ensures that our system can adapt to different types of questions and categories within horticulture. To gauge how well our sorting models are doing, we use a set of performance measures, including precision, recall, F1-score, and accuracy. Together they give us a comprehensive view of how effectively the system categorizes questions into specific horticultural topics. Ensuring that our models work consistently and can handle new, unseen questions is a top priority. To achieve this, we use cross-validation, a technique in which we test the models on different parts of the data. This process not only confirms the models' reliability but also helps us identify areas where they can be improved. This iterative refinement process ensures that our framework remains robust and adaptable. ### _Time-Series Forecasting_ Having addressed what is being asked, we now turn to when it is asked. Our data carries with it a temporal aspect, meaning it changes over time. To deal with this inherent characteristic, we've integrated a robust time-series forecasting component into our methodology. This component holds the key to foreseeing both the volume and the nature of queries we can expect in the months ahead. This capability is essential for us because it empowers us to proactively plan our educational initiatives and outreach efforts; it ensures we stay ahead of the game.
Fig. 1: Flowchart of the framework
Accurate forecasting is the linchpin of our entire framework. It guarantees that our EMGP can anticipate and cater to the evolving needs of horticulturalists. This proactive approach, firmly rooted in data-driven insights, facilitates the timely organization of workshops, the creation of relevant resources, and the allocation of resources where they are most needed. To accomplish this precise forecasting, we've leveraged a variety of time-series models. One such model is the Autoregressive Integrated Moving Average (ARIMA) model [21], known for its ability to capture patterns and trends over time. Additionally, we've harnessed Long Short-Term Memory (LSTM) [22] networks, a type of neural network that excels at capturing complex and nonlinear patterns in sequential data. Our models were trained using data that includes information about time, query counts, and other relevant features that influence how queries behave over time. This contextual information is crucial for our models to make well-informed predictions. In our quest for excellence, we've assessed the reliability and accuracy of our time-series forecasts using established metrics like the Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). These metrics help us measure the accuracy of our predictions, ensuring that our program can effectively address the anticipated needs of horticulturalists. ### _Integration of Regional Specificity_ The integration of regional specificity in this research represents a significant and innovative contribution to the field of horticulture. By recognizing that gardening practices are inherently influenced by geographic factors such as climate, soil types, and local environmental conditions, the research takes a proactive step towards addressing the unique challenges faced by gardeners in different regions; a minimal sketch of the spatial statistic this analysis relies on is shown below. 
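The sketch below computes Moran's I, the spatial-autocorrelation statistic described next, from scratch. The per-region query counts and the adjacency weights are hypothetical stand-ins for the real Zip-Code-level data.

```python
# From-scratch sketch of Moran's I; counts and adjacency are hypothetical.
import numpy as np

counts = np.array([34.0, 12.0, 30.0, 10.0])  # queries per region
# Symmetric spatial weights: W[i, j] = 1 if regions i and j are neighbours.
W = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

z = counts - counts.mean()
n = len(counts)
# Moran's I = (n / sum(W)) * (sum_ij W_ij z_i z_j) / (sum_i z_i^2)
I = (n / W.sum()) * (z @ W @ z) / (z @ z)
print(f"Moran's I = {I:.3f}")  # values near +1 indicate spatial clustering
```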
To achieve this, the study leveraged data related to Zip Codes, a spatially informative parameter, and employed advanced analytical techniques such as Moran's I [23] and Geary's C [24]. These statistical methods allowed the research team to assess how gardening-related questions were spatially distributed, effectively identifying clusters of queries in specific areas. This approach holds immense promise, as it not only acknowledges regional variations in gardening practices but also empowers horticultural educators and outreach programs to tailor their responses and resources to address these specific challenges effectively. By understanding the localized needs of gardeners, the study paves the way for the development of region-specific educational content, recommendations, and gardening solutions. Ultimately, this integration of regional specificity enriches the horticultural landscape by ensuring that horticultural support and guidance are both relevant and effective in diverse geographical contexts. ## IV Experimental Analysis This section comprises the implementation details, statistical analysis, validation, scalability, model training, and hyperparameter tuning. ### _Implementation Details_ In this section, we dive into the nitty-gritty of how we put our machine learning framework into action to improve horticultural outreach and education. We made sure to follow the industry's best practices and paid meticulous attention to detail throughout the process. The primary programming language we used for most of our work was Python, which is well-known and versatile. Python's vast collection of libraries and tools proved invaluable for various aspects of our research. Specifically, we relied on libraries like scikit-learn, TensorFlow, and Keras for tasks related to machine learning and deep learning. Scikit-learn provided us with a comprehensive set of tools for tasks like preparing our data, extracting useful information, and building models. TensorFlow and Keras, on the other hand, played a vital role in creating and training our machine learning and deep learning models. These libraries formed a sturdy foundation for tasks such as text classification and time-series forecasting. For time-series forecasting, we expanded our toolkit by incorporating specialized libraries like StatsModels, which is known for its statistical modelling capabilities. We used StatsModels to implement the ARIMA model, a crucial part of our forecasting efforts. Additionally, we harnessed Prophet, an open-source forecasting tool developed by Facebook, to improve our predictions, especially when dealing with seasonal and holiday-related effects. Our implementation process was not haphazard; it adhered to well-established software development principles. This involved writing clean, modular code to enhance maintainability and allow for easy future updates. We also made sure to use version control, specifically Git, to keep track of changes and facilitate collaboration among team members. Moreover, we emphasized the importance of reproducibility throughout our work. By documenting our code, methods, and parameter settings comprehensively, we aimed to empower other researchers and practitioners to replicate and build upon our work with confidence. 
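As an illustration of the StatsModels-based forecasting step, the sketch below fits an ARIMA model to a synthetic monthly series. The counts and the (1, 1, 1) order are illustrative choices, not the values used in the study.

```python
# Minimal sketch of ARIMA forecasting with StatsModels; the monthly query
# counts below are synthetic, standing in for the real help-desk logs.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

idx = pd.date_range("2021-01-01", periods=24, freq="MS")
counts = pd.Series(
    [100, 95, 130, 150, 170, 160, 140, 135, 125, 110, 90, 85] * 2,
    index=idx,
)

# Order (p, d, q) = (1, 1, 1) is an illustrative choice; in practice it
# would be selected on the validation set.
model = ARIMA(counts, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=3)  # expected query volume, next 3 months
print(forecast.round(1))
```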
Footnote 3: [http://facebook.github.io/prophet/](http://facebook.github.io/prophet/) ### _Statistical Analysis, Validation, and Scalability_ In parallel with building our machine learning framework, we took a multi-faceted approach to ensure it was robust, effective, and practical for real-world use. This section combines elements of statistical analysis, validation, and scalability considerations that bolstered the reliability of our approach. #### IV-B1 Statistical Analysis Statistical analysis played a pivotal role in helping us uncover valuable insights from our dataset. We embarked on a journey to understand the trends, patterns, and relationships within our data. Descriptive statistics, such as the mean query volume and standard deviation, provided crucial information about how queries were distributed. These metrics helped us understand the typical query volume and how much it varied from the average. Inferential statistics, including t-tests and chi-squared tests, allowed us to determine the significance of differences across subject areas and regions. By quantifying these differences, we gained a deeper understanding of how query patterns varied across various aspects of horticulture and different geographic locations. #### IV-B2 Validation and Testing Ensuring the reliability of our framework was a thorough and multi-faceted process. We wanted to make sure that our machine learning models and time-series forecasting techniques could perform well and be trusted for real-world use. Our machine learning models underwent rigorous cross-validation, where they were tested on data they had never seen before. This helped us assess how well they could generalize to new situations. Similarly, our time-series forecasting models underwent intense validation against historical data. We put a strong emphasis on the accuracy and reliability of our forecasts to ensure they were dependable for proactive planning. Beyond quantitative validation, we also recognized the importance of qualitative feedback. We conducted user testing involving horticultural experts and program volunteers. This user-centred approach allowed us to gather valuable insights into the usability and practical effectiveness of our framework. User feedback served as a guide, helping us make refinements and enhancements to ensure our framework met the genuine needs and expectations of its end-users. #### IV-B3 Scalability and Practicality An effective framework should be able to handle increasing demands and real-world constraints. We carefully evaluated how our framework would scale when faced with higher query volumes and expanded geographic coverage. These stress tests helped us assess how well our framework performed under heavier workloads. Additionally, we conducted benchmarking exercises to ensure our framework could operate effectively within resource constraints and meet operational requirements. This comprehensive assessment of scalability and practicality was central to our commitment to delivering a solution that not only excelled in a controlled research environment but was also ready for seamless integration into the EMGP's daily operations. ### _Model Training and Hyperparameter Tuning_ The core strength of our framework lies in the meticulous process of training our models and fine-tuning their hyperparameters. This phase bridged the gap between designing our methodology and creating robust, high-performing machine learning and forecasting models; a brief sketch of what this tuning looked like in code is given below. 
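The following is a minimal sketch of the kind of grid search we ran. The grid (regularization strengths for the Logistic Regression classifier) is illustrative, and the synthetic data stands in for our TF-IDF features.

```python
# Illustrative grid search over the Logistic Regression classifier;
# the synthetic data stands in for TF-IDF vectors of real questions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           n_classes=3, random_state=0)

search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # regularization strengths
    scoring="f1_macro",  # macro-F1 treats all query categories equally
    cv=5,                # 5-fold cross-validation, as described above
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```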
Model training was an iterative journey, involving multiple rounds of training and refinement. We applied this process to both our machine learning algorithms and our time-series forecasting models, each requiring a tailored approach. Our models underwent extensive training using carefully prepared datasets, allowing them to learn intricate patterns and relationships within the data. Hyperparameter tuning, a critical aspect of model optimization, took centre stage. We used techniques like grid search and random search to explore the vast space of model parameters, seeking configurations that maximized performance. Our goal was to strike a balance between model complexity and the ability to generalize well to new data. This involved fine-tuning hyperparameters related to model architecture, learning rates, and regularization techniques. The result of this rigorous training and tuning effort was a set of models ready for deployment. These models embodied the culmination of data-driven insights, statistical rigour, and iterative refinement, poised to transform horticultural outreach and education for the better. ## V Result Analysis In this section, we delve into a thorough examination of the outcomes of our machine learning framework's implementation. We carried out various tests and evaluations to assess its performance and reliability. Our main goal was to gain valuable insights from the data and determine how well our framework works. To start, we looked at how queries were distributed in our dataset. Table I shows the calculated mean query volume, which turned out to be around 120 queries per month on average. This number is crucial because it tells us how many queries we can expect in a typical month. Additionally, we found that the standard deviation, which measures how much the number of queries varies from month to month, is approximately 25 queries per month. This information helps us understand the dataset's characteristics, such as how frequently queries occur and how consistent they are over time. ### _Validation and Testing_ Our framework underwent rigorous validation and testing to ensure that it performs reliably and effectively. We wanted to make sure it could accurately categorize queries and forecast future trends. In text classification, we tested three models, as shown in Table II: DT, NB, and LR. All models showed high accuracy rates, with DT achieving 91% accuracy, NB 83%, and LR 88%. These models are excellent at sorting queries into predefined categories, making the query routing process more efficient. For time-series forecasting, we used the ARIMA and Prophet models (Table II). These models accurately predicted query volumes and trends. The ARIMA model had an MAE of 12 and an RMSE of 15, while the Prophet model performed even better, with an MAE of 10 and an RMSE of 13. These low error metrics demonstrate that our models are reliable for forecasting, which is essential for planning educational initiatives. Fig. 2 compares the monthly and weekly forecasts of the data, while Fig. 3 shows the weekly forecast for the year 2022. \begin{table} \begin{tabular}{l c} \hline **Metric** & **Value** \\ \hline Mean Query Volume & 120 queries/month \\ Standard Deviation & 25 queries/month \\ \hline \end{tabular} \end{table} TABLE I: STATISTICAL ANALYSIS RESULTS ### _Scalability and Practicality_ We also considered the real-world applicability of our framework. 
Can it handle increased query volumes, expand its geographic coverage, and operate within resource constraints? Our scalability tests showed that our framework can effectively accommodate a 150% increase in query volumes while maintaining good performance, as shown in Table III. Furthermore, it successfully incorporated data from 10 new regions, demonstrating its scalability in terms of geographic coverage. It is also essential that our framework operates well without exceeding available resources, indicating its practical viability for real-world deployment. Meeting program requirements was crucial for us, and we found that our framework aligns perfectly with these requirements. This underscores its readiness for practical implementation on a larger scale. ### _Additional Results_ Table IV specifies the type of spatial cluster identified in each region, which helps categorize the regional challenges and influences on gardening practices. The topics listed alongside each cluster are the specific gardening-related subjects that were prevalent there; they highlight the gardening issues and questions that gardeners in these regions commonly face. Table V outlines the dominant gardening preferences in each region, reflecting the unique challenges and opportunities gardeners encounter based on their geographical location. Table VI outlines the localized needs of gardeners in different regions and provides recommendations to address those needs, emphasizing the tailoring of horticultural support to specific regional challenges and conditions. These tables highlight the significance of integrating regional specificity into horticultural research. By identifying spatial clusters, understanding regional variations, and offering region-specific recommendations, horticultural outreach and education can become more effective and relevant to gardeners in diverse geographical contexts. This proactive approach ensures that horticultural support and guidance are both regionally adapted and highly effective. ## VI Discussion To develop a similar framework for big agriculture, several key steps need to be taken. Firstly, a comprehensive data collection effort is essential, drawing from various sources such as farm managers, automated systems, and field reports to build a repository of questions, concerns, and observations from large-scale operations. Secondly, the NLP model must be tailored specifically to the intricacies of agricultural jargon, concerns, and large-scale issues. This may involve creating a custom vocabulary or training the model on a specialized dataset of agricultural data. 
Thirdly, classification categories must be broadened to accommodate the complexity of big agriculture operations, with specific issues potentially being broken down into more detailed subcategories. Additionally, integrating additional data streams, such as satellite imagery, weather data, or IoT sensor data, is crucial to enhance prediction accuracy and gain deeper insights into operations. Finally, implementing a feedback loop is essential for continuous system improvement, with adjustments made based on the accuracy of predictions and the effectiveness of solutions. The framework's benefits for the EMGP are significant. Firstly, it enables efficient query handling by utilizing NLP and classification to swiftly categorize incoming questions. This ensures that queries are directed to the right experts or resources, allowing the EMGP to manage their workload more effectively and provide constituents with timely and accurate responses. Additionally, the system's capability to identify trends through question analysis and time-series predictions is invaluable. It empowers the EMGP to proactively prepare responses and educational campaigns for recurring seasonal issues, thereby preventing problems before they arise. Furthermore, the framework aids in knowledge base expansion by using classified questions to build a comprehensive FAQ section tailored to common concerns, saving time and resources while providing constituents with readily accessible information. Moreover, by predicting trends, the framework assists the EMGP in allocating resources efficiently based on anticipated demands, ensuring that they have the necessary expertise to address the diverse needs of their constituents. Lastly, the system promotes continuous learning by identifying emerging problems, facilitating ongoing improvement to maintain high-quality service.
\begin{table} \begin{tabular}{l l l} \hline **Region** & **Cluster Type** & **Identified Topics** \\ \hline Northeast & Climate and Season Extension & Cold-Weather Crops, Frost Protection, Season Extension Techniques \\ South & Climate and Pest & Drought-Tolerant Plants, Pest Control, Warm-Season Crops \\ West & Environmental Resilience & Native Plants, Xeriscaping, Wildfire-Resistant Landscapes \\ Midwest & Soil and Crop & Soil Amendment, Crop Rotation, Vegetable Gardening \\ \hline \end{tabular} \end{table} TABLE IV: SPATIAL CLUSTERS OF GARDENING-RELATED QUERIES
\begin{table} \begin{tabular}{l l} \hline **Region** & **Dominant Gardening Preferences** \\ \hline Northeast & Cold-Weather Crops, Season Extension, Perennials \\ South & Drought-Tolerant Plants, Pest Control, Warm-Season Crops \\ West & Native Plants, Xeriscaping, Wildflower Gardens \\ Midwest & Soil Amendment, Crop Rotation, Vegetable Gardening \\ \hline \end{tabular} \end{table} TABLE V: REGIONAL GARDENING PREFERENCES
\begin{table} \begin{tabular}{l l} \hline **Test** & **Result** \\ \hline Increased Query Volumes & Handled 150\% growth effectively \\ Expanded Geographic Coverage & Incorporated 10 new regions \\ Resource Constraints & Framework performs well within limits \\ Operational Requirements & Meets program requirements \\ \hline \end{tabular} \end{table} TABLE III: SCALABILITY AND PRACTICALITY
Fig. 2: Monthly vs Weekly Forecast
Fig. 3: Weekly Forecast of 2022
The potential benefits for big agriculture after implementing such a framework are numerous. 
Predictive maintenance becomes possible, allowing for the anticipation of machinery or infrastructure issues before they become significant, resulting in cost savings and reduced downtime. Furthermore, optimized resource use is achievable by predicting issues like pest infestations or soil problems, enabling better allocation of resources and timely interventions. Yield optimization is also within reach, as proactive problem-solving can lead to improved crop yields. The cost savings can be substantial, as addressing issues proactively is often more cost-effective than reacting to problems as they arise. Additionally, the system can support tailored training modules by identifying trends, enhancing employee skills and knowledge for better outcomes. Lastly, the insights gained from trend analysis can inform strategic planning, ensuring that big agriculture operations are prepared for future challenges and changes in the industry. ## VII Conclusion and Future Works In conclusion, our extensive analysis shows that our machine learning framework is a powerful and reliable tool for improving horticultural outreach and education. By using advanced techniques like text classification and time-series forecasting, along with scalable and finely tuned models, our framework becomes an indispensable resource for enhancing the EMGP's efforts. This means it can provide timely, region-specific, and expert-guided horticultural advice, greatly improving the support available to horticulturists. As we look ahead, there are several areas for future research and development. First, we plan to refine our models and algorithms to handle even more complex questions and a wider range of horticultural topics. We also want to incorporate additional data sources, such as weather information and reports on pests and diseases, to make our forecasting models even more accurate. Furthermore, we aim to improve the user interface and accessibility of our framework to ensure it is easy for everyone to use. Adding natural language processing enhancements to support conversational interactions can make it more user-friendly. We also understand the importance of continually monitoring and adapting to changing horticultural trends and challenges. This means we will keep collecting and analyzing data to fine-tune our models and keep them relevant over time. Ultimately, our goal remains to empower horticulturists across different regions with the knowledge and support they need for successful gardening. As we move forward with our future work, we are dedicated to fulfilling the EMGP's mission of promoting sustainable horticultural practices and nurturing gardening enthusiasts. ## VIII Declarations ### **Funding:** No funds, grants, or other support was received. ### **Conflict of Interest:** The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ### **Data Availability:** Data will be made available on reasonable request. ### **Code Availability:**
2309.15216
A Comparative Study of Filters and Deep Learning Models to predict Diabetic Retinopathy
The retina is an essential component of the visual system, and maintaining eyesight depends on the timely and accurate detection of disorders. The early-stage detection and severity classification of Diabetic Retinopathy (DR), a significant risk to the public's health is the primary goal of this work. This study compares the outcomes of various deep learning models, including InceptionNetV3, DenseNet121, and other CNN-based models, utilizing a variety of image filters, including Gaussian, grayscale, and Gabor. These models could detect subtle pathological alterations and use that information to estimate the risk of retinal illnesses. The objective is to improve the diagnostic processes for DR, the primary cause of diabetes-related blindness, by utilizing deep learning models. A comparative analysis between Greyscale, Gaussian and Gabor filters has been provided after applying these filters on the retinal images. The Gaussian filter has been identified as the most promising filter by resulting in 96% accuracy using InceptionNetV3.
Roshan Vasu Muddaluru, Sharvaani Ravikumar Thoguluva, Shruti Prabha, Tanuja Konda Reddy, Suja Palaniswamy
2023-09-26T19:21:09Z
http://arxiv.org/abs/2309.15216v3
# Auto-grading C programming assignments with CodeBERT and Random Forest Regressor ###### Abstract Grading coding assignments manually is challenging due to complexity and subjectivity. However, auto-grading with deep learning simplifies the task. It objectively assesses code quality, detects errors, and assigns marks accurately, reducing the burden on instructors while ensuring efficient and fair assessment. This study provides an analysis of auto-grading of C programming assignments using machine learning and deep learning approaches like regression, convolutional neural networks (CNN) and long short-term memory (LSTM). Using a code-based transformer word embedding model called CodeBERT, the textual code inputs were transformed into vectors, and the vectors were then fed into several models. The testing findings demonstrated the efficacy of the suggested strategy with a root mean squared error (RMSE) of 1.89. The contrast between statistical methods and deep learning techniques is discussed in the study. CodeBERT, convolutional neural networks (CNN), long short-term memory (LSTM), validation loss, regression. ## I Introduction The majority of educational institutions follow a grading system where the marks of each student are entered manually. By using a set of predetermined criteria or rubrics, a human grader evaluates a student's work as part of the manual grading process. This contrasts with auto-grading, which rates the work using computer programs and Artificial Intelligence. Manual grading has several drawbacks. Heavy time consumption: hand grading can take a lot of time, especially when dealing with big courses or tasks, and the effort required to read through and assess each student's work can be substantial. Inconsistency: the grading process can be inconsistent because various graders interpret and apply the grading criteria in different ways, which can lead to different grades for the same assignment from different evaluators. Bias: when manual grading is involved, the subjective nature of the process may be compromised by the grader's own prejudices or preferences. For instance, a grader could give higher marks to work that is consistent with their own opinions or ideals. Lack of prompt feedback: manual grading frequently takes some time to offer students detailed feedback, which can delay the learning process and make it harder for students to make timely improvements to their work. Looking at all these factors, it is necessary to put forth an effective auto-grading system. Essays, programming projects, multiple-choice questions, short answer questions, and other forms of assignments can all be graded using autograding. In comparison to traditional grading techniques, autograding can offer a number of advantages, including a reduction in the time and effort needed to evaluate assignments, the ability to give students instant feedback, and a lower risk of bias in grading. The fact that autograding offers both students and teachers a number of advantages makes it a crucial practice in contemporary education. Once the autograding model learns the patterns required for the grading process, it is much simpler than having a human grader grade assignments or tests. This will likely become the norm in the near future. The assignment will simply be uploaded onto the grading AI platform, and it will do the rest of the job efficiently in seconds. 
Students will have varying styles of writing, and sometimes this may affect the grading system due to recognition errors; but for coding assignments, where the code can simply be typed, the results will be much more accurate. Deep learning has evolved over the years, as it can handle complex data and sometimes shows much better results in comparison to machine learning models. CodeBERT is a pre-trained language model created by Microsoft which is appreciated for its unique features such as detecting similarity in code, code summarization and code completion. The steps taken by CodeBERT to convert the C codes into vectors were to first tokenize the code into tokens taking syntax into account, then segmentation, which involved breaking long words down into small words to reduce the input size, followed by masking and data augmentation. Because of these advantageous features, CodeBERT proved to be the best word embedding pre-processing model for this research. The objective of this paper is a thorough comparative examination of several AI models used to grade C programming assignments using CodeBERT, with a particular emphasis on how well they perform in auto-grading. The study's specific objective is to use CodeBERT to convert unprocessed C programs into vectors, then apply a Random Forest regressor, Extra Trees regressor, KNN and deep learning models (CNN and LSTM) to predict grades from 1 to 10. With a focus on contrasting the performance of machine learning and deep learning methodologies, the goal is to provide a clear evaluation of the efficiency and applicability of different AI models. By attaining this goal, the study hopes to enhance automated systems for grading C programming assignments and give insights on the advantages and drawbacks of various AI models in this situation. The paper is structured as follows: Section II describes the literature survey, which covers various methods similar to the ones presented in this paper and discusses their work. Section III introduces the data, how it was gathered, and what it contains. Section IV discusses the methodology that has been employed in this paper. Section V presents the results and their analysis, and lastly, Section VI discusses the conclusion and future scope. ## II Literature Survey Understanding how a person sees or evaluates a solution is crucial in order to comprehend the grading process. The strategy differs from person to person. It was advised to use rubrics to ensure some consistency. According to Srikant et al. [1-3], evaluating a computer program in accordance with a rubric is a crucial component of an auto-grading system. A score (quantitative measure) on such a rubric frequently relates to the programmer's ability to solve problems. The machine learning approach for automatically grading programs was presented by Srikant et al. [3] utilizing a novel feature language that encapsulates the key elements that human experts consider when assigning a mark. An open-source code was developed to help recruiters save effort on grading the code submitted by applicants. A precision of 89% was achieved, which proved to be better than the precision obtained while using the bag-of-words concept [4]. Scientific production and visualization analysis for an automated essay-grading system was done using the Bibliometrix and VOSviewer software; the visualization analysis followed the application of deep learning models such as CNN to identify sentence similarity [5]. Thiago et al. 
[6] developed a system, 'SiameseQAT', to find duplicate descriptions written by workers about a bug report. A recall of 85% and an AUROC value of 84% were obtained, which was a good improvement in comparison to other works. A cloud service was created which evaluates a student's drawing and gives it a grade; Maysa et al. [7] used neural networks to develop this model and reported an accuracy of 91%. An interpretable deep learning system for automatically scoring requests for proposals (RFPs) was proposed. The suggested method extracts information from the text of RFPs and predicts a score for each proposal by combining natural language processing (NLP) methods and a convolutional neural network (CNN). To train and test their model, the authors compiled a dataset of RFPs and the accompanying scores. When comparing anticipated and real scores, they discovered that their suggested strategy worked better than other cutting-edge methods, reaching a correlation coefficient of 0.81 [8]. Ardakani et al. [9] offered a data-driven method for predicting the feelings and attitudes expressed in written language, known as affective text categorization analysis, in which natural language processing (NLP) techniques were employed to extract features, such as word frequency, word co-occurrence, and grammatical structure, from a dataset of news articles and social media posts. An improved automated essay grading system based on deep learning and natural language processing (NLP) was also suggested; the method uses NLP techniques like part-of-speech tagging and dependency parsing to extract information from essay text, including syntactic and semantic data. To train and test their system, which uses a deep learning model, especially a convolutional neural network (CNN), to predict the grade of an essay, the authors employed a dataset of essays and their accompanying grades. The suggested solution outperformed other cutting-edge techniques in grading essays with an accuracy of 86.2% [10]. To evaluate source code files at various resolutions and find plagiarism in source code files for C, C++, and Java, the Discrete Wavelet Transform (DWT) was proposed [11]. Software has been subjected to static code analysis in order to find harmful blocks [12]. The effectiveness of static code analysis was further improved by adding compiler transformations [13]. To enhance the grading of essays, an unsupervised word sense disambiguation (WSD) method was recommended [15]. In order to determine whether transformations of a code had an effect on how well neural program analyzers worked, Md. Rabin and Md. Alipour [16] worked on neural code analyzers. They discovered that the size of the code blocks and the number of modifications that must be used for the analysis affect the performance of the neural code analyzers. Eye-tracking was used to analyze the code blocks, according to Chandrika & Amudha [14]. The originality of this paper lies in its comparison of deep learning and statistical methods on top of the word embedding model CodeBERT: the resulting embeddings were fed into several models, enabling a comparison between a set of regressors. ## III Data Description The data used in this research is a set of C programs which were collected from undergraduate, freshman, and sophomore students. The data set comprises 765 rows and 2 columns: (i) the 'Code' column contains all the C programs, and (ii) the 'Score' column has the marks awarded for each program. 
The focus is mainly on programs such as insertion, deletion and searching for an element in an array, as well as the linked-list implementations of a stack and a queue, all coded in the C language. The dataset was split into three sets: a training set, a validation set, and a testing set. The data was split in such a way that 50% of the data is used for training, 25% for validation and the remaining 25% for testing. The goal is to analyze whether the statistical methods perform better than the deep learning models in order to give a more accurate prediction of the marks. ### _Dataset creation_ The marks allotted were entered manually for each of these programs based on certain predefined criteria such as logic, output, syntax and the general semantics of how each program has been coded. The dataset was made manually by inducing errors into the code, and corresponding marks were allocated to the code according to the marking scheme shown in Table 1. The marking scheme shows that in the case of no output, 2 marks will be deducted, and in case there is no logic to be found in the corresponding code, 3 marks will be deducted. Similarly, if a program was only half completed, then only 3 marks were awarded. The minimum marks awarded to a program were around 3 to 4. ### _Data exploration_ The dataset was based on a set of 5 questions: implement the 1) stack operations using a linked list, 2) queue operations using a linked list, 3) insertion of an element in an array, 4) deletion of an element in an array, 5) operations on an array when the element is even or odd. The data consists of natural language texts comprising around 200,000 words, including syntax characters such as semicolons and brackets. The maximum number of words in one row was found to be around 1,400. There were no null values in the data, as the marks were manually entered for each program. As for the pre-processing, the text data, which represents several C programming codes, used the CodeBERT word embedding from Microsoft's pretrained model. The pre-processing was followed by the train-test split, where the feature ("Code") and its corresponding labels ("Score") were split into three sets: a training set, a validation set, and a testing set. ### _System Architecture_ The incoming text data was converted into a vector space with the help of the word embedding model, CodeBERT. Fig. 1 shows that these vectors were then fed into various machine learning models as well as deep learning models. The purpose of bringing in the deep learning models was to carry out experiments to check whether they perform better than statistical approaches, or whether they overfit the data. ## IV Methodology The paper presents a comparative analysis based on two different approaches, namely machine learning and deep learning, where both approaches take as input the vectors coming from the pre-processing word embedding model, CodeBERT. ### _Statistical approach_ The statistical models used under this approach were the machine learning regressor models. The regressors used were: Random Forest (RF), Extreme Gradient Boosting (XGBoost), Ridge Regression and K-nearest neighbor (KNN) regressor. Hyperparameter tuning was performed using GridSearchCV from Python's sklearn package to find the best set of hyperparameters for all the models. Various hyperparameters such as the depth of the tree, the learning rate and other regularization parameters were considered. 
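A condensed sketch of this stage, from CodeBERT embeddings to a fitted Random Forest, might look as follows. The two toy C snippets and their marks are hypothetical, and the mean-pooling of the hidden states is an assumption (the pooling step is not pinned down in the text); the "microsoft/codebert-base" checkpoint and the 100-tree forest do match the setup described here.

```python
# Sketch of the pipeline: CodeBERT vectors -> Random Forest regressor.
# Toy programs and marks are hypothetical; mean pooling is an assumption.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.ensemble import RandomForestRegressor

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(code: str):
    # Mean-pool the last hidden states into one 768-dim vector per program.
    tokens = tokenizer(code, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        hidden = model(**tokens).last_hidden_state
    return hidden.mean(dim=1).squeeze().numpy()

codes = ["int main(){int a[5]; /* insert element */ return 0;}",
         "int main(){/* stack push via linked list */ return 0;}"]
marks = [7.0, 9.0]  # hypothetical scores on the 1-10 scale

X = [embed(c) for c in codes]
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, marks)
print(rf.predict([embed("int main(){return 0;}")]))
```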
Each model was fit to the training data, which was reshaped to have a single dimension equal to the number of features. The hyperparameters were tuned using cross-validation with 5 folds, and the best set of hyperparameters was determined based on the mean score across all folds. After doing so, these regressors were used to predict the marks (scores) awarded for the corresponding code snippets. ### _Deep Learning approach_ The deep learning approach of the analysis deals with 2 models: CNN and LSTM. This paper carries out experiments with the CNN and LSTM models to see if they perform well with sequential data. The model for the CNN architecture consisted of a 1-dimensional convolutional layer with 32 filters, followed by a max pooling layer. A fully connected layer with 64 units and ReLU activation received the output of the pooling layer after it had been flattened. The final layer had a single output unit with linear activation. As the input data consists of textual codes, a sequential model was implemented, through which the experiment with the LSTM model was carried out. The model consisted of a Long Short-Term Memory (LSTM) layer with 128 units, a 20% dropout rate and a 20% recurrent dropout rate. The output of the LSTM layer was passed through a fully connected layer with 64 units and ReLU activation. Finally, the output layer had a single output unit with ReLU activation. Since the output range of the system architecture is numerical, the ReLU activation function was found to work better than other activation functions such as sigmoid and tanh. Out of the machine learning models, the RF regressor was found to be the best at making an accurate prediction of the marks awarded. Since the random forest regressor produced good results among the machine learning models, the novelty of combining a deep learning model with the random forest was proposed and implemented to see if this would produce better results. For combining the CNN model with the random forest, the fully connected layer of the CNN model was swapped with the RF, as shown in Fig. 2(A). Similarly, Fig. 2(B) shows the combination of the LSTM model with the random forest regressor, where the head after the LSTM layer was swapped with the regressor, which then output the predicted marks.
Fig. 1: System architecture for the model
After plotting the loss curves for CNN, CNN with random forest, LSTM and LSTM with random forest, shown in Fig. 3, it was seen that the validation loss for the CNN and CNN-with-random-forest models was not consistently improving; they were thus overfitting parts of the training data. As for the LSTM model, training was stable, but the model was unable to learn the training data well and thus also failed to reach a good fit. The loss curves for the LSTM with random forest regressor indicated that the model was able to learn the training data until 4 epochs and later became stable, falling under the regular-fit category. The optimal number of epochs for these 4 models was found to be around 40. ## V Results and Analysis In this section, the analysis of the statistical models as well as the deep learning models discussed in Section IV is presented, and a comparative analysis of the performance metrics is put forward. The models were evaluated based on the following metrics: Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE) and Coefficient of Determination (R2), as shown in Table II. 
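For reference, a minimal Keras sketch of the two deep models described above is given below. Layer sizes and activations follow the text; the kernel size of 3, the ReLU activation on the convolution, and the (768, 1) input shape (treating the CodeBERT vector as a one-channel sequence) are assumptions not fixed by the paper.

```python
# Keras sketches of the CNN and LSTM regressors described in Section IV.
# Assumed details: kernel_size=3, ReLU on the convolution, input (768, 1).
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(steps: int = 768) -> keras.Sequential:
    return keras.Sequential([
        layers.Conv1D(32, kernel_size=3, activation="relu",
                      input_shape=(steps, 1)),   # 1-D conv, 32 filters
        layers.MaxPooling1D(),                   # max pooling layer
        layers.Flatten(),
        layers.Dense(64, activation="relu"),     # fully connected, 64 units
        layers.Dense(1, activation="linear"),    # predicted mark
    ])

def build_lstm(steps: int = 768) -> keras.Sequential:
    return keras.Sequential([
        layers.LSTM(128, dropout=0.2, recurrent_dropout=0.2,
                    input_shape=(steps, 1)),     # 128 units, 20% dropouts
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="relu"),      # marks are non-negative
    ])

model = build_cnn()
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```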
Fig. 2: A) Architecture for CNN with RF
Fig. 3: Loss curves for various models implemented
The statistical models discussed in the previous section were tuned via hyperparameter search and then evaluated using the R2 score and the MAPE, which are reported in the same Table II. Although the Random Forest regressor showed overfitting, as indicated by the large gap between its test and train R2 scores, its MAE and RMSE were the lowest, indicating better accuracy in comparison to the other models. The other models did not show overfitting; however, they had higher test RMSE values. The combination of LSTM with random forest also gave satisfying results with respect to RMSE, due to the sequential nature of the LSTM. For the random forest regressor, which gave the best results out of the statistical regressors, the combination of hyperparameters suggested that the best performing model preferred smaller values for the minimum samples needed to split an internal node and the minimum samples needed at a leaf node, indicating that the model was able to learn fine-grained details of the training data. The best performing model did not require any limit on the maximum depth of the tree. The model, which employed 100 trees, a typical default value, provided an appropriate balance between computation time and performance. All the statistical models (machine learning regressors) shown in Fig. 4 were overfitting the data except for the ridge regressor. However, the RMSE of the models was also taken into consideration: since the ridge regressor's RMSE of 1.94 exceeds the random forest regressor's 1.89, the random forest regressor was the more appropriate model for the data. Hence, the random forest regressor was chosen despite having overfit the data, and the experiment combining the deep learning approach with the statistical approach was carried out. While hyperparameter tuning was performed on the machine learning models, the deep learning models were trained with early stopping, a regularization technique based on the validation loss, for up to 50 epochs with a batch size of 64. Fig. 5 shows that the RMSE values for the random forest regressor and the LSTM with random forest were similar and better than those of the other deep learning models. Both of these models also showed similar R2 performance, as they were able to explain 89% and 88% of the variance on the training set and 26% and 20%, respectively, on the test set. However, both models were overfitting the data, with the LSTM-with-random-forest regressor showing greater overfitting than the random forest regressor alone. The statistical model also achieved greater performance in less training time compared to the other models. The research carried out in [17] also used CodeBERT, where an RMSE value of 1.56 was achieved using the random forest. However, since the data used in this paper is a superset of the data in [17], with more variety and different code snippets, the resulting RMSE achieved in this paper's work is higher than the latter. The data used for this work had inconsistencies and was almost twice as large as the data used in [17], leading to more overfitting. ## VI Conclusion and Future scope The research objective of correctly predicting the grades given for the relevant C programming codes was accomplished using the statistical and deep learning methodologies. 
The Random Forest regressor was determined to be the model with the best performance when using a statistical method, whereas the LSTM model, which performs well due to its sequential nature, produced the best results among the deep learning techniques. The experiments conducted throughout the paper, however, revealed that the transformations from the text space to the R^n space and back to the R space resulted in a loss of information, which prevented the deep learning models from explaining much of the variance. In contrast, the Random Forest regressor was found to be the best of both approaches, as it had a better RMSE and was able to explain more of the training data as well. This was because the statistical models did not cause the prediction to lose information, making the statistical method the best strategy, as expected. The deep learning models overfit and were unable to perform well in comparison to the statistical models, which typically do not require a large dataset, because the dataset that was obtained was rather small. The work of this paper could be extended and improved to overcome the limitations caused by the small dataset by expanding and refining it; this would also alleviate the overfitting.
Fig. 4: Coefficient of Determination (R2) of statistical models for test and train
Fig. 5: Coefficient of Determination (R2) of deep learning models for test and train
The future scope of this work would be to employ more code-based transformer word embedding models such as CodeT5, CuBERT and code2vec, perform a further comparative analysis, and thus arrive at the best autograding system for C programs. This could also be extended to other programming languages such as Java, Python and C++. ## Acknowledgment Amrita Vishwa Vidyapeetham provided the necessary infrastructure and support for the conduct of this research activity, as well as for the production of the publication, for which the authors are grateful. The professors of the freshman and sophomore year students helped the authors collect the necessary data, for which the authors would also like to express their sincere gratitude.
2309.03312
Composite (multi)ferroic order
The formalism of composite and intertwined orders has been remarkably successful in discussing the complex phase diagrams of strongly correlated materials and high-$T_c$ superconductors. Here, we propose that composite orders are also realized in ferroelectric and ferromagnetic materials when lattice anisotropy is taken into account. This composite order emerges above the ferroic phase transition, and its existence is determined by the easy axis of magnetization or polarization, respectively. In multiferroic materials, where polarization and magnetization are coupled, composites of both orders are possible. This formalism of composite orders naturally accounts for magnetoelectric monopole, toroidal, and quadrupole orders. More broadly, composite orders may explain precursor phenomena in incipient (multi)ferroic materials, arising at temperatures above the ferroic phase transition and potentially contributing to the characterization of currently hidden orders.
R. Matthias Geilhufe
2023-09-06T18:48:09Z
http://arxiv.org/abs/2309.03312v3
# Hidden composite (multi)ferroic order ###### Abstract Various complex materials develop ordered states that are not detectable by conventional experimental probes. Yet, hidden orders typically exhibit indirect signs of their existence, even if their full microscopic nature has not been unraveled. Recently, the formalism of composite and intertwined orders has been remarkably successful in discussing the complex phase diagrams of strongly correlated materials and high-\(T_{c}\) superconductors. A generalization of the formalism to other ordered states of matter has remained largely unexplored. Here, we show that conventional ferromagnetic and ferroelectric materials can exhibit quadrupole orders, emerging above the critical temperature for the transition into the ferroic phase. The existence of this transition depends on the anisotropy of the ferroic phase. We show that the quadrupole magnetic and electric orders couple to the shear elastic constant, which explains experimental findings for elastic precursors of ferromagnetic and ferroelectric phase transitions, showing a softening of shear modes in various materials. Furthermore, we extend our formalism to strongly coupled multiferroic materials, which can form composites of magnetic and ferroelectric orders. This gives rise to novel kinds of truly hidden orders, not interacting with electric, magnetic, or strain fields, as well as insights into the formation of toroidal moments in multiferroics. As the multipolar and composite orders discussed here emerge above the ferroic transition temperatures, they might be relevant for explaining precursor phenomena in incipient (multi)ferroic materials. ## I Introduction Condensed matter is composed of a multitude of ions and electrons, often arranging themselves in regular patterns at low temperatures. Some of these ordering phenomena, e.g., crystalline, magnetic or (anti)ferroelectric orders, have long been known and can be verified experimentally with high precision. Others, such as ferrotoroidicity[1; 2; 3], are likely to exist by symmetry arguments, but have not been convincingly verified in experiments yet. Hidden orders are intriguing ordering phenomena which are known to occur, but cannot be understood from conventional experimental probes. A prime example of hidden order is a phase transition emerging in URu\({}_{2}\)Si\({}_{2}\) at \(\approx 17.5\) K, which has remained under debate for more than 30 years[4; 5; 6; 7; 8; 9]. Another example is the pseudogap phase in the high-\(T_{c}\) superconductors, emerging in the vicinity of a complex phase diagram[10; 11; 12; 13]. Recently, composite orders, described by tensorial order parameters, have been discussed as a potential class of hidden orders[14]. Simultaneously, several types of composite orders have been identified in complex phase diagrams of high-\(T_{c}\) superconductors, where it has been found that some "competing" orders do not occur independently, but instead emerge as composites of one or more parent phases[15; 16; 17; 18; 19]. Interestingly, these composite orders can exist above the transition temperature of the primary phase[16], e.g., nematic order[20], charge-4e superconductors[21], or nodal superconducting order[22]. 
Therefore, an application of the same formalism to ferroic materials might be similarly insightful, in particular since the hidden order phase in URu\({}_{2-x}\)Fe\({}_{x}\)Si\({}_{2}\) has been shown to emerge as the high-temperature phase above an antiferromagnetic order in the region around \(x\approx 0.1\)[7]. In fact, signatures of hidden order have been found in multiferroic materials, i.e., materials hosting several ferroic phases simultaneously[23; 24; 25; 26; 27]. For example, Bhowal _et al._ identified a \(k\)-space toroidal order in the prototypical ferroelectric PbTiO\({}_{3}\) emerging from inversion-symmetry breaking dipoles in real space[28].
Figure 1: \(Z_{2}\) classification of ferromagnets and ferroelectrics. Depending on the easy axis, a quadrupole order transforming as \(E_{g}\) emerges above the ferroic transition temperature. The anisotropy of the cubic free energy for a vectorial order \(\mathbf{X}\) (e.g. magnetization \(\mathbf{M}\) or polarization \(\mathbf{P}\)) for an easy axis of (a) (100) and (b) (111). (c) illustrates the phase diagram depending on two positive fourth order parameters \(\beta_{1}\) and \(\beta_{2}\).
In this letter, we extend the successful theory of composite orders from superconductivity to ferroic and multiferroic materials. We show that, depending on the anisotropy, a composite or multipolar order can emerge in the vicinity of a conventional ferromagnetic or ferroelectric phase transition. We develop this theory on the example of cubic crystalline symmetry and find that cubic ferroelectrics and ferromagnets undergo a \(Z_{2}\) classification scheme with respect to the presence or absence of a quadrupole order. Here, the \(Z_{2}\) classification is a direct consequence of the two possible easy axes for magnetization and polarization. Furthermore, we show that the concept of composite orders can be extended to multiferroic materials, where polarization and magnetization are coupled. In cubic symmetry, this coupling gives rise to three potential composite orders, one of which is the magnetoelectric toroidal order[14]. ## II Cubic ferromagnets and ferroelectrics The magnetization \(\mathbf{M}\) is a pseudovector, i.e., it is even under inversion and odd under time-reversal (see Table 1). In contrast, the polarization \(\mathbf{P}\) transforms as an ordinary vector, being odd under inversion and even under time-reversal. In the absence of lattice anisotropy, a phenomenological theory only incorporates the strength of \(\mathbf{M}\) and \(\mathbf{P}\), giving rise to the inversion and time-reversal invariant free energies \(f(M)=\alpha(T-T_{c})M^{2}+\beta M^{4}\) or \(f(P)=\alpha(T-T_{c})P^{2}+\beta P^{4}\), with \(\alpha,\beta>0\). For high temperatures \(T>T_{c}\), the free energy is minimized by the trivial solutions \(M=0\) and \(P=0\), respectively. For temperatures below the critical temperature \(T<T_{c}\), a finite value of magnetization or polarization is observed. The material undergoes a transition into a ferromagnet or a ferroelectric. Taking into account lattice anisotropy, the free energy also depends on the direction of polarization or magnetization. In cubic symmetry, it is given by [29; 30; 31; 32] \[f(\mathbf{X})=\alpha(T-T_{c})\mathbf{X}^{2}+\beta_{1}\left(X_{x}^{4}+X_{y}^{4}+X_{z}^{4}\right)+\beta_{2}\left(X_{x}^{2}X_{y}^{2}+X_{x}^{2}X_{z}^{2}+X_{y}^{2}X_{z}^{2}\right), \tag{1}\] where \(\mathbf{X}=\mathbf{P},\mathbf{M}\) denotes either the magnetization \(\mathbf{M}\) or the polarization \(\mathbf{P}\). 
For simplicity, we assume second order phase transitions and take \(\alpha,\beta_{1},\beta_{2}>0\). This choice also avoids incorporating higher order terms to obtain minima at finite values of \(X_{i}\). The fourth order terms \(\beta_{1}\) and \(\beta_{2}\) control the shape of the anisotropy energy. In the ferroic phase (\(T<T_{c}\)), the free energy is minimized for a magnetization or polarization pointing along the Cartesian axes if \(2\beta_{1}<\beta_{2}\), as shown in Fig. 1(a). In contrast, for \(2\beta_{1}>\beta_{2}\) a magnetization or polarization pointing along the diagonal is energetically favored (Fig. 1(b)). We note that cubic magnetic anisotropy has long been known [33; 34], with Fig. 1(a) qualitatively describing iron and Fig. 1(b) nickel. More examples are given in Table 2.

We continue by arguing that (1) also contains a hidden quadrupole order, beyond the standard ferroic phases. This becomes apparent by considering composites \(\phi_{k}=\sum_{ij}c_{ij}^{k}X_{i}X_{j}\) and writing the cubic free energy of (1) as a second order polynomial in \(\phi_{k}\). Specifically, we choose the point group \(O_{h}\) describing full cubic symmetry and define symmetry adapted bilinears: \(\phi_{A_{1g}}=\frac{1}{\sqrt{3}}\left(X_{x}^{2}+X_{y}^{2}+X_{z}^{2}\right)\), transforming as the identity representation (\(A_{1g}\)), as well as \(\phi_{E_{g};1}=\frac{1}{\sqrt{2}}\left(X_{x}^{2}-X_{y}^{2}\right)\) and \(\phi_{E_{g};2}=\frac{1}{\sqrt{6}}\left(X_{x}^{2}+X_{y}^{2}-2X_{z}^{2}\right)\), transforming as the two-dimensional representation \(E_{g}\). In terms of these bilinears, equation (1) becomes

\[f(\phi_{A_{1g}},\mathbf{\phi}_{E_{g}})=\sqrt{3}\alpha(T-T_{c})\phi_{A_{1g}}\\ +\left(\beta_{1}+\beta_{2}\right)\phi_{A_{1g}}^{2}+\frac{(2\beta_{1}-\beta_{2})}{2}\mathbf{\phi}_{E_{g}}^{2}, \tag{2}\]

where we use the multicomponent order parameter \(\mathbf{\phi}_{E_{g}}=\left(\phi_{E_{g};1},\phi_{E_{g};2}\right)\). Both \(\phi_{A_{1g}}\) and \(\mathbf{\phi}_{E_{g}}\) are higher rank order parameters. \(\phi_{A_{1g}}\) does not break crystalline symmetries. As \(\beta_{1}+\beta_{2}>0\), it only lowers the free energy for \(T<T_{c}\) and is therefore not realized. In contrast, for \(T>T_{c}\) the free energy can be lowered by a non-zero value of the quadrupole order \(\mathbf{\phi}_{E_{g}}^{2}\) if \(2\beta_{1}<\beta_{2}\). This situation is shown in Figure 1(a). A phase diagram of the composite phase with order parameter \(\mathbf{\phi}_{E_{g}}\) is shown in Figure 1(c).

\begin{table} \begin{tabular}{l c c} \hline \hline & Time-reversal \(\mathcal{T}\) & Inversion \(\mathcal{P}\) \\ \hline Polarization \(\mathbf{P}\) & + & - \\ Magnetization \(\mathbf{M}\) & - & + \\ \hline Quadrupole electric \(P_{\alpha}P_{\beta}\) & + & + \\ Quadrupole magnetic \(M_{\alpha}M_{\beta}\) & + & + \\ Composite multiferroic \(P_{\alpha}M_{\beta}\) & - & - \\ \hline Coupling to strain & + & + \\ Coupling to toroidal moment & - & - \\ \hline \hline \end{tabular} \end{table}
Table 1: Transformation behavior under the fundamental symmetries time-reversal and inversion.
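As a quick check on this decomposition, the identity between the quartic free energy (1) and its bilinear form (2) can be verified symbolically. The following sketch is our own verification aid, not part of the original derivation; it expands both expressions and confirms they agree:

```python
import sympy as sp

x, y, z = sp.symbols('X_x X_y X_z')
a, b1, b2, T, Tc = sp.symbols('alpha beta_1 beta_2 T T_c')

phi_A = (x**2 + y**2 + z**2) / sp.sqrt(3)              # A_1g bilinear
phi_E1 = (x**2 - y**2) / sp.sqrt(2)                    # E_g;1 bilinear
phi_E2 = (x**2 + y**2 - 2*z**2) / sp.sqrt(6)           # E_g;2 bilinear

f_quartic = (a*(T - Tc)*(x**2 + y**2 + z**2)
             + b1*(x**4 + y**4 + z**4)
             + b2*(x**2*y**2 + x**2*z**2 + y**2*z**2))  # eq. (1)
f_bilinear = (sp.sqrt(3)*a*(T - Tc)*phi_A
              + (b1 + b2)*phi_A**2
              + (2*b1 - b2)/2*(phi_E1**2 + phi_E2**2))  # eq. (2)

print(sp.simplify(sp.expand(f_quartic - f_bilinear)))   # prints 0
```

In particular, the coefficient \((2\beta_{1}-\beta_{2})/2\) of \(\mathbf{\phi}_{E_{g}}^{2}\) changes sign exactly at the easy-axis boundary \(2\beta_{1}=\beta_{2}\), consistent with the \(Z_{2}\) classification above.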
To verify that \(\mathbf{\phi}_{E_{g}}\) gets realized dynamically, we start from the free energy (2), add a gradient term of the form \(f_{\mathrm{grad}}=-\frac{\mathbf{X}\partial^{2}\mathbf{X}}{2}\), and perform the Hubbard-Stratonovich transformation [42] to obtain

\[f(\mathbf{X},\phi_{A_{1g}},\mathbf{\phi}_{E_{g}})\\ =\mathbf{X}\left(-\frac{\partial^{2}}{2}+\frac{r}{2}+\mathrm{M}_{A_{1g}}\phi_{A_{1g}}+\mathrm{M}_{E_{g}}\mathbf{\phi}_{E_{g}}\right)\mathbf{X}\\ +\frac{1}{2\left(\beta_{1}+\beta_{2}\right)}\phi_{A_{1g}}^{2}+\frac{1}{\left(2\beta_{1}-\beta_{2}\right)}\mathbf{\phi}_{E_{g}}^{2}. \tag{3}\]

Here, we absorbed the temperature dependent second order coefficient into \(\frac{r}{2}=\alpha(T-T_{c})\). The matrices \(\mathrm{M}_{A_{1g}}\) and \(\mathrm{M}_{E_{g}}\) are defined to satisfy \(\left(\mathbf{X}\cdot\mathrm{M}_{A_{1g}}\cdot\mathbf{X}\right)^{2}=\phi_{A_{1g}}^{2}\) and \(\left(\mathbf{X}\cdot\mathrm{M}_{E_{g}}\cdot\mathbf{X}\right)^{2}=\mathbf{\phi}_{E_{g}}^{2}\), respectively. For temperatures above the ferroic transition, \(T>T_{c}\), the fields \(\mathbf{X}\) are fluctuating and can be integrated out, leading to the effective action in the fields \(\phi_{A_{1g}}\) and \(\mathbf{\phi}_{E_{g}}\),

\[\mathcal{S}^{\mathrm{eff}}=\log\det\left[-\frac{\partial^{2}}{2}+\frac{r}{2}+\mathrm{M}_{A_{1g}}\phi_{A_{1g}}+\mathrm{M}_{E_{g}}\mathbf{\phi}_{E_{g}}\right]\\ +\left(\beta_{1}+\beta_{2}\right)\phi_{A_{1g}}^{2}+\frac{\left(2\beta_{1}-\beta_{2}\right)}{2}\mathbf{\phi}_{E_{g}}^{2}. \tag{4}\]

The effective action gives rise to the corresponding gap equations for \(\mathbf{\phi}_{E_{g}}\), by evaluating \(\frac{\delta\mathcal{S}^{\mathrm{eff}}}{\delta\mathbf{\phi}_{E_{g}}}=0\). Assuming a slowly varying field \(\mathbf{\phi}_{E_{g}}\approx\mathrm{const}\) one deduces \(\frac{\mathbf{\phi}_{E_{g}}}{2\beta_{1}-\beta_{2}}=A\xi^{4-d}\mathbf{\phi}_{E_{g}}\), which has a non-trivial solution if

\[\xi(T)=\left[\left|2\beta_{1}-\beta_{2}\right|A\right]^{-\frac{1}{4-d}}, \tag{5}\]

with \(A\) a constant. The correlation length \(\xi(T)\) diverges when the system develops long-range order in the ferroic phase, \(\xi(T)\to\infty\) for \(T\to T_{c}\). However, as we derived equation (5) in the disordered phase (\(T>T_{c}\)), it indicates a further transition into the quadrupole phase with order parameter \(\mathbf{\phi}_{E_{g}}\) at a temperature \(T_{c_{2}}>T_{c}\).

We continue by discussing properties of the quadrupole phase. \(\mathbf{\phi}_{E_{g}}\) breaks the cubic symmetry and is even under inversion and time-reversal, regardless of its origin (magnetization or polarization), see Table 1. Hence, it couples linearly neither to an external electric field nor to a magnetic field. Instead, the electric or magnetic fields couple quadratically to their respective order parameters, which would be visible in a modification of the fourth order susceptibility, as recently discussed on the example of URu\({}_{2}\)Si\({}_{2}\)[8]. However, \(\mathbf{\phi}_{E_{g}}\) couples linearly to the \(E_{g}\) components of the strain tensor, \(\eta_{1}\sim\eta_{11}-\eta_{22}\) and \(\eta_{2}\sim\eta_{11}+\eta_{22}-2\eta_{33}\)[43]. In turn, these two components couple to the shear modulus \(\frac{1}{2}\left(C_{11}-C_{12}\right)\). Elastic anomalies as precursors of a ferromagnetic phase transition have been verified in various materials, typically coupling to the Young's modulus [44]. In addition, in La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\)[45] the shear velocity (attenuation) was found to show a dip (peak) shortly before the ferromagnetic transition, while it has a peak (dip) at \(T_{c}\). La\({}_{0.67}\)Sr\({}_{0.33}\)MnO\({}_{3}\) exhibits a magnetic easy axis of (100) [39].
Even though its crystal structure deviates from a cubic structure, it can be seen as a candidate for realizing a magnetic quadrupole order. In contrast, while modifications of the Young's modulus can clearly be revealed in CoS\({}_{2}\)[46] (easy axis (111) [40; 41]), corresponding modifications in the shear modulus are absent. Similar behavior has been verified in ferroelectrics. For example, in PbSc\({}_{0.5}\)Ta\({}_{0.5}\)O\({}_{3}\) a precursor regime was identified ranging over \(\approx 100\) K above the ferroelectric transition, exhibiting a softening of the shear elastic constant, which was initially explained by a higher order coupling of coexisting order parameters [47]. In contrast, the composite ferroelectric phase couples linearly to the shear strain, potentially explaining the strength of the effect.

## III Multiferroics

We continue by extending the model of composite orders to multiferroics. The free energy then contains a contribution from the ferroelectric phase \(f(\mathbf{P})\), the ferromagnetic phase \(f(\mathbf{M})\), and their coupling \(f_{c}\),

\[f(\mathbf{P},\mathbf{M})=f(\mathbf{P})+f(\mathbf{M})+f_{c}. \tag{6}\]

In a time-reversal invariant system with cubic symmetry, the coupling term contains three linearly independent contributions, given by

\[f_{c}=\gamma_{1}\left(P_{x}^{2}M_{x}^{2}+P_{y}^{2}M_{y}^{2}+P_{z}^{2}M_{z}^{2}\right)\\ +\gamma_{2}\left(P_{x}P_{y}M_{x}M_{y}+P_{y}P_{z}M_{y}M_{z}+P_{z}P_{x}M_{z}M_{x}\right)\\ +\gamma_{3}\left(P_{x}^{2}(M_{y}^{2}+M_{z}^{2})+P_{y}^{2}(M_{z}^{2}+M_{x}^{2})\right.\\ +\left.P_{z}^{2}(M_{x}^{2}+M_{y}^{2})\right). \tag{7}\]

As the formation of composite orders in the magnetic and ferroelectric degrees of freedom has been discussed before, we now focus on the potential of forming multiferroic composites \(\langle P_{\alpha}M_{\beta}\rangle\), emerging from the multiferroic coupling \(f_{c}\). As before, we decompose \(f_{c}\) into symmetry adapted bilinears \(\psi_{k}=\sum_{ij}c_{ij}^{k}P_{i}M_{j}\). Due to the individual symmetries of polarization and magnetization, the bilinears \(\psi_{k}\) are odd under time-reversal \(\mathcal{T}\) and odd under inversion \(\mathcal{P}\). In cubic symmetry, the relevant symmetry adapted bilinears are \(\mathbf{\psi}_{E_{u}^{-}}=\left(\frac{1}{\sqrt{2}}\left[P_{x}M_{x}-P_{y}M_{y}\right],\frac{1}{\sqrt{6}}\left[P_{x}M_{x}+P_{y}M_{y}-2P_{z}M_{z}\right]\right)\), \(\mathbf{\psi}_{T_{1u}^{-}}=\frac{1}{\sqrt{2}}\mathbf{P}\times\mathbf{M}\), and \(\mathbf{\psi}_{T_{2u}^{-}}=\frac{1}{\sqrt{2}}\left(P_{y}M_{z}+P_{z}M_{y},P_{z}M_{x}+P_{x}M_{z},P_{x}M_{y}+P_{y}M_{x}\right)\). In terms of \(\mathbf{\psi}_{E_{u}^{-}}\), \(\mathbf{\psi}_{T_{1u}^{-}}\), and \(\mathbf{\psi}_{T_{2u}^{-}}\) the multiferroic coupling \(f_{c}\) is expressed as follows,

\[f_{c}=\frac{3\gamma_{1}}{2}\mathbf{\psi}_{E_{u}^{-}}^{2}+\frac{2\gamma_{3}-\gamma_{1}-\gamma_{2}}{2}\mathbf{\psi}_{T_{1u}^{-}}^{2}+\frac{2\gamma_{3}+\gamma_{1}+\gamma_{2}}{2}\mathbf{\psi}_{T_{2u}^{-}}^{2}. \tag{8}\]

As before, we obtain a free energy with second order contributions in the bilinears, which serve as order parameters of the three corresponding types of composite multiferroic order. These phases can be realized if the coefficient of the second order term becomes negative. From equation (8) it follows that \(\mathbf{\psi}_{E_{u}^{-}}^{2}\) gets stabilized for an attractive interaction \(\gamma_{1}<0\).
The phase \(\mathbf{\psi}_{T_{2u}^{-}}^{2}\) emerges for a combination of attractive and repulsive interactions, given by \(2\gamma_{3}<-\gamma_{1}-\gamma_{2}\). In contrast, \(\mathbf{\psi}_{T_{1u}^{-}}^{2}\) can emerge for purely repulsive interactions, as long as \(2\gamma_{3}<\gamma_{1}+\gamma_{2}\). The dominant phase in the coexisting regimes is determined by microscopics. A phase diagram in the three parameters \(\gamma_{1}\), \(\gamma_{2}\), and \(\gamma_{3}\) is given in Figure 2.

In the following, we discuss properties of the composite multiferroic orders. As the composite multiferroic orders are odd under both time-reversal and inversion symmetry (Table 1), their physical consequences are more elusive compared to the composite electric and magnetic orders. We start with the phase described by the order parameter \(\mathbf{\psi}_{T_{1u}^{-}}\), which transforms as a time-reversal odd vector. It corresponds to the formation of a macroscopic toroidal moment in the sample [1; 2; 3], independently of a macroscopic magnetic or ferroelectric order. Hence, it could be found as a high temperature phase in a strongly coupled multiferroic material, or at low temperatures in incipient multiferroics. By symmetry, the toroidal order couples to the curl of a magnetic field, \(\sim\mathbf{\psi}_{T_{1u}^{-}}\cdot\nabla\times\mathbf{B}\). Furthermore, the toroidal order induces a magnetoelectric effect, as can be seen from the polarizability \(\mathbf{P}=\chi^{e}\epsilon_{0}\mathbf{E}+\eta\mathbf{B}\times\mathbf{\psi}_{T_{1u}^{-}}\)[1]. In contrast to the toroidal order, the composites \(\mathbf{\psi}_{T_{2u}^{-}}\) and \(\mathbf{\psi}_{E_{u}^{-}}\) only occur for attractive interactions between magnetization and polarization. However, in these cases, the Landau theory requires an extension to higher orders, making stability arguments more complex. Both order parameters are hidden orders, as they couple neither to electric or magnetic fields nor to strain or a hypothetical toroidal field.

## IV Summary and Outlook

In summary, we showed the existence of composite orders in ferromagnetic, ferroelectric, and multiferroic materials. Starting with cubic ferromagnets and ferroelectrics, we found that a fourth-order Landau expansion of the order parameter gives rise to a \(Z_{2}\) classification, between materials with an easy axis along the (111) (trivial) and (100) (non-trivial) direction. In the non-trivial case with a (100) easy axis, a composite order with \(E_{g}\) symmetry occurs above the ferroelectric or ferromagnetic phase transition. Using the Hubbard-Stratonovich transformation and slowly varying order parameter fields, we determine that the composite order emerges at higher temperatures than the transition temperature of the ferroelectric or ferromagnetic phase transition. This results from the existence of a non-trivial solution at finite correlation length, which tends to diverge when approaching the ferroelectric or ferromagnetic transition temperature. By symmetry, the \(E_{g}\) quadrupole order should couple to e.g. the shear strain applied to the respective material. In the second step, we discussed multiferroics, i.e., materials with coexisting polarization and magnetization. Incorporating a fourth order coupling term in the Landau theory, we showed that the coupling term can be decomposed into three different bilinears, with \(T_{1u}\), \(T_{2u}\), and \(E_{u}\) symmetries.
Specifically, the \(T_{1u}\) bilinear can be stabilized as the order parameter of a composite toroidal phase even if all multiferroic coupling constants are positive, i.e., repulsive. As before, this toroidal order is expected to emerge as a high temperature phase, before the material exhibits a ferromagnetic and ferroelectric order. As a result, we could show that the theory of composite orders, primarily discussed for superconducting states[15; 16; 17; 18; 19], can also be extended to ferroic materials. It shows that complex phase diagrams in these materials, as well as hidden order phases, could be the result of a primary phase with a simple ferroic order. In particular, this interpretation becomes interesting when the primary order is not seen at temperatures above 0 K, e.g., in incipient ferroelectrics. Still, a composite order above 0 K is allowed if the incipient ferroelectric falls into the non-trivial class with an easy axis pointing along (100).

## V Acknowledgements

We acknowledge inspiring discussions with Alexander Balatsky, Nicola Spaldin, Wolfram Hergert, Paul Erhart, Dominik Juraschek, and Naoto Nagaosa. We are grateful for support by Chalmers University of Technology and the Swedish Research Council (VR starting Grant No. 2022-03350).
2309.10408
Unsupervised Learning via Network-Aware Embeddings
Data clustering, the task of grouping observations according to their similarity, is a key component of unsupervised learning -- with real world applications in diverse fields such as biology, medicine, and social science. Often in these fields the data comes with complex interdependencies between the dimensions of analysis, for instance the various characteristics and opinions people can have live on a complex social network. Current clustering methods are ill-suited to tackle this complexity: deep learning can approximate these dependencies, but not take their explicit map as the input of the analysis. In this paper, we aim at fixing this blind spot in the unsupervised learning literature. We can create network-aware embeddings by estimating the network distance between numeric node attributes via the generalized Euclidean distance. Differently from all methods in the literature that we know of, we do not cluster the nodes of the network, but rather its node attributes. In our experiments we show that having these network embeddings is always beneficial for the learning task; that our method scales to large networks; and that we can actually provide actionable insights in applications in a variety of fields such as marketing, economics, and political science. Our method is fully open source and data and code are available to reproduce all results in the paper.
Anne Sophie Riis Damstrup, Sofie Tosti Madsen, Michele Coscia
2023-09-19T08:17:48Z
http://arxiv.org/abs/2309.10408v1
# Unsupervised Learning via Network-Aware Embeddings

###### Abstract

Data clustering, the task of grouping observations according to their similarity, is a key component of unsupervised learning - with real world applications in diverse fields such as biology, medicine, and social science. Often in these fields the data comes with complex interdependencies between the dimensions of analysis, for instance the various characteristics and opinions people can have live on a complex social network. Current clustering methods are ill-suited to tackle this complexity: deep learning can approximate these dependencies, but not take their explicit map as the input of the analysis. In this paper, we aim at fixing this blind spot in the unsupervised learning literature. We can create network-aware embeddings by estimating the network distance between numeric node attributes via the generalized Euclidean distance. Differently from all methods in the literature that we know of, we do not cluster the nodes of the network, but rather its node attributes. In our experiments we show that having these network embeddings is always beneficial for the learning task; that our method scales to large networks; and that we can actually provide actionable insights in applications in a variety of fields such as marketing, economics, and political science. Our method is fully open source and data and code are available to reproduce all results in the paper.

## 1 Introduction

Finding patterns in unlabeled data - a task known as unsupervised learning - is useful when we need to build understanding from data Hastie et al. (2009). Unsupervised learning includes grouping observations into clusters according to some criterion represented by a quality or loss function Gan et al. (2020) - data clustering. Applications range from grouping genes with related expression patterns in biology Ranade et al. (2001), to finding patterns in tissue images in medicine Filipovych et al. (2011), to segmenting customers for marketing purposes. Popular data clustering algorithms include DBSCAN Ester et al. (1996), OPTICS Ankerst et al. (1999), k-Means, and more. Modern data clustering approaches rely on deep learning and specifically deep neural networks Aljalbout et al. (2018); Aggarwal et al. (2018); Pang et al. (2021); Ezugwu et al. (2022), or denoising with autoencoders Nawaz et al. (2022); Cai et al. (2022). However, these approaches work in (deformations of) Euclidean spaces - where dependencies between the dimensions of the analysis can be learned Mahalanobis (1936); Xie et al. (2016) - but the problem to be tackled here is fundamentally non-Euclidean Bronstein et al. (2017). Graph Neural Networks (GNN) Scarselli et al. (2008); Wu et al. (2022); Zhou et al. (2020a) work in non-Euclidean settings, and they are the focus of this paper. To see why, consider product adoption in a social network - with an example in Figure 1. We want to find product clusters depending on the people who buy them. However, the purchase decision of each person is influenced by their acquaintances in a complex social network. By using the information in the social network, we could cluster what could have appeared as otherwise independent vectors. In Figure 1 products (a) and (b) are clearly related to each other and so are products (c) and (d).
To perform this clustering task we need to generate network-aware embeddings: to use the network's topology as the space in which observations live, which is the basis to estimate their similarities and, ultimately, their clusters. This is the main objective of this paper: to cluster node attributes on a complex network. We base our solution on previous research that established ways to estimate the distance Coscia et al. (2020); Coscia (2020; 2022) and (co)variance (correlation) Coscia (2021b); Devriendt et al. (2022) between numeric node attributes on a complex network.

The contributions of this paper are threefold. First, our problem definition is innovative. GNNs almost universally share the assumption that the entities worth analyzing are the nodes of the network, and that their attributes refer to the same entity as the node. This is not the case here: node attributes are entities in their own right, and the nodes of the graph represent the _dimensions_ of the analysis, not the observations. As a result, when used in tasks related to clustering, GNNs are mostly used to find clusters of nodes Bo et al. (2020); Tsitsulin et al. (2020); Bianchi et al. (2020); Zhou et al. (2020). GNN-based clustering seeks to find node embeddings Perozzi et al. (2014b); Hamilton et al. (2017), but we are interested in finding node _attribute_ embeddings. When node attributes are taken into account in GNNs, they always serve the purpose of aiding the classification of nodes rather than clustering the attributes themselves Perozzi et al. (2014a); Zhang et al. (2019); Wang et al. (2019); Lin et al. (2021); Cheng et al. (2021); Yang et al. (2023), which is not the objective here. GNN clustering is an evolution of the classical problem of community discovery Fortunato (2010); Rossetti and Cazabet (2018); Fortunato and Hric (2016). To the best of our knowledge, there are no known cases of algorithms dedicated to clustering observations whose dimensions can be mapped on a complex network structure by using that network structure to generate embeddings. The community discovery literature shares with GNN clustering the use of node attributes to classify the nodes Leskovec et al. (2010); Yang et al. (2013); Bothorel et al. (2015); Chunaev (2020) or provide a ground truth for the communities Peel et al. (2017), not to cluster the attributes themselves. Second, we create a pipeline integrating a distance measure between observations on a graph with a full data clustering process. To the best of our knowledge, this is the first pipeline directly addressing the problem we want to study: to cluster node attributes. Finally, we show in our experimental section that our node attribute clustering pipeline performs better than the alternatives on synthetic data and real world data with a ground truth. Having network embeddings is always beneficial for the learning task and can enhance dimensionality reduction techniques such as t-distributed Stochastic Neighbor Embedding (tSNE) by providing a more accurate depiction of the complex space in which the observations live. Our network embeddings can also improve deep learning techniques such as graph autoencoders. We also show that calculating network embeddings with our technique is scalable and we present a few case studies showing how our method can be applied in such diverse fields as macroeconomics, politics, and marketing. The code and data to reproduce our results are available as supplementary material and on the web1.
Footnote 1: [https://www.michelecoscia.com/?page_id=2275](https://www.michelecoscia.com/?page_id=2275)

## 2 Methodology

### Data Model

The framework, illustrated in Figure 2, needs two main components: a graph \(G\) and the set of observations \(O\) we want to classify into clusters.

Figure 1: A toy example of product adoption in a social network. Nodes are people, connected to their friends. Node color determines how strongly they adopt a product (dark = high engagement, light = low engagement). (a-d) Different products.

The graph \(G=(V,E)\) is composed of a set of nodes \(V\) and a set of edges \(E\subseteq V\times V\). Each edge is a pair of nodes \((u,v)\) with \(u,v\in V\). The edges can be weighted, i.e. they can be triples \((u,v,w)\), with \(w\in R^{+}\) being any non-negative weight. The weight represents the capacity of an edge Coscia (2021a), meaning that the higher the weight the closer the two connected nodes \(u\) and \(v\) are. We can build network embeddings on multi-layer networks Coscia (2022), networks with multiple different qualitative types of edges. This means that an edge could be represented by a quadruple \((u,v,w,t)\), with \(t\in T\) representing the type (layer) of the edge. One necessary requirement is that \(G\) must have a single connected component: all pairs of nodes in \(G\) need to be reachable via paths through the edges of \(G\). The embeddings cannot be calculated in networks with disconnected components. We also need the absence of self-loops, i.e. edges connecting a node to itself. For simplicity, we work with undirected graphs, i.e. \((u,v)=(v,u)\). As for \(O\), this is a set of observations or data points. Each observation \(o\in O\) is a vector of length \(|V|\) - i.e. it is a node attribute assigning a value to each node \(v\in V\). One can consider \(V\) as being the dimensions of each observation in \(O\) and \(G\) being an object that describes the complex interdependencies between these dimensions.

### Problem Definition

We now formally define the problem we are intending to solve, as it is different from the classical approach of graph neural networks and graph clustering.

**Definition 1** (Problem Definition). Let \(G=(V,E)\) be a connected undirected graph, with \(V\) being the nodeset and \(E\subseteq V\times V\). Let \(O\) be a set of numerical vectors of length \(|V|\) - the attributes of the nodes of \(G\). Find the function \(f:G\times O\to P\) returning the partition \(P\) such that \(\operatorname*{arg\,min}_{O}\delta=f(G,O)\), \(\delta\) being the function calculating the distance between pairs of observations on \(G\).

In other words, we want to find the partition \(P\) of \(O\) such that the graph distance \(\delta\) over \(G\) of observations within the same group in \(P\) is minimized - excluding trivial solutions that put each observation in a singleton cluster. This definition hinges on \(\delta\): the ability to calculate the graph distances between two \(o_{1},o_{2}\in O\). There are many possible non-trivial options for \(\delta\), and the next section provides a reasonable one as one of the main contributions of this paper.

### Network Distances

One key step to perform unsupervised learning via clustering is to estimate the distances between the observations. That is, given observations \(o_{1}\) and \(o_{2}\), we want to have a function \(\delta_{o_{1},o_{2}}\) quantifying the distance between them. Sufficiently close observations may be part of the same cluster.
One could get better results by transforming observations in \(O\) so that their noisy estimates can be better handled by \(\delta\) - see Section 2.4. Here instead we consider the fact that one could choose a different \(\delta\) function that better conforms to one's expectation of proximity between observations. The simplest case is using the Euclidean distance, which assumes that all dimensions used to record observations in \(O\) are independent and equally important. Here, we assume that observations in \(O\) live in a complex space with interdependencies between the dimensions of analysis mapped by a graph \(G\). If this assumption is correct, then the distance between \(o_{1}\) and \(o_{2}\) needs to take \(G\) into account: to estimate how far two observations are we need to know how to traverse \(G\) to move from the \(o_{1}\) position to the \(o_{2}\) position in this complex space.

Figure 2: Our full workflow. Red colors track the flow of the information coming from the graph, and blue colors track the information coming from the observations.

We want to calculate a Generalized Euclidean (GE) distance that can take any possible dimension interdependency into account. Notation-wise, our function becomes \(\delta_{o_{1},o_{2},G}\), since it requires \(G\) to be estimated. For this paper, \(\delta_{o_{1},o_{2},G}\) is based on a solution Coscia (2020) to the node vector distance problem Coscia et al. (2020). In GE, one can use the pseudoinverse Laplacian (\(L^{+}\)) to calculate the effective resistance between two arbitrary \(o_{1}\) and \(o_{2}\) vectors. The Laplacian matrix is \(L=D-A\), with \(A\) being the adjacency matrix of \(G\) and \(D\) being the diagonal matrix containing the degrees of the nodes of \(G\):

\[\delta_{o_{1},o_{2},G}=\sqrt{(o_{1}-o_{2})^{T}L^{+}(o_{1}-o_{2})}.\]

Previous work shows that this formula gives a good notion of distance between \(o_{1}\) and \(o_{2}\) on a network Coscia (2020). For instance, it can recover the infection and healing parameters in a Susceptible-Infected-Recovered (SIR) model by comparing two temporal snapshots of an epidemic. Calculating \(L^{+}\) is computationally expensive, in the order of \(\mathcal{O}(|V|^{3})\), but we do not need to compute it explicitly, as we show in Section 3.4. We can also work with multilayer networks - networks with multiple qualitatively different types of edges Kivela et al. (2014); Boccaletti et al. (2014) - by defining a multilayer \(L\). This is achieved by calculating the Laplacian of the supra-adjacency matrix Porter (2018); Coscia (2022). We can define \(B\) as a \(|V|\times|E|\) incidence matrix telling us which node is connected to which edge. Then, \(L=BWB^{T}\), with \(W\) being the diagonal matrix containing the weights of each edge \(e\in E\). In this case, \(E\) can contain both regular intra-layer edges as well as the inter-layer couplings connecting nodes from one layer to nodes in the other layers.

### Clustering

Following Figure 2, we now describe all remaining components of the framework. Note that, with the exception of the clustering step, none of the components is strictly speaking mandatory: each can be removed and we can still cluster the data. However, each step performs a useful function and has a role in improving the final result - as Section 3.2 shows. The logical steps are: clean noise and then reduce the dimensions in \(O\) to get better-separated clusters that are easier to find with a classical clustering algorithm. Both steps should use information from \(G\).
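As an illustration of the formula above, a minimal implementation of the GE distance is sketched below, assuming numpy and networkx are available; for large graphs one would use the Laplacian solvers discussed in Section 3.4 instead of the explicit pseudoinverse.

```python
import numpy as np
import networkx as nx

def ge_distance(G, o1, o2):
    """Generalized Euclidean distance between two node-attribute
    vectors o1, o2 on a connected graph G (nodes indexed 0..n-1)."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    L_pinv = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse of L
    d = np.asarray(o1, float) - np.asarray(o2, float)
    return np.sqrt(d @ L_pinv @ d)

# Toy check: the same attribute difference is "closer" when it sits
# on two adjacent nodes than on two far-apart nodes of a path graph.
G = nx.path_graph(5)
a, b, c = np.zeros(5), np.zeros(5), np.zeros(5)
a[0], b[1], c[4] = 1.0, 1.0, 1.0
print(ge_distance(G, a, b) < ge_distance(G, a, c))  # True
```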
For this reason, the first step (cleaning noise) is done via a Graph Autoencoder (GAE) Kramer (1991); Kipf & Welling (2016b); and the second step (dimensionality reduction) via tSNE Van der Maaten & Hinton (2008) using GE as the spatial metric instead of a non-network metric.

#### 2.4.1 Cleaning Noise

An autoencoder (AE) creates embeddings generated with a deep neural network formed by an encoder and a decoder. Since for the hidden layers we use graph convolution Kipf & Welling (2016a) on \(G\), the AE is actually a GAE. Our choice of graph convolution for the hidden layers - both encoder and decoder - is the GraphSAGE Hamilton et al. (2017) operator, with SoftSign activation function and a sigmoid normalization of the last layer of the decoder. The autoencoder is trained via backpropagation using cross entropy loss. We could use different activation functions, and different graph convolution approaches - for instance Graph Attention Networks Velickovic et al. (2017). We picked our components as they are the ones performing the best in our validation.

#### 2.4.2 Dimensionality Reduction

tSNE creates shallow embeddings and works best when reducing to a low number of dimensions - here we set it to two. We could apply tSNE directly to the GAE output. However, that would mean that tSNE is seeking the best representation of the data in an Euclidean space, which is not appropriate because we know the dimensions of our observations are related to each other in \(G\). Luckily, the tSNE algorithm is agnostic to the function used to calculate the distance between two observations. We can provide our GE function as the metric over which tSNE operates. The role of GE is to take the complex interdependencies between dimensions expressed by the graph \(G\) into account for tSNE. In this way, tSNE can optimize cluster separation in the complex space defined by \(G\). If the real clusters of \(O\) are correlated with \(G\)'s topology, using GE instead of any other metric space will lead to a significant performance increase.

#### 2.4.3 Cluster Detection

The last step is to perform the clustering itself. We choose DBSCAN Ester et al. (1996) due to its simplicity, low time-complexity, and ability to find non-convex clusters of arbitrary shapes. We refer to the full framework as GAE+GE+tSNE. If we do not perform the noise cleaning step, then our framework becomes GE+tSNE. We can also use an Euclidean space to perform tSNE, skipping the GE step and obtaining GAE+tSNE. DBSCAN also needs to define in which metric space to operate - just like tSNE. So one could use the GE metric space here as well. However, this cannot be done if we performed dimensionality reduction with tSNE, because GE can only work on the original dimensions in \(G\). For this reason, a GAE+GE+tSNE+GE is impossible. One could make a GAE+GE framework, skipping tSNE and using GE directly in DBSCAN. However, in Section 3.2 we show that the synergy between GE and tSNE is strong and it is the factor that drives the performance. Some of the components of our framework could be swapped with others, to maximize the performance in specific settings. For instance, one could replace the GAE to clean noise with a generative adversarial network Creswell et al. (2018), or tSNE to reduce dimensionality with principal component analysis (PCA) or non-negative matrix factorization, or the DBSCAN clustering step with OPTICS, k-Means or any other specialized clustering technique.
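The following sketch outlines how the GE+tSNE step can be assembled with off-the-shelf scikit-learn components, by passing a precomputed GE distance matrix to tSNE. The hyperparameters (eps, min_samples, perplexity) are illustrative placeholders rather than the values used in the paper, and the \(\mathcal{O}(|O|^{2}|V|)\) pairwise computation is only meant for small examples.

```python
import numpy as np
import networkx as nx
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

def ge_tsne_dbscan(G, O, eps=2.0, min_samples=5, perplexity=30):
    """GE+tSNE sketch: O is an (n_obs, |V|) matrix of node attributes."""
    L_pinv = np.linalg.pinv(nx.laplacian_matrix(G).toarray().astype(float))
    diffs = O[:, None, :] - O[None, :, :]  # pairwise o_i - o_j
    # GE distance matrix: sqrt(d^T L+ d) for every pair of observations.
    D = np.sqrt(np.einsum("ijk,kl,ijl->ij", diffs, L_pinv, diffs).clip(0))
    emb = TSNE(n_components=2, metric="precomputed",
               init="random", perplexity=perplexity).fit_transform(D)
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(emb)
```

Note that `init="random"` is required by scikit-learn when feeding tSNE a precomputed distance matrix, and perplexity must stay below the number of observations.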
However, this is not a relevant dimension of analysis for this paper, because none of these alternative components could replace the network embeddings we provide with GE, which is the fundamental contribution of this paper, and this is why we do not test how much, e.g., using PCA instead of tSNE can improve the performance.

## 3 Experiments

### Setup

To contextualize the performance of our framework in network-aware clustering, we perform our validation tests in two steps. First, we investigate the performance of each method in isolation. The methods we consider are either the various parts of our framework, or potential alternative methods. To sum them up, the isolated components/baselines are (clusters are always extracted via DBSCAN):

* Baseline: clusters \(O\) using an Euclidean space (no network information, no \(O\) preprocess).
* GE: clusters \(O\) using the GE space.
* tSNE: clusters \(O\) by first reducing each observation to two dimensions using tSNE.
* N2V: clusters \(O\) multiplied by the node embeddings obtained from node2vec Grover & Leskovec (2016) (we set \(p=q=1\), making this equivalent to DeepWalk Perozzi et al. (2014b), since different \(p\) and \(q\) values did not lead to significantly different results).
* GAE: clusters \(O\), after passing it through our graph autoencoder, described in Section 2.4.

Since these methods can be combined in a larger framework, as we do in Section 2.4, in the second step we do so. In practice, the second step is an ablation study where we investigate the effect of removing each component from the framework. Since GAE is the method performing the best in isolation, it is taken as the baseline for the second step. We aim to see that specifically the removal of the GE component should have a negative impact on performance.

### Validation with Synthetic Data

In this section we create synthetic networks in which the data clusters are obvious and we test the ability of our pipeline to recover them. We create a stochastic blockmodel network (SBM) with \(|K|\) communities, each containing \(50\) nodes. The average degree of the nodes in the SBM is equal to \(20\). Each node has, on average, \(d_{out}\) connections pointing outside its own cluster and \(20-d_{out}\) connections pointing inside the cluster. Each observation \(o\) is a vector of length \(|V|=50|K|\) and corresponds to a community \(k_{o}\in K\) in the graph. The values in \(o\) are extracted from two random uniform distributions. The values corresponding to nodes that belong to \(k_{o}\) come from a random uniform in the domain \([0.5,1)\), while values corresponding to nodes from outside \(k_{o}\) come from the domain \([0,0.5)\). Therefore, we expect that \(O\) has \(|K|\) natural clusters \(C\), with each \(c\in C\) corresponding to a \(k\in K\) - we can therefore evaluate the clustering performance via the Adjusted Mutual Information (AMI) Vinh et al. (2009) between the clusters we obtain and the pre-planted communities of the network. To estimate how resilient to noise the methods are, we apply a Gaussian noise to each \(o\), coming from a normal distribution with average zero and standard deviation \(\sigma\). The higher the \(\sigma\), the more noise there is, and the performance of the clustering methods should decrease accordingly. We investigate performance across different values of noise in the observations (\(\sigma\)), noise in the network structure (\(d_{out}\)), size of the network (\(|V|\)), and number of observations (\(|O|\)).
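For concreteness, the synthetic benchmark described above can be generated along the following lines; the exact edge-probability calibration is our own reading of the stated average degrees and should be treated as an assumption. The defaults match the ones used below (\(K=4\) communities of \(50\) nodes, \(d_{out}=2\), \(|O|=300\), \(\sigma=1\)).

```python
import numpy as np
import networkx as nx

def make_benchmark(K=4, n=50, deg=20, d_out=2, n_obs=300, sigma=1.0, seed=0):
    """Synthetic benchmark sketch following the protocol above."""
    rng = np.random.default_rng(seed)
    # Calibrate edge probabilities to the expected intra-/inter-degrees.
    p_in = (deg - d_out) / (n - 1)
    p_out = d_out / (n * (K - 1))
    P = np.full((K, K), p_out)
    np.fill_diagonal(P, p_in)
    G = nx.stochastic_block_model([n] * K, P, seed=seed)
    labels = rng.integers(K, size=n_obs)       # community of each observation
    O = rng.uniform(0.0, 0.5, size=(n_obs, n * K))
    for i, k in enumerate(labels):             # lift the "own community" block
        O[i, k * n:(k + 1) * n] = rng.uniform(0.5, 1.0, size=n)
    O += rng.normal(0.0, sigma, size=O.shape)  # Gaussian observation noise
    return G, O, labels
```

The recovered partition can then be scored against `labels` with `sklearn.metrics.adjusted_mutual_info_score`, matching the AMI evaluation used here.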
For each experiment we change the focus parameter keeping the others at their default values, which are: \(\sigma=1\), \(d_{out}=2\), \(|V|=200\), \(|O|=300\). We repeat each experiment for \(10\) independent runs and we report average and standard deviation of the results. We start by analyzing the effect of noise in the observations (\(\sigma\)). We begin with Figure 3(a), showing the results with each method in isolation. First, all methods vastly outperform the baseline: in this network setting, not knowing about the underlying network and assuming that dimensions are unrelated leads to poor performance. The only exception is when \(\sigma=0\): if there is no noise at all, the network information is irrelevant. However, this is a wildly unrealistic scenario. Second, the method performing far better than anything else is GAE. As expected, performing embeddings with a deep graph neural network is the current state of the art. Finally, all other methods (tSNE, GE, and N2V) perform roughly in the same class. N2V has a slight edge, showing how even shallow graph embeddings perform well. However, tSNE dimensionality reduction is powerful enough to be on par with other network-aware methods, even if it does not consider any network information at all. We now move to the second step (Figure 3(b)), testing our composite framework. We replicate the GAE performance, to allow a comparison between the two analyses. For high levels of noise, there is no large difference between the various methods, with the full GAE+GE+tSNE framework ranking first. However, if noise is not strong enough to completely swamp the signal in \(O\), then the best performing method is actually the combination of GE with tSNE. This is a genuine synergy between the two methods, because adding the GAE to them lowers performance, and the GAE+tSNE method is strictly lower than GAE+GE+tSNE. In this scenario, the GE component is fundamental to achieve optimal performance, unless high levels of noise make GAE+GE+tSNE the preferred option.

Figure 3: The AMI score (y axis) of the various methods (line color). Average performance as the line, standard deviation as the shaded area. x-axis from left to right increases: (a-b) observation noise \(\sigma\); (c-d) network connections noise \(d_{out}\); (e-f) node count \(|V|\); (g-h) observation number \(|O|\).

We now analyze what happens when the communities in \(G\) become less well defined, i.e. the expected degree of a node pointing outside its community (\(d_{out}\)) grows. This will make the clusters harder to find. In the first step (Figure 3c), we see that the methods that do not take the network as input (baseline, tSNE) are not affected, as expected. GAE is also resilient to network noise. Both N2V and our GE instead are affected by the weakening of communities. Moving to the second step (Figure 3d), once again, GE+tSNE is the preferred method: GAE does not give a significant contribution to the full framework (GAE+GE+tSNE) and the GE component is necessary - as GAE+tSNE is inferior to GE+tSNE, just like we observed in the previous test. What if the underlying network grows? This is normally not a problem when clustering nodes - at least for methods that do not have a resolution limit Fortunato & Barthelemy (2007) - however here this is a problem. The reason is that we are not clustering nodes, but observations that live in a network. The nodes of the network represent the dimensions of the space in which these observations live.
As a consequence, a larger network will lead to a harder clustering task, because it is harder to cluster high dimensional data than low dimensional data. Figure 3e shows this principle, as we increase \(|V|\). tSNE's performance degrades even if it does not take \(G\) into account, because a higher \(|V|\) means more dimensions that need to be summarized. All methods degrade their performance: N2V does not get as much information out of its random walks on the overall structure, and GAE has more dimensions to handle in the encoding-decoding layers. The sole exception is our GE method, which is indifferent to \(|V|\) and offers constant performance. While this is not useful for smaller networks, it gets more and more significant as \(|V|\) grows. As a result, Figure 3f shows that GE+tSNE is by far the best method, which is even more relevant considering that most real world networks tend to be large. GAE dominates the framework when introduced, and so the full framework's performance degrades with larger networks. A final question centers on the effect of the number of observations \(|O|\). It is possible that increasing the size of \(O\) improves performance, as the methods have more and more data to identify the latent patterns. However, in this case neither Figure 3g nor Figure 3h shows an appreciable difference for any method as \(|O|\) grows. The ranking of the composite frameworks in the second step is maintained, with GE+tSNE performing best overall - a constant across all tests that we run. In Appendix B we sum up the tests quantitatively, showing how GE+tSNE is the preferred approach.

### Validation with Real World Data

We use real world data with ground truth to validate the performance of the network embeddings with unknown and noisy data generation processes. We use two case studies based on the Trade Atlas and the Little Sis datasets. Since all methods except Baseline and GE have random fluctuations, we repeat the experiments 25 times and we show the distribution of the resulting performances.

#### 3.3.1 Trade Atlas

The data originates from the United Nations Comtrade datasets. We obtained it through the Harvard dataverse at Harvard University (2019). After the data cleaning procedure discussed in the Appendix, we obtain \(G\) as a simple undirected network connecting two countries with the total trade volume in either direction across all traded products. Each vector in \(O\) is a product and the values in the vector are the amount exported by each country for this product. To deal with the highly skewed nature of this data, we take the logarithm of export values and we standardize it. We can use the network embeddings to cluster \(O\) using the information in \(G\). The objective is to reconstruct the product category, under the assumption that countries specialize their productive activities according to the knowledge they have, which is more easily transferred across products in the same macro category - this is the basis of economic complexity theory Hausmann et al. (2014). From Figure 4a we can see that GAE works extremely poorly, perhaps because the relationship between the node attributes and the graph structure is non-linear and too complex. GE network embeddings, by themselves, are underwhelming, actually performing on par with the baseline. However, using them to provide the embeddings to calculate tSNE provides a significant performance boost to tSNE alone, showing their effectiveness when combined with dimensionality reduction techniques.
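The preprocessing of the export matrix described above amounts to a log transform followed by standardization; in the minimal sketch below, the additive offset guarding zero export values is our assumption, since the exact data cleaning is deferred to the paper's Appendix.

```python
import numpy as np

def preprocess_exports(X, offset=1.0):
    """Log-transform and standardize a (products x countries) export
    matrix. The +offset guarding zeros is an assumption, not from the paper."""
    X = np.log(X + offset)
    return (X - X.mean()) / X.std()
```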
#### 3.3.2 Little Sis

For this validation, we want to infer a politician's ideology by looking at the social network of their political donors. The data originates from LittleSis LittleSis (2022), a nonprofit organization tracking family, social, and work links between worldwide elite people. We perform several rounds of data cleaning, which are described in the Appendix. We end up with a social network \(G\) of political donors with 529 nodes and 871 edges. In \(O\), each observation is a member of the current US Congress. The values in their vectors are the amount they received in campaign donations from each of the 529 donors in \(G\). In this case, the ideology is represented by their party affiliation - either Republican or Democrat - which we can use for validation (how well can our clusters reconstruct the US Congress parties?). Figure 4(b) shows the result. Once more, GE by itself performs on par only with the Euclidean baseline - which is to say, close to zero AMI. Among all the isolated methods, GAE has the best average performance, but it is highly erratic: it can return highly aligned (AMI \(>0.3\)) or highly misaligned (AMI \(\sim 0\)) clusters depending on random fluctuations. This is where our GE network embeddings can help. Combining GAE with tSNE alone improves the average performance, but retains the erratic behavior. Instead, if we add the GE space to the tSNE embeddings and GAE, we obtain the highest average (and maximum) performance, with a much reduced variance.

#### 3.3.3 Tivoli

For our application, we focus on the task of product recommendations to customers. The data comes from the amusement park Tivoli: \(G\) is a network of rides connected by weighted edges counting how many people holding a given pass rode on both (Figure A1). Pass information composes \(O\): each observation is a vector of how many times a given pass checked in at each ride - details on data cleaning are in the Appendix. Each pass has a type - regular, school, children, etc. - which is our ground truth. The objective is to find clusters of passes aligned with their type. If successful, it means we can infer the type a pass behavior corresponds to, meaning we can create new product bundles and suggest to new customers the best product upgrade they should purchase, given their behavior in the park so far. Figure 4(c) shows the result. Again, the best performing method is GE when combined with tSNE, showing that we can get better recommendations for pass purchases to customers.

### Efficiency

Calculating the network-aware embeddings via GE can scale to large networks. There is no need to calculate \(L^{+}\) explicitly. One can estimate the distance between two node attribute vectors by using Laplacian solvers Spielman & Teng (2004); Koutis et al. (2011); Spielman & Teng (2014), which brings the time complexity of the method down to a near linear time regime - in the number of nodes.

Figure 4: The AMI score (y axis) for the different methods (x axis and color). The boxplots show the 10th, 25th, 75th, and 90th percentile, along with the average performance. We sort the methods left to right in ascending average performance order. (a) Trade Atlas, (b) Little Sis, (c) Tivoli.

To show this, we perform two experiments. We specifically use the Gaussian elimination Laplacian solver Kyng and Sachdeva (2016), but the difference in runtime with other solvers is negligible. We create a benchmark using stochastic blockmodels as we did for the experiments in Section 3.2.
We use a Julia implementation and run the experiments on an Intel Xeon W-11955M CPU at 2.60GHz. First, we fix the number of clusters and average degree to \(4\) and we create larger and larger SBMs in number of nodes - from \(100\) to \(2,000,000\). Figure 5 shows the result. From Figure 5(a), we have confirmation that the runtime grows faster than \(|V|\), but decisively slower than \(|V|^{2}\). Given the data we have, the best empirical fit of the runtime scaling is \(\mathcal{O}(|V|^{1.31})\). We also fix \(|V|=50,000\) and test the effect on the runtime of having denser and denser networks, by increasing \(|E|\). From Figure 5(b), we see that the runtime is actually sublinear in terms of \(|E|\). Given the data we have, the best empirical fit of the runtime scaling is \(\mathcal{O}(|E|^{0.78})\). Note that using the Laplacian solvers is not necessarily the best option. It is advisable to do so only for large networks of, say, \(|V|>10,000\). Below that size, it might be a better idea to actually calculate \(L^{+}\) and cache it. Since \(L^{+}\) is the same for any given \(G\), if \(G\) is small but there are many attributes for which one wants to calculate their GE distances, then re-using \(L^{+}\) would be faster than using the Laplacian solvers.

## 4 Conclusions

We introduce a new way to perform data clustering. Specifically, we create the notion of network embeddings: to create embeddings of node attributes. In this scenario, the observations are values attached to nodes, and the underlying graph determines the relationships between the dimensions of analysis. This is a new type of unsupervised learning that has hitherto received little attention. In the paper we show how to calculate network embeddings using effective resistance and the generalized Euclidean distance. We use these embeddings in a pipeline that cleans node attribute data via a graph autoencoder, performs dimensionality reduction using tSNE, and finally detects clusters of node attributes using DBSCAN. Experiments show that the network embeddings, by themselves, are not particularly useful, reaching performances achievable with tSNE without any notion of the underlying graph. However, when combined in the larger pipeline, they lead to significant improvements in performance over the state of the art. These improvements are consistent across various analytic scenarios. Our case studies point at a number of potentially interesting applications of this new data clustering problem and technique. Potentially, this is the first step in the creation of a new sub-branch of data clustering. Future work includes: the refinement of the pipeline, by optimizing each of its components; the exploration of new ways of calculating network embeddings, using other generalized network distance techniques Coscia et al. (2020); and new applications, deepening the exploration of our case studies with domain experts, who can interpret and contextualize our results.

**Acknowledgements**. The authors are grateful to Tivoli, and specifically to Sine Rosted Lind and Francis Romstad, for making their data available to the study and for the support in interpreting the results.

Figure 5: The runtimes (y axis) for benchmark networks of growing sizes (x axis). Actual runtimes in bright red, best fit in dark red.
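For the small-graph regime mentioned in the efficiency discussion, caching \(L^{+}\) and reusing it across many attribute pairs can be sketched as follows; the toy graph and random attributes are illustrative.

```python
import numpy as np
import networkx as nx

# Caching sketch: for a small, fixed G, precompute L+ once and reuse it
# to score many attribute pairs at negligible marginal cost.
G = nx.karate_club_graph()
L_pinv = np.linalg.pinv(nx.laplacian_matrix(G).toarray().astype(float))

def ge_many(O_a, O_b):
    """Row-wise GE distances between two (m, |V|) attribute matrices."""
    d = O_a - O_b
    return np.sqrt(np.einsum("ij,jk,ik->i", d, L_pinv, d).clip(0))

rng = np.random.default_rng(0)
m, n = 1000, G.number_of_nodes()
print(ge_many(rng.random((m, n)), rng.random((m, n))).shape)  # (1000,)
```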
## Reproducibility Statement All code and data necessary to reproduce the main results of the paper are publicly available as supplementary materials for this paper and at [https://www.michelecoscia.com/?page_id=2275](https://www.michelecoscia.com/?page_id=2275). This includes everything needed to reproduce all subfigures from Figures 3, 4, and 5.
2301.13737
Self-Consistent Velocity Matching of Probability Flows
We present a discretization-free scalable framework for solving a large class of mass-conserving partial differential equations (PDEs), including the time-dependent Fokker-Planck equation and the Wasserstein gradient flow. The main observation is that the time-varying velocity field of the PDE solution needs to be self-consistent: it must satisfy a fixed-point equation involving the probability flow characterized by the same velocity field. Instead of directly minimizing the residual of the fixed-point equation with neural parameterization, we use an iterative formulation with a biased gradient estimator that bypasses significant computational obstacles with strong empirical performance. Compared to existing approaches, our method does not suffer from temporal or spatial discretization, covers a wider range of PDEs, and scales to high dimensions. Experimentally, our method recovers analytical solutions accurately when they are available and achieves superior performance in high dimensions with less training time compared to alternatives.
Lingxiao Li, Samuel Hurault, Justin Solomon
2023-01-31T16:17:18Z
http://arxiv.org/abs/2301.13737v4
# Self-Consistent Velocity Matching of Probability Flows

###### Abstract

We present a discretization-free scalable framework for solving a large class of mass-conserving partial differential equations (PDEs), including the time-dependent Fokker-Planck equation and the Wasserstein gradient flow. The main observation is that the time-varying velocity field of the PDE solution needs to be _self-consistent_: it must satisfy a fixed-point equation involving the flow characterized by the same velocity field. By parameterizing the flow as a time-dependent neural network, we propose an end-to-end iterative optimization framework called _self-consistent velocity matching_ to solve this class of PDEs. Compared to existing approaches, our method does not suffer from temporal or spatial discretization, covers a wide range of PDEs, and scales to high dimensions. Experimentally, our method recovers analytical solutions accurately when they are available and achieves comparable or better performance in high dimensions with less training time compared to recent large-scale JKO-based methods that are designed for solving a more restrictive family of PDEs.

## 1 Introduction

Mass conservation is a ubiquitous phenomenon in dynamical systems arising from fluid dynamics, electromagnetism, thermodynamics, and stochastic processes. Mathematically, mass conservation is formulated as the _continuity equation_:

\[\partial_{t}p_{t}(x)=-\boldsymbol{\nabla\cdot}(v_{t}p_{t}),\forall x,t\in[0,T] \tag{1}\]

where \(p_{t}:\mathbf{R}^{d}\to\mathbf{R}\) is a scalar quantity such that the total mass \(\int p_{t}(x)\,\mathrm{d}x\) is conserved with respect to \(t\), \(v_{t}:\mathbf{R}^{d}\to\mathbf{R}^{d}\) is a velocity field, and \(T>0\) is the total time. We will assume, for all \(t\in[0,T]\), \(p_{t}\geq 0\) and \(\int p_{t}(x)\,\mathrm{d}x=1\), i.e., \(p_{t}\) is a probability density function. We use \(\mu_{t}\) to denote the probability measure with density \(p_{t}\). Once a pair \((p_{t},v_{t})\) satisfies (1), the density \(p_{t}\) is coupled with \(v_{t}\) in the sense that the evolution of \(p_{t}\) in time is characterized by \(v_{t}\) (Section 3.1). We consider the subclass of mass-conserving PDEs that can be written in a single equation of the form

\[\partial_{t}p_{t}(x)=-\boldsymbol{\nabla\cdot}(f_{t}(x;\mu_{t})p_{t}),\forall x,t\in[0,T] \tag{2}\]

where \(f_{t}(\cdot;\mu_{t}):\mathbf{R}^{d}\to\mathbf{R}^{d}\) is a given function depending on \(\mu_{t}\), with initial condition \(\mu_{0}=\mu_{0}^{*}\) for a given initial probability measure \(\mu_{0}^{*}\) with density \(p_{0}^{*}\). Different choices of \(f_{t}\) lead to a large class of mass-conserving PDEs. For instance, given a functional \(\mathcal{F}:\mathcal{P}_{2}(\mathbf{R}^{d})\to\mathbf{R}\) on the space of probability distributions with finite second moments, if we take

\[f_{t}(x;\mu_{t}):=-\nabla_{W_{2}}\mathcal{F}(\mu_{t})(x), \tag{3}\]

where \(\nabla_{W_{2}}\mathcal{F}(\mu):\mathbf{R}^{d}\to\mathbf{R}^{d}\) is the Wasserstein gradient of \(\mathcal{F}\), then the solution to (2) is the Wasserstein gradient flow of \(\mathcal{F}\) (Santambrogio, 2015, Chapter 8). Thus, solving (2) efficiently allows us to optimize in the probability measure space.
If we take

\[f_{t}(x;\mu_{t}):=b_{t}(x)-D_{t}(x)\nabla\log p_{t}(x), \tag{4}\]

where \(b_{t}\) is a velocity field and \(D_{t}(x)\) is a positive-semidefinite matrix, then we obtain the time-dependent Fokker-Planck equation Risken and Risken (1996), which describes the time evolution of the probability flow undergoing drift \(b_{t}\) and diffusion with coefficient \(D_{t}\).

The predominant strategy to solve (2) is to use an Eulerian representation of the density field \(p_{t}\) on a discretized mesh or as a neural network (Raissi et al., 2019). However, these approaches do not fully exploit the mass-conservation principle and are difficult to scale to high dimensions. Shen et al. (2022) recently introduced the notion of _self-consistency_ for the Fokker-Planck equation, a Lagrangian formulation of (2) involving the velocity field of the flow. In this work, we extend their notion of self-consistency to a more general class of mass-conserving PDEs of the form (2). To this end, we develop an iterative optimization scheme called _self-consistent velocity matching_. With the probability flow parameterized as a neural network, at each iteration, we refine the velocity field \(v_{t}\) of the current flow to match an estimate of \(f_{t}\) evaluated using the network weights from the previous iteration. This iterative formulation allows us to rewrite the velocity-matching objectives for certain PDEs to get rid of computationally expensive quantities such as \(\nabla\log p_{t}\) in the Fokker-Planck equation. Moreover, our method is agnostic to the probability flow parameterization: we have empirically found that the two popular ways of parameterizing the flow - as a time-varying pushforward map (Bilos et al., 2021) and as a time-varying velocity field (Chen et al., 2018) - both have merits in different scenarios. Our method tackles mass-conserving PDEs of the form (2) in a unified manner without temporal or spatial discretization. Experimentally, it can recover true solutions faithfully for PDEs with analytically-known solutions. Only recent neural JKO-based methods (Mokrov et al., 2021; Fan et al., 2021; Alvarez-Melis et al., 2021) are capable of solving PDEs of the form (2) in high dimensions, and these methods are specialized to Wasserstein gradient flows (3). Our algorithm achieves comparable or better performance in our test cases compared to these JKO methods while using a lower computational budget and without discretizing time. We further demonstrate the flexibility of our method on a series of qualitative experiments for modeling flocks of birds, flows splashing against obstacles, and computing smooth interpolation of measures, all without discretization.

## 2 Related Works

Classical PDE solvers for mass-conserving PDEs such as the Fokker-Planck equation and the Wasserstein gradient flow either use an Eulerian representation of the density and discretize space as a grid or mesh Burger et al. (2010); Carrillo et al. (2015); Peyre (2015), or use a Lagrangian representation, which discretizes the flow as a collection of interacting particles simulated forward in time Crisan and Lyons (1999); Westdickenberg and Wilkening (2010). Due to spatial discretization, these methods struggle with high-dimensional problems. Hence, the rest of the section focuses solely on recent neural network-based methods.

Physics-informed neural networks. Physics-informed neural networks (PINNs) are prominent methods that solve PDEs using deep learning (Raissi et al., 2019; Karniadakis et al., 2021).
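To make the role of \(f_{t}\) in (4) concrete, the following sketch evaluates the Fokker-Planck velocity field for a toy case where \(\log p_{t}\) is known in closed form, using automatic differentiation for the score \(\nabla\log p_{t}\); the drift, diffusion matrix, and density are our own illustrative choices, not configurations from the paper.

```python
import torch

def fokker_planck_f(x, b, D, log_p):
    """Evaluate f_t(x) = b_t(x) - D_t(x) grad log p_t(x) from eq. (4).
    x: (n, d) batch; b, log_p: callables; D: constant (d, d) matrix here."""
    x = x.detach().requires_grad_(True)
    score = torch.autograd.grad(log_p(x).sum(), x)[0]  # grad log p_t(x)
    return b(x) - score @ D.T

d = 2
D = 0.5 * torch.eye(d)                      # diffusion coefficient (assumed)
b = lambda x: -x                            # drift (assumed)
log_p = lambda x: -0.5 * (x**2).sum(-1)     # standard Gaussian, up to a constant
x = torch.randn(8, d)
print(fokker_planck_f(x, b, D, log_p).shape)  # torch.Size([8, 2])
```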
The main idea is to minimize the residual of the PDE along with loss terms to enforce the boundary conditions and to match observed data. Our notion of self-consistency is a Lagrangian analog of the residual in PINNs. Our velocity matching only occurs along the flow of the current solution, where the interesting dynamics happen, while in PINNs the residual is evaluated on collocation points that occupy the entire domain. Hence our method is particularly suitable for high-dimensional problems where the dynamics have a low-dimensional structure.

**Neural JKO methods.** Recent works (Mokrov et al., 2021; Alvarez-Melis et al., 2021; Fan et al., 2021) apply deep learning to the time-discretized JKO scheme (Jordan et al., 1998) to solve the Wasserstein gradient flow (3). By pushing a reference measure through a chain of neural networks, usually parameterized as input-convex neural networks (ICNNs) (Amos et al., 2017), these methods avoid discretizing the space and are thus capable of solving high-dimensional problems. Mokrov et al. (2021) optimize one ICNN to minimize the Kullback-Leibler (KL) divergence plus a Wasserstein-2 distance term at each JKO step. This method is extended to other functionals by Alvarez-Melis et al. (2021). Fan et al. (2021) use the variational formulation of the \(f\)-divergence to obtain a faster primal-dual approach. An often overlooked problem of neural JKO methods is that the total training time scales quadratically with the number of JKO steps: to draw samples for the current step, initial samples from the reference measure must be passed through a long chain of neural networks, along with expensive quantities like densities. However, using too few JKO steps results in large temporal discretization errors. Moreover, the optimization at each step might not have fully converged before the next step begins, resulting in an unpredictable accumulation of errors. In contrast, our method does not suffer from temporal discretization and can be trained end-to-end. It outperforms these neural JKO methods with less training time in most experiments we considered.

**Velocity matching.** A few recent papers employ the idea of velocity matching to construct a flow that follows a learned velocity field. di Langosco et al. (2021) simulate the Wasserstein gradient flow of the KL divergence by learning a velocity field that drives a set of particles forward in time for Bayesian posterior inference. The velocity field is refined on the fly based on the current positions of the particles. Boffi and Vanden-Eijnden (2022) propose a similar method that applies to a more general class of time-dependent Fokker-Planck equations. These two methods approximate probability measures using finitely many particles, which might not capture high-dimensional distributions well. Recent methods (Liu et al., 2022; Lipman et al., 2022; Albergo and Vanden-Eijnden, 2022) use flow matching for generative modeling by learning a velocity field that generates a probability path connecting a reference distribution to the data distribution. Yet these methods are not designed for solving PDEs. Most relevant to our work, Shen et al. (2022) propose the concept of self-consistency for the Fokker-Planck equation: the velocity field of the flow that solves the Fokker-Planck equation must satisfy a fixed-point equation.
They theoretically show that, under certain regularity conditions, the Wasserstein-2 distance between the current solution and the true solution is bounded by a term measuring the violation of the fixed-point equation (including up to second-order spatial derivatives). Their algorithm minimizes such violation using the neural ODE parameterization (Chen et al., 2018) and the adjoint method. Our work extends the concept of self-consistency to a wider class of PDEs in the form of (2). Unlike Shen et al. (2022), our method does not optimize a fixed objective but instead carries out infinite-dimensional fixed-point iterations on the self-consistency condition. While the experiments of Shen et al. (2022) are limited to a simple 2D example, presumably due to the computational cost of the higher-order spatial derivatives in their objective, our method excels at solving a variety of large-scale problems.

## 3 Self-Consistent Velocity Matching

### Probability flow of the continuity equation

A key property of the continuity equation (1) is that any solution \((p_{t},v_{t})_{t\in[0,T]}\) (provided \(p_{t}\) is continuous with respect to \(t\) and \(v_{t}\) is bounded) corresponds to a unique flow map \(\{\Phi_{t}(\cdot):\mathbf{R}^{d}\to\mathbf{R}^{d}\}_{t\in[0,T]}\) that solves the ordinary differential equations (ODEs) (Ambrosio et al., 2005, Proposition 8.1.8)

\[\Phi_{0}(x)=x,\quad\frac{\mathrm{d}}{\mathrm{d}t}\Phi_{t}(x)=v_{t}(\Phi_{t}(x)),\quad\forall x,t\in[0,T], \tag{5}\]

and the flow map satisfies \(\mu_{t}=(\Phi_{t})_{\#}\mu_{0}\) for all \(t\in[0,T]\), where \((\Phi_{t})_{\#}\mu_{0}\) denotes the push-forward measure of \(\mu_{0}\) by \(\Phi_{t}\). Moreover, the converse is true: any solution \((\Phi_{t},v_{t})\) of (5) with Lipschitz continuous and bounded \(v_{t}\) is a solution of (1) with \(\mu_{t}=(\Phi_{t})_{\#}\mu_{0}\) (Ambrosio et al., 2005, Lemma 8.1.6). Thus the Eulerian viewpoint of (1) is equivalent to the Lagrangian viewpoint of (5). We next exploit this equivalence by modeling the probability flow using the Lagrangian viewpoint so that it automatically satisfies the continuity equation (1).

### Parametrizing probability flows

Our algorithm will be agnostic to the exact parameterization used to represent the probability flow. As such, we need a way to parameterize the flow to access the following quantities for all \(t\in[0,T]\):

* \(\Phi_{t}:\mathbf{R}^{d}\rightarrow\mathbf{R}^{d}\), the flow map at time \(t\). \(\Phi_{t}(x_{0})\) is the location of a particle at time \(t\) if it is at \(x_{0}\) at time \(0\). We assume \(\Phi_{t}\) is invertible;
* \(v_{t}:\mathbf{R}^{d}\rightarrow\mathbf{R}^{d}\), the velocity field of the flow at time \(t\);
* \(\mu_{t}\in\mathcal{P}(\mathbf{R}^{d})\), the probability measure at time \(t\), from which we can access samples and its density \(p_{t}\).

We will assume all these quantities are sufficiently continuous and bounded to ensure the Eulerian and Lagrangian viewpoints in Section 3.1 are equivalent. This can be achieved by using continuously differentiable activation functions in the network architectures and assuming the network weights are finite, similar to the uniqueness arguments given in (Chen et al., 2018). We will use the following two ways to parameterize the flow, modeling either the flow map \(\Phi_{t}\) or the velocity field \(v_{t}\) as a neural network.
**Time-dependent Invertible Push Forward (TIPF).** We first parameterize a probability flow by modeling \(\Phi_{t}:\mathbf{R}^{d}\rightarrow\mathbf{R}^{d}\) as an invertible network for every \(t\). The network architecture is chosen so that \(\Phi_{t}\) has an analytical inverse with a tractable Jacobian determinant, similar to (Bilos et al., 2021). We augment RealNVP (Dinh et al., 2016) so that the network for predicting scale and translation takes \(t\) as an additional input. To enforce the initial condition, we need \(\Phi_{0}\) to be the identity map. This condition can be baked into the network architecture (Bilos et al., 2021) or enforced by adding an additional loss term \(\mathbf{E}_{X\sim\mu_{0}^{*}}\|\Phi_{0}(X)-X\|^{2}\). For brevity, we will omit this additional loss term in the text from now on. The velocity field can be recovered via \(v_{t}(x)=\partial_{t}\Phi_{t}(\Phi_{t}^{-1}(x))\). To recover the density \(p_{t}\) of \(\mu_{t}=(\Phi_{t})_{\#}\mu_{0}\), we use the change-of-variable formula \(\log p_{t}(x)=\log p_{0}^{*}(\Phi_{t}^{-1}(x))+\log\left|\det J\Phi_{t}^{-1}(x)\right|\).

**Neural ODE (NODE).** We also parameterize a flow by modeling \(v_{t}:\mathbf{R}^{d}\rightarrow\mathbf{R}^{d}\) as a neural network; this is the parameterization used in Neural ODEs (Chen et al., 2018). The network only needs to satisfy the minimum requirement of being continuous. The flow map and the density can be recovered via numerical integration: \(\Phi_{t}(x)=x+\int_{0}^{t}v_{s}(\Phi_{s}(x))\,\mathrm{d}s\) and \(\log p_{t}(\Phi_{t}(x))=\log p_{0}^{*}(x)-\int_{0}^{t}\boldsymbol{\nabla}\cdot v_{s}(\Phi_{s}(x))\,\mathrm{d}s\), a direct consequence of (1) also known as the instantaneous change-of-variable formula (Chen et al., 2018). To obtain the inverse of the flow map, we integrate along \(-v_{t}\). With NODE, the initial condition \(\mu_{0}=\mu_{0}^{*}\) is obtained for free.

While the use of invertible coupling layers in TIPF allows efficient access to samples and densities, TIPF becomes less effective in higher dimensions, as many coupling layers are needed to retain good expressive power. In contrast, NODE puts few constraints on the network architecture, but numerical integration can be slow and introduces errors. Handling the initial condition is trivial for NODE, while an additional loss term or special architecture is needed for TIPF. As we will show in the experiments, both strategies have merits.

### Formulation

We now describe our algorithm for solving mass-conserving PDEs (2). A PDE of this form is determined by \(f_{t}(\cdot;\mu_{t}):\mathbf{R}^{d}\rightarrow\mathbf{R}^{d}\) plus the initial condition \(\mu_{0}^{*}\). If a probability flow \(\mu_{t}\) with flow map \(\Phi_{t}\) and velocity field \(v_{t}\) satisfies the following _self-consistency_ condition,

\[v_{t}(x)=f_{t}(x;\mu_{t}),\quad\forall x\text{ in the support of }\mu_{t}, \tag{6}\]

then the continuity equation of this flow implies that the corresponding PDE (2) is solved. Conversely, the velocity field of any solution of (2) will satisfy (6). Shen et al. (2022) develop this concept for the Fokker-Planck equation, and here we generalize it to a wider class of PDEs of the form (2). Hence, instead of solving (2), which is a condition on the density \(p_{t}\) that might be hard to access, we can solve (6), which is a more tractable condition on the velocity field \(v_{t}\) that is readily accessible using TIPF or NODE.
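To make the NODE parameterization of Section 3.2 concrete, here is a minimal JAX sketch (our own simplified stand-in: a fixed-step Euler integrator and a toy two-layer velocity network, rather than the solvers used in the implementation):

```python
import jax
import jax.numpy as jnp

def velocity(params, t, x):
    """Toy velocity network v_t(x); any continuous architecture works."""
    h = jnp.tanh(params['W1'] @ jnp.append(x, t) + params['b1'])
    return params['W2'] @ h + params['b2']

def flow_and_logdensity(params, x0, log_p0, T=1.0, n_steps=100):
    """Integrate the ODE (5) together with the instantaneous
    change-of-variable formula to get Phi_T(x0) and log p_T(Phi_T(x0))."""
    dt = T / n_steps
    div_v = lambda t, x: jnp.trace(jax.jacfwd(lambda y: velocity(params, t, y))(x))
    x, log_p = x0, log_p0
    for i in range(n_steps):
        t = i * dt
        log_p = log_p - dt * div_v(t, x)     # d/dt log p_t(Phi_t(x0)) = -div v_t
        x = x + dt * velocity(params, t, x)  # Euler step of the ODE (5)
    return x, log_p
```

Integrating the same routine along \(-v_{t}\) recovers \(\Phi_{t}^{-1}\), as described above.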
Let \(\theta\) be the network weights that parameterize the probability flow using TIPF or NODE. The flow's measure, velocity field, and flow map at time \(t\) are denoted as \(\mu_{t}^{\theta}\), \(v_{t}^{\theta}\), \(\Phi_{t}^{\theta}\) respectively. One option to solve (6) would be to minimize

\[\min_{\theta}\int_{0}^{T}\mathbf{E}_{X\sim\mu_{t}^{\theta}}\left[\left\|v_{t}^{\theta}(X)-f_{t}(X;\mu_{t}^{\theta})\right\|^{2}\right]\mathrm{d}t. \tag{7}\]

This formulation is reminiscent of PINNs (Raissi et al., 2019), where a residual of the original PDE is minimized. Direct optimization of (7) is challenging: while the integration over \([0,T]\) and \(\mu_{t}^{\theta}\) can be approximated using Monte Carlo, to apply stochastic gradient descent, we must differentiate through \(\mu_{t}^{\theta}\) and \(f_{t}\): this can be either expensive or intractable depending on the network parameterization. The algorithm by Shen et al. (2022) uses the adjoint method specialized to Fokker-Planck equations; extending their approach to more general PDEs requires a closed-form formula for the time evolution of the quantities within \(f_{t}\), which can only be obtained on a case-by-case basis. Instead, we propose the following iterative optimization algorithm to solve (7). Let \(\theta_{k}\) denote the network weights at iteration \(k\). We define iterates

\[\theta_{k+1}:=\operatorname*{arg\,min}_{\theta}F(\theta,\theta_{k}), \tag{8}\]

where

\[F(\theta,\theta_{k}):=\int_{0}^{T}\mathbf{E}_{X\sim\mu_{t}^{\theta_{k}}}\left[\left\|v_{t}^{\theta}(X)-f_{t}(X;\mu_{t}^{\theta_{k}})\right\|^{2}\right]\mathrm{d}t. \tag{9}\]

Effectively, in (9), we only match the velocity field \(v_{t}^{\theta}\) to what it should be according to \(f_{t}\) based on the network weights \(\theta_{k}\) from the previous iteration. This scheme is an infinite-dimensional analog of fixed-point iteration, as \(v_{t}\) is a continuous vector field. Since \(\theta_{k}\) is fixed, minimizing (9) over \(\theta\) is much easier than directly minimizing (7), as \(v_{t}^{\theta}\) only needs to match a constant velocity field \(f_{t}(\cdot;\mu_{t}^{\theta_{k}})\); we found a few steps of stochastic gradient descent sufficient for the optimization in (8) (see a comparison in Figure 13). We call this iterative algorithm _self-consistent velocity matching_. If \(f_{t}\) depends on the density of \(\mu_{t}\) only through the score \(\nabla\log p_{t}\) (corresponding to a diffusion term in the PDE), then we can apply an integration-by-parts trick (Hyvarinen and Dayan, 2005) to get rid of this density dependency by adding a divergence term of the velocity field. Suppose \(f_{t}\) is from the Fokker-Planck equation (4). Then the cross term in (9), after expanding the squared norm, has the following alternative expression.

**Proposition 3.1**.: _For every \(t\in[0,T]\), for \(f_{t}\) defined in (4), assume \(v_{t}^{\theta},D_{t}\) are bounded and continuously differentiable, and \(\mu_{t}^{\theta^{\prime}}\) is a measure with a continuously differentiable density \(p_{t}^{\theta^{\prime}}\) that vanishes at infinity and not at finite points; then we have_

\[\mathbf{E}_{X\sim\mu_{t}^{\theta^{\prime}}}\left[v_{t}^{\theta}(X)^{\top}f_{t}(X;\mu_{t}^{\theta^{\prime}})\right]=\mathbf{E}_{X\sim\mu_{t}^{\theta^{\prime}}}\left[v_{t}^{\theta}(X)^{\top}b_{t}(X)+\boldsymbol{\nabla}\cdot\left(D_{t}^{\top}(X)v_{t}^{\theta}(X)\right)\right]. \tag{10}\]

We provide the derivation in Appendix A.
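As an illustration (our own, not from the paper's appendix), the integrand on the right-hand side of (10) can be evaluated with automatic differentiation, avoiding any access to \(p_{t}\); a JAX sketch, where `v`, `b`, and `D` are placeholder callables:

```python
import jax
import jax.numpy as jnp

def cross_term(v, b, D, x):
    """Integrand of the right-hand side of (10) at one sample x:
    v(x)^T b(x) + div(D(x)^T v(x)), with the divergence via autodiff."""
    g = lambda y: D(y).T @ v(y)        # the vector field D_t^T v_t^theta
    div = jnp.trace(jax.jacfwd(g)(x))  # its divergence at x
    return v(x) @ b(x) + div
```

Averaging `cross_term` over samples \(X\sim\mu_{t}^{\theta_{k}}\) gives a Monte Carlo estimate of the expectation.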
Minimizing (9) is then equivalent to minimizing the expectation of the squared norm of \(v_{t}^{\theta}\) plus the cross term (10), and access to \(p_{t}\) is no longer needed. This is useful for the NODE parameterization, since obtaining the score would otherwise require additional numerical integration.

### Practical algorithm

We apply stochastic gradient descent to solve (9) using the Adam optimizer (Kingma and Ba, 2014). Our algorithm is summarized in Algorithm 1. For sampling time steps \(t_{1},\dots,t_{L}\) in \([0,T]\), we use stratified sampling where \(t_{l}\) is uniformly sampled from \(\left[\frac{(l-1)T}{L},\frac{lT}{L}\right]\); such a sampling strategy results in more stable training in our experiments. We retain the optimizer state of Adam from iteration \(k\) to iteration \(k+1\). We implemented our method using JAX (Bradbury et al., 2018) and FLAX (Heek et al., 2020). See Appendix B for the implementation details.

```
Algorithm 1: Self-consistent velocity matching
1:  Input: initial flow weights, number of iterations K, number of inner
    gradient steps, number of time samples L, batch size
2:  for k = 0, 1, ..., K - 1 do
3:      sample t_1, ..., t_L in [0, T] by stratified sampling
4:      draw batches X ~ mu_{t_l}^{theta_k} using the current flow
5:      form the Monte Carlo estimate of F(theta, theta_k) in (9)
6:      update theta with a few Adam steps, retaining the optimizer state
7:      theta_{k+1} <- theta
8:  end for
```

## 4 Experiments

We solve a time-dependent Fokker-Planck equation in Section 4.4 and compare it against the Euler-Maruyama method for simulating stochastic differential equations (Higham, 2001). Finally, in Section 4.5 we show that our framework is capable of generating complicated dynamics in dimension 2. We will use SCVM-TIPF and SCVM-NODE to denote our method with TIPF and NODE parameterization respectively. We use JKO-ICNN to denote the method by Mokrov et al. (2021) and JKO-ICNN-PD to denote the method by Fan et al. (2021) (PD for "primal-dual"). We use SDE-EM to denote the Euler-Maruyama method. We implemented all competing methods in JAX--see more details in Appendix B. For JKO methods, we always use \(40\) JKO steps.

**Evaluation metrics.** For quantitative evaluation, we use the following metrics. To compare measures with density access, following Mokrov et al. (2021), we use the symmetric Kullback-Leibler (symmetric KL) divergence, defined as \(\text{SymKL}(\rho_{1},\rho_{2}):=\text{KL}(\rho_{1}\parallel\rho_{2})+\text{KL}(\rho_{2}\parallel\rho_{1})\), where \(\text{KL}(\rho_{1}\parallel\rho_{2}):=\mathbf{E}_{X\sim\rho_{1}}\left[\log\frac{\mathrm{d}\rho_{1}}{\mathrm{d}\rho_{2}}(X)\right]\). When estimating the symmetric KL divergence using samples, due to the finite sample size and the numerical error in estimating the log density, the estimated divergence can be negative when it is close to zero--when this occurs we take absolute values. We also consider an alternative \(f\)-divergence \(D_{f}(\rho_{1}\parallel\rho_{2}):=\mathbf{E}_{X\sim\rho_{2}}\left[(\log\rho_{1}(X)-\log\rho_{2}(X))^{2}/2\right]\). Compared to the KL divergence, sample estimates of \(D_{f}\) are always positive. We similarly define the symmetric \(f\)-divergence \(\text{Sym}D_{f}(\rho_{1},\rho_{2}):=D_{f}(\rho_{1}\parallel\rho_{2})+D_{f}(\rho_{2}\parallel\rho_{1})\). To compare measures with only sample access, we consider the energy distance (Szekely and Rizzo, 2013) and the Wasserstein-\(2\) distance (Bonneel et al., 2011). More details on the metric calculations are given in Appendix B.4.

### Sampling from mixtures of Gaussians

We consider computing the Wasserstein gradient flow of the KL divergence \(\mathcal{F}(\mu)=\text{KL}(\mu\parallel\mu^{*})\) where we have density access to the target measure \(\mu^{*}\). To fit into our framework, we set \(f_{t}(x;\mu_{t})=\nabla\log p^{*}(x)-\nabla\log p_{t}(x)\), which matches (4) with \(b_{t}(x)=\nabla\log p^{*}(x)\) and \(D_{t}(x)=I_{d}\).
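For this experiment, \(f_{t}\) can be assembled directly with automatic differentiation; a minimal JAX sketch (the mixture parameters are placeholders, and `log_p_t` stands for the flow's log-density from TIPF or NODE):

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

means = jax.random.uniform(jax.random.PRNGKey(0), (10, 60), minval=-5., maxval=5.)

def log_p_star(x):
    """Log-density (up to an additive constant) of a mixture of 10
    unit-covariance Gaussians with the placeholder means above."""
    sq = jnp.sum((x[None, :] - means) ** 2, axis=-1)
    return logsumexp(-0.5 * sq)

def f_t(x, log_p_t):
    """f_t(x; mu_t) = grad log p*(x) - grad log p_t(x)."""
    return jax.grad(log_p_star)(x) - jax.grad(log_p_t)(x)
```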
Following the experimental setup in Mokrov et al. (2021) and Fan et al. (2021), we take \(\mu^{*}\) to be a mixture of 10 Gaussians with identity covariance and means sampled uniformly in \([-5,5]^{d}\). The initial measure is \(\mu_{0}^{*}=\mathcal{N}(0,16I_{d})\). We solve the corresponding Fokker-Planck PDE for a total time of \(T=5\) and for \(d=10,\dots,60\). As the TIPF parameterization does not scale to high dimensions, we only consider SCVM-NODE in this experiment. Figure 1 shows that the samples produced by SCVM-NODE align well with those from the target measure in dimension 60 at \(t=T\). In Figure 14, we visualize \(\mu_{t}\) produced by our method at irregular time steps. We quantitatively compare our solutions with those from Mokrov et al. (2021) and Fan et al. (2021). In Figure 2, we plot various metrics for all methods at \(t=5\) (compared against the target distribution) while varying the dimension \(d\). The running time of Mokrov et al. (2021) becomes prohibitively long (5 hours for \(d=30\)), so we only include its results for \(d\leq 30\). In Figure 3, we plot the same metrics as functions of \(t\) for \(d=30\) and \(d=60\). We see that SCVM-NODE achieves far lower metrics in all dimensions considered. We notice that the gradient flow computed by JKO methods might not result in a monotonically decreasing KL divergence (first column in Figure 3), likely because the optimization at each JKO step has yet to reach the minimum even though we use 2000 gradient updates for each step. To illustrate the computational bottleneck of JKO-based methods, in Figure 4, we plot the run time (in seconds) of each JKO step for JKO-ICNN and JKO-ICNN-PD in dimension 20. For both methods, the running time of each JKO step increases linearly because samples (and, for JKO-ICNN, also \(\log\det\) terms) need to be pushed through a growing chain of ICNNs; as a result, the total running time scales quadratically with the number of JKO steps. The memory consumption scales linearly with the number of JKO steps as well, which can become prohibitive. For \(d=20\), training SCVM-NODE took only 6.78 minutes, while JKO-ICNN and JKO-ICNN-PD with 40 JKO steps took 29.28 and 137.66 minutes respectively. JKO methods also take about 10x the evaluation time of SCVM-NODE in dimension 20 (and more in higher dimensions) due to density access, which requires solving an optimization problem for each JKO step. On top of the computational advantage and the better results, our method also does not have temporal discretization: after being trained, the flow can be accessed at any time \(t\) (Figure 14).

### Ornstein-Uhlenbeck process

To compare the accuracy of the obtained solution at all times \(t\), we consider the Ornstein-Uhlenbeck (OU) process following the same experimental setup as in Mokrov et al. (2021); Fan et al. (2021). The OU process is the Wasserstein gradient flow of the KL divergence with respect to a Gaussian \(\mu^{*}=\mathcal{N}(\beta,\Gamma^{-1})\) where \(\beta\in\mathbf{R}^{d}\) and \(\Gamma\) is a \(d\times d\) positive-definite matrix. When the initial distribution is \(\mu_{0}^{*}=\mathcal{N}(0,I_{d})\), the gradient flow at time \(t\) is known to be a Gaussian distribution \(G(t)\) with mean \((I_{d}-e^{-t\Gamma})\beta\) and covariance \(\Gamma^{-1}(I_{d}-e^{-2t\Gamma})+e^{-2t\Gamma}\). We set the total time \(T=2\). We consider both SCVM-TIPF and SCVM-NODE. In Figure 5, for each method, we compute the symmetric KL and the symmetric \(f\)-divergence between the recovered measure at time \(t\) and \(G(t)\) as functions of \(t\) in dimension \(d=5\) and \(d=10\).
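The Gaussian ground truth \(G(t)\) above is straightforward to evaluate with matrix exponentials; a small sketch of the formulas just stated:

```python
import numpy as np
from scipy.linalg import expm, solve

def ou_ground_truth(t, beta, Gamma):
    """Mean and covariance of G(t) for the OU gradient flow from N(0, I_d)."""
    d = beta.shape[0]
    E, E2 = expm(-t * Gamma), expm(-2.0 * t * Gamma)
    mean = (np.eye(d) - E) @ beta
    cov = solve(Gamma, np.eye(d) - E2) + E2  # Gamma^{-1}(I - e^{-2t G}) + e^{-2t G}
    return mean, cov
```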
We found that JKO methods result in much higher errors for small \(t\) compared to both SCVM-TIPF and SCVM-NODE: this is expected because the dependency of \(G(t)\) on \(t\) is exponential, so convergence to \(\mu^{*}\) is faster in the beginning, yet a constant step size is used for JKO methods. In Figure 6, we compute the same metrics at final time \(t=T\) as functions of dimension \(d\) for \(d=2,3,\ldots,10\) (top row) and \(d=10,20,\ldots,60\) (bottom row). We see that SCVM-TIPF gives the best results in low dimensions; however, scaling it to \(d\geq 10\) is difficult as many coupling layers are needed. In high dimensions, both JKO-ICNN methods achieve good results. We suspect this is because the ICNN architecture has convex quadratic skip connections, the gradients of which are linear maps, so ICNN methods excel at learning the linear maps sufficient for recovering the OU process. Indeed, if we replace the convex quadratic skip connections with linear connections, which is closer to the original ICNN (Amos et al., 2017), then the performance of JKO-ICNN and JKO-ICNN-PD drops drastically and results in numbers worse than those of SCVM-NODE (Figure 15).

Figure 1: Qualitative comparison between the target mixture of 10 Gaussians in dimension \(60\) and the probability flow solution of SCVM-NODE at \(t=5\). Samples are projected onto the first two PCA components and kernel density estimation is used to generate the contours.

Figure 2: Quantitative comparison for the mixture of Gaussians experiment across dimension \(d\) at \(t=5\).

Figure 3: Quantitative comparison for the mixture of Gaussians experiment for varying \(t\) in dimension \(30\) (top row) and \(60\) (bottom row).

Figure 4: Running time for each JKO step in dimension 20 of a particular run for the mixture of Gaussians experiment.

Figure 5: Symmetric KL and \(f\)-divergence for the OU process experiment as functions of \(t\) in dimension \(5\) (top row) and \(10\) (bottom row).

Figure 6: Symmetric KL and \(f\)-divergence at \(t=T\) for the OU process experiment as functions of dimension. Top: \(d=2,3,\ldots,10\). Bottom: \(d=10,20,\ldots,60\).

### Porous medium equation

Following Fan et al. (2021), we consider the porous medium equation with only diffusion: \(\partial_{t}p_{t}=\Delta p_{t}^{m}\) with \(m>1\). It is the Wasserstein gradient flow of \(\mathcal{F}(\mu)=\int\frac{1}{m-1}p(x)^{m}\,\mathrm{d}x\) where \(p\) is the density of \(\mu\). The corresponding Wasserstein gradient is \(\nabla_{W_{2}}\mathcal{F}(\mu)(x)=mp^{m-2}(x)\nabla p(x)\). This flow has a closed-form solution given by the Barenblatt profile (Vazquez, 2007) when initialized accordingly:

\[p_{t}^{*}(x)=\left(t+t_{0}\right)^{-\alpha}\left(C-\beta\|x\|^{2}(t+t_{0})^{\frac{-2\alpha}{d}}\right)_{+}^{\frac{1}{m-1}}, \tag{11}\]

where \(t_{0}>0\) is the starting time, \(\alpha=\frac{m}{d(m-1)+2}\), \(\beta=\frac{(m-1)\alpha}{2dm}\), and \(C>0\) is a free constant. We do not consider SCVM-NODE here because the integration-by-parts trick (Proposition 3.1) does not apply. Similar to Fan et al. (2021), we choose \(m=2\) and total time \(T=0.025\). The initial measure follows a Barenblatt distribution supported in \([-0.25,0.25]^{d}\) (\(C\) is chosen accordingly) with \(t_{0}=10^{-3}\). We use Metropolis-Hastings to sample from \(\mu_{0}\). We show the efficiency of SCVM-TIPF compared to JKO-ICNN in dimension \(d=1,2,\ldots,6\). We exclude JKO-ICNN-PD since it produces significantly worse results on this application.
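The Barenblatt reference density (11) is easy to evaluate directly; a short numpy sketch of the formula just stated:

```python
import numpy as np

def barenblatt(x, t, d, m=2, t0=1e-3, C=1.0):
    """Evaluate the Barenblatt profile (11) at points x of shape (n, d)."""
    alpha = m / (d * (m - 1) + 2)
    beta = (m - 1) * alpha / (2 * d * m)
    s = t + t0
    inner = C - beta * np.sum(x**2, axis=-1) * s ** (-2 * alpha / d)
    return s ** (-alpha) * np.maximum(inner, 0.0) ** (1.0 / (m - 1))
```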
We visualize the density \(p_{t}\) of the flow recovered by SCVM-TIPF and JKO-ICNN in Figure 7 in dimension 1, compared to \(p_{t}^{*}\). Both methods approximate \(p_{t}^{*}\) well, with SCVM-TIPF more precise at the beginning of the flow; this is consistent with the observation in Figure 5, where JKO methods result in bigger errors for small \(t\). In Figure 8, we plot the \(f\)-divergence, the Wasserstein-2 distance, and the total variation (TV) distance (details on the TV distance are given in Appendix B.4) between the recovered solution \(p_{t}\) and \(p_{t}^{*}\) for both methods at \(t=0.004\) and \(t=0.025\). We also plot in Figure 16, for dimensions \(3\) and \(6\), the evolution of the same metrics across time. Note that the values of all metrics are very low, implying that the solution from either method is very accurate, with SCVM-TIPF more precise in TV distance and symmetric \(f\)-divergence, especially for \(d>3\). As with the experiments in previous sections, JKO-ICNN is much slower to train: in dimension \(6\), training JKO-ICNN took \(102\) minutes compared to \(21\) minutes for SCVM-TIPF.

Figure 7: Visualization of the densities of \(p_{t}^{*}\) and \(p_{t}\) for the porous medium equation in dimension \(1\) at varying time steps \(t\) for SCVM-TIPF and JKO-ICNN.

Figure 8: Total variation distance, symmetric \(f\)-divergence, and Wasserstein-2 distances across dimensions at \(t=0.004\) and \(t=0.025\) between \(p_{t}\) and \(p_{t}^{*}\) for solving the porous medium equation.

### Time-Dependent Ornstein-Uhlenbeck

In this section, we qualitatively evaluate our method for solving a PDE that is not a Wasserstein gradient flow. In this case, JKO-based methods cannot be applied. Consider the OU process from Section 4.2 when the mean \(\beta\) and the matrix \(\Gamma\) become time-dependent as \(\beta_{t}\) and \(\Gamma_{t}\). The resulting PDE is a time-dependent Fokker-Planck equation of the form (4) with a velocity field

\[f_{t}(X,\mu_{t})=\Gamma_{t}(\beta_{t}-X)-D\nabla\log p_{t}(X). \tag{12}\]

In this configuration, when the initial density \(p_{0}\) is Gaussian, the solution \(\mu_{t}\) can again be shown to be Gaussian with mean and covariance following an ODE. More details are given in Appendix C.1. We consider, in dimensions \(2\) and \(3\), time-dependent attraction towards a harmonic mean \(\beta_{t}=a(\sin(\pi\omega t),\cos(\pi\omega t))\) using the expression of \(\beta_{t}\) from Boffi and Vanden-Eijnden (2022), augmented to \(\beta_{t}=a(\sin(\pi\omega t),\cos(\pi\omega t),t)\) in dimension \(3\). We apply both SCVM-TIPF and SCVM-NODE to this problem and compare our results with those of SDE-EM, the particle-based Euler-Maruyama discretization of the stochastic differential equation associated with the Fokker-Planck equation. To compute metrics for SDE-EM, we use kernel density estimation on the evolving particles. Similar to what has been observed for the static OU process, SCVM-TIPF outperforms SCVM-NODE in these low dimensions. SCVM-TIPF also obtains better results than SDE-EM. Visual simulations of the evolution of a few sampled particles are given in Figure 17 and Figure 18.

Figure 9: Symmetric KL divergence and Wasserstein-2 distances across time for \(d=2,3\) between the recovered flows and the ground truth for the time-dependent OU process.

### Additional qualitative experiments

To demonstrate the flexibility of our method, we apply our algorithm to model more general mass-conserving dynamics than the ones considered in the previous sections. Animated GIFs of these dynamics can be found at this link.
**Flock of birds.** We first propose to model the dynamics of a flock of birds by augmenting the time-dependent Fokker-Planck equation (12) with an interaction term:

\[f_{t}(X,\mu_{t})=\Gamma_{t}(\beta_{t}-X)+\alpha_{t}(X-\mathbf{E}[\mu_{t}])-D\nabla\log p_{t}(X).\]

This is similar to the harmonically interacting particles experiment in Boffi and Vanden-Eijnden (2022), but we use a population expectation \(\mathbf{E}[\mu_{t}]\) instead of an empirical one in modeling the repulsion from the mean. Since \(f_{t}\) needs to access \(\mathbf{E}[\mu_{t}]\), the resulting PDE is not a Fokker-Planck equation (4) and hence is not solvable using the method in Boffi and Vanden-Eijnden (2022), but it can be solved with our method by estimating \(\mathbf{E}[\mu_{t}]\) using Monte Carlo samples from \(\mu_{t}\). We use a similar setup as in Section 4.4, except we now use an "infinity sign" attraction \(\beta_{t}=a(\cos(2\pi\omega t),0.5\sin(2\pi\omega t))\) along with a sinusoidal \(\alpha_{t}=2\sin(\pi\omega t)\). Depending on the sign of \(\alpha_{t}\), particles are periodically attracted towards or repelled from their mean. Both SCVM-TIPF and SCVM-NODE produce similar visual results, as shown in Figure 10 and Figure 20.

Figure 10: Flow at a few sampled particles of SCVM-TIPF which simulates a flock of birds. See Figure 19 for visualization with more time steps.

**Flow splashing against obstacles.** We now model the phenomenon of a 2-dimensional flow splashing against obstacles using a Fokker-Planck equation (4) where \(b_{t}\) encodes the configuration of three obstacles that repel the flow (see Appendix C.3 for details). We solve this PDE using SCVM-NODE for \(T=5\) and visualize the recovered flow in Figure 11. When solving the same PDE using SDE-EM, the flow incorrectly crosses the bottom right obstacle due to a finite time step size (Figure 22), whereas our method has no such issue and results in continuous sample paths (Figure 21).

**Smooth interpolation of measures.** We formulate the problem of smoothly interpolating a list of measures as a time-dependent Fokker-Planck equation and use it to interpolate MNIST digits 1, 2, and 3, starting from a Gaussian (Figure 12). We use a mixture of small-variance Gaussians to represent each digit given as an image. See Appendix C.4 for more details.

## 5 Conclusion

By extending the concept of self-consistency from Shen et al. (2022), we present an iterative optimization method for solving a wide class of mass-conserving PDEs without temporal or spatial discretization. Our method achieves strong quantitative results in computing Wasserstein gradient flows compared to recent JKO-based methods while requiring far less computation time. Below we highlight a few future directions. First, as discussed, the two ways to parameterize a probability flow, TIPF and NODE, both have their specific limitations. Finding a new parameterization that combines the advantages of both TIPF and NODE is an important next step. Secondly, we hope to extend our approach to incorporate more complicated boundary conditions. Thirdly, our method might struggle or require a long training time with complicated long-time dynamics; using multiple flows, each for a short time interval, is one way to mitigate this issue. Finally, from a theoretical perspective, it would be interesting to explore the convergence properties of the proposed iterative procedure.
2302.14508
Double Sum involving Product of Appell-Type Bernoulli and Euler Polynomials
In this work we derive a bilateral generating function involving the product of an Appell-type product of the Bernoulli and Euler polynomials over independent indices and orders. This function is expressed in terms of the Hurwitz zeta function and special cases in terms of the finite sum of the Hurwitz zeta function and integral formula are derived.
Robert Reynolds
2023-02-28T11:55:31Z
http://arxiv.org/abs/2302.14508v1
# Double Sum Involving Product of Appell-Type Bernoulli and Euler Polynomials

###### Abstract

In this work we derive a bilateral generating function involving the product of an Appell-type product of the Bernoulli and Euler polynomials over independent indices and orders. This function is expressed in terms of the Hurwitz zeta function, and special cases in terms of the finite sum of the Hurwitz zeta function and an integral formula are derived.

Key words and phrases: Generating function, Bernoulli polynomial, Euler polynomial, Cauchy integral, Catalan's constant

2020 Mathematics Subject Classification: Primary 30E20, 33-01, 33-03, 33-04

## 0.1. Theory and Background

In 1880, Appell [1, 2] introduced a widely studied sequence of \(n\)th-degree polynomials \(\{f_{n}\}_{n\in\mathbb{N}}\) satisfying the differential relation

\[f_{n}^{\prime}(x)=nf_{n-1}(x),\quad n=1,2,\ldots. \tag{0.1}\]

Certain Appell sets, such as the Hermite, Bernoulli and Euler polynomials described in [3], Chap. 2, have been of high interest in research. The Bernoulli and Euler polynomials below are given in equations (6.3.1.1), (6.3.3.1), (6.3.2.1) and (6.3.4.1) in [4] respectively:

\[\sum_{n=0}^{\infty}B_{n}(x)\frac{z^{n}}{n!}=\frac{ze^{xz}}{e^{z}-1}, \tag{0.2}\]

and

\[\sum_{n=0}^{\infty}E_{n}(x)\frac{z^{n}}{n!}=\frac{2e^{xz}}{e^{z}+1}, \tag{0.3}\]

and

\[\sum_{n=0}^{\infty}B_{n}(x+ny)\frac{u^{n}}{n!}=\frac{1}{1-yz}\frac{ze^{xz}}{e^{z}-1}, \tag{0.4}\]

and

\[\sum_{n=0}^{\infty}E_{n}(x+ny)\frac{u^{n}}{n!}=\frac{1}{1-yz}\frac{2e^{xz}}{e^{z}+1}, \tag{0.5}\]

where

\[u=ze^{-yz}. \tag{0.6}\]

Appell polynomials are of high interest and have many applications in mathematics, theoretical physics, chemistry, special functions, analysis, combinatorics and number theory [1, 2, 5]. Bernoulli polynomials and numbers were first introduced by Jacob Bernoulli, and the Bernoulli polynomials are a special case of Appell polynomials. Bernoulli polynomials and numbers are used in the theory of finite differences, especially in the process of summation. The Euler polynomials are named after the gifted Swiss mathematician Leonhard Euler (1707-1783); these polynomial functions have much in common with the Bernoulli polynomials. Both polynomial families are useful in summing series involving quantities raised to integer powers, as described in [6], Chap. 20. Considerable scientific study continues to this day involving the Bernoulli and Euler polynomials defined in [2], Chap. 2. In this paper we derive a generating function in terms of the product of the Bernoulli and Euler polynomials over independent variables. This is an extension of formulae in the current literature.

### Preliminaries

We proceed by using the contour integral method [7] applied to equations (0.4) and (0.5) to yield the Appell-type Bernoulli-Euler contour integral representation given by:

\[\frac{1}{2\pi i}\int_{C}\sum_{n,p\geq 0}\frac{a^{w}\pi^{n+p}(\pi\beta w-1)(\pi\delta w-1)B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{n!\,p!\,e^{\pi w(\beta n+\delta p)}w^{k-n-p+1}}dw=\frac{1}{2\pi i}\int_{C}\mathrm{csch}(\pi w)e^{\pi w(\alpha+\gamma-1)}dw \tag{0.7}\]

where \(a,\alpha,\beta,\gamma,\delta,k\in\mathbb{C}\) and \(Re(w)>0\).
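As a quick numerical sanity check (our own illustration, not part of the derivation), the generating function (0.2) can be verified by truncation with sympy; (0.3) can be checked the same way with `sp.euler(n, x)`:

```python
import sympy as sp

x, z = sp.symbols('x z')
N = 10  # truncation order, illustrative

# Truncated left-hand side of (0.2): sum_{n < N} B_n(x) z^n / n!
lhs = sum(sp.bernoulli(n, x) * z**n / sp.factorial(n) for n in range(N))

# Right-hand side of (0.2), expanded to the same order in z
rhs = sp.series(z * sp.exp(x * z) / (sp.exp(z) - 1), z, 0, N).removeO()

print(sp.simplify(sp.expand(rhs - lhs)))  # prints 0
```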
Using equation (0.7), the main theorem involving the product of the Bernoulli and Euler polynomials, expressed in terms of the Hurwitz zeta function, to be derived and evaluated is given by

\[\sum_{n,p\geq 0}\frac{B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{(a-n\beta-p\delta)^{2-k+n+p}\Gamma(1+n)\Gamma(1+p)(k)_{1-n-p}}\Big(\delta\big(\beta\big(k^{2}-k(n+p+1)+2np+n+p\big)\\
-a(k-n+p)\big)+(a-\beta n)(a+\beta(p-k))+\delta^{2}p(k-n)\Big)\\
=-2^{k}\zeta\left(1-k,\frac{1}{2}(a+\alpha+\gamma)\right) \tag{0.8}\]

where the variables \(a,\alpha,\beta,\gamma,\delta,k\) are general complex numbers and the Pochhammer symbol \((-k)_{p}\) is given in equation (5.2.5) in [8]. The derivations follow the method used by us in [7]. This method involves using a form of the generalized Cauchy integral formula given by

\[\frac{y^{k}}{\Gamma(k+1)}=\frac{1}{2\pi i}\int_{C}\frac{e^{wy}}{w^{k+1}}dw, \tag{0.9}\]

where \(y,w\in\mathbb{C}\) and \(C\) is in general an open contour in the complex plane where the bilinear concomitant [7] has the same value at the end points of the contour. One first uses a form of equation (0.9), multiplies both sides by a function, and then takes the definite double sum of both sides. This yields a double sum in terms of a contour integral. Then we multiply both sides of equation (0.9) by another function and take the infinite sum of both sides such that the contour integrals of both equations are the same.

### Left-Hand Side First Contour Integral

In this section we derive the infinite sum representation involving the product of two generalized Euler and Bernoulli polynomials over independent indices for the left-hand side of equation (0.8). Using a generalization of Cauchy's integral formula (0.9), we first replace \(y\) by \(\log(a)-\pi(\beta n+\delta p)\) and \(k\) by \(k-n-p\), then we multiply both sides by

\[\frac{\pi^{n+p}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{n!\,p!} \tag{0.10}\]

and then we take the sums over \(n\in[0,\infty)\) and \(p\in[0,\infty)\) and simplify to get

\[\sum_{n,p\geq 0}\frac{\pi^{n+p}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)(\log(a)-\pi(\beta n+\delta p))^{k-n-p}}{n!\,p!\,(k-n-p)!}\\
=\frac{1}{2\pi i}\sum_{n,p\geq 0}\int_{C}\frac{a^{w}\pi^{n+p}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)w^{-k+n+p-1}e^{-\pi w(\beta n+\delta p)}}{n!\,p!}dw\\
=\frac{1}{2\pi i}\int_{C}\sum_{n,p\geq 0}\frac{a^{w}\pi^{n+p}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)w^{-k+n+p-1}e^{-\pi w(\beta n+\delta p)}}{n!\,p!}dw \tag{0.11}\]

### Left-Hand Side Second Contour Integral

Using a generalization of Cauchy's integral formula (0.9), we first replace \(y\) by \(\log(a)-\pi(\beta n+\delta p)\) and \(k\) by \(k-n-p-1\), then we multiply both sides by

\[-\frac{\beta\pi^{n+p+1}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{n!\,p!} \tag{0.12}\]

and take the sums over \(n\in[0,\infty)\) and \(p\in[0,\infty)\) and simplify to get

\[-\sum_{n,p\geq 0}\frac{\beta\pi^{n+p+1}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)(\log(a)-\pi(\beta n+\delta p))^{k-n-p-1}}{n!\,p!\,\Gamma(k-n-p)}\\
=-\frac{1}{2\pi i}\sum_{n,p\geq 0}\int_{C}\frac{\beta a^{w}\pi^{n+p+1}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)w^{-k+n+p}e^{-\pi w(\beta n+\delta p)}}{n!\,p!}dw\\
=-\frac{1}{2\pi i}\int_{C}\sum_{n,p\geq 0}\frac{\beta a^{w}\pi^{n+p+1}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)w^{-k+n+p}e^{-\pi w(\beta n+\delta p)}}{n!\,p!}dw \tag{0.13}\]

### Left-Hand Side Third Contour Integral

Using a generalization of Cauchy's integral formula (0.9), we first replace \(y\) by \(\log(a)-\pi(\beta n+\delta p)\) and \(k\) by \(k-n-p-1\), then we multiply both sides by

\[-\frac{\delta\pi^{n+p+1}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{n!\,p!} \tag{0.14}\]

and take the sums over \(n\in[0,\infty)\) and \(p\in[0,\infty)\) and simplify to get

\[-\sum_{n,p\geq 0}\frac{\delta\pi^{n+p+1}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)(\log(a)-\pi(\beta n+\delta p))^{k-n-p-1}}{n!\,p!\,\Gamma(k-n-p)}\\
=-\frac{1}{2\pi i}\sum_{n,p\geq 0}\int_{C}\frac{\delta a^{w}\pi^{n+p+1}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)w^{-k+n+p}e^{-\pi w(\beta n+\delta p)}}{n!\,p!}dw\\
=-\frac{1}{2\pi i}\int_{C}\sum_{n,p\geq 0}\frac{\delta a^{w}\pi^{n+p+1}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)w^{-k+n+p}e^{-\pi w(\beta n+\delta p)}}{n!\,p!}dw \tag{0.15}\]
### Left-Hand Side Fourth Contour Integral

Using a generalization of Cauchy's integral formula (0.9), we first replace \(y\) by \(\log(a)-\pi(\beta n+\delta p)\) and \(k\) by \(k-n-p-2\), then we multiply both sides by

\[\frac{\beta\delta\pi^{n+p+2}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{n!\,p!} \tag{0.16}\]

and take the sums over \(n\in[0,\infty)\) and \(p\in[0,\infty)\) and simplify to get

\[\sum_{n,p\geq 0}\frac{\beta\delta\pi^{n+p+2}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)(\log(a)-\pi(\beta n+\delta p))^{k-n-p-2}}{n!\,p!\,\Gamma(k-n-p-1)}\\
=\frac{1}{2\pi i}\sum_{n,p\geq 0}\int_{C}\frac{\beta\delta a^{w}\pi^{n+p+2}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)w^{-k+n+p+1}e^{-\pi w(\beta n+\delta p)}}{n!\,p!}dw\\
=\frac{1}{2\pi i}\int_{C}\sum_{n,p\geq 0}\frac{\beta\delta a^{w}\pi^{n+p+2}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)w^{-k+n+p+1}e^{-\pi w(\beta n+\delta p)}}{n!\,p!}dw \tag{0.17}\]

## 1 Hurwitz zeta Function In Terms Of The Contour Integral

### The Hurwitz zeta Function

The Hurwitz zeta function (25.11(i) in [8]) is defined by the infinite sum

\[\zeta(s,a)=\sum_{n=0}^{\infty}\frac{1}{(n+a)^{s}},\]

where \(\zeta(s,a)\) has a meromorphic continuation in the \(s\)-plane, its only singularity in \(\mathbb{C}\) being a simple pole at \(s=1\) with residue \(1\). As a function of \(a\), with \(s(\neq 1)\) fixed, \(\zeta(s,a)\) is analytic in the half-plane \(Re(a)>0\). The Hurwitz zeta function is continued analytically with a definite integral representation (25.11.25 in [8]) given by

\[\zeta(s,a)=\frac{1}{\Gamma(s)}\int_{0}^{\infty}\frac{x^{s-1}e^{-ax}}{1-e^{-x}}dx,\]

where \(Re(s)>1,Re(a)>0\).

### Derivation of the Right-Hand Side Contour Integral

Using a generalization of Cauchy's integral formula, we first replace \(y\) by \(\pi(\alpha+\gamma-1)+\log(a)+\pi(2y+1)\) and \(k\) by \(k-1\), then multiply both sides by \(-2\pi\), then take the infinite sum over \(y\in[0,\infty)\) and simplify in terms of the Hurwitz zeta function to get

\[-\frac{(2\pi)^{k}\zeta\left(1-k,\frac{\pi(\alpha+\gamma-1)+\log(a)+\pi}{2\pi}\right)}{(k-1)!}\\
=-\frac{1}{2\pi i}\sum_{y\geq 0}\int_{C}2\pi a^{w}w^{-k}e^{\pi w(\alpha+\gamma+2y)}dw\\
=-\frac{1}{2\pi i}\int_{C}\sum_{y\geq 0}2\pi a^{w}w^{-k}e^{\pi w(\alpha+\gamma+2y)}dw\\
=\frac{1}{2\pi i}\int_{C}\pi a^{w}w^{-k}\mathrm{csch}(\pi w)e^{\pi w(\alpha+\gamma-1)}dw\]

from equation (1.232.3) in [9], where \(Im(w)>0\) in order for the sum to converge.

## 2. Main Results

In this section we derive the main theorem along with special cases in terms of integral, series and special function forms of the Hurwitz zeta function. A special case in terms of Catalan's constant is also derived and evaluated.

**Theorem 2.1**.: _For all \(k,a,\alpha,\beta,\gamma,\delta\in\mathbb{C}\),_

\[\sum_{n,p\geq 0}\frac{B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{(a-n\beta-p\delta)^{2-k+n+p}\Gamma(1+n)\Gamma(1+p)(k)_{1-n-p}}\Big(\delta\big(\beta\big(k^{2}-k(n+p+1)+2np+n+p\big)\\
-a(k-n+p)\big)+(a-\beta n)(a+\beta(p-k))+\delta^{2}p(k-n)\Big)\\
=-2^{k}\zeta\left(1-k,\frac{1}{2}(a+\alpha+\gamma)\right). \tag{2.1}\]

Proof.: Since the addition of the right-hand sides of equations (0.11) to (0.17) is equal to the right-hand side of equation (1.1), we may equate the left-hand sides, replace \(a\to e^{a\pi}\), and then simplify using the formulae for the gamma function and Pochhammer symbol to yield the stated result.
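As a numerical aside (our own check, not part of the proof), the series and integral representations of the Hurwitz zeta function above agree; a short mpmath sketch with illustrative parameter values:

```python
from mpmath import mp, mpf, quad, gamma, exp, zeta, inf

mp.dps = 30
s, a = mpf(3), mpf('0.7')  # illustrative values with Re(s) > 1, Re(a) > 0

series_val = zeta(s, a)  # mpmath's built-in Hurwitz zeta (the series)
integral_val = quad(lambda x: x**(s - 1) * exp(-a * x) / (1 - exp(-x)),
                    [0, inf]) / gamma(s)
print(series_val, integral_val)  # agree to working precision
```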
**Theorem 2.2**.: _For all \(k,a,\alpha,\beta,\gamma,\delta\in\mathbb{C}\),_

\[\sum_{n\geq 0}\frac{E_{n}(x+n\alpha)(-1)^{1+n}(1-k)_{-1+n}}{(a-n\alpha)^{1-k+n}\Gamma(n+1)}\\
=\frac{2^{1+k}\left(-\zeta\left(-k,\frac{a+x}{2}\right)+\zeta\left(-k,\frac{1}{2}(1+a+x)\right)\right)}{k(-a+k\alpha)}. \tag{2.2}\]

Proof.: We use equation (0.5) and repeat the procedure in Theorem 2.1, applying the contour integral method [7].

**Theorem 2.3**.: _For all \(k,a,\alpha,\beta,\gamma,\delta\in\mathbb{C}\),_

\[\sum_{n\geq 0}\frac{(-1)^{n}(a-\alpha k)\Gamma(n-k)(a-\alpha n)^{k-n-1}B_{n}(x+n\alpha)}{\Gamma(1-k)\Gamma(n+1)}=\zeta(1-k,a+x). \tag{2.3}\]

Proof.: We use equation (0.4) and repeat the procedure in Theorem 2.1, applying the contour integral method [7].

**Example 2.4**.: Special values in terms of the polygamma function.

\[\sum_{n,p\geq 0}\frac{B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{(a-n\beta-p\delta)^{2+k+n+p}\Gamma(1+n)\Gamma(1+p)(-k)_{1-n-p}}\Big(\delta\big(\beta\big(k^{2}+k(n+p+1)+2np+n+p\big)\\
-a(-k-n+p)\big)+(a-\beta n)(a+\beta(k+p))+\delta^{2}p(-k-n)\Big)\\
=-\frac{2^{-k}(-1)^{-1-k}\psi^{(k)}\left(\frac{1}{2}(a+\alpha+\gamma)\right)}{\Gamma(k+1)}. \tag{2.4}\]

Here we use a special value of the Hurwitz zeta function given by equation (25.11.12) in [8] and simplify the right-hand side of equation (2.1).

**Example 2.5**.: Special values in terms of the finite sum of the Hurwitz zeta function.

\[\sum_{n,p\geq 0}\frac{B_{p}(q\gamma+p\delta)E_{n}(q\alpha+n\beta)}{\Gamma(1+n)\Gamma(1+p)(k)_{1-n-p}}(aq-\beta n-\delta p)^{k-n-p-2}\Big(\delta\big(\beta\big(k^{2}-k(n+p+1)+2np+n+p\big)\\
-aq(k-n+p)\big)+(aq-\beta n)(aq+\beta(p-k))+\delta^{2}p(k-n)\Big)\\
=-2^{k}q^{-1+k}\sum_{n=0}^{q-1}\zeta\left(1-k,\frac{n}{q}+\frac{1}{2}(a+\alpha+\gamma)\right). \tag{2.5}\]

Here we use a special value of the Hurwitz zeta function in terms of the finite sum of the Hurwitz zeta function given by equation (25.11.15) in [8] and simplify the right-hand side of equation (2.1).

**Example 2.6**.: Integral representation.

\[\sum_{n,p\geq 0}\frac{B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{(a-n\beta-p\delta)^{2-k+n+p}\Gamma(1+n)\Gamma(1+p)(k)_{1-n-p}}\Big(\delta\big(\beta\big(k^{2}-k(n+p+1)+2np+n+p\big)\\
-a(k-n+p)\big)+(a-\beta n)(a+\beta(p-k))+\delta^{2}p(k-n)\Big)\\
=-\frac{2^{k}}{\Gamma(1-k)}\int_{0}^{\infty}\frac{e^{-\frac{1}{2}x(a+\alpha+\gamma)}x^{-k}}{1-e^{-x}}dx. \tag{2.6}\]

Here we use the integral representation of the Hurwitz zeta function given by equation (1.1) and equation (12.3.5) in [10] and simplify the right-hand side of equation (2.1).

**Example 2.7**.: The \(n^{th}\) harmonic number \(H_{n}^{(s)}\) and the Riemann zeta function \(\zeta(s)\).

\[\sum_{n,p\geq 0}\frac{B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{(a-n\beta-p\delta)^{2-k+n+p}\Gamma(1+n)\Gamma(1+p)(k)_{1-n-p}}\Big(\delta\big(\beta\big(k^{2}-k(n+p+1)+2np+n+p\big)\\
-a(k-n+p)\big)+(a-\beta n)(a+\beta(p-k))+\delta^{2}p(k-n)\Big)\\
=-2^{k}\left(-H_{-1+\frac{1}{2}(a+\alpha+\gamma)}^{(1-k)}+\zeta(1-k)\right). \tag{2.7}\]

In this proof we apply the relationship between the polygamma functions and the Hurwitz zeta function given by equations (1.7) and (1.9) in [11] and simplify the right-hand side of equation (2.1).

**Example 2.8**.: Bernoulli polynomial \(B_{n}(x)\) over integers.
\[\sum_{n,p\geq 0}\frac{B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{(a-n\beta-p\delta)^{2-k+n+p}\Gamma(1+n)\Gamma(1+p)(k)_{1-n-p}}\Big(\delta\big(\beta\big(k^{2}-k(n+p+1)+2np+n+p\big)\\
-a(k-n+p)\big)+(a-\beta n)(a+\beta(p-k))+\delta^{2}p(k-n)\Big)\\
=\frac{2^{k}}{k}B_{k}\left(\frac{1}{2}(a+\alpha+\gamma)\right). \tag{2.8}\]

Proof.: In this proof we apply the formula between the Hurwitz zeta function and the Bernoulli polynomial given by equation (25.11.14) in [8] and simplify the right-hand side of equation (2.1).

**Example 2.9**.: Hermite's formula for the Hurwitz zeta function.

\[\sum_{n,p\geq 0}\frac{B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{(a-n\beta-p\delta)^{2-k+n+p}\Gamma(1+n)\Gamma(1+p)(k)_{1-n-p}}\Big(\delta\big(\beta\big(k^{2}-k(n+p+1)+2np+n+p\big)\\
-a(k-n+p)\big)+(a-\beta n)(a+\beta(p-k))+\delta^{2}p(k-n)\Big)\\
=-(a+\alpha+\gamma)^{-1+k}+\frac{(a+\alpha+\gamma)^{k}}{k}\\
-\int_{0}^{\infty}\frac{2^{1+k}\left(y^{2}+\frac{1}{4}(a+\alpha+\gamma)^{2}\right)^{\frac{1}{2}(-1+k)}\sin\left((1-k)\tan^{-1}\left(\frac{2y}{a+\alpha+\gamma}\right)\right)}{-1+e^{2\pi y}}dy. \tag{2.9}\]

Proof.: Here we use the Hermite formula for the Hurwitz zeta function given by equation (2.2.12) in [12] and simplify the right-hand side of equation (2.1).

**Example 2.10**.: A functional equation for the Hurwitz zeta function.

\[\sum_{n,p\geq 0}\frac{B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)}{(a-n\beta-p\delta)^{2-k+n+p}\Gamma(1+n)\Gamma(1+p)(k)_{1-n-p}}\Big(\delta\big(\beta\big(k^{2}-k(n+p+1)+2np+n+p\big)\\
-a(k-n+p)\big)+(a-\beta n)(a+\beta(p-k))+\delta^{2}p(k-n)\Big)\\
=-2\pi^{-k}\Gamma(k)\sum_{m=1}^{\infty}\Big(m^{-k}\cos(m\pi(a+\alpha+\gamma))\sin\left(\tfrac{1}{2}(1-k)\pi\right)\\
+m^{-k}\cos\left(\tfrac{1}{2}(1-k)\pi\right)\sin(m\pi(a+\alpha+\gamma))\Big). \tag{2.10}\]

Proof.: In this proof we apply the Hurwitz zeta function expressed as a convergent Dirichlet series given in Theorem 3.1 of Chap. 2.3 in [13] and simplify the right-hand side of equation (2.1).

**Example 2.11**.: The trigamma function \(\psi^{(1)}(z)\).

\[\sum_{n,p\geq 0}\frac{1}{\Gamma(n+1)\Gamma(p+1)(-1)_{-n-p+1}}\Bigg(B_{p}(3\gamma+p\delta)E_{n}(3\alpha+n\beta)\Big(-\left(\tfrac{3a}{2}-\beta n\right)\left(\tfrac{3a}{2}+\beta+\beta p\right)\\
-\tfrac{3}{2}a\delta(n-p+1)-2\beta\delta(n+1)(p+1)+\delta^{2}(n+1)p\Big)\left(\tfrac{3a}{2}-\beta n-\delta p\right)^{-n-p-3}\\
+2^{n+p+1}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)\Big((a-2\beta n)(a+2\beta(p+1))+2a\delta(n-p+1)\\
+8\beta\delta(n+1)(p+1)-4\delta^{2}(n+1)p\Big)(a-2(\beta n+\delta p))^{-n-p-3}\Bigg)\\
=\frac{1}{2}\left(\psi^{(1)}\left(\frac{3}{4}(a+2(\alpha+\gamma))\right)-\psi^{(1)}\left(\frac{1}{4}(a+2(\alpha+\gamma))\right)\right). \tag{2.11}\]

Proof.: In this proof we use equation (2.1) and set \(k=-1\) to form a second equation. Using this new equation, we form a third equation by replacing \(a\to 3a,\alpha\to 3\alpha,\gamma\to 3\gamma\). Then we take the difference between the second and third equations and simplify.
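The special value used in Example 2.8, equation (25.11.14) in [8], is easy to check numerically; a short mpmath sketch (our own illustration):

```python
from mpmath import mp, zeta, bernpoly

mp.dps = 25
k, x = 6, mp.mpf('0.3')  # illustrative integer order and argument
print(zeta(1 - k, x))        # Hurwitz zeta continued to s = 1 - k
print(-bernpoly(k, x) / k)   # -B_k(x)/k; the two values coincide
```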
**Example 2.12**.: Catalan's constant \(C\).

\[\sum_{n,p\geq 0}\frac{8^{n+p+1}}{\Gamma(n+1)\Gamma(p+1)(-1)_{-n-p+1}}\Bigg(B_{p}\left(\frac{p}{8}+3\right)E_{n}\left(\frac{n}{4}+3\right)(-2n-p+420)^{-n-p-3}\\
(n(p+420)-423p-177664)-B_{p}\left(\frac{p}{8}+1\right)E_{n}\left(\frac{n}{4}+1\right)\\
(-2n-p+140)^{-n-p-3}(n(p+140)-143p-20024)\Bigg)\\
\approx 8C-\frac{75212337272621857920793935018753452980170388400522851847928913376}{10213049603314044640247750329701049140178779927760268106012748125}. \tag{2.12}\]

Proof.: In this proof we use equation (2.11) and set \(a=35,\alpha=\gamma=1,\beta=\frac{1}{4},\delta=\frac{1}{8}\) and simplify using equations (24.11.40) and (25.11.1) in [8].

## 3. The derivative with respect to \(k\)

In this section we evaluate the first partial derivative with respect to \(k\) of equation (2.1) in terms of composite Hurwitz zeta functions.

**Example 3.1**.: The Hurwitz zeta function \(\zeta(u,v)\).

\[\sum_{n,p\geq 0}\frac{(-1)^{n+p+1}}{\Gamma(n+1)\Gamma(p+1)}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)(1-k)_{n+p-1}(a-\beta n-\delta p)^{k-n-p-2}\\
\Big(\delta\big(\beta\big(k^{2}-k(n+p+1)+2np+n+p\big)-a(k-n+p)\big)\\
+(a-\beta n)(a+\beta(p-k))+\delta^{2}p(k-n)\Big)\\
=-2^{k}\zeta\left(1-k,\frac{1}{2}(a+\alpha+\gamma)\right). \tag{3.1}\]

Proof.: In this proof we use equation (2.1) and simplify the reciprocal Pochhammer symbol using equations (5.2.5) and (5.2.6) in [8].

**Example 3.2**.: The derivative of the Hurwitz zeta function \(\zeta^{\prime}(u,v)\).

\[\sum_{n,p\geq 0}\frac{(-1)^{n+p}}{\Gamma(n+1)\Gamma(p+1)}B_{p}(\gamma+p\delta)E_{n}(\alpha+n\beta)(1-k)_{n+p-1}(a-\beta n-\delta p)^{k-n-p-2}\\
\Big(a(\beta+\delta)-\big((a-\beta n)(a+\beta(p-k))-a\delta(k-n+p)+\beta\delta\big(k^{2}-k(n+p+1)+2np+n+p\big)\\
+\delta^{2}p(k-n)\big)(\log(a-\beta n-\delta p)-H_{-k+n+p-1}+H_{-k})+\beta\delta(-2k+n+p+1)-\beta^{2}n-\delta^{2}p\Big)\\
=2^{k}\left(\zeta^{\prime}\left(1-k,\frac{1}{2}(a+\alpha+\gamma)\right)-\log(2)\zeta\left(1-k,\frac{1}{2}(a+\alpha+\gamma)\right)\right). \tag{3.2}\]

Proof.: In this proof we use equation (2.1), take the first partial derivative with respect to \(k\), and simplify the right-hand side using equation (25.11.1) in [8].

**Example 3.3**.: The derivative and the Hurwitz zeta function, trigamma function \(\psi^{(1)}(z)\) and \(\log(2)\).

**Example 3.4**.: The derivative of the Riemann zeta function \(\zeta^{\prime}(3)\) and Apery's constant \(\zeta(3)\).

\[\sum_{n,p\geq 0}\frac{(-1)^{n+p}12^{n+p+2}}{\Gamma(n+1)\Gamma(p+1)}B_{p}\left(\frac{p}{3}+1\right)E_{n}\left(\frac{n}{4}+1\right)(-3n-4p+264)^{-n-p-4}\\
\Gamma(n+p+2)\Big((n(p-282)+260p-73464)H_{n+p+1}+(n(-p)+282n-260p+73464)\\
\log\left(-\frac{n}{4}-\frac{p}{3}+22\right)+279n-256p+71556\Big)\\
=-\frac{\zeta^{\prime}(3)}{2}+\frac{1}{4}\zeta(3)(\log(4)-1)-\frac{289853\log(2)}{3456000}-\frac{259\log(3)}{11664}-\frac{25523438671457(\log(4)-1)}{85200014592000}\\
-\frac{9\log(5)}{2000}-\frac{\log(7)}{686}-\frac{\log(11)}{2662}. \tag{3.5}\]

Proof.: In this proof we use equation (3.4) and set \(a=22,\alpha=\gamma=1,\beta=\frac{1}{4},\delta=\frac{1}{3}\), and simplify using equation (1.6) in [14].

## 4 Extended Generating Functions

In this section we apply the methods of simultaneous equations and ordinary differential equations to derive extended forms involving the Bernoulli and Euler polynomials.
The method involves finding the closed form solution after increasing the factorial in the denominator by 1. We first assign a general function to the right-hand side of the equation we wish to derive. Next we take the difference of these equations, followed by taking the derivative of the equation we are solving for, such that the left-hand side is the same as the difference of the equations. Next we equate the right-hand sides and solve the ordinary differential equation.

### Example 1: Euler's polynomial

Starting with the initial formula given by

\[\sum_{n=0}^{\infty}\frac{z^{n}e^{-nyz}E_{n}(x+ny)}{\Gamma(n+1)}=\frac{2e^{xz}}{\left(e^{z}+1\right)\left(1-yz\right)}, \tag{4.1}\]

we wish to solve the formula given by

\[\sum_{n=0}^{\infty}\frac{z^{n}e^{-nyz}E_{n}(x+ny)}{\Gamma(n+2)}=g(z). \tag{4.2}\]

We next take the difference of equations (4.1) and (4.2) and simplify to get

\[\sum_{n=0}^{\infty}\frac{nz^{n}e^{-nyz}E_{n}(x+ny)}{\Gamma(n+2)}=-g(z)-\frac{2e^{xz}}{\left(e^{z}+1\right)\left(yz-1\right)}. \tag{4.3}\]

Next we take the first partial derivative with respect to \(z\) of equation (4.2) and multiply both sides by \(\frac{z}{1-yz}\), such that the left-hand side is the same as equation (4.3), given by

\[\sum_{n=0}^{\infty}\frac{nz^{n}e^{-nyz}E_{n}(x+ny)}{\Gamma(n+2)}=\frac{zg^{\prime}(z)}{1-yz}. \tag{4.4}\]

Since the left-hand sides of equations (4.3) and (4.4) are the same, we may equate the right-hand sides and derive the ordinary differential equation given by

\[-\frac{zg^{\prime}(z)}{1-yz}-g(z)-\frac{2e^{xz}}{\left(e^{z}+1\right)\left(yz-1\right)}=0. \tag{4.5}\]

Solving the above ordinary differential equation with initial condition \(g(0)=0\) and simplifying, we get

**Theorem 4.1**.: _For all \(|Re(z)|<1,x,y\in\mathbb{C}\),_

\[\sum_{n=0}^{\infty}\frac{z^{n}e^{-nyz}E_{n}(x+ny)}{\Gamma(n+2)}\\
=\frac{e^{yz}}{z(x-y)}\left(2e^{z(x-y)}\,_{2}F_{1}\left(1,x-y;x-y+1;-e^{z}\right)\right.\\
\left.+(x-y)\left(\psi^{(0)}\left(\frac{x-y}{2}\right)-\psi^{(0)}\left(\frac{1}{2}(x-y+1)\right)\right)\right) \tag{4.6}\]

where

\[\int\frac{z^{a-1}}{z+1}dz=\frac{z^{a}\,_{2}F_{1}(1,a;a+1;-z)}{a}, \tag{4.7}\]

\({}_{2}F_{1}\left(a,b;c;z\right)\) is the hypergeometric function and \(\psi^{(0)}(z)\) is the zeroth derivative of the digamma function \(\psi^{(n)}(z)\).

### Example 2: Bernoulli's polynomial

Repeating the above method, we derive the generating function for Bernoulli's polynomial given by

**Theorem 4.2**.: _For all \(|Re(z)|<1,x,y\in\mathbb{C}\),_

\[\sum_{n=0}^{\infty}\frac{z^{n}e^{-nyz}B_{n}(x+ny)}{\Gamma(n+2)}\\
=\frac{e^{yz}}{z}\left(\frac{e^{z(x-y)}\left(\left(x^{2}-2xy+y^{2}\right)\Phi\left(e^{z},2,x-y\right)+z(y-x)\,_{2}F_{1}\left(1,x-y;x-y+1;e^{z}\right)\right)}{(x-y)^{2}}-\psi^{(1)}(x-y)\right) \tag{4.8}\]

where \(\Phi(z,s,a)\) is the Lerch transcendent.

## 5. Discussion

In this paper, we have presented a method for deriving a bilateral generating function involving the product of the Bernoulli and Euler polynomials, along with some interesting related forms, using contour integration. We would like to apply this method to derive other generating functions in future work. The results presented were numerically verified for both real and imaginary and complex values of the parameters in the integrals using Mathematica by Wolfram.
2309.15564
Jointly Training Large Autoregressive Multimodal Models
In recent years, advances in the large-scale pretraining of language and text-to-image models have revolutionized the field of machine learning. Yet, integrating these two modalities into a single, robust model capable of generating seamless multimodal outputs remains a significant challenge. To address this gap, we present the Joint Autoregressive Mixture (JAM) framework, a modular approach that systematically fuses existing text and image generation models. We also introduce a specialized, data-efficient instruction-tuning strategy, tailored for mixed-modal generation tasks. Our final instruct-tuned model demonstrates unparalleled performance in generating high-quality multimodal outputs and represents the first model explicitly designed for this purpose.
Emanuele Aiello, Lili Yu, Yixin Nie, Armen Aghajanyan, Barlas Oguz
2023-09-27T10:40:23Z
http://arxiv.org/abs/2309.15564v2
# Jointly Training Large Autoregressive Multimodal Models

###### Abstract

In recent years, advances in the large-scale pretraining of language and text-to-image models have revolutionized the field of machine learning. Yet, integrating these two modalities into a single, robust model capable of generating seamless multimodal outputs remains a significant challenge. To address this gap, we present the Joint Autoregressive Mixture (JAM) framework, a modular approach that systematically fuses existing text and image generation models. We also introduce a specialized, data-efficient instruction-tuning strategy, tailored for mixed-modal generation tasks. Our final instruct-tuned model demonstrates unparalleled performance in generating high-quality multimodal outputs and represents the first model explicitly designed for this purpose.

## 1 Introduction

Autoregressive text-to-image models, as exemplified by works such as Yu et al. (2023, 2022), have made remarkable strides in generating highly detailed images, paralleling the achievements of Diffusion Models (Nichol et al., 2022; Ramesh et al., 2022; Rombach et al., 2022). These models bear architectural resemblance to Large Language Models (LLMs), yet their training regimen is tailored for paired image-text data. LLMs, on the other hand (Brown et al., 2020; Zhang et al., 2022; Touvron et al., 2023), are limited to text-based output, thus lacking multimodal generative capabilities despite their proficiency in textual tasks. The subfield of Multimodal Large Models has emerged in recent years (Tsimpoukelli et al., 2021; Alayrac et al., 2022; Li et al., 2022) in the quest to bring together the disparate strengths of vision and language models. Despite important advances in this direction, these models still predominantly generate one modality, thereby constraining their expressiveness. This study aspires to break this limitation by developing a multimodal model capable of generating integrated text and image outputs.

To achieve this objective, we conduct a comprehensive empirical investigation into the fusion of two specialized autoregressive, decoder-only, large transformer models, each designed for a unique task (one for text-to-image generation and one text-only model). We introduce a set of methods under the umbrella of the Joint Autoregressive Mixture (JAM) framework. In building this framework, we take advantage of the inherent architectural compatibility of autoregressive text-to-image models with LLMs, allowing us to do deep model fusion and joint training in ways which would otherwise not be possible. Our modular and data-efficient solution allows for deep, rapid and effective integration of continually evolving large models, using less than 1% of the original pretraining data for both parent models.

Our contributions to this study are twofold. First, we establish the feasibility of blending autoregressive text-to-image models with LLMs into a unified architecture that retains the core strengths of each while revealing new, emergent capabilities. Second, we present innovative strategies for multimodal instruction tuning, utilizing text-based instructions and a custom-curated dataset designed explicitly for image generation. The result is a first-of-its-kind large multimodal model which can coherently generate long-form content with interleaved text and images.

## 2 Methods

To tackle the challenge of creating a unified model that excels at vision-language generative tasks, we propose to combine two autoregressive decoder-only architectures.
Our primary image-text model is CM3leon (Yu et al., 2023), trained on 2.4T image-text caption tokens. In contrast, using the same architecture, our LLM (Molybog et al., 2023) has been trained on 1.4T text tokens. Both models have 7B parameters; we provide additional architectural details in Section 3.1. Our overall methodology develops in two stages. In the first stage (Sect. 2.1), we combine and align the models. In the second stage (Sect. 2.2), we explore new directions for instruction tuning focused on interleaved image-text generation. ### Continued Pretraining We combine the two pretrained models into a singular, cohesive structure in our proposed framework. This composite model is fine-tuned using a hybrid dataset comprising both text-only and image-text samples within our _continued pretraining_ phase. The central motivation behind this approach is to seamlessly merge the capabilities of the two pretrained models, capitalizing on the unique strengths of each. #### 2.1.1 Model Merging The concept of model merging has previously been utilized to combine models that share identical optimization trajectories (Kaddour et al., 2022), or models that are trained on identical datasets but have independent optimizations (for instance, Matena and Raffel (2022); Wortsman et al. (2022); Ainsworth et al. (2022)). A consistent approach across these studies is to combine models without any training. Our approach diverges from this convention; we view the merged model as a powerful initialization for subsequent training on mixed-modal data. The weights of the averaged model are defined as: \[\mathbf{\theta}_{average}=\frac{1}{2}\mathbf{\theta}_{llm}+\frac{1}{2}\mathbf{\theta}_{img} \tag{1}\] Where \(\mathbf{\theta}_{llm}\) and \(\mathbf{\theta}_{img}\) represent the weights of the LLM and the text-to-image model respectively. In this study, we apply weight merging specifically to multimodal decoder-only large transformer models, and notably, on an unprecedented scale, involving models trained on trillions of tokens from diverse datasets. In the following sections, we refer to our averaged model as JAM-Uniform. Figure 1: Selected sample generated by our instruction-tuned JAM-Cross model. The model can generate complex mixed-modal outputs with coherent alignment between generated text and images. #### 2.1.2 Width Concatenation Our second approach employs the pretrained weights to initialize a wider architecture. Our new model has hidden dimension \(d_{joint}=8192\), which is double that of each of the two original models (\(d_{llm}=d_{img}=4096\)). We keep the same number of layers as the original architectures. The resulting architecture has 26B parameters, initialized starting from the pretrained weights of our backbones. The token embedding input/output projections and the learned positional embeddings of the two initial models are concatenated on the hidden dimension. The attention weights (e.g., the query projection) \(\mathbf{W}_{q,combined}\in\mathbb{R}^{d_{joint}\times d_{joint}}\) are initialized as: \[\mathbf{W}_{q,combined}=\begin{pmatrix}\mathbf{W}_{q,llm}&\mathbf{W}_{q,llm}\\ \mathbf{W}_{q,img}&\mathbf{W}_{q,img}\end{pmatrix} \tag{2}\] Where \(\mathbf{W}_{q,llm}\), \(\mathbf{W}_{q,img}\in\mathbb{R}^{d_{llm}\times d_{llm}}\) represent the weights for the query projection of a generic attention layer. All the other weights (FFNs and output projections) are initialized following the same logic. 
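To make the two initialization schemes above concrete, here is a minimal PyTorch sketch of the uniform average in Eq. (1) and the block initialization in Eq. (2). All names, shapes, and toy dimensions are illustrative assumptions; this is not the actual Metaseq implementation.

```python
# Sketch of JAM-Uniform (Eq. 1) and JAM-Width (Eq. 2) initializations.
import torch

def uniform_merge(theta_llm: dict, theta_img: dict) -> dict:
    """Eq. (1): elementwise average of two state dicts with identical shapes."""
    return {k: 0.5 * theta_llm[k] + 0.5 * theta_img[k] for k in theta_llm}

def width_concat(W_llm: torch.Tensor, W_img: torch.Tensor) -> torch.Tensor:
    """Eq. (2): block-initialize a (2d x 2d) projection from two (d x d) ones."""
    top = torch.cat([W_llm, W_llm], dim=1)     # [d, 2d]
    bottom = torch.cat([W_img, W_img], dim=1)  # [d, 2d]
    return torch.cat([top, bottom], dim=0)     # [2d, 2d]

d = 4096
W_q_combined = width_concat(torch.randn(d, d), torch.randn(d, d))  # [8192, 8192]
```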
We also experiment with slight variations of the approach: \[\mathbf{W}_{q,combined}=\begin{pmatrix}\mathbf{W}_{q,llm}&\mathbf{W}_{q,average}\\ \mathbf{W}_{q,img}&\mathbf{W}_{q,average}\end{pmatrix} \tag{3}\] Instead of copying the two models' parameters, we use the average to initialize half of the new parameters. We name the resulting model JAM-Width. #### 2.1.3 Cross Model Fusion We propose to embed cross-attention layers between the foundational models to facilitate seamless information interchange while preserving the original models' knowledge. Given two decoder-only transformer models \(\mathcal{T}_{llm}\) and \(\mathcal{T}_{img}\), we introduce a bi-directional cross-attention mechanism that enables the layers of one model to attend to the corresponding layer's output of the other model. This approach allows for a progressive exchange of information at different representation levels. For a specific layer \(l\), let the models produce sequences of hidden states \(\mathbf{H}_{llm,l}\) for \(\mathcal{T}_{llm}\) and \(\mathbf{H}_{img,l}\) for \(\mathcal{T}_{img}\), where these hidden states are outputs from layer \(l\). The output of the cross-attention mechanism (\(\mathbf{H}_{cross,l}\)) from \(\mathcal{T}_{img}\rightarrow\mathcal{T}_{llm}\) for a given layer is evaluated as: \[\mathbf{Q}_{cross,l}=\mathbf{W}_{q,l}\mathbf{H}_{llm,l-1},\quad\mathbf{K}_{cross,l}=\mathbf{W}_{k, l}\mathbf{H}_{img,l-1},\quad\mathbf{V}_{cross,l}=\mathbf{W}_{v,l}\mathbf{H}_{img,l-1} \tag{4}\] \[\mathbf{H}_{cross,l}=\text{Softmax}\left(\frac{\mathbf{Q}_{cross,l}\mathbf{K}_{cross,l}^{ T}}{\sqrt{d_{k}}}\right)\mathbf{V}_{cross,l} \tag{5}\] Where \(\mathbf{W}_{q},\mathbf{W}_{k},\mathbf{W}_{v}\) represent the query, key, and value projection weights of the newly inserted cross-attention layers. A symmetric process is applied for the reverse direction \(\mathcal{T}_{llm}\rightarrow\mathcal{T}_{img}\). We use a shared input-output projection layer, initializing the weights of the text tokens from the LLM input embedding and the weights of the image tokens from the image-text model. We insert a new linear projection layer that takes the concatenation of the two models' output embeddings as input. Figure 2: JAM-Cross, architecture overview. The cross-attention blocks are interleaved between the original LLM blocks and the text-image blocks, and the output embeddings of the two branches are concatenated and then projected to the output embedding dimension. Figure 2 illustrates a schematic of our model configuration. We refer to the model resulting from this approach as JAM-Cross. Additional architectural details and the underlying design choices can be found in Sect. 3.1. The ablation study for the optimal frequency of inserting new layers is presented in Sect. 3.3. ### Multimodal Conversational Instruct Tuning Supervised fine-tuning is a fundamental tool for leveraging the abilities of large pretrained models. Recently, instruction tuning has been extended to the multimodal setting (Liu et al., 2023; Dai et al., 2023); however, all the existing approaches are focused on visual understanding abilities. In this work, we study instruction tuning tailored to interleaved image-text generation. We collect a small, curated mixed-modal dataset to teach our JAM model to support textual explanations with coherent images. Since, in the first stage, the model is trained on image-text captions and text-only data, we train on interleaved image-text data during this phase. 
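The fusion mechanism in Eqs. (4)-(5) can be sketched as a single-head PyTorch module for one direction (\(\mathcal{T}_{img}\rightarrow\mathcal{T}_{llm}\)); the reverse direction is symmetric. This is a simplified, hypothetical rendering (single head, no residual or normalization plumbing), not the paper's code.

```python
import torch
import torch.nn.functional as F
from torch import nn

class CrossAttnBridge(nn.Module):
    """Single-head sketch of the img -> llm cross-attention in Eqs. (4)-(5).
    Queries come from the LLM branch; keys/values from the image branch."""
    def __init__(self, d: int = 4096, d_k: int = 128):
        super().__init__()
        self.W_q = nn.Linear(d, d_k, bias=False)
        self.W_k = nn.Linear(d, d_k, bias=False)
        self.W_v = nn.Linear(d, d, bias=False)  # values kept at width d so the
                                                # output can be added residually

    def forward(self, h_llm: torch.Tensor, h_img: torch.Tensor) -> torch.Tensor:
        q, k, v = self.W_q(h_llm), self.W_k(h_img), self.W_v(h_img)
        attn = F.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5, dim=-1)
        return attn @ v  # H_cross for this layer

bridge = CrossAttnBridge()
h_cross = bridge(torch.randn(1, 16, 4096), torch.randn(1, 16, 4096))  # [1, 16, 4096]
```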
In line with the superficial alignment hypothesis introduced in LIMA (Zhou et al., 2023), we demonstrate that the model can quickly learn the style of images and text from a small curated dataset. Our results suggest that the Superficial Alignment Hypothesis holds not only for learning the text style but also for images. In our experiments, we consider two slightly different instruction-tuning settings: in one of them, we introduce a small portion of the image-text Shutterstock data with retrieval augmentation, and we find this approach beneficial for preserving image quality when generating with retrieval augmentation. Sect. 3 presents a comparison between these two strategies. We train using a standard supervised procedure without leveraging any reinforcement learning or human-preference strategy. In this instruction-tuning phase, we leverage interleaved image-text data, in contrast to previous methods (Koh et al., 2023) that rely only on image-text captions and no instruction tuning; our experimental results confirm the benefits of training with interleaved image-text data. ## 3 Experiments ### Experimental Details **Tokenizers.** For images, we use the VQ-VAE tokenizer from Gafni et al. (2022). The image resolution is set to \(256\times 256\), \(1024\) tokens represent each image, and the vocabulary has a size of \(8192\). Our text tokenizer is the same one used to train the two parent models, trained over the Zhang et al. (2022) text data. We introduce the additional <break> token used by CM3leon to identify a modality break. **Image-Text Autoregressive Model.** We adopt CM3leon as the image-text autoregressive backbone. The model has a standard decoder-only architecture with some peculiarities: no bias terms, dropout, or learnable parameters for layer norms. It has been trained on 2.4T image-text tokens and uses a sequence length of 4096. **LLM.** As an LLM backbone, we select a model with the same architecture as CM3leon, trained in Molybog et al. (2023); this allows us to experiment with a broader range of approaches, such as weight averaging and width concatenation. The model is trained on 1.4T text tokens with a 2048 context length, and we further fine-tuned it with a 4096 context length using only 30B text tokens. **Objective.** In all our experiments, we employ the CM3 objective introduced in Aghajanyan et al. (2022); this objective accepts the original sequence as input or transforms it into an infilling instance by masking specific spans and relocating them to the end. The model is then optimized to minimize the standard autoregressive loss \(-\log p(x_{input})\). This objective allows for optional bidirectionality and increases the versatility of the model, which can be used for both infilling and standard autoregressive generation. We prevent the objective from masking across the modality <break> tokens. **Retrieval Augmentation.** We employ the multimodal retrieval augmentation introduced in Yasunaga et al. (2022) for our training procedure, leveraging the modifications to our text-to-image backbone introduced in Yu et al. (2023). The retrieval procedure employs a dense retriever and a specifically selected retrieval strategy. The retriever takes an input query \(q\) and returns a relevance score \(r(q,m)\) for each candidate document \(m\) in our memory bank \(\mathcal{M}\). Each multimodal document is split between text and images and fed to the corresponding modality-specific ViT-B-32 CLIP encoder (Radford et al., 2021). 
The two embeddings are then averaged to form the document's vector representation. We then use Maximum Inner Product Search (MIPS) over the memory bank to obtain a list of candidates. When sampling retrieved documents, we prioritize the diversity of the sampled documents by skipping candidates with a score \(r(q,m)\geq 0.9\). Query dropout is applied to regularize the training, dropping 20% of tokens from the input sequence \(x\). **Training - Alignment Phase.** During the continued pretraining, we train for approximately 50B multimodal tokens. Our initial learning rate is \(lr=3\times 10^{-5}\), and we use 500 warm-up steps. We set our optimal batch size to 8M tokens; this hyperparameter is borrowed from the mixed-modal scaling laws introduced in Aghajanyan et al. (2023). The total number of training steps is \(5960\). This training procedure takes approximately one day on 256 80GB A100s for all models. We select the last checkpoint for all the different JAM models, which is always the one with the lowest average validation PPL. All our training procedures are implemented using Metaseq\({}^{1}\). Footnote 1: [https://github.com/facebookresearch/metaseq](https://github.com/facebookresearch/metaseq) **Training - Instruct Tuning.** Our instruction-tuning procedure is data-efficient: we train on our mixed corpora of instruction-tuning data. The initial learning rate is set to \(1\times 10^{-5}\), and we use 300 warm-up steps and a batch size of 1M. The instruction-tuning procedure takes less than 2 hours on 64 80GB A100s; we train for 15 epochs over our mixture of datasets and manually select the best checkpoint, corresponding to the 9th epoch. Following Zhou et al. (2023), we notice that the validation PPL does not correlate with the quality of the responses. Figure 3: Selected samples generated by our JAM-Cross instruct-tuned model. (Top - generated without retrieval augmentation; Bottom - generated with retrieval augmentation) **Decoding Strategies.** We implement a mixed-modal decoding strategy for our interleaved generation. The model starts generating text tokens until a modality <break> token is detected; then an image is sampled, and the generation continues until an <eoss> token is sampled. We employ temperature sampling, a common technique used in autoregressive models (e.g., Ramesh et al. (2022)) to control the randomness of the prediction by modifying the softmax temperature \(\tau\). We pair this technique with TopP sampling, introduced in Holtzman et al. (2019), which consists of sampling from the top-ranked tokens whose cumulative probability exceeds a predefined threshold \(\tau_{P}\). We also employ classifier-free guidance (CFG (Gafni et al., 2022)) for sampling images. This technique allows us to condition the sampling procedure, blending the logits from an unconditional sample with the logits from a conditional sample. The procedure is mathematically described as \[\text{logits}_{cf}=\text{logits}_{uncond}+\alpha_{c}(\text{logits}_{cond}- \text{logits}_{uncond}) \tag{6}\] where \(\text{logits}_{cond}=\mathcal{T}(t_{y}|t_{x})\) and \(\text{logits}_{uncond}=\mathcal{T}(t_{y}|<mask>)\); \(\mathcal{T}\) represents the transformer model, \(<mask>\) represents the absence of the input text, \(t_{x}\) are the conditional input tokens, \(t_{y}\) are the output tokens, and \(\alpha_{c}\) is the scaling factor for CFG. Thanks to the CM3 objective, our training procedure allows our models to sample with CFG without further fine-tuning. Inspired by Yu et al. (2023), we employ this technique to boost the generation quality. 
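The decoding procedure above (Eq. (6) plus temperature and TopP sampling) can be sketched in a few lines. This is a hedged illustration of the standard techniques named above, not the authors' implementation; the default values mirror those reported below.

```python
import torch

def cfg_top_p_sample(logits_cond, logits_uncond, alpha_c=3.5, tau=1.0, tau_p=0.9):
    """Eq. (6) classifier-free guidance, then temperature + top-p sampling.
    Both logits tensors have shape [vocab]; returns a sampled token id."""
    logits = logits_uncond + alpha_c * (logits_cond - logits_uncond)
    probs = torch.softmax(logits / tau, dim=-1)
    sorted_p, idx = torch.sort(probs, descending=True)
    # keep the smallest prefix whose cumulative probability reaches tau_p
    keep = torch.cumsum(sorted_p, dim=-1) - sorted_p < tau_p
    keep[0] = True  # always keep the top-ranked token
    sorted_p = torch.where(keep, sorted_p, torch.zeros_like(sorted_p))
    sorted_p = sorted_p / sorted_p.sum()
    return idx[torch.multinomial(sorted_p, 1)].item()
```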
Our samples are generated using a temperature value \(\tau=1\), \(\tau_{P}\) set between \(0.8\) and \(1\), and classifier-free guidance with values \(3.5\) and \(4\). In contrast to other approaches, we do not make use of computationally expensive CLIP reranking (Ramesh et al., 2021; Yu et al., 2022; Gafni et al., 2022) or contrastive decoding (Li et al., 2022; Yu et al., 2023). #### 3.1.1 Datasets **Shutterstock.** We randomly sample a subset of 30B tokens from the CM3leon (Yu et al., 2023) pretraining data. The data consists of legally acquired image-caption pairs from Shutterstock, a commercial online platform offering images with ownership attribution and clear licensing terms. **Text corpora.** We use 30B text tokens sampled from a mixture of several publicly available datasets, reusing the data used to train other common open-source LLMs and following the same preprocessing as Touvron et al. (2023). The datasets are: English CommonCrawl (Touvron et al., 2023), C4 (Raffel et al., 2020), Wikipedia, Books3 from ThePile (Gao et al., 2020), and arXiv. **LIMA.** We use the 1k dataset presented in Zhou et al. (2023), which features various curated prompts and responses. **wikiHow.** We collect an interleaved image-text dataset by sampling 3000 articles from WikiHow, an online wiki publication that usually curates apposite images for each article. We sample balanced articles from each category to ensure diversity; moreover, we leverage the platform's community ratings to filter each article's quality, sampling only those with a score greater than \(90/100\). For each article, we use the title (e.g., '_How to make..._') as the prompt, and we replace the phrase _'This article...'_ with _'The following answer...'_. Furthermore, we restrict the number of images to 3 per sample to fit our 4096-token context length. ### Continued Pretraining Results In the initial stage of continued pretraining, we evaluate the performance of the various JAM models. Our primary objective is to ensure minimal performance degradation post-merging, relative to the parent models. Managing both image and text processing within a single model poses significant challenges. This evaluation seeks to quantify the retention of original performance in our different JAM models, benchmarked against the two parent models specialized in individual modalities. #### 3.2.1 Text Modality For the text modality, we compare zero-shot performance on common sense reasoning tasks: PIQA (Bisk et al., 2020), ARC-Challenge, ARC-Easy (Clark et al., 2018), StoryCloze (Mostafazadeh et al., 2016), Winograd, and Winogrande (Sakaguchi et al., 2021). We also report some recent influential LLMs (Brown et al., 2020; Touvron et al., 2023), and our LLM (Molybog et al., 2023) fine-tuned with 4k context, as references. Results are presented in Table 1. JAM-Uniform reaches slightly better text-only performance than JAM-Width; however, it is crucial to remark that this approach consolidates the functionalities of both parent models within a constrained 7B-parameter space. Our findings reveal that the intrinsic knowledge of the parent models can be largely recovered from the parameter average utilizing only a minimal portion of the original pretraining data. The JAM-Cross model yields the best results, aligning with our primary LLM. This highlights the strength of our bidirectional cross-attention mechanism compared to the other baselines. 
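As an illustration of the wikiHow preprocessing described above (rating filter, title-as-prompt, phrase replacement, and the three-image cap), the following is a minimal sketch; the field names of the scraped articles are assumptions, since the curated dataset itself is not public.

```python
# Hypothetical wikiHow article record -> instruction-tuning sample.
def format_wikihow(article: dict, max_images: int = 3):
    if article["rating"] <= 90:  # keep only articles rated above 90/100
        return None
    prompt = article["title"]    # e.g. "How to make ..."
    body = article["text"].replace("This article", "The following answer", 1)
    images = article["images"][:max_images]  # fit the 4096-token context
    return {"prompt": prompt, "response": body, "images": images}
```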
#### 3.2.2 Image-Text Modality To assess the performance of our different baselines on the image-text modality, we compare them using the validation PPL on the MS-COCO dataset (Lin et al., 2014). We believe this metric robustly correlates with performance on downstream tasks, such as image generation and captioning. Furthermore, it provides a reliable reference point for comparing different autoregressive models sharing an identical tokenizer. Results are reported in Table 2. Diverging from the results on the text-only modality, the JAM-Width model exhibits enhanced performance over the JAM-Uniform model in the image-text domain. Specifically, the JAM-Width model demonstrates superior efficacy in retaining image-text performance relative to text-only performance. Conversely, despite a decline in performance, the JAM-Uniform model remains a good parameter-performance trade-off. Interestingly, our JAM-Cross model not only reaches the best PPL among the JAM strategies but also surpasses our foundational image-text model, CM3leon. We hypothesize that this advancement can be attributed to the integration of novel textual capabilities, coupled with the augmented parameter count inherent to the combined architecture. Based on this empirical evidence, JAM-Cross emerges as the best strategy to combine two pretrained autoregressive models. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Model & Size & PIQA & ARC-C & ARC-E & StoryCloze & Winograd & Winogrande \\ \hline GPT-3 & 175B & 81.0 & 51.4 & 68.8 & - & - & 70.1 \\ LLaMa & 7B & 79.8 & 47.6 & 72.8 & - & - & 70.1 \\ LLM-4k & 7B & 76.7 & 45.9 & 67.7 & 79.3 & 83.9 & 66.2 \\ \hline JAM-Uniform & 7B & 62.4 & 28.5 & 42.6 & 63.5 & 47.8 & 49.7 \\ JAM-Width & 26B & 57.8 & 31.4 & 31.6 & 54.7 & 50.2 & 51.9 \\ JAM-Cross & 19B & 75.4 & 41.6 & 67.2 & 79.8 & 81.0 & 66.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Zero Shot Text Comparison on Common Sense Reasoning Tasks \begin{table} \begin{tabular}{l c c} \hline \hline Model & Size & MS-COCO PPL \\ \hline CM3 & 2.7B & 200.1 \\ RA-CM3 & 2.7B & 193.1 \\ CM3leon & 760M & 168.8 \\ CM3leon & 7B & 149.0 \\ \hline JAM-Uniform & 7B & 177.5 \\ JAM-Width & 26B & 159.5 \\ JAM-Cross & 19B & **147.6** \\ \hline \hline \end{tabular} \end{table} Table 2: Image-Text Comparison \begin{table} \begin{tabular}{l c c c} \hline \hline C-Attn & Size & Wikipedia PPL & MS-COCO PPL \\ \hline ✗ & 13B & 7.86 & 153.2 \\ 1 & 26B & 7.53 & 152.4 \\ 2 & 19B & **7.18** & **149.0** \\ 4 & 16B & 8.55 & 151.7 \\ \hline \hline \end{tabular} \end{table} Table 4: Ablations - JAM-Cross Model \begin{table} \begin{tabular}{c c} \hline \hline Shutterstock & MS-COCO PPL \\ \hline ✗ & 190.2 \\ ✓ & 164.5 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablations - Instruction Tuning #### 3.2.3 Interleaved Generation Our instruct-tuned JAM-Cross model generates high-quality interleaved image-text output. To demonstrate its ability to generate coherent modality-interleaved responses, we show an extensive set of generated samples in Figure 3 and Section B. The samples generated with retrieval are obtained from the model instruct-tuned with a mixture of pretraining image-text Shutterstock data along with our corpora of instruct-tuning datasets, while the samples generated without retrieval are obtained from the model instruct-tuned only on our instruct-tuning set. The generated samples show coherent image and text integration, demonstrating unprecedented abilities at this novel task. 
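For reference, the validation PPL used throughout this comparison is the standard exponentiated mean next-token negative log-likelihood. A minimal sketch (assuming a model that maps token ids to next-token logits; not the evaluation harness actually used) is:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def perplexity(model, token_ids: torch.Tensor) -> float:
    """PPL = exp(mean NLL). `model` maps ids [B, T-1] -> logits [B, T-1, V],
    predicting token t+1 from tokens up to t (an assumed interface)."""
    logits = model(token_ids[:, :-1])
    nll = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        token_ids[:, 1:].reshape(-1),
    )
    return float(torch.exp(nll))
```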
Overall, we find our retrieval-augmented solution to be more effective than standard image sampling, boosting image quality. We further report several qualitative comparisons with the most relevant previous work, GILL (Koh et al., 2023a), which features mixed-modal generation. We use our retrieval-augmented JAM-Cross model and source generations for the GILL model from the original paper. From this comparison (Figure 4), it is immediately apparent that our model produces responses of higher overall quality. The generated text is more complete and exhaustive, while the generated images are more relevant to the textual context. We remark that our method is the first capable of such coherent, interleaved generation with a focus on instruction tuning, and that our fine-tuning procedure is effective in efficiently learning the style of the dataset, not only for text but also for images. Our model paves the way toward broader adoption of mixed-modal generation in real-world use cases. ### Ablation Study We compare the two approaches for the width concatenation model: copying the original models' weights or using the average to initialize the new parameters. Results (Table 3) show that copying the weights is more effective than averaging them for retaining the original models' capabilities. Figure 4: Qualitative comparison with previous interleaved generation models. Compared to GILL, our model is able to generate more complete and precise answers. Results for GILL are sourced from Koh et al. (2023a). The ablation study for the cross-attention model is presented in Table 4. We ablate the frequency of inserting cross-attention layers and the impact of not using any cross-attention layers. These experiments are performed with 25B training tokens; all the other parameters are the same as reported in Sect. 3.1. We remark that this is an even shorter training setting relative to our total 50B-token training, and that the difference in performance increases as the training progresses. We further ablate the contribution of image-text pretraining data in the instruction-tuning procedure in Table 5. The results indicate the importance of mixing pretraining data into the instruction-tuning procedure to preserve the MS-COCO PPL. We do not report wikiHow PPL since analyzing the models shows that it does not correlate with generation quality, consistent with Zhou et al. (2023). ## 4 Related Works **Generative Text-to-Image Models.** The field of generative text-to-image models has recently been dominated by diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020). Recent enhancements have used pretrained text representations (Ramesh et al., 2022; Nichol et al., 2022) like CLIP (Radford et al., 2021) to improve the generation quality. Concurrently to the development of diffusion-based generative models, significant steps have been made by autoregressive token models (Esser et al., 2021; Gafni et al., 2022). These models encode images into a discrete latent space (Van Den Oord et al., 2017) so that generation can be treated as a standard sequence-to-sequence modeling task, enabling the borrowing of techniques from Large Language Models. A critical element that has been found beneficial in boosting text-to-image generative models is retrieval augmentation (Chen et al., 2022; Yasunaga et al., 2022). Yasunaga et al. (2022) propose to prefix decoder-only models, such as Aghajanyan et al. (2022), with retrieved images during training, resulting in a huge efficiency gain for the training procedure. Yu et al. 
(2023) scale this strategy to reach state-of-the-art performance in image generation using 5× less training compute. In this work, we borrow their model as our text-to-image autoregressive backbone. **Multimodal Language Models.** The multimodal language model field has recently seen considerable development. Several prior works have focused on connecting language models to visual encoders (Tsimpoukelli et al., 2021; Mokady et al., 2021; Najdenkoska et al., 2023; Li et al., 2023). These methods typically train a mapping network between a pretrained image encoder and a language model. Flamingo (Alayrac et al., 2022) introduces cross attention into a frozen LLM to inject visual features and trains on a large corpus of image-text pairs. In this work, we similarly use cross attention to bridge the two models; however, our mechanism is bidirectional between the vision and language models, while for Flamingo, the visual knowledge is injected into the language model and not vice-versa. CM3 (Aghajanyan et al., 2022) is trained on a large corpus of structured HTML; it introduces the Causally Masked Language Modeling objective we adopt to train our models. Koh et al. (2023) propose a multimodal language model capable of processing arbitrarily interleaved image and text inputs and generating interleaved outputs of text and retrieved images. Subsequently, in the same line of work, GILL (Koh et al., 2023a) proposes to ground an LLM to a text-to-image model, using a mapping network and freezing the pretrained models, introducing the possibility of generating or retrieving images as output. **Instruction Tuning.** Instruction tuning aims to teach language models to follow natural language instructions. Several methods have been proposed for instruction tuning, using existing NLP datasets converted into instruction formats (Wei et al., 2021; Chung et al., 2022), or using LLMs like GPT-4 to generate instruction data with better diversity (Wang et al., 2022; Honovich et al., 2022). Recently, LIMA (Zhou et al., 2023) demonstrated that 1,000 carefully curated samples are enough to reach competitive results compared to bigger instruction-tuning datasets. The authors hypothesize that most of the knowledge is learned during pretraining, and that instruction tuning teaches the style with which to interact with users. In this work, we explore using a small set of multimodal instruction-tuning data to fine-tune our model, verifying the effectiveness of a small dataset in this multimodal setting tailored to image generation. Several vision-language works adopt instruction tuning for multimodal task-focused user interactions optimized for visual content understanding (Liu et al., 2023; Dai et al., 2023; Ye et al., 2023; Zhu et al., 2023). Unlike previous works, we explore instruction tuning focused on mixed-modal generation, paving the way for broader adoption of multimodal models that can generate interleaved image-text output. ## 5 Conclusions In this work, we have presented novel methodologies for combining pretrained autoregressive models, demonstrating the viability of synthesizing the knowledge of two distinct models into a cohesive structure with extended capabilities. Our exploration validates that the integrated model can be adeptly fine-tuned using our tailored instruction-tuning procedure for interleaved image-text generation. To this end, we pioneered the creation of a specialized dataset centered on instruction tuning for this particular task. 
Nevertheless, the proposed study is limited to 7B-parameter models sharing the same architecture. Future work may consider scaling the models' size and applying our cross-fusion method asymmetrically to bridge models of varying sizes. Increasing the context length and delving into multi-turn conversations represent further interesting directions for exploration. In conclusion, our study sets the foundation for substantial advancements in the realm of multimodal autoregressive models. The fusion of text-to-image generation with large language models paves the way for sophisticated systems capable of interleaved image-text interactions, enriching the landscape of conversational AI.
2309.13115
FORGE'd in FIRE: Resolving the End of Star Formation and Structure of AGN Accretion Disks from Cosmological Initial Conditions
It has recently become possible to zoom-in from cosmological to sub-pc scales in galaxy simulations to follow accretion onto supermassive black holes (SMBHs). However, at some point the approximations used on ISM scales (e.g. optically-thin cooling and stellar-population-integrated star formation [SF] and feedback [FB]) break down. We therefore present the first cosmological radiation-magnetohydrodynamic (RMHD) simulation which self-consistently combines the FIRE physics (relevant on galactic/ISM scales where SF/FB are ensemble-averaged) and STARFORGE physics (relevant on small scales where we track individual (proto)stellar formation and evolution), together with explicit RMHD (including non-ideal MHD and multi-band M1-RHD) which self-consistently treats both optically-thick and thin regimes. This allows us to span scales from ~100 Mpc down to <100 au (~300 Schwarzschild radii) around a SMBH at a time where it accretes as a bright quasar, in a single simulation. We show that accretion rates up to $\sim 10-100\,{\rm M_{\odot}\,yr^{-1}}$ can be sustained into the accretion disk at $\ll 10^{3}\,R_{\rm schw}$, with gravitational torques between stars and gas dominating on sub-kpc scales until star formation is shut down on sub-pc scales by a combination of optical depth to cooling and strong magnetic fields. There is an intermediate-scale, flux-frozen disk which is gravitoturbulent and stabilized by magnetic pressure sustaining strong turbulence and inflow with persistent spiral modes. In this paper we focus on how gas gets into the small-scale disk, and how star formation is efficiently suppressed.
Philip F. Hopkins, Michael Y. Grudic, Kung-Yi Su, Sarah Wellons, Daniel Angles-Alcazar, Ulrich P. Steinwandel, David Guszejnov, Norman Murray, Claude-Andre Faucher-Giguere, Eliot Quataert, Dusan Keres
2023-09-22T18:00:07Z
http://arxiv.org/abs/2309.13115v2
Forge'd in fire: resolving the end of star formation and structure of AGN accretion disks from cosmological initial conditions ###### Abstract It has recently become possible to "zoom-in" from cosmological to sub-pc scales in galaxy simulations to follow accretion onto supermassive black holes (SMBHs). However, at some point the approximations used on ISM scales (e.g. optically-thin cooling and stellar-population-integrated star formation [SF] and feedback [FB]) break down. We therefore present the first cosmological radiation-magnetohydrodynamic (RMHD) simulation which self-consistently combines the FIRE physics (relevant on galactic/ISM scales where SF/FB are ensemble-averaged) and STARFORGE physics (relevant on small scales where we track _individual_ (proto)stellar formation and evolution), together with explicit RMHD (including non-ideal MHD and multi-band M1-RHD) which self-consistently treats both optically-thick and thin regimes. This allows us to span scales from \(\sim 100\) Mpc down to \(<100\) au (\(\sim 300\) Schwarzschild radii) around a SMBH at a time where it accretes as a bright quasar, in a _single simulation_. We show that accretion rates up to \(\sim 10-100\) M\({}_{\odot}\) yr\({}^{-1}\) can be sustained into the accretion disk at \(\ll 10^{3}\)\(R_{\rm schw}\), with gravitational torques between stars and gas dominating on sub-kpc scales until star formation is shut down on sub-pc scales by a combination of optical depth to cooling and strong magnetic fields. There is an intermediate-scale, flux-frozen disk which is gravitoturbulent and stabilized by magnetic pressure sustaining strong turbulence and inflow with persistent spiral modes. In this paper we focus on how gas gets into the small-scale disk, and how star formation is efficiently suppressed. Subject headings: galaxies: formation -- quasars: general -- quasars: supermassive black holes -- galaxies: active -- galaxies: evolution -- accretion, accretion disks ## 1. Introduction The origins and growth of super-massive black holes (SMBHs) represent one of the most important open problems in extragalactic astrophysics. Most sufficiently-massive galaxies host SMBHs whose masses correlate with various host galaxy bulge properties and reach masses as large as \(\sim 10^{10}\)\(M_{\odot}\) (Magorrian et al., 1998; Ferrarese & Merritt, 2000; Gebhardt et al., 2000; Hopkins et al., 2007a,b; Aller & Richstone, 2007; Kormendy et al., 2011; for a review see Kormendy & Ho, 2013). Many constraints indicate that most of this BH mass is assembled via accretion of gas in a few bright quasar phases (Soltan, 1982; Salucci et al., 1999; Yu & Tremaine, 2002; Hopkins et al., 2006), giving rise to a picture of "co-evolution" between galaxies and active galactic nuclei (AGN) or quasars (Merloni & Heinz, 2008). Understanding this "co-evolution" has crucial consequences far beyond the BHs themselves, for example in the form of AGN "feedback" launching galactic winds (Silk & Rees, 1998; King, 2003; Di Matteo et al., 2005; Murray et al., 2005; Hopkins et al., 2005a,b; Debuhr et al., 2010; Faucher-Giguere & Quataert, 2012; Torrey et al., 2020), regulating galaxy masses (Croton et al., 2006; Hopkins et al., 2006a,b, 2008a), and changing the structure of the circum-galactic or inter-galactic medium (CGM or IGM) around galaxies (Ciotti & Ostriker, 1997; Cox et al., 2006; Best et al., 2007; Voit et al., 2017). 
Essential to understanding this, of course, is to understand how gas is transported from the cosmic web on \(\gtrsim\) Mpc scales down to scales of order the innermost stable circular orbit (ISCO) or event horizon at \(\sim R_{\rm s}\sim 2\,R_{\rm g}\sim 2\,G\,M_{\rm BH}/c^{2}\sim{\rm au}\,(M_{\rm BH }/5\times 10^{7}\,M_{\odot})\). Not only must the specific angular momentum of accreted gas decrease by factors of \(\sim 10^{7}\), but it must do this sufficiently-rapidly to avoid being turned into stars or ejected from the galaxy via stellar feedback processes along the way. This challenge is far more serious for the most luminous quasars, which must sustain gas inflow rates of up to \(\gtrsim 10\,M_{\odot}\) yr\({}^{-1}\) - which would naively imply that the outer accretion disk is gravitationally unstable and present a unique "last parsec problem" (Goodman, 2003). Local magnetic or Reynolds-type stresses (let alone micro-physical viscosity) as assumed to dominate angular momentum transport in a classical Shakura & Sunyaev (1973)-like "accretion disk" (Balbus & Hawley, 1998) are inefficient at larger scales \(\gg 0.01\) pc (Shlosman & Begelman, 1989; Goodman, 2003; Thompson et al., 2005), as is random accretion of individual gas clumps or molecular clouds (Hopkins & Hernquist, 2006; Kawakata & Wada, 2008; Nayakshin & King, 2007)1 However, the last couple of decades have seen considerable progress on this front at scales \(\gtrsim 0.1-1\) pc. Initial analytic arguments (Shlosman et al., 1989), followed by semi-idealized numerical simulations of different "levels" of the scale hierarchy (Escala et al., 2004; Escala, 2007; Mayer et al., 2007; Wise et al., 2008; Levine et al., 2008; Hopkins & Quataert, 2010; Costa et al., 2022), and then simulations using "super-Lagrangian" or "hyper-refinement" techniques to probe small scales (Curtis & Sijacki, 2015; Prieto & Escala, 2016; Prieto et al., 2017; Bourne & Sijacki, 2017; Su et al., 2019, 2020, 2021; Franchini et al., 2022; Talbot et al., 2022; Sivasankaran et al., 2022) within larger boxes eventually reaching up to cosmological scales (Beckmann et al., 2019; Bourne et al., 2019; Bourne & Sijacki, 2021; Angles-Alcazar et al., 2021) have led to a robust emergent picture wherein on large scales, gravitational torques between non-axisymmetric structures (including e.g. mergers, bars, large clumps, lopsided/warped disks; see Hopkins & Quataert, 2010), especially _between collisionless and collisional components of the galaxy_ (e.g. torques from stars on gas not only driving angular momentum exchange but inducing shocks which can orders-of-magnitude enhance inflow rates over classical single-component disk models; see Hopkins & Quataert, 2011) can produce inflows of gas on timescales of order the dynamical time, ensuring some can reach sub-pc radii without turning into stars (references above and e.g. Levine et al., 2008, 2010; Hopkins & Quataert, 2011; Hopkins et al., 2012, 2016). Footnote 1: As those authors and e.g. Meece et al. (2017); Aird et al. (2018); Yesuf & Ho (2020); Lambrides et al. (2021); Guo et al. (2022) more recently note, those processes could be important for much lower-accretion rate AGN, e.g. systems like M87 today accreting several orders of magnitude below their Eddington limit, but they cannot sustain quasar-level accretion rates. While these represent an enormous progress, there are still many open questions and key issues unresolved by these simulations. 
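For orientation on these scales, the Schwarzschild-radius scaling quoted above (\(R_{\rm s}\sim 2\,G\,M_{\rm BH}/c^{2}\sim{\rm au}\) for \(M_{\rm BH}\sim 5\times 10^{7}\,M_{\odot}\)) is easy to verify numerically; the following sketch uses rounded SI constants and is purely illustrative, not part of any simulation code:

```python
# Sanity check of the black-hole length scales quoted in the text.
G, c = 6.674e-11, 2.998e8        # SI units
M_sun, au = 1.989e30, 1.496e11   # kg, m

def r_schw_au(m_bh_msun: float) -> float:
    """Schwarzschild radius R_s = 2 G M / c^2, returned in au."""
    return 2.0 * G * (m_bh_msun * M_sun) / c**2 / au

print(r_schw_au(5e7))          # ~1 au, matching R_s ~ au (M_BH / 5e7 M_sun)
print(300 * r_schw_au(1.3e7))  # ~80 au, i.e. ~300 R_schw for a 1.3e7 M_sun BH
```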
In particular, it has not yet been possible to "bridge the gap" between these (\(\gtrsim\) pc) scales and the traditional (\(Q\gg 1\)) accretion disk. This is not just a question of dynamic range, but of physics: the physics believed to drive accretion on small scales - physics like the magneto-rotational instability (MRI) - are qualitatively different from the physics of gravitational torques on larger scales. And it is by no means clear what physically occurs when the different physics most relevant on different scales intersect. On large scales \(\gtrsim 10-100\) pc, simulations of high-redshift quasar fueling require cosmological dynamics following optically-thin cooling from dusty ionized, atomic, and molecular gas, self-gravity, and detailed models of star formation and stellar feedback which model the formation and collective effects of entire stellar populations (spanning the range of the entire stellar initial mass function [IMF]), including their radiation, acceleration of cosmic rays, mass-loss, and supernovae. On smaller scales \(\lesssim 10\) pc, simulations of star formation need to follow individual stars and protostars as they form, accrete, and grow, while injecting feedback in the form of jets, winds, radiation, and (eventually) supernovae, all in a dusty medium which spans both optically-thin and optically-thick cooling. At even smaller scales around a SMBH (\(\lesssim 10^{4}\,R_{g}\), where \(R_{g}=G\,M_{\rm BH}/c^{2}\)) traditional "accretion disk" simulations must be able to accurately evolve radiation-magneto-hydrodynamics, with global simulations that can accurately follow the growth of the magneto-rotational instability even in warped or irregular disks, radiation-pressure dominated fluids with explicit radiation-dynamics (accounting for finite-speed-of-light effects), with opacities dominated by partially-ionized (largely dust-free) gas, and gravity integrators which must follow huge numbers of orbits accurately. As a result, there have not been simulations that can span all three of these regimes simultaneously and self-consistently. Even today, very few codes include all of the physics listed for even just one of the three scale regimes described above, let alone two or all three. So simulations using super-Lagrangian hyper-refinement have generally either (a) had to stop at some radius or resolution where the physics prescriptions simply cease to make sense (e.g. at \(\sim\) pc scales, for simulations with traditional "galaxy-scale" cooling, star formation and feedback prescriptions as in FIRE, e.g. Angles-Alcazar et al., 2021); or (b) consider only restricted special cases like accretion onto low-redshift SMBHs at extremely low accretion rates (\(\lesssim 10^{-4}\) times the Eddington limit) in gas-poor ellipticals in "hot halos" (where star formation and many other physical processes above can be neglected relatively "safely"; as in Guo et al., 2022), or (c) simply neglect most of the physics above even on scales where it could be important. In this paper, we present the first simulation to span all three of these regimes including all of the physics above. The key to this is to leverage a suite of physics that has been developed and extensively studied in the GIZMO code (Hopkins, 2015, 2017) over the last several years. 
\begin{table} \begin{tabular}{l l} \hline \hline Cosmology & Fully-cosmological baryons+dark matter simulation from \(z\sim 100\), with a \(\sim(6\,{\rm Mpc})^{3}\) scale zoom-in volume in a \(\sim(100\,{\rm cMpc})^{3}\) box. \\ Gravity & Full self-gravity, 5th-order Hermite integrator, adaptive softening for gas, consistent softenings for collisionless particles. \\ \hline Hydrodynamics & Fluid dynamics with 2nd-order finite-volume MFM solver, refinement to \(<0.01\,M_{\odot}\), non-ideal (ionized+atomic+molecular) EOS. \\ Magnetic Fields & Integrated with constrained-gradient MHD solver, trace seed cosmological fields amplified self-consistently. \\ Non-Ideal MHD & Kinetic terms: anisotropic Spitzer-Braginskii conduction \& viscosity, plus ambipolar diffusion, (optional) Hall MHD, Ohmic resistivity. \\ Thermo-chemistry & Detailed processes for \(1-10^{10}\) K. Non-equilibrium H \& He ions, H\({}_{2}\) formation/destruction, dust destruction. Fully coupled to RHD. \\ Radiation & M1 solver. Photo-ionizing, Lyman-Werner, photo-electric, NUV, optical \& near-IR, and adaptive (multi-wavelength) grey-body IR followed. \\ Opacities & Dust, molecular, metal, atomic, H\({}^{-}\), free \(e^{-}\), with Kramers (bound-free, bound-bound, free-free), Compton, Thomson, Rayleigh. \\ Cosmic Rays & Dynamically-evolved with LEBRON approximation, coupled to chemistry, sourced from fast shocks from SNe and stellar mass-loss. \\ SSP Particles & FIRE: Formation in self-gravitating, Jeans-unstable gas, sampling IMF, when cell resolution \(>1\,M_{\odot}\). \\ SSP Feedback & FIRE: Main-sequence IMF-sampled tracks: radiation, stellar mass-loss (O/B \& AGB), supernovae (Types I \& II), cosmic rays. \\ Star Particles & STARFORGE: Formation in self-gravitating, isolated, resolved Larson cores inside own Hill sphere, accrete bound mass, resolution \(<1\,M_{\odot}\). \\ Star Feedback & STARFORGE: Protostellar \& main-sequence single-star tracks with accretion, radiation, jets, surface mass-loss, end-of-life explosions. \\ Supermassive BH & Live sink particles formed dynamically. Refinement centers on \(\sim 1.3\times 10^{7}\,M_{\odot}\) BH, accretion at \(<300\,R_{\rm schw}\). \\ \hline \end{tabular} \end{table} Table 1. Summary of physics included in our default simulation. On large scales, all of the physics above (and more) has been developed into a physics suite as part of the Feedback In Realistic Environments (FIRE) (Hopkins et al., 2014, 2018, 2022) project, designed fundamentally for simulations of galaxies on scales where stars can be treated as ensemble stellar populations, so star formation occurs in environments which should fragment to form stars, and produces "stellar population particles" which represent many stars that can act on the environment in the form of radiation, cosmic rays, mass-loss, and supernovae. In parallel, we have also developed a suite of physics as part of the STARFORGE project (Grudic et al., 2021; Guszejnov et al., 2021), designed for simulations which resolve _individual_ star formation, where sink particles form representing individual (proto)stars which then follow individual (proto)stellar evolution tracks as they grow, accrete, and evolve, ending up on the main sequence, and eventually ending their main-sequence lives as remnants or SNe, explicitly modeling jets, radiation, mass-loss, and supernovae. As a part of this, we have developed gravity and radiation-magnetohydrodynamics solvers which have been applied to high accuracy to evolving e.g. 
the MRI in global disk simulations (Gaburov et al., 2012; Hopkins and Raives, 2016; Hopkins, 2016, 2017; Deng et al., 2019, 2020; Deng and Ogilvie, 2022), dynamics of strongly radiation-pressure dominated fluids (Hopkins and Grudic, 2019; Hopkins et al., 2020, 2021; Williamson et al., 2020, 2022; Lupi et al., 2022; Braspenning et al., 2022; Soliman and Hopkins, 2022) including radiation-pressure-dominated AGN accretion disks, and accurately evolving individual gravitational orbits (allowing for "hard" N-body dynamics) for up to millions of orbits (Grudic and Hopkins, 2020; Grudic et al., 2021; Guszejnov et al., 2022; Hopkins et al., 2022). Crucially, the physics of all of the above are built in a modular fashion in the code, allowing for cross-compatibility - this allows us to evolve all of the relevant physics simultaneously for the first time. These physics and numerical methods allow us to "zoom in" from truly cosmological initial conditions down to \(<300\)\(R_{\rm s}\) around a super-massive BH during an extremely high-accretion-rate quasar episode, and to see the formation of the true accretion disk and cessation of star formation on sufficiently small scales in a self-consistent manner. In § 2, we summarize the numerical methods and physics included (§ 2.1) including both the FIRE (§ 2.2) and STARFORGE (§ 2.3) regimes, and the initial conditions (§ 2.4) and architecture (§ 2.5) of the fiducial simulation studied here. In § 3 we study the results of the simulation (including some variants with different physics): we describe the qualitatively different behaviors over the vast hierarchy of scales (§ 3.1), our effective resolution (§ 3.2), the (gas/stellar/dark matter) mass density and accretion rate profiles (§ 3.3), the plasma and thermodynamic properties on these scales (§ 3.4), dynamics of fragmentation and star formation and its suppression at small radii (§ 3.5), and the torques driving inflow (§ 3.6). In § 4 we contrast a simulation that ignores magnetic fields entirely, and in § 5 we summarize the scales where different physics "ingredients" play a crucial role. In § 6 we compare to previous work in different regimes from galactic (§ 6.1) through accretion disk (§ 6.4) scales. We summarize our conclusions in § 7. ## 2 Numerical Methods ### Overview and Common Physics Fundamentally, the simulation suite presented here combines two well-tested numerical physics implementations: the Feedback In Realistic Environments (FIRE) physics (specifically the FIRE-3 version from Hopkins et al., 2022), and the STARFORGE physics (Grudic et al., 2021). Both of these physics modules have been extensively tested in the literature,\({}^{2}\) so we will only summarize what is included and refer to the relevant methods papers for each, in order to focus on what is novel here (how the two are integrated within our refinement scheme). An even more succinct high-level overview is provided in Table 1. Footnote 2: For additional numerical tests of FIRE methods, we refer to (Hopkins et al., 2014; Ma et al., 2016; Sparre et al., 2017; Garrison-Kimmel et al., 2017; Angles-Alcazar et al., 2017; Su et al., 2018; Escala et al., 2018; Ma et al., 2018; Orr et al., 2018; Hopkins et al., 2018; Chan et al., 2019; Garrison-Kimmel et al., 2019; Hopkins et al., 2020; Pandya et al., 2021; Wetzel et al., 2022; Wellons et al., 2022), and for the same for STARFORGE, see Grudic et al. (2018, 2022); Guszejnov et al. (2018, 2020, 2021, 2022); Lane et al. (2022). 
Footnote 3: A public version of GIZMO is available at [http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html](http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html) All of the relevant physics are implemented in the code GIZMO\({}^{3}\) (Hopkins, 2015). The simulations evolve the radiation-magneto-hydrodynamics (RMHD) equations, using the meshless finite-mass MFM scheme (a mesh-free Lagrangian Godunov method). The ideal MHD equations are numerically integrated as described in Hopkins and Raives (2016); Hopkins (2016), using the constrained-gradient method from Hopkins (2016) for greater accuracy, with the addition of non-ideal terms including fully-anisotropic Spitzer-Braginskii conduction and viscosity (implemented as in Hopkins, 2017; Su et al., 2017; Hopkins et al., 2020, including all appropriate terms needed to self-consistently apply them at arbitrary values of temperature or plasma \(\beta\) and in both saturated and unsaturated limits), as well as ambipolar diffusion, the Hall effect, and Ohmic resistivity (Hopkins, 2017). The RHD equations are integrated using the M1 moments method (as tested and implemented in Lupi et al., 2018, 2021, 2022; Hopkins et al., 2020; Hopkins and Grudic, 2019; Williamson et al., 2020, 2022; Bonnerot et al., 2021) for each of five bands (H ionizing, FUV/photo-electric, NUV, optical-NIR, and an adaptive-wavelength blackbody FIR band). As described in Grudic et al. (2021), this includes the ability to evolve the effective wavelength or temperature of the IR radiation field so as to accurately handle wavelength/temperature-dependent opacities and emission from wavelengths of \(\sim 0.1-1000\,\mu\)m. Also as described therein, we separately evolve the gas, dust, and radiation temperatures and different radiation bands, with the appropriate physical coupling/exchange terms between these, so that the code can self-consistently handle both limits where the various temperatures are arbitrarily de-coupled from one another and limits where they become closely-coupled (e.g. Bonnerot et al., 2021; Grudic et al., 2021). Note that compared to previous STARFORGE or FIRE RHD simulations, we greatly increase the reduced speed of light, with most runs here using \(\tilde{c}=0.1\,c\), though we have tested runs for a limited time with \(\tilde{c}=0.01\,c\) and \(\tilde{c}=1\,c\) (i.e. no reduced speed of light at all) to validate that the radiation properties in the galaxy nucleus at \(\ll 100\,\)pc scales (the regime of greatest interest) are converged. Gravity is solved with an adaptive Lagrangian force softening matching hydrodynamic and force resolution for gas cells, and fixed softenings specified below for the collisionless particles, using a fifth-order Hermite integrator designed to accurately integrate "hard" gravitational encounters (e.g. close binaries) for the entire duration of our simulation (Grudic et al., 2021). We explicitly follow the enrichment, chemistry, and dynamics of 11 abundances (H, He, Z, C, N, O, Ne, Mg, Si, S, Ca, Fe; Colbrook et al., 2017), allowing for micro-physical and turbulent/Reynolds diffusion (Escala et al., 2018), as well as a set of tracer species. Thermo-chemistry is integrated with a detailed solver described in Grudic et al. (2021); Hopkins et al. 
(2022): we follow all of the expected dominant processes at temperatures of \(\sim 1-10^{10}\,\mathrm{K}\) including explicit non-equilibrium ionization/atomic/molecular chemistry as well as molecular, fine-structure, photo-electric, dust, ionization, cosmic ray, and other heating/cooling processes (including the effects of the meta-galactic background from Faucher-Giguere 2020, with self-shielding). Crucially, the explicit radiation-hydrodynamics is coupled directly to the thermo-chemistry: radiative heating, ionization, and related processes scale directly from the local (evolved) radiation field, and cooling radiation is not simply "lost" (as assumed in many implementations of optically-thin cooling), but is emitted back into the evolved RHD bands appropriately (see Grudic et al., 2021, for various tests demonstrating that this accurately captures the transition between optically thin and thick cooling regimes). We assume a dust-to-gas ratio which scales as \(f_{\rm dg}=0.01\,(Z/Z_{\odot})\,\exp(-T_{\rm dust}/1500\,{\rm K})\), i.e. a standard dust-to-metals ratio at low dust temperatures with dust destruction above a critical dust temperature. This allows us to capture the most important dust transition at small radii in these simulations, namely dust destruction within the QSO sublimation radius. We stress that the thermo-chemistry modules are designed to self-consistently include essentially all processes which dominate radiative cooling and opacities from densities \(n\ll 10^{-10}\,{\rm cm}^{-3}\) through \(n\gg 10^{15}\,{\rm cm}^{-3}\) in proto-stellar disks (with or without dust). We separately account for the dust and gas opacities in each of the ionizing, photo-electric, NUV, optical-NIR, and gray-body IR bands, calculated as an appropriate function of the (distinct) dust and gas temperature and radiation temperature in each band, including dust opacities from Semenov et al. (2003), plus bound-free/ionizing, free-free, Rayleigh, and Thomson opacities for free \(e^{-}\), HI, HII, H\({}^{-}\), H\({}_{2}\), CO, and the partially-ionized heavy elements evolved, with the abundances of each of these species calculated in the chemical network (see e.g. John, 1988; Glover & Jappsen, 2007 and other references in Hopkins et al., 2022). In addition to tests in the diffuse ISM and CGM/IGM limits (Hopkins et al., 2021, 2020a), and the (dust-dominated) protostellar disk and molecular cloud limits (Grudic et al., 2021), we have also validated our opacities against those tabulated in Lenzuni et al. (1991) for metal-free gas with densities \(n\sim 10^{12}-10^{16}\,\rm cm^{-3}\). Figure 1.— Series of images of the projected gas density in our simulation (§ 2.4) at one moment in time at redshift \(z=4\), typical of when we analyze it. Color encodes surface density increasing black-to-white on a logarithmic scale (red panel rescaled owing to the different dynamic range) – a median pixel in the largest-scale panel (_top-left_) has column \(N_{\rm H}\sim 10^{19}\,\mathrm{cm}^{-2}\) (density \(n_{\rm H}\sim 10^{-5}\,\mathrm{cm}^{-3}\)), while in the smallest-scale panel \(N_{\rm H}\sim 10^{27}\,\mathrm{cm}^{-2}\) (\(n_{\rm H}\sim 10^{12}\,\mathrm{cm}^{-3}\)). We see structure on all scales, with a chaotic, cold, disordered morphology on most scales until an ordered disk forms from capture of gas from a passage of a giant molecular cloud complex (itself triggered by an ongoing galaxy merger in the rapidly-accreting proto-galaxy), forming the accretion disk at \(\lesssim 0.1\,\mathrm{pc}\). Figure 2.— As Fig. 1, but tiling the images so more structure can be seen and identifying each with the heuristic label appropriate to the range of scales shown, per § 3.1. 
In order, each image zooms in by a factor of 10 around the previous image, with side-length \(L=(1000,\ 100,\ 10)\) kpc (_top_), \((1000,\ 100,\ 10)\) pc (_middle_), \((1,\ 0.1,\ 0.01)\) pc (_bottom_). The projection here is chosen to be face-on to the innermost central disk. Note the “hole” in the latter inside \(r\lesssim 80\) au is caused by our inner accretion boundary (dashed circle). We emphasize that all the physics and numerical methods above, including gravity, radiation transport, MHD, and thermochemistry, apply always and everywhere in the simulation: there is no distinction between FIRE and STARFORGE treatments.\({}^{6}\) The _only_ difference between the FIRE and STARFORGE limits in our simulations lies in how we treat "stars." Specifically, when a gas cell is eligible for "star formation", we must decide whether to convert it into a "single stellar population (SSP) particle" which represents a _statistically-sampled ensemble_ of multiple stars (the FIRE limit, relevant at lower resolution/large cell masses) or to convert it into a "sink/single (proto)star particle" which represents a _single_ (proto)star (the STARFORGE limit, relevant at higher resolution/small cell masses). Those particles each then use their distinct appropriate (SSP or single-star) evolutionary tracks to calculate the mass/momentum/energy/cosmic ray/photon fluxes which are deposited back onto the grid, at which point that injected material is again evolved identically according to the algorithms above. Thus both operate simultaneously in the simulation. Footnote 6: Again note that some previous FIRE simulations using the “default” model in Hopkins et al. (2018b) employ a simpler approximate LEBRON radiation-hydrodynamics solver, and FIRE-2 used a simpler thermochemistry module compared to the FIRE-3 version here. However these simplifications are not designed for handling extremely high densities or optically thin-to-thick transitions, so we adopt the more accurate M1 RHD and FIRE-3/STARFORGE thermochemistry detailed above. But we stress that these RMHD and thermochemistry modules have been used (and compared to those simpler modules) in multiple previous FIRE studies (e.g. Hopkins et al., 2022, 2020a; Shi et al., 2022; Schauer et al., 2022, and references therein) as well as STARFORGE (Guszejnov et al., 2021; Grudic et al., 2021, 2022b; Guszejnov et al., 2022a,b,c), and many additional simulations using GIZMO, referenced above. ### FIRE Treatment of Stars (Relevant for Large Cell Masses) FIRE was designed for galaxy-scale simulations, with resolution sufficient to resolve some phase structure in the ISM, but insufficient to resolve _individual_ proto-star formation and stellar growth/evolution histories. As such, we apply the FIRE treatment of stars when the resolution is still sufficiently low (cell mass \(>1\,M_{\odot}\)), using the FIRE-3 implementation of Hopkins et al. (2022) summarized above. In this limit, gas is eligible for star formation if it is locally self-gravitating at the resolution scale, Jeans unstable, and in a converging flow, as in Hopkins et al. (2013b); Grudic et al. (2018). The intention here is not to resolve e.g. 
The intention here is not to resolve, e.g., local "peaks" in the density field which will become _individual_ stars, but rather to identify "patches" of the ISM where the fragmentation cascade becomes unresolved, so the gas should continue to fragment and ultimately form a population of stars. As such, for the cells which meet this criterion, we assume fragmentation on a dynamical time (per Hopkins et al., 2018b) and convert them into "single stellar population (SSP) particles" – i.e. collisionless particles which represent ensemble populations of stars which sample an assumed universal stellar IMF.

Figure 3.— As Fig. 2, but in stars. The chaotic merger morphology at a few kpc, and the clumpy, highly asymmetric stellar morphology driving gravitational torques on the gas, is evident on all scales. In most panels we show a continuous projection of stellar density, but in the last panel this breaks down (the inter-stellar separation is no longer much smaller than a pixel) so we show _individual_ O-stars. We do not show images at \(\ll 1\) pc because there is a negligible stellar mass compared to the gas on these scales.

Once formed, these SSP particles evolve as detailed in Hopkins et al. (2022) according to explicit stellar evolution models from the 2021 version of STARBURST99 (Leitherer et al., 2014), and return metals, mass, momentum, and energy to the ISM via resolved individual SNe (both Ia & core-collapse) and O/B and AGB mass-loss as in Hopkins et al. (2018), with radiative heating and momentum fluxes determined from the stellar population spectra as in Hopkins et al. (2020), appropriate for a Kroupa (2001) IMF. Cosmic rays are injected in fast stellar wind or SNe shocks as described in Hopkins et al. (2022), using the approximate method from Hopkins et al. (2022). Once injected onto the grid, gas/radiation/cosmic rays obey the physics in § 2.1. To deal with intermediate-resolution cases, we employ the stochastic IMF sampling scheme from Ma et al. (2015); Su et al. (2018); Wheeler et al. (2019); Grudic & Hopkins (2019): when an SSP particle forms, we draw a quantized number of massive stars from an appropriate binomial distribution, from which the relevant feedback properties particular to (rare) massive stars (e.g. core-collapse SNe, ionizing radiation) scale. This allows us to at least approximately apply SSP particles down to gas mass resolution \(\sim 1-10\,M_{\odot}\), where the resolution is still too poor to explicitly resolve individual (proto)star formation, but so high that the expected number of massive stars per star particle is \(\ll 1\), and as such discreteness effects could be important (see discussion in Ma et al., 2015, 2016, 2020; Grudic & Hopkins, 2019).
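To illustrate the stochastic IMF sampling idea, a toy version (the in-code scheme of Ma et al. 2015 and Grudic & Hopkins 2019 tracks the full time-dependent feedback budget; the massive-star expectation per unit mass below is an assumed round number, roughly appropriate for a Kroupa-like IMF):

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed for illustration: roughly one star with M > 8 Msun (a core-collapse
# SN progenitor) forms per ~100 Msun of stars for a Kroupa-like IMF.
N_MASSIVE_PER_MSUN = 0.01

def sample_massive_stars(ssp_mass_msun, n_trials=1_000_000):
    """Draw a quantized number of massive stars for an SSP particle.
    A binomial with many trials and small probability is used so the
    expectation matches <N> = ssp_mass * N_MASSIVE_PER_MSUN."""
    p = N_MASSIVE_PER_MSUN * ssp_mass_msun / n_trials
    return rng.binomial(n_trials, p)

# A 10 Msun SSP particle hosts a massive star only ~10% of the time:
print([sample_massive_stars(10.0) for _ in range(20)])
# mostly zeros with an occasional 1 -- discreteness matters when <N> << 1
```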
### STARFORGE Treatment of Stars (Relevant at Small Cell Masses)

STARFORGE, on the other hand, was designed for simulations which resolve individual (proto)star formation and evolution, e.g. simulations of individual molecular clouds, clumps, star clusters, or (proto)stellar disks. In this limit, each sink represents a single star, which obviously means it cannot meaningfully represent systems with resolution poorer than \(\gtrsim 1\,M_{\odot}\). As such, we apply the STARFORGE treatment of stars when the resolution is sufficiently high (cell mass \(<1\,M_{\odot}\)). In this limit, gas is eligible for (proto)star formation if it meets a standard but more stringent set of seed criteria described in Grudic et al. (2021), including strict virial/self-gravity, Jeans, converging-flow, fully-compressive tides, and local density/potential maximum criteria, as well as restricting to gas cells without a pre-existing neighboring sink and requiring their collapse time be much shorter than the infall time onto the nearest sink (whatever its distance). If all of these criteria are met, a cell is immediately converted into a sink or _individual star_ particle. Once formed, each sink accretes gas that is bound to it (accounting for thermal, kinetic, and magnetic energies), and whose current and circularized radii fall within the sink radius (set comparable to the force softening), following a standard strict sink accretion model validated in a variety of idealized accretion problems (details in Grudic et al., 2021; a schematic version is sketched below). The sinks evolve along combined proto- and main-sequence stellar evolution tracks, explicitly following the stellar evolution physics versus time (e.g. contraction, heating, different burning stages), allowing for the dynamic accretion rate in every timestep (Offner et al., 2009). In the proto-stellar evolution stage, sinks radiate in all bands with the appropriate effective temperature, and launch collimated jets with a mass-loading proportional to the surface accretion rate onto the star and a launch velocity comparable to the escape velocity from the protostellar surface (details in Grudic et al., 2021; Guszejnov et al., 2021; Grudic et al., 2022). Main-sequence stars continue to emit jets and accretion luminosity if accretion continues, and radiate in all bands following their stellar-evolution-track-calculated effective temperatures and full spectra, while also emitting continuous stellar surface winds (assumed to be isotropic in the frame of the star, with a continuous main-sequence mass-loss rate given by Grudic et al. 2022, Eq. 1), as a function of the instantaneous stellar luminosity. At the end of their main-sequence lifetime stars can, if sufficiently massive, explode as SNe with \(10^{51}\,\)erg of ejecta kinetic energy. Again we refer to Grudic et al. (2021) for details. Importantly, we note that these physics have been validated by direct comparison to dense molecular gas properties, the observed stellar IMF and multiplicity statistics, and star cluster properties (Guszejnov et al., 2021, 2022; Grudic et al., 2022; Lane et al., 2022), for typical Milky Way-like galaxy conditions.
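The schematic version of the bound-and-within-radius accretion check mentioned above (our own simplification; the actual STARFORGE criteria in Grudic et al. 2021 include additional angular-momentum and timestep conditions):

```python
import numpy as np

G = 6.674e-8  # cgs

def sink_can_accrete(m_sink, r_sink, dr, dv, e_therm, e_mag, m_gas):
    """Schematic sink accretion test for one gas cell: the cell must be
    bound to the sink (kinetic + thermal + magnetic energy less than the
    gravitational binding energy), and both its current and circularized
    radii must fall inside the sink radius.  dr, dv are 3-vectors relative
    to the sink; energies and masses in cgs."""
    r = np.linalg.norm(dr)
    e_kin = 0.5 * m_gas * np.dot(dv, dv)
    e_grav = G * m_sink * m_gas / r
    bound = (e_kin + e_therm + e_mag) < e_grav
    # specific angular momentum -> circularized radius r_circ = j^2 / (G m_sink)
    j = np.linalg.norm(np.cross(dr, dv))
    r_circ = j**2 / (G * m_sink)
    return bound and (r < r_sink) and (r_circ < r_sink)
```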
### Initial Conditions and Refinement Choices

Our initial condition (see Fig. 2) is a fully-cosmological "zoom-in" simulation, evolving a large box from redshifts \(z\gtrsim 100\), with resolution concentrated in a \(\sim 10\,\)Mpc co-moving volume centered on a "target" halo of interest (specifically, halo "A1," aka "m12z4," studied in Feldmann et al., 2016, 2017; Oklopcic et al., 2017; Angles-Alcazar et al., 2017; Hopkins et al., 2020; Ma et al., 2021; Wellons et al., 2022). While there are many smaller galaxies in that volume, for the sake of clarity we focus just on the properties of the "primary" (i.e. best-resolved) galaxy in the volume. The dynamic refinement scheme employed here is numerically identical to that used in many previous GIZMO studies, including examples which have refined to similar resolution around single or binary SMBHs, just without the use of the hybrid FIRE-STARFORGE physics described above (see Orr et al., 2018; Hopkins et al., 2018; Su et al., 2019, 2020, 2021; Benincasa et al., 2020; Angles-Alcazar et al., 2021; Franchini et al., 2022).

The simulation is run from \(z\sim 100\) down to some redshift (here \(z<4\)), using a dynamic refinement scheme that adaptively varies the mass resolution, as a function of the minimum of either the thermal Jeans mass (ensuring it is resolved by \(\sim 100\) cells) or a function of distance to the nearest SMBH particle (progressively moving from minimum refinement at \(>100\,\)kpc from the nearest SMBH to maximum refinement at \(<10\,\)kpc), between an imposed minimum refinement cell mass of \(\approx 4000\,M_{\odot}\) and maximum refinement mass of \(\approx 10^{6}\,M_{\odot}\). To ensure even low-density, non-self-gravitating multi-phase structure is reasonably resolved, once refined, a cell cannot be de-refined unless it escapes far from the galaxy. This allows us to resolve the galaxy at \(\sim 4000\,M_{\odot}\) resolution through formation and initial growth of the SMBH, through a redshift of \(z\approx 4\). We then select a specific time \(t_{0}\) (at a redshift \(z_{0}\approx 4.4\)) from this original simulation just before a period where it (at its relatively modest resolution) identified rapid quasar-level SMBH growth, with one clearly-dominant SMBH particle within the galaxy nucleus. We will show the galaxy properties at this time in great detail below, but to summarize: at \(z\approx 4.4\) it has a dark matter halo mass of \(\sim 3\times 10^{12}\,\mathrm{M}_{\odot}\) inside \(r<250\,\mathrm{kpc}\), a galaxy stellar mass of \(2\times 10^{10}\,\mathrm{M}_{\odot}\) (and very similar gas mass) inside \(<10\,\mathrm{kpc}\) (with stellar half-mass radius of \(\sim 1.5\,\mathrm{kpc}\)), and a nuclear SMBH mass of \(\sim 10^{7}\,\mathrm{M}_{\odot}\). We then re-start the simulation from this time, with an additional refinement layer: on top of the refinement scheme above, we apply a multiplier \(f\,(r\equiv|\mathbf{x}-\mathbf{x}_{\mathrm{SMBH}}|,\,t-t_{0})\) to the "target" mass resolution, which is a continuous function of \(r\) with \(0<f\leq 1\), where at small radii \(f\propto r^{3}\) and at radii \(\gtrsim 1\,\mathrm{kpc}\), \(f=1\) exactly.8 We reduce the minimum allowed/target cell mass to \(\Delta m\lesssim 0.01\,M_{\odot}\). To avoid pathological behaviors, this refinement layer does not "instantly" activate, but appears as a smooth function of time \(t-t_{0}\), designed such that each concentric radius \(r\) (beginning at \(\sim 1\,\mathrm{kpc}\), interior to which the refinement begins) is evolved for a few local dynamical times before the next "layer" of refinement is applied interior to this. This greatly reduces initial noise and spurious features (see discussion in Angles-Alcazar et al., 2021 and other references above). This means that the total duration of the simulation after the beginning of the "hyper-refinement" period is a couple of Myr, but after the highest resolution level (resolving orbits at \(\lesssim 100\) au around the SMBH with an orbital dynamical time of \(\sim 10-20\,\mathrm{days}\)) is reached at a redshift closer to \(z\approx 4\), we evolve for \(\sim 10^{4}\) years.

Footnote 8: For comparison, Angles-Alcazar et al. (2021) used a shallower \(\propto r^{2}\) refinement with a maximum resolution of \(\Delta m\sim 15\,M_{\odot}\) and \(\sim 0.1\,\mathrm{pc}\).
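A minimal sketch of such a refinement multiplier (the \(f\propto r^{3}\) inner slope, the \(\gtrsim 1\,\)kpc outer cutoff, the \(\approx 4000\,M_{\odot}\) base target, and the \(\sim 0.01\,M_{\odot}\) floor are from the text; the exact interpolation and normalization here are our own illustrative choices):

```python
import numpy as np

def refinement_multiplier(r_kpc, r_outer_kpc=1.0):
    """Target-mass multiplier f(r): f -> (r/r_outer)^3 at small radii,
    f = 1 exactly at r >= r_outer, continuous in between."""
    x = np.minimum(np.asarray(r_kpc, dtype=float) / r_outer_kpc, 1.0)
    return x**3

def target_cell_mass(r_kpc, base_target_msun=4000.0, floor_msun=0.01):
    """Target cell mass: base refinement target times f(r), floored at ~0.01 Msun."""
    return np.maximum(base_target_msun * refinement_multiplier(r_kpc), floor_msun)

print(target_cell_mass(np.array([10.0, 1.0, 0.1, 0.001])))
# -> [4000. 4000. 4. 0.01]: ~Msun targets at ~0.1 kpc, the floor is hit near the SMBH
```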
At our highest resolution level, our mass resolution is \(\Delta m\sim 0.001-0.01\,M_{\odot}\), with spatial resolution \(\Delta x\sim 10^{-5}-10^{-4}\,\mathrm{pc}\), so we can resolve densities up to \(\sim 10^{13}-10^{15}\,\mathrm{cm}^{-3}\) and timescales down to \(\sim 1\) day, with an "effective" grid resolution across our box equivalent to \(N_{\mathrm{eff}}\sim(10^{13})^{3}\).

### Types of Cells/Particles

In summary, our simulations include five types of cells or particles:

1. **Gas/Radiation Cells:** These define the effective mesh on which the equations of radiation-magneto-hydrodynamics are solved, including thermochemistry and all the physics described above. The mesh resolution (for gravity and all other forces) is adaptive, with spatial resolution \(\Delta x=(\Delta m/\rho)^{1/3}\), and \(\Delta m\) ranges smoothly from \(\lesssim 0.01\,M_{\odot}\) in the high-resolution region around the SMBH (after the "hyper-refinement" phase begins) to a median of \(\sim 5000\,M_{\odot}\) in the \(\sim\,\mathrm{kpc}\)-scale ISM of galaxies to \(\sim 10^{6}\,M_{\odot}\) in the diffuse IGM. The physics/equations integrated for the gas are independent of resolution – the choice of FIRE or STARFORGE physics only appears as a choice of whether cells should form SSP or sink/single-star particles.

2. **Dark Matter Particles:** Dark matter is represented in standard fashion by collisionless particles which interact only via gravity. In the low-resolution cosmological box (well outside of the high-resolution region) these particles have lower resolution in factor-of-two increments (with the poorest resolution \(\sim 4\times 10^{10}\,M_{\odot}\) in the \(\sim 100\,\mathrm{Mpc}\) region), but following standard practice we have confirmed that within \(\sim 500\,\mathrm{kpc}\) of the "target" galaxy of interest, there are zero low-resolution dark matter particles. The high-resolution dark matter particles have \(\Delta m\sim 10^{6}\,M_{\odot}\) and a force softening equivalent to \(\Delta x\approx 200\,\mathrm{pc}\). While crucial for cosmological evolution and galactic-scale dynamics, the dark matter contributes negligibly to the nuclear dynamics (contributing just \(\sim 6\%\) of the total mass inside \(<200\,\mathrm{pc}\) and a vanishingly small fraction of the mass inside \(<20\,\mathrm{pc}\)), and the force softening is sufficiently large that the worst-case \(N\)-body deflection from an encounter between a high-resolution baryonic cell and a DM particle would be no larger than the acceleration/deflection for a gas cell with density \(n\lesssim 1\,\mathrm{cm}^{-3}\) (vastly lower density/acceleration scales than those of interest in the galaxy nucleus).9

Footnote 9: We validated this directly in post-processing, calculating the acceleration and torques on nuclear gas from all particle types.

3. **SMBH Particles:** SMBHs are represented by collisionless sink particles. In the "pre-simulation" phase (running from redshift \(z\sim 100\) to the time \(t_{0}\) (\(z\sim 4.4\)) when we begin our hyper-refinement), the BHs are formed and evolve according to the default sub-grid FIRE-3 seeding, dynamics, accretion, and feedback models described in Hopkins et al. (2022). But this is only relevant in that it gives us a plausible initial condition for our hyper-resolution run. Once the hyper-refinement phase begins, we disable all "sub-grid" models for BH growth and accretion: the BHs are represented by sinks that follow normal gravitational dynamics.
Any cells/particles which fall inside of the SMBH sink radius, set to \(\approx 80\,\mathrm{au}\sim 300\,R_{s}\) (where \(R_{s}=2\,G\,M_{\mathrm{BH}}/c^{2}\)), are immediately captured (removed from the domain and added to the sink mass).10 We do not include SMBH "feedback" from within the sink radius during this phase. For simplicity, we choose a time where the primary galaxy of interest contains just one SMBH: a sink with mass \(\approx 1.3\times 10^{7}\,M_{\odot}\).

Footnote 10: At this capture radius (\(\approx 80\) au), the escape velocity is \(\sim 2\times 10^{4}\,\mathrm{km\,s^{-1}}\), and all the accreted material is tightly bound to the SMBH.

Footnote 11: This ensures the "worst-case" N-body deflection is never stronger than typical encounters between gas and star-forming gas clumps/clouds, and is much smaller than any interactions with gas cells at the median gas density inside \(\lesssim 100\,\mathrm{pc}\). Given the chaotic, turbulent nature of the galaxy we follow, the discrete N-body heating rate estimated as in Hopkins et al. (2018) or Ma et al. (2022) is several orders of magnitude smaller than the typical turbulent dissipation rates in the simulation.

Figure 4.— Illustration of the time evolution of the main galaxy in our simulation before our hyper-refinement. We plot the galaxy-integrated SFR \(\dot{M}_{\star}\), and the _sub-grid_ estimated BH accretion rate (BHAR) from the model – which scales approximately as \(\dot{M}_{\mathrm{subgrid}}\sim\eta\,M_{\mathrm{gas}}\,\Omega\) at the low-resolution limit of \(\sim 10-100\,\mathrm{pc}\) before hyper-refinement is turned on – as a function of time prior to the time of refinement (at \(z\sim 4.4\)). The inset shows the _resolved_ gas inflow rate into the central sink radius around the SMBH of \(<80\,\mathrm{au}\), as a function of time in units of the dynamical time \(t_{\mathrm{dyn}}=1/\Omega\) at this resolution scale (\(\sim 80\,\mathrm{au}\)), over the final \(\sim 1500\,\mathrm{yr}\) of our simulation duration (well after it reaches the maximum refinement level everywhere). Though this is a very short relative timescale (compared to the order-Hubble-time evolution on large scales), we see that the inflow rate into the inner accretion disk is quite stable over tens of thousands of dynamical times in the center, for a given set of conditions at larger radii. We also see extremely high inflow rates, as expected based on the high nuclear gas masses and densities in the "parent" simulation which motivated the choice of this particular moment in time to "zoom in."
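Before moving to results, a quick numerical check of the capture-radius numbers quoted in footnote 10 (values from the text; constants in cgs):

```python
import numpy as np

G, c = 6.674e-8, 2.998e10           # cgs
MSUN, AU, KMS = 1.989e33, 1.496e13, 1.0e5

M_bh = 1.3e7 * MSUN
R_s = 2.0 * G * M_bh / c**2         # Schwarzschild radius
r_sink = 80.0 * AU

print(r_sink / R_s)                           # ~310, i.e. the quoted ~300 R_s
print(np.sqrt(2 * G * M_bh / r_sink) / KMS)   # ~1.7e4 km/s, the quoted ~2e4 km/s
```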
## 3. Results

We summarize some of the results for our fiducial simulation in Figs. 2–3, showing images of the simulation on scales from \(>\) Mpc to \(<100\,\mathrm{au}\). Specifically, Figs. 2 & 3 show images of the projected simulation gas and stellar mass densities, viewed from the same viewing angle (chosen to be face-on to the accretion disk in the center), on a range of scales. Fig. 4 illustrates the pre-history of the large-scale "parent" simulation, for reference.

### Different Characteristic Scales/Regimes

We can clearly see in Figs. 2–3 that the dynamic range spanned by the simulation is enormous – a factor of \(>10^{9}\) in black-hole-centric radii, and more like \(\sim 10^{13}\) if we compare our smallest spatial resolution at radii \(\sim 10^{-3}-10^{-2}\,\mathrm{pc}\) from the SMBH to the size of our entire cosmological box. Fig. 5 shows an alternative illustration, plotting the gas and stars on a logarithmic scale and identifying the different scales with the labels below, in an attempt to visualize the qualitative structures and phases of gas on each scale. It is difficult to actually describe so many orders of magnitude in scale at once, so we break the scales from \(10^{-3}-10^{6}\,\mathrm{pc}\) in SMBH-centric radius down into each order-of-magnitude, and both assign a characteristic label for these scales and describe some of the key physics and processes occurring. From largest to smallest scales, we follow gas inflows as follows:

* IGM \(\rightarrow\) CGM: On scales \(\gg 100\,\mathrm{kpc}\), the IGM is "cool" (temperatures \(\sim 10^{4}\,\mathrm{K}\)), diffuse (\(\rho\ll 10^{-2}\,m_{p}\,\mathrm{cm^{-3}}\)), quasi-spherical (\(H/R\sim 1\)), dark-matter dominated, weakly-magnetized (\(\beta_{\mathrm{plasma}}\equiv P_{\mathrm{thermal}}/P_{\mathrm{magnetic}}=n\,k_{B}\,T/(|\mathbf{B}|^{2}/8\pi)\gg 100\), with \(|B|\sim 1-10\,\mathrm{nG}\); a short numerical sketch of these plasma diagnostics follows this list), with weak outflows and strong, primarily-radial (so \(\Pi_{rr}\equiv\langle\,\rho\,v_{r}\,v_{r}\,\rangle\) dominates the kinetic stress tensor), super-sonic inflows of \(\sim 300\,\mathrm{M_{\odot}\,yr^{-1}}\) onto the halo. Essentially, gas is in free-fall, collapsing with dark matter via the cosmic web. Since this has been well-studied and resolved in many previous simulations, it is not our goal to study this regime in detail here, but what we see is consistent with most previous studies with the FIRE simulations (Hafen et al., 2019, 2020; Butsky et al., 2020; Hopkins et al., 2021, 2022; Ji et al., 2020, 2021; Li et al., 2021; Stern et al., 2021; Esmerian et al., 2021; Kim et al., 2022; Butsky et al., 2022) as well as results from other codes and semi-analytic models (Hummels et al., 2019; Pandya et al., 2022) and standard observational inferences (see Tumlinson et al., 2017; Chen et al., 2020; Lan & Prochaska, 2020, for reviews).

* CGM \(\rightarrow\) Galactic ISM: On scales \(\sim 10-100\,\mathrm{kpc}\), the volume-filling gas in the CGM is shock-heated to virial temperatures \(\sim 10^{6}\,\mathrm{K}\), with \(\beta_{\mathrm{plasma}}\sim 100\) and trans-sonic or sub-sonic turbulence, mostly ionized, with thermal pressure comparable to the total pressure and gravity. But the gas is multi-phase, with accretion and outflows of comparable magnitude, with outflows prominent in the diffuse/volume-filling phases and inflows dominated by accretion of "cool" (\(\lesssim 10^{5}\,\mathrm{K}\)) gas along filaments with densities \(\sim 100\) times larger than the median background hot gas, lower \(\beta_{\mathrm{plasma}}\sim 10\), and velocities of order the free-fall speed in a dark-matter dominated potential. This is essentially the classic "cold flows in hot halos" picture, again consistent with many previous theoretical studies (Keres et al., 2005; Dekel & Birnboim, 2006; Brooks et al., 2009; Keres et al., 2009, 2019; Faucher-Giguere et al., 2011; Sijacki et al., 2012; Keres et al., 2012; Vogelsberger et al., 2012; Stern et al., 2020) and more recent observations (Ribaudo et al., 2011; Kacprzak et al., 2012; Vayner et al., 2022).
* Galactic ISM \(\rightarrow\) Galactic Core/Proto-Bulge: On scales \(1-10\,\mathrm{kpc}\) in the galaxy, the gas is highly multi-phase, with self-shielding of the UV radiation field (\(\Sigma_{\mathrm{gas}}\gtrsim 10\,\mathrm{M_{\odot}\,pc^{-2}}\)) allowing formation of "cold" neutral medium (CNM) and molecular medium with \(T\ll 10^{4}\,\mathrm{K}\), alongside hot gas with \(T\gtrsim 10^{7}\,\mathrm{K}\) from SNe, while gas densities range from \(\lesssim 10^{-2}\,m_{p}\,\mathrm{cm^{-3}}\) to \(\gg 10\,m_{p}\,\mathrm{cm^{-3}}\) in cold cloud complexes (and \(\beta_{\mathrm{plasma}}\) similarly ranges from \(\sim 0.1\) in cold phases to \(\sim 1-10\) in warm phases and \(\sim 100\) in the most diffuse volume-filling phases, and \(|B|\gtrsim\) a few \(\mu\)G). These cold complexes maintain most of the SF, with a SFR inside \(<10\,\mathrm{kpc}\) of \(\sim 50-100\,\mathrm{M_{\odot}\,yr^{-1}}\) (over the last \(\sim 100\,\mathrm{Myr}\)). The potential becomes dominated by stars inside a few kpc (the galaxy effective radius). The turbulence is mildly supersonic (sonic \(\mathcal{M}_{\mathrm{s}}\sim 1-\) a few) in a volume-averaged sense (with the volume-average dominated by warm ionized media [WIM] and warm neutral media [WNM] at \(\sim 10^{4}\,\mathrm{K}\)), but highly super-sonic (\(\mathcal{M}_{s}\sim 10-100\)) in the "cold" phases. Most of the gas is atomic or molecular. While turbulence maintains an effective volume-averaged \(Q\sim\) a few, as at all larger radii, the _thermal_ Toomre \(Q\) parameter drops to \(\ll 1\) in the cold phases in particular, meaning that fragmentation via self-gravity is rapidly promoted, with the characteristic "most unstable" fragment masses expected to contain most of the power in the fragment mass spectrum (e.g. the largest self-gravitating complexes) ranging from \(\sim 10^{7}\) to a few \(\times 10^{9}\,M_{\odot}\) (larger than in low-redshift galaxies, owing to the massive gas content of this dense, high-redshift galaxy, similar to complexes observed at high redshift). Again this is broadly consistent with previous theoretical (Noguchi, 1999; Bournaud et al., 2008; Agertz et al., 2009; Dekel et al., 2009; Ceverino et al., 2010; Hopkins et al., 2012b; Oklopcic et al., 2017) and observational (Elmegreen et al., 2004; Martinez-Sansigre et al., 2009; Kriek et al., 2009; Daddi et al., 2010; Forster Schreiber et al., 2011; Newman et al., 2012) studies of massive star-forming and quasar-host galaxies at redshifts \(z\gtrsim 2\).

Figure 5.— Images of the gas (_left_) and stars (_right_) with different scales and their approximate naming label conventions from § 3.1 shown. The images show BH-centric radius \(r\) increasing from bottom-to-top (the vertical axis) on a logarithmic scale as labeled. The horizontal axis shows \(\theta\equiv z/r\) from \(-1\) to \(+1\) (defined so \(z=0\) corresponds to the midplane of the inner disk), in a wedge of azimuthal opening angle \(|\sin\phi|<0.3\). For gas (_left_), colors denote different phases: \(T<10^{3}\) K (_green_), \(10^{3}<T<10^{4}\) K (_yellow_), \(10^{4}<T<10^{5}\) K (_magenta_), \(10^{5}<T<10^{6}\) K (_purple_), \(T>10^{6}\) K (_cyan_). We see other galaxies on IGM scales, the virialized CGM with accretion in warm clumps/filaments, the highly clumpy/inhomogeneous/asymmetric and multi-phase structure in the galaxy and (thermally colder, primarily atomic+molecular) galaxy nucleus, settling into the more ordered (but still visibly turbulent) non-star-forming disk and BH accretion disk on sub-pc scales.
The system is extremely inhomogeneous, with non-axisymmetric mode amplitudes \(|a_{1}|\sim 0.1-1\) and large clump and cloud complexes and star clusters visible. At the time of this particular simulation, the torques from \(\sim 1-10\) kpc clearly involve large non-axisymmetries, which are visually dominated by a large minor merger (with the companion at \(\sim 10\) kpc, having just passed pericenter) – i.e. order-unity asymmetries in the potential dominated by the _stellar_ structure (since this dominates the mass), leading to the gas structures shocking and losing angular momentum on a timescale comparable to the dynamical/orbital time (see Levine et al., 2008; Hopkins & Quataert, 2010, 2011b; Hopkins et al., 2016; Angles-Alcazar et al., 2013, 2017a, 2021; Prieto & Escala, 2016; Prieto et al., 2017). The system begins to be optically thick to cooling radiation in NIR/optical/NUV/UV bands, so the IR radiation energy density begins to rise. Gravitational torques still clearly dominate in this regime. The system is beginning to become more optically thick at some wavelengths, but still has cooling times much shorter than dynamical times and is not in a black-body-like state (the dust, radiation, and gas temperatures are all significantly different).

* BHROI \(\rightarrow\) "Torus": On scales \(\sim 1-10\) pc, the BH begins to dominate the potential, though stars still strongly dominate over gas in the local fluctuations in the potential (since the density of stars is much higher than that of gas). Because the system is now "fully" optically thick to its own cooling radiation, we begin to see a clear inversion of the density-temperature relation, with denser gas being warmer (in both its kinetic, gas, and radiation temperatures, even though these are not yet all in equilibrium with one another) in a quasi-adiabatic manner (as opposed to the usual case at larger radii where denser gas is colder), with \(\beta_{\rm plasma}\sim 0.01\). The densities in the midplane and dense gas phases begin to exceed \(\gtrsim 10^{6}\,m_{p}\,{\rm cm}^{-3}\), at which point the dust temperature starts to couple appreciably to the gas kinetic temperature, so the two begin to approach one another; but the large inhomogeneity of the medium and much shorter dynamical times (compared to e.g. the conventional case in molecular clouds) mean this coupling is still relatively weak/gradual and incomplete. Despite inflow rates still as large as \(\sim 100\,{\rm M}_{\odot}\,{\rm yr}^{-1}\), the SFRs averaged over the last (100, 10, 1) Myr are \(\sim(0.5,\,5,\,25)\,{\rm M}_{\odot}\,{\rm yr}^{-1}\), indicating its highly non-global-equilibrium nature. Turbulence remains highly super-sonic (in primarily warm molecular gas) but becomes only mildly super-Alfvenic, with \({\cal M}_{A}\sim 1-3\). Again, gravitational torques clearly dominate the visual structure (with large coherent gas asymmetries and \(|a_{1}|\sim 1\)): the major change from larger radii is that instead of being incoherent/clumpy structures, the increasingly Keplerian nature of the potential means that coherent, non-linear \(m=1\)-like perturbations related to torques between gas and stars dominate (Tremaine, 2001; Zakamska & Tremaine, 2004; Hopkins & Quataert, 2010, 2011b,c).

* "Torus" \(\rightarrow\) Non-Star-Forming Disk: On scales \(\sim 0.1-1\) pc, the temperatures quasi-stabilize at a few \(10^{3}\) K, and molecules begin to dissociate into atomic gas at these warmer temperatures (though a non-negligible molecular fraction remains).
We see a rapid rise in \(Q_{\rm turb}\), driven by the steeply-rising \(\Omega(r)\) towards small \(r\). This leads to a rapidly-forming, coherent disk. At \(\sim\) pc, we still have \(Q_{\rm thermal}\lesssim 1\) and a cooling time short compared to the dynamical time, and as shown in previous studies (see e.g. Hopkins & Christiansen 2013 and discussion below), a disk with these conditions is still unstable to gravitational fragmentation within "patches" even if it is _statistically_ marginally stable with turbulent+magnetic support (the system still has \(\beta\ll 1\), with trans-Alfvenic turbulence), so it continues to break into individual resolved stars. The SFR averaged on (\(100,\;10,\;1,\;0.1\)) Myr timescales inside \(\sim 1\) pc is \(\sim(0.01,\;0.1,\;1,\;10)\) M\({}_{\odot}\) yr\({}^{-1}\) – to the extent that it can be defined in any meaningful way on these small timescales, as the individual protostars and main-sequence stars formed since the gas arrived at these radii are still accreting. There is an apparent large outflow structure in the \(\dot{M}\) plot, but this is really a very large-scale coherent eccentricity (\(m=1\) mode) of the "arm," as anticipated given the large asymmetries. However, as we approach \(\sim 0.1\) pc, the system dramatically changes and star formation effectively ceases.

* Non-Star-Forming Disk \(\rightarrow\) "Accretion Disk" (\(\sim 0.01-0.1\) pc): Just outside \(\sim 0.1\) pc, a crucial transition occurs as \(Q_{\rm thermal}\) increases to \(\gtrsim 1\), with \(Q_{\rm mag}\gg 1\) dominated by increasingly-organized toroidal fields. Meanwhile, the characteristic maximal fragment mass \(\sim\pi\,\Sigma_{\rm gas}\,H^{2}\) starts to drop into the stellar mass range. As a result (discussed in detail below), star formation shuts down. The SFR inside \(<0.1\) pc averaged over the entire duration of our simulation is \(\lesssim 10^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\), compared to inflow rates of \(\sim 10-100\) M\({}_{\odot}\) yr\({}^{-1}\). With this transition, the disk mass inside \(<0.1\) pc is now locally gas-dominated instead of stellar-dominated, so gravitational torques rapidly become inefficient (Hopkins & Quataert 2011b); we still see \(m=1\) modes propagate into these radii and gravito-turbulent behavior, but now a combination of Maxwell and Reynolds stresses takes over as the dominant provider of the torques, maintaining a similar global bulk inflow rate. As we go to smaller scales still, the deepening potential means the scale height becomes somewhat smaller and the disk becomes increasingly well-ordered. The disk is strongly-magnetized, with \(\beta_{\rm plasma}\lesssim 10^{-3}\), and the turbulence becomes modestly sub-Alfvenic at smaller radii.

* "Accretion Disk" \(\rightarrow\) ISCO: On scales \(\ll 0.01\) pc (\(\ll 10^{4}\) gravitational radii), the disk is essentially in the regime of a "traditional" \(\alpha\)-like accretion disk in many ways. It is optically thick, geometrically thin or "slim," radiating in increasingly black-body-like fashion, nearly-Keplerian and close to circular, gravitationally stable (\(Q_{\rm thermal}\gtrsim 1\) with \(Q_{\rm mag}\gg 1\)), with maximum fragmentation mass scale \(\lesssim 1\) M\({}_{\odot}\), so it is not able to fragment efficiently at all. But there are many important differences between the disk here and what is usually assumed in accretion disk studies (to be studied in detail in Hopkins 2023a, henceforth Paper II). Even though the optical depth is large, the effective black-body cooling time is much shorter than the dynamical time (by a factor \(\sim 10^{-3}\)), and the turbulence is supersonic, so it maintains a quasi-isothermal, relatively cool global structure. The disk is strongly magnetized with \(\beta_{\rm plasma}\sim 10^{-4}\) (\(|B|\gtrsim 100\) G, primarily toroidal fields), sustained by flux-freezing of the flux it is fed from the ISM (hence a "flux-frozen" and/or "flux-fed" accretion disk) and modestly sub-Alfvenic (hence highly-supersonic) turbulence.
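The numerical sketch promised in the first bullet above: the plasma-\(\beta\) and Alfven-speed diagnostics used throughout this section, evaluated for rough IGM-like and inner-disk-like conditions (the definitions are from the text; the two sets of input values are illustrative, chosen to match the orders of magnitude quoted above):

```python
import numpy as np

K_B, M_P = 1.381e-16, 1.673e-24  # cgs

def plasma_beta(n_cm3, T_K, B_G):
    """beta = P_thermal / P_magnetic = n k_B T / (B^2 / 8 pi)."""
    return n_cm3 * K_B * T_K / (B_G**2 / (8.0 * np.pi))

def alfven_speed_kms(n_cm3, B_G, mu=1.0):
    """v_A = B / sqrt(4 pi rho), with rho = mu m_p n."""
    return B_G / np.sqrt(4.0 * np.pi * mu * M_P * n_cm3) / 1.0e5

# IGM-like: n ~ 1e-4 cm^-3, T ~ 1e4 K, |B| ~ 3 nG
print(plasma_beta(1e-4, 1e4, 3e-9))     # ~400, i.e. beta >> 100
# Inner-disk-like: n ~ 1e12 cm^-3, T ~ 5e3 K, |B| ~ 300 G (text quotes >~ 100 G)
print(plasma_beta(1e12, 5e3, 300.0))    # ~2e-4: the strongly-magnetized regime
print(alfven_speed_kms(1e12, 300.0))    # ~650 km/s: far above the ~few km/s sound speed
```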
### Effective Resolution of the Simulation

Figure 6.— Effective resolution of the simulation as a function of radial distance from the BH (\(r\)). We show the median and the 50% and 90% inclusion intervals (_shaded_) of the mass resolution (gas cell mass \(\Delta m\); _left_), spatial resolution (equivalent cell size \(\Delta x\equiv(\Delta m/\rho)^{1/3}\)), and time resolution (timestep \(\Delta t\)), at each \(r\). The bulk of the galaxy is resolved with \(\Delta m<10^{4}\) M\({}_{\odot}\); from \(1-100\) pc the resolution is rapidly refined as \(r\) decreases, with the target resolution of \(\Delta m<0.01\) M\({}_{\odot}\) and \(\Delta x<10^{-3}\) pc reached for the particles at \(r\lesssim\) pc scales. For \(\Delta m\), \(\Delta x\), and \(\Delta t\) we compare to the global enclosed gas mass \(M_{\rm gas}(r)\), radius \(r\), and dynamical time \(t_{\rm dyn}\) at each \(r\). In \(\Delta m\), we also denote the region where the simulation will form STARFORGE single-star/sink particles, as opposed to FIRE SSP particles. In \(\Delta t\), we also denote as "duration" approximately how long the simulation was run after reaching the maximum refinement level at each radius.

Fig. 6 shows the effective mass, spatial, and time resolution of the fiducial simulation as a function of BH-centric radius \(r\) at the times where we study it. This reflects the target resolution discussed in § 2 above. In brief: at galactic radii (\(\lesssim 10\,\)kpc) the resolution is uniformly better than \(\sim 10^{4}\,M_{\odot}\), as given by our target refinement criterion before the "hyper-refinement" is activated; this then achieves the desired radial refinement, with \(\Delta m\) smoothly decreasing from \(\sim\,\)kpc to \(\sim\,\)pc scales before we saturate at our target resolution of \(\Delta m<0.01\,M_{\odot}\) inside \(\lesssim 1\,\)pc. This also lets us clearly identify where the simulation lies in different "limits" with regard to star formation per § 2.2–2.3: at \(>10\,\)pc scales, the resolution is always in the "FIRE" limit (forming SSP particles), and at \(<1\,\)pc scales, the resolution is always in the "STARFORGE" limit (forming single-star particles). We can also compare to the total enclosed gas mass in the simulation, to demonstrate that there are always \(N\gg 1\) gas cells in each radial annulus. Likewise, we see the spatial resolution at small radii is uniformly \(\ll H\) (the scale height of the gas at that radius, defined below), reaching \(\Delta x\sim 10^{-5}\,\)pc, and the time resolution is always much shorter than the local dynamical time. Our timesteps reach extremely small values, \(\Delta t<1\,\)day, in the central regions at the maximum refinement level: even with hierarchical timestepping (obviously necessary for such large dynamic range) this limits how long the simulations can be run. Here we evolve for \(\sim 10^{4}\,\)yr after the finest refinement level was activated: roughly \(3\times 10^{5}\) dynamical times (\(t_{\rm dyn}=1/\Omega\)) at our innermost boundary condition (the "excision radius" around the SMBH of \(80\,\)au or \(4\times 10^{-4}\,\)pc).
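A short sketch converting the quoted resolution numbers into the derived quantities shown in Fig. 6 (the formulas \(\Delta x=(\Delta m/\rho)^{1/3}\) and \(t_{\rm dyn}=1/\Omega\) are from the text; the example input values are the quoted target resolution and innermost-radius conditions, and the mean molecular weight is an assumption):

```python
import numpy as np

G = 6.674e-8
MSUN, PC = 1.989e33, 3.086e18

def cell_size_pc(dm_msun, n_cm3, mu=1.4):
    """Equivalent cell size Delta x = (Delta m / rho)^(1/3)."""
    rho = mu * 1.673e-24 * n_cm3
    return (dm_msun * MSUN / rho) ** (1.0 / 3.0) / PC

def t_dyn_days(M_enc_msun, r_pc):
    """Dynamical time 1/Omega = sqrt(r^3 / (G M_enc)) for a Keplerian potential."""
    return np.sqrt((r_pc * PC) ** 3 / (G * M_enc_msun * MSUN)) / 86400.0

print(cell_size_pc(0.01, 1e12))   # ~7e-5 pc: within the quoted 1e-5 - 1e-4 pc range
print(t_dyn_days(1.3e7, 4e-4))    # ~12 days: the quoted ~10-20 day orbits at ~80 au
```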
### Mass and Accretion Rate Profiles

Figs. 7 & 8 examine more quantitatively the radial profiles of various quantities related to the mass and mass flows: the circular velocity (defined as \(V_{\rm c}\equiv\sqrt{G\,M_{\rm enc}(<r)/r}\)) and its contributions from the SMBH, gas, stars, and dark matter; the radial profiles of surface density \(\Sigma_{\rm gas}\) and mid-plane three-dimensional density \(\rho\); and the inflow and outflow rates \(\dot{M}\) through each annulus.12

Footnote 12: We quantify both the volume-weighted density (\(\rho_{\rm gas}(r)\equiv dM_{\rm gas}/4\pi\,r^{2}\,dr\)) in concentric shells, and the "midplane" gas density, defined as the mass-weighted mean gas density within \(<10\%\) of the midplane defined by the net gas angular momentum vector within each concentric shell. The latter much more obviously shows large variance owing to phase structure, satellite galaxies (at large radii), and other forms of inhomogeneity, and is (as expected) systematically larger than the volume-weighted mean by \(\sim 1-3\,\)dex, but the broad trends are similar.

Because \(V_{\rm c}^{2}\) at some \(r\) is just proportional to the enclosed mass, we can clearly read off from Fig. 7 where different components dominate the potential and the local matter distribution. The BH dominates inside the BHROI at a few pc, and we see the local density is gas-dominated only interior to the radii where star formation shuts down (\(\ll 0.2\,\)pc here), while stars dominate the local density from \(\sim 0.2\,\)pc to \(\sim 2\,\)kpc, and dark matter dominates the density at much larger scales. While there are clearly very large local fluctuations in gas density, it (rather remarkably) appears to follow an approximately isothermal-sphere-like \(\rho_{\rm gas}\propto r^{-2}\) profile _on average_ over nine decades in radius. This leads to a gas surface density profile scaling approximately as \(\Sigma_{\rm gas}\propto R^{-1}\) (an actual power-law fit gives a very slightly shallower slope). Recall, the duration of our simulation at its highest resolution is still short compared to the global dynamical/evolution timescales on \(\gtrsim 1-10\,\)pc scales, so (as expected) these profiles are robust in time over the duration of the simulations. Even at the smallest radii, where we run for many dynamical times, after the initial refinement period they remain consistent within the scatter shown, as they are determined by the boundary conditions from larger radii.
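A minimal sketch of the volume-weighted profile definition in footnote 12 (our own implementation of \(\rho_{\rm gas}(r)\equiv dM_{\rm gas}/4\pi r^{2}dr\) in logarithmic shells, verified here on a synthetic \(\rho\propto r^{-2}\) distribution; `r` and `m` stand in for the BH-centric radii and masses of the gas cells):

```python
import numpy as np

def density_profile(r, m, nbins=60):
    """Volume-weighted gas density rho(r) = dM / (4 pi r^2 dr)
    in logarithmically spaced spherical shells."""
    edges = np.logspace(np.log10(r.min()), np.log10(r.max()), nbins + 1)
    dM, _ = np.histogram(r, bins=edges, weights=m)
    shell_vol = 4.0 * np.pi / 3.0 * (edges[1:]**3 - edges[:-1]**3)
    r_mid = np.sqrt(edges[1:] * edges[:-1])
    return r_mid, dM / shell_vol

# Synthetic test over 9 decades in radius, mimicking an isothermal sphere:
rng = np.random.default_rng(0)
r = 10.0 ** rng.uniform(-3, 6, 200_000)  # log-uniform radii
m = r                                    # mass weights: dM/dln(r) ~ r  <->  rho ~ r^-2
r_mid, rho = density_profile(r, m)
slope = np.polyfit(np.log(r_mid), np.log(rho), 1)[0]
print(round(slope, 2))   # ~ -2.0: the isothermal-sphere scaling seen in Fig. 7
```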
In terms of accretion rates, we also see a surprisingly close-to-constant \(\dot{M}_{\rm in}(r)\) from radii of \(\sim\,\)Mpc down to \(\lesssim 10^{-3}\,\)pc. This is especially surprising given (a) the wildly different characteristic dynamical times on these scales, and (b), as noted from the morphologies above and some kinematic discussion below, that many radii are strongly out-of-equilibrium. The latter does produce some of the large "wiggles" in \(\dot{M}_{\rm in}\), but seems to produce much more dramatic variation in the outflow rates \(\dot{M}_{\rm out}\) at different radii. That is consistent with e.g. the behavior seen in Angles-Alcazar et al. (2021), especially when we consider the time variability shown in Fig. 7 over the dynamical time at each radius, but we caution that we focus on much smaller spatial scales for a much shorter overall period of time, compared to their study. Interestingly, however, comparing the different simulations considered in Angles-Alcazar et al. (2021), the (weak) variation in \(\dot{M}_{\rm in}(r)\) we see is most similar to their "full-QSO" simulation (the simulation with the largest sustained inflow, most similar to the case here). That suggests the radial and time variability may be much larger at lower accretion rates (which is plausible, as e.g. star formation, outflows, and other potential "bottlenecks" may play a much larger role limiting gas supply at low \(\dot{M}\)). We do, on average, see some systematic decline in \(\dot{M}_{\rm in}\) from the largest to smallest radii, as expected (material can "stall" and simply cease inflowing without efficient angular momentum transport mechanisms, or be ejected in outflows, or go into star formation, at each radius), but this is weak, especially at the smallest radii \(\ll\,\)pc where star formation has ceased (again, notably weaker than in simulations modeling systems with orders-of-magnitude lower mass inflow rates like M87, see e.g. Guo et al., 2022). And we see outflow rates are order-of-magnitude comparable to inflow rates at most radii; but even where \(\dot{M}_{\rm out}>\dot{M}_{\rm in}\) locally (which again clearly indicates out-of-equilibrium behavior, and is much more transient in these simulations), inflows are sustained over the entire duration of the simulation. It is also the case that even at large radii both the inflow/outflow rates are generally much larger than star formation rates within a given annulus (except right around \(\sim 1\,\)kpc), as shown in Fig. 8, owing to feedback self-consistently regulating star formation to be relatively slow (with an average efficiency \(\sim 1\%\) per free-fall time; see Hopkins et al., 2011; Orr et al., 2018), as expected from previous studies of gas-rich, star-forming galaxies. Finally, we can also see where star formation ceases at small radii in both Figs. 7 & 8. Looking at different times in our simulation in Fig. 4, we see that while there are some significant (factor of a few to order-of-magnitude) variations in the accretion rates into the central \(<80\,\)au over the duration of the simulation, the accretion rates at these radii are quite slowly-evolving in a dynamical sense (these variations occur over tens of thousands of local dynamical times \(t_{\rm dyn}\sim 1/\Omega\) at the smallest radii). Thus it is reasonable to consider the inner regions to be in some kind of statistical quasi-steady-state in terms of accretion and dynamics, at a given large-scale time in the galaxy.
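A sketch of how inflow/outflow rates through an annulus can be measured from cell data (a standard shell estimator, \(\dot M=\sum_i m_i v_{r,i}/\Delta r\) over cells in the shell, split by the sign of \(v_r\); this is our own implementation for illustration, not the paper's exact analysis pipeline):

```python
import numpy as np

def mdot_through_shell(r, vr, m, r_in, r_out):
    """Inflow/outflow rates through a spherical shell [r_in, r_out]:
    Mdot = sum(m_i * v_r,i) / (r_out - r_in), split by the sign of v_r.
    Inflow rate is defined positive."""
    sel = (r >= r_in) & (r < r_out)
    flux = m[sel] * vr[sel] / (r_out - r_in)
    mdot_in = -flux[vr[sel] < 0].sum()
    mdot_out = flux[vr[sel] > 0].sum()
    return mdot_in, mdot_out

# Toy usage: a shell with equal-and-opposite radial streams gives ~equal rates
rng = np.random.default_rng(1)
r = rng.uniform(0.9, 1.1, 10_000)
vr = rng.choice([-1.0, 1.0], 10_000)
print(mdot_through_shell(r, vr, np.ones_like(r), 0.9, 1.1))
```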
### Plasma and Thermo-Chemical Properties

In Fig. 9 we illustrate some of the multi-phase structure of the simulation more explicitly, alongside the (highly inhomogeneous) stellar distribution. For comparison, Fig. 10 illustrates the average radial profiles (smoothing out these local variations) of various plasma and thermo-chemical properties of the medium. We see that the mean temperature jumps from typical \(\sim 10^{4}\) K IGM values to much warmer \(\gtrsim 10^{6}\) K (comparable to the virial temperature) inside the virial radius of the dark matter halo as the gas shocks, although as shown in Fig. 9 much of the accretion onto the galaxy can still be in the form of warm/cold filaments and clumps. At these large radii the radiation and dust temperatures are largely determined by the CMB (if we specifically ignore the CMB, the radiation temperature of the residual radiation instead just reflects the meta-galactic UV background), because the medium is optically thin without significant sources on these scales. As expected, the gas is largely a mix of ionized and atomic phases. Inside the galaxy, we see an even more dramatic multi-phase structure (evident in e.g. the separation between the mass- and volume-weighted mean temperature), with large amounts of gas at \(\sim 10^{4}-10^{5}\) K (both ionized and warm atomic), but some cold neutral and molecular star-forming gas and a hot phase at \(\gtrsim 10^{6}\) K.

Figure 7.— _Top Left:_ Circular velocity profile \(V_{\rm c}\equiv\sqrt{G\,M_{\rm enc}(<r)/r}\) versus spherical distance from the SMBH \(r\), with contributions from different mass components. The BH excision radius truncates the gas mass distribution at \(\lesssim 0.001\) pc. We clearly see a transition from dark matter dominating outside the galaxy, to stars dominating within the galaxy, until the BHROI at \(\sim\) a few pc, and the cessation of star formation giving a gas-dominated disk at \(\lesssim 0.1\) pc. _Top Right:_ Projected mean gas, stellar, dark matter, and "star formation rate" (defined by the mass of stars formed in the last \(t_{\rm dyn}\equiv 1/\Omega\) at each radius) surface density \(\Sigma\), in cylindrical shells of different projected radii \(R\) (plotted down to radii where we have at least \(>1000\) particles of each type). We see the same transitions. While there are large deviations at any radius, \(\Sigma_{\rm gas}\propto R^{-1}\) approximates the scaling reasonably well from \(\sim 10^{-3}-10^{6}\) pc. In this and subsequent plots we independently determine the "midplane" in each annulus from the angular momentum vector of the gas in the annulus. The suppression of \(\Sigma_{\rm SFR}\) at small radii is discussed below (§ 3.5.3). _Bottom Left:_ Three-dimensional gas density \(\rho\) versus spherical radius \(r\). We show both the volume-weighted mean density (_line_) in shells and the mass-weighted mean (the shaded range shows the volume-weighted 90% inclusion interval; the mean can exceed this upper limit if the distribution has large tails). The gas density profile is crudely isothermal (in a very order-of-magnitude sense), \(\rho\propto r^{-2}\), from \(10^{-3}-10^{6}\) pc. Mass-weighted densities are systematically enhanced relative to volume-weighted, but also vary more owing to clumping and phase structure. _Bottom Right:_ Mass flow rate \(\dot{M}\), showing the total inflow rate \(\dot{M}_{\rm in}\) and total outflow rate \(\dot{M}_{\rm out}\) through each annulus, and the cumulative SFR summed within each radius \(\dot{M}_{*}(<r)\), versus spherical \(r\). The grey bar shows the 50% (_solid_; \(\sim 18-30\,{\rm M}_{\odot}\,{\rm yr}^{-1}\)) and 90% (_dotted_; \(\sim 15-73\,{\rm M}_{\odot}\,{\rm yr}^{-1}\)) range of inflow rates at \(<100\) au over the duration of our highest-resolution simulation. Lack of spherical symmetry and non-equilibrium dynamics mean that inflow and outflow co-exist and can change relative sign (e.g. the "outflow" at \(\lesssim\) pc scales is mostly just coherent eccentric motion), but despite this a remarkably stable order-of-magnitude inflow rate to the SMBH of \(\dot{M}_{\rm in}\sim 10-100\,{\rm M}_{\odot}\,{\rm yr}^{-1}\) persists.
In the galaxy nucleus at scales \(\lesssim 100\) pc, the mean temperature drops, as the densities are so high that there is very little hot phase, and the medium becomes primarily molecular. As we go to smaller radii and the star formation rate density becomes higher and infrared optical depths become appreciable, we see the dust temperature rise from CMB values up to \(\sim 100\) K, very similar to the typical values observed in low-redshift starburst nuclei and circum-nuclear disks around AGN where the molecular gas and SFR densities are comparable to those predicted here (see e.g. Narayanan et al., 2005; Evans et al., 2006; Iono et al., 2007; Hopkins et al., 2008b,c; Casey et al., 2009; Wang et al., 2008; Izumi et al., 2016; Lelli et al., 2022, and references therein). We also see a significant "warm" molecular component at \(\sim 1000\) K begin to appear at \(\sim\) pc. At \(r\lesssim 5\) pc, most of the medium is at warm-phase temperatures \(\sim 10^{3}-10^{4}\) K, and molecules begin to be dissociated again; and at \(\lesssim 0.1\) pc the optical depths to cooling radiation become large, while the densities are sufficiently large (\(\gg 10^{6}\) cm\(^{-3}\)) that the dust, gas, and radiation temperatures all begin to couple to one another (rapidly converging to broadly similar values by \(\sim 0.01\) pc). Meanwhile, similar to what we saw with the density field, the mean magnetic fields follow a profile \(\langle|{\bf B}|\rangle\propto r^{-1}\) (becoming slightly steeper in the CGM/IGM; see Ponnada et al., 2022).14 Note that, because of the extreme dynamic range here, the fact that \(|{\bf B}|\) scales slightly steeper than \(r^{-1}\), while \(\rho\) scales slightly shallower than \(r^{-2}\), means that the mean ideal-MHD Alfven speed (\(v_{A}=(|{\bf B}|^{2}/4\pi\rho)^{1/2}\)) is not exactly constant but increases gradually from tens of km s\(^{-1}\) on "galactic" scales \(\sim 1-10\) kpc to hundreds of km s\(^{-1}\) at scales \(\ll 0.01\) pc; but this is consistent with a very weak trend (\(v_{A}\propto r^{-0.15}\) or so). At radii \(\gg\) pc we see large variance in \(|{\bf B}|\), reflecting the multi-phase structure of the gas. We also see that the energy-weighted typical plasma \(\beta=c_{s}^{2}/v_{A}^{2}\gg 1\), as expected and observed in the ISM and CGM of typical galaxies (e.g. Mao et al., 2012; Han, 2017; Mao, 2018; Seta & Federrath, 2021; van de Voort et al., 2021; Prochaska et al., 2019; Lan & Prochaska, 2020; Ponnada et al., 2022). However, it is important to note that this is phase-dependent: as usual, in the coldest phases in a multi-phase ISM (e.g. the molecular or cold neutral medium), \(\beta\ll 1\) almost by definition (see e.g. Crutcher et al., 2010; Alina et al., 2019 for observations, or Ostriker et al., 2001; Padoan & Nordlund, 2011; Su et al., 2017; Hopkins et al., 2020; Guszejnov et al., 2020 for theoretical discussion).

Footnote 14: We follow standard practice and initialize a uniform trace seed field with comoving strength \(B_{0}\sim 10^{-15}\) G at redshift \(z\sim 100\) in the "pre-refinement" simulation initial conditions, in order to source the simulation magnetic fields. This is rapidly amplified self-consistently, and in all but the most extreme diffuse IGM at radii \(\gg\) Mpc, the predicted (saturated) magnetic field strength here is independent of the trace field (see e.g. Su et al., 2017; Rieder & Teyssier, 2017; Martin-Alvarez et al., 2018).
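As a quick check of the Alfven-speed trend noted above (the reference \(r^{-1}\) and \(r^{-2}\) slopes and the net \(v_A\propto r^{-0.15}\) trend are from the text; the exact exponents below are illustrative choices consistent with "slightly steeper/shallower"):

```python
# If |B| ~ r^p_B and rho ~ r^p_rho, then v_A = B / sqrt(4 pi rho) ~ r^(p_B - p_rho/2).
# With slightly-steeper-than-(-1) field and slightly-shallower-than-(-2) density:
p_B, p_rho = -1.07, -1.85
print(p_B - p_rho / 2.0)   # -0.145: the weak v_A ~ r^-0.15 trend quoted in the text
```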
Where we see the mean temperature/thermal pressure of the medium drop sharply as the mass collapses into cold phases at \(\lesssim 100\) pc, we therefore see a sharp transition from a total-energy or volume-weighted \(\beta\gg 1\) to \(\beta\ll 1\) at smaller radii. The magnetic field evolution through this point is relatively smooth; it is the thermal phase structure which changes much more rapidly. We can also see that outside of the nuclear disk at \(\gtrsim 0.1\) pc, the magnetic fields are order-of-magnitude isotropic (no single component strongly dominates \(|{\bf B}|\)) and exhibit a large dispersion, reflecting the disordered and super-Alfvenic turbulent morphology of the gas and multi-phase structure (there is at some radii a small bias towards radial \({\bf B}\) fields, reflecting inflows and outflows). Inside \(\lesssim 0.1\) pc, where the ordered, thin nuclear disk forms, we see this produces a much more ordered, predominantly toroidal field. This will be studied in more detail in Paper II, where we will examine the time-dependence of the field and its amplification mechanisms in detail.

At all radii, the kinetic energy density of gas is non-negligible (whether primarily ordered or disordered), as expected. The radiation energy density is always sub-dominant to the kinetic, magnetic+thermal, and gravitational energy densities (\(\sim\rho\,V_{c}^{2}\)) – this is expected at large radii where the galaxy is optically thin, but is surprising at the smallest radii (where, again, it will be discussed in greater detail in Paper II).

Figure 8.— Radial profiles of related timescales and bulk flows, as Fig. 7. _Left_: "Depletion" (or accretion) timescales for inflow/accretion, outflow, and star formation, in different radial annuli, defined as shown (with \(\Delta\dot{M}_{i}\equiv\partial\dot{M}_{i}/\partial\ln r\)), in units of the galactic dynamical time (\(t_{\rm dyn}\equiv\Omega^{-1}\)) at each radius. Star formation is efficient at galactic radii from \(\sim\) pc to \(\sim\) kpc, as expected. Inflows are dynamical at essentially all radii, and produce net inflow at sub-kpc scales during this strong-inflow phase. _Right_: Gas mass-weighted mean radial (\(v_{r}\)), azimuthal/rotational (\(v_{\phi}\), with the \(\phi\) axis defined in each annulus by the net angular momentum vector as in Fig. 7), and polar (\(v_{\theta}\)) velocities, and velocity dispersions \(\sigma_{v}\), in units of \(V_{c}\). We see a mix of inflow/outflow motions (large \(\sigma_{v_{r}}\)) dominate at large radii (\(\gg\) kpc), partial rotation in a kinematically hot/thick galaxy configuration at galactic radii (\(\sim 0.1-10\) kpc), and near free-fall at the radii interior to the BHROI where the GMC-like gas complex fueling the SMBH is tidally disrupted and captured (\(\sim 1-10\) pc), which then circularizes at sub-pc scales to form a kinematically cold, rotation-dominated disk at \(\lesssim 0.1\) pc scales.
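A sketch of the velocity decomposition underlying the right panel of Fig. 8 (our own implementation for illustration: rotate to the frame defined by the net angular momentum of the selected cells, then split \(\mathbf v\) into radial, azimuthal, and polar components):

```python
import numpy as np

def decompose_velocities(pos, vel, m):
    """Split cell velocities into (v_r, v_phi, v_theta) in the frame whose
    z-axis is the net (mass-weighted) angular momentum of the selection.
    pos, vel are (N, 3) arrays; assumes no cell lies exactly on the axis."""
    L = (m[:, None] * np.cross(pos, vel)).sum(axis=0)
    z_hat = L / np.linalg.norm(L)                 # disk axis
    r_sph = np.linalg.norm(pos, axis=1)
    r_hat = pos / r_sph[:, None]                  # spherical radial direction
    phi_hat = np.cross(z_hat, r_hat)
    phi_hat /= np.linalg.norm(phi_hat, axis=1)[:, None]
    theta_hat = np.cross(phi_hat, r_hat)          # completes the orthonormal triad
    v_r = (vel * r_hat).sum(axis=1)
    v_phi = (vel * phi_hat).sum(axis=1)
    v_theta = (vel * theta_hat).sum(axis=1)
    return v_r, v_phi, v_theta
```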
However the radiation energy density we see in the simulation at small radii is expected (it is approximately that of a simple blackbody if we set the cooling luminosity equal to the accretion luminosity \(\sim\dot{M}_{\rm in}\,V_{c}^{2}(r)\) at each \(r\)) – it is simply that the magnetic and kinetic energy densities are much larger. The relative composition of the radiation energy density is unsurprising: at large radii the broad NUV band dominates, as expected for an optically thin young stellar population, whereas at small radii our adaptive "IR" (but really just any re-radiated light) band dominates when the medium becomes optically thick to NUV and optical emission. The cosmic ray energy density is also small (except perhaps at the very largest radii) compared to others in such a dense environment, as expected and discussed in more detail below.

Briefly, it is worth noting that for a reasonable estimate of the UV luminosity from un-resolved (\(\ll 100\,{\rm au}\)) scales around the SMBH given the accretion rate here, we might expect the broad line region (BLR) to reside at radii \(\sim 20-150\) light-days (Kaspi et al., 2005), or \(\sim 0.01-0.1\,{\rm pc}\). It is notable that the gravitational velocities at these radii are \(\gtrsim 1000\,{\rm km\,s^{-1}}\), and (per Fig. 12) the disk covering fraction or \(H/R\) is relatively large, \(\sim 0.03-0.1\) and increasing at these radii, where it is broadly comparable to the fraction of the UV/optical quasar continuum emitted in the broad lines (Vanden Berk et al., 2001; Richards et al., 2006). This is highly suggestive, but more quantitative comparisons (and conclusions related to the physical nature of the BLR "clouds" in these simulations) will require detailed post-processing radiative line transfer, which we hope to explore in future work.

As noted previously in § 3.3, the profiles here remain stable in time (well within their fairly large scatter) over the duration of the highest-resolution simulation, though they can, of course, evolve at larger radii (pre-refinement) on much longer timescales (of order many galaxy dynamical times). The time evolution of the innermost magnetic field structure, and its relation to amplification mechanisms, will be studied in Paper II.

Figure 10.— _Top Left:_ Thermal properties: we plot the volume-weighted and mass-weighted mean gas temperatures (and their range, _shaded_) in each annulus, together with the effective radiation temperature (integrating over all bands evolved, but excluding the CMB, which dominates at \(r\gtrsim 200\,{\rm pc}\)) and dust temperature (also dynamically evolved), all versus distance from the BH in spherical annuli. _Top Right:_ Chemical properties: average free electron (\(x_{\rm e}\)), ionized hydrogen (\(x_{\rm HII}\)), molecular hydrogen (\(f_{\rm H_{2}}\)), atomic hydrogen (\(x_{\rm HI}\)), and metal (\(Z\), so solar \(\sim 0.01\)) mass fractions. We see the highly multi-phase, optically-thin medium collapse into a cool atomic-molecular medium with warm dust in the galactic nucleus, then eventually see the system converge towards black-body-like behavior with \(T_{\rm rad}\sim T_{\rm dust}\sim T_{\rm gas}\) in the center, with the dust sublimating. The scatter at a given \(r\) in e.g. \(x_{\rm e}\) is large, and shown below. _Bottom Left:_ Magnetic field strengths: we plot the energy-weighted rms field strength, and its radial/toroidal/poloidal components. To crude approximation \(\langle|{\bf B}|^{2}\rangle^{1/2}\propto r^{-1}\) from \(\sim 10^{-3}-10^{6}\,{\rm pc}\), with crudely isotropic (some mildly radial-dominant, reflecting inflows) fields at most radii until the inner disk forms and the field becomes primarily toroidal (see Paper II). _Bottom Right:_ Radial profile (compensated by \(r^{2}\) to make comparison easier) of different components of the pressure/stress tensor (for anisotropic components, we plot the relevant components of the stress tensor \(\Pi\)). At all radii, the kinetic energy density/ram pressure is important, and it is largely isotropic, with a mild radial bias at most radii until the disk forms, where it becomes tangential. We see the transition from plasma \(\beta\gg 1\) at large radii to \(\ll 1\) at small radii. Radiation (again excluding the CMB term here) is generally sub-dominant at all radii, and other stresses (viscous, cosmic ray) are even smaller.
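A rough version of the blackbody estimate mentioned at the start of this subsection (our own back-of-the-envelope: set the radiated luminosity to the accretion luminosity \(\sim\dot M_{\rm in}V_c^2\) and assign an effective temperature through \(L=4\pi r^2\sigma T^4\); the input values are representative of the quoted \(\dot M\) and inner-radius conditions):

```python
import numpy as np

G, SIGMA_SB = 6.674e-8, 5.67e-5            # cgs
MSUN, PC, YR = 1.989e33, 3.086e18, 3.156e7

def t_rad_blackbody(mdot_msun_yr, M_enc_msun, r_pc):
    """Effective radiation temperature if the local cooling luminosity
    equals the accretion luminosity ~ Mdot * V_c^2, emitted at radius r."""
    r = r_pc * PC
    Vc2 = G * M_enc_msun * MSUN / r
    L = (mdot_msun_yr * MSUN / YR) * Vc2
    return (L / (4.0 * np.pi * r**2 * SIGMA_SB)) ** 0.25

print(t_rad_blackbody(30.0, 1.3e7, 0.01))
# ~3500 K: comparable to the warm ~1e3-1e4 K temperatures quoted at ~0.01 pc
```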
### Star Formation & Fragmentation Dynamics On Different Scales

We next turn to examining more dynamical properties of the simulation, in order to better understand what drives fragmentation and star formation (or the lack thereof) and inflows on various scales.

#### 3.5.1 Definitions of "Disk" Dynamical Properties

Fig. 12 shows radial profiles (as Figs. 7–10) for different dynamical properties of the simulation. We show the Toomre \(Q\) parameter of the gas, defined as \(Q_{x}\approx\sigma_{x}\,\kappa/(\pi\,G\,\Sigma_{\rm disk})\), where \(\sigma_{x}=c_{s}\) for the thermal \(Q_{\rm thermal}\), \(=v_{\rm A}\) for the magnetic \(Q_{\rm mag}\), \(=\delta v_{\rm turb}\) for the turbulent \(Q_{\rm turb}\),15 and \(\sigma_{\rm eff}^{2}=c_{s}^{2}+v_{\rm A}^{2}+\delta v_{\rm turb}^{2}\) for the total effective \(Q_{\rm eff}\). We also show how the different thermal/magnetic/turbulent components contribute to the vertical support of the gas and the gas scale height, the sonic \(\mathcal{M}_{\rm s}\equiv\delta v_{\rm turb}/c_{s}\) and Alfvenic \(\mathcal{M}_{\rm A}\equiv\delta v_{\rm turb}/v_{\rm A}\) Mach numbers, and the characteristic fragmentation scales of the disk, determined by the characteristic maximum/dominant fragment mass \(\sim\pi\,\Sigma_{\rm gas}\,H^{2}\) (see Hopkins, 2013) and the minimal Jeans mass \(\sim(\pi/6)\,\sigma_{x}^{3}\,G^{-3/2}\,\rho^{-1/2}\). Unless otherwise specified, these are mass-weighted averages in each radial annulus.

Footnote 15: More formally, we follow Orr et al. (2019, 2021) and define \(Q\) using the expressions for a multi-component disk from e.g. Romeo (1992), using the appropriate _mass-weighted_ integrals over the distribution function (similar to defining \(\langle\sigma_{x}^{-1}\rangle\sim(\Delta M)^{-1}\int(\rho/\sigma_{x})\,dA\,dz\)) to define the dispersion/sound speed in the gas (since the system is multi-phase). Note this gives significantly lower \(Q\) (so is more conservative for our purposes here – essentially measuring \(Q\) in the "most unstable" gas phases) than using e.g. a thermally-weighted average or a simple rms value of \(\delta v\), which can be strongly biased by outflow motions or small amounts of gas in hot phases populating the "tails."

#### 3.5.2 Fragmentation and Star Formation

In the CGM/IGM, we see the gas is thermally stable against self-gravity (\(Q_{\rm therm}\gtrsim 1\)), the turbulence is trans-sonic (or sub-sonic in the hottest phases), and the gas is quasi-spherical (\(H\sim R\)), all as expected.
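The \(Q\) definitions above in code form (a direct transcription of the formulas; the input values at the end are arbitrary placeholders, not measurements from the simulation):

```python
import numpy as np

G = 6.674e-8  # cgs

def toomre_q(sigma_x, kappa, Sigma_disk):
    """Q_x ~ sigma_x * kappa / (pi G Sigma_disk), with sigma_x = c_s (thermal),
    v_A (magnetic), or dv_turb (turbulent); all cgs."""
    return sigma_x * kappa / (np.pi * G * Sigma_disk)

def toomre_q_eff(cs, vA, dv_turb, kappa, Sigma_disk):
    """Total effective Q with sigma_eff^2 = c_s^2 + v_A^2 + dv_turb^2."""
    sigma_eff = np.sqrt(cs**2 + vA**2 + dv_turb**2)
    return toomre_q(sigma_eff, kappa, Sigma_disk)

# Placeholders: kappa ~ Omega for a near-Keplerian disk
# (~7e-10 s^-1 at ~0.05 pc around a ~1.3e7 Msun BH);
# Sigma ~ 1e4 Msun/pc^2 converted to cgs.
kappa = 7.0e-10
Sigma = 1.0e4 * 1.989e33 / (3.086e18) ** 2
print(toomre_q_eff(cs=6e5, vA=5e6, dv_turb=3e6, kappa=kappa, Sigma_disk=Sigma))
# ~1e4 for these placeholder inputs: a strongly stable configuration (illustrative only)
```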
#### 3.5.2 Fragmentation and Star Formation

In the CGM/IGM, we see the gas is thermally stable against self-gravity (\(Q_{\rm therm}\gtrsim 1\)), the turbulence is trans-sonic (or sub-sonic in the hottest phases), and the gas is quasi-spherical (\(H\sim R\)), all as expected.

On galaxy scales from \(\sim 1\,{\rm pc}\) to \(\sim 10\,{\rm kpc}\), the gas is not thermally stable, but has \(Q_{\rm therm}\ll 1\), so it fragments and forms stars, which in turn maintain supersonic turbulence (with turbulent dispersion dominating the effective scale-height, i.e. \(H\sim\delta v_{{\rm turb},\,z}/\Omega\)) with an approximately constant (self-regulating) turbulent \(Q_{\rm turb}\sim 1\), as observed in both nearby galaxies (Leroy et al., 2008) and high-redshift starburst and quasar host systems (Forster Schreiber et al., 2009; Fisher et al., 2022; Reichardt Chu et al., 2022), and seen in previous simulations with similar physics (Hopkins et al., 2011, 2023; Orr et al., 2019, 2021). The galaxy size (\(\sim\,{\rm kpc}\)) and compactness are also reasonably similar to observed massive galaxies at these redshifts (compare Bezanson et al., 2009; Damjanov et al., 2009; van Dokkum et al., 2010; Hopkins et al., 2009, 2010).

Figure 11.— Two-dimensional projections of the mass-weighted mean thermochemical properties (temperature \(T\) in K; density \(\rho\) compensated by \(r^{1.7}\) for the sake of visualization, in units of \(\rm cm^{-3}\,kpc^{1.7}\); free electron fraction \(x_{e}\); plasma \(\beta\)) from Fig. 10. The horizontal axis shows distance \(r\) from the SMBH, log-scaled, along a wedge with opening angle \(|\sin\phi|<0.3\), as Fig. 9, while the vertical axis shows the height \(z/r\) as Fig. 5. The wedge is oriented along the axis of the inner accretion disk. The radial trends from Fig. 10 are evident, as is highly multi-phase structure at most radii.

The characteristic fragment mass at \(\sim\) kpc scales is relatively large, \(\sim 10^{8}\,M_{\odot}\), and this corresponds to the mass of the large star-forming cloud complexes or "clumps" seen in the gas morphology in Fig. 2 - these are more massive than Milky Way GMCs as expected because, for constant \(Q_{\rm turb}\), the clump mass scales as the gas fraction \(\propto f_{\rm gas}^{3}\), and this galaxy (with a gas fraction of \(\sim 30\%\) at \(\sim 1\) kpc) is a factor of \(\sim 6\) more gas-rich than the Milky Way (so we expect a maximal clump mass \(\sim 200\) times larger than the largest GMC complexes in the Milky Way), as studied in more detail for similar systems in Oklopcic et al. (2017). Given the large gas fractions and \(Q_{\rm turb}\), the system is still thick, with \(H/R\sim 0.3-0.5\) at these radii. All of these behaviors are consistent with many previous studies of the star-forming ISM in both idealized and cosmological galaxy simulations (Noguchi, 1999; Bournaud et al., 2007; Ceverino et al., 2010; Hopkins et al., 2012, 2013), and again generically expected.

As the ISM becomes thermally cold at \(\ll 100\) pc, we see the turbulence go from super-Alfvenic to trans- or even mildly sub-Alfvenic, and highly super-sonic, and \(c_{s}\) contributes negligibly (even in a volume-weighted sense) to the vertical disk support. It is important to stress that even though star-forming systems with _thermal_ \(Q_{\rm therm}\ll 1\) can and do self-regulate to turbulent (and magnetic) \(Q_{\rm turb}\sim 1\), these are not locally stable against fragmentation and star formation (see references above).
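A quick arithmetic check of the clump-mass scaling quoted above (the Milky Way numbers are assumed, rough values, not from this paper):

```python
# For constant Q_turb the maximal clump/fragment mass scales ~ f_gas^3, so a
# galaxy ~6x more gas-rich than the Milky Way should host clumps ~6^3 ~ 200x
# more massive than the largest MW GMC complexes.
f_gas_here, f_gas_MW = 0.30, 0.05      # ~30% here vs ~5% for the MW (assumed)
boost = (f_gas_here / f_gas_MW) ** 3
M_GMC_MW = 5e5                         # Msun, largest MW GMC complexes (rough)
print(f"boost ~ {boost:.0f}x  ->  max clump mass ~ {boost * M_GMC_MW:.1e} Msun")
```

which indeed lands at \(\sim 10^{8}\,M_{\odot}\), consistent with the fragment masses quoted above.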
In fact, Hopkins & Christiansen (2013) show that such systems will _always_ produce more fragmentation on small scales as \(Q_{\rm turb}\) increases if \(Q_{\rm therm}\) remains constant, owing to shocks and compressions generating locally overdense regions with \(Q_{\rm eff}\ll 1\) (including the local turbulent, magnetic, and thermal energy densities). This is of course implicitly necessary for star formation to explain their turbulent self-regulation.

Figure 12.— _Top Left:_ Toomre \(Q\) parameter of the gas (accounting for a multi-component potential) in annuli \(R\), accounting for the thermal (\(Q_{\rm therm}\)), magnetic (\(Q_{\rm mag}\)), turbulent (\(Q_{\rm turb}\)), or combined support of the gas. The CGM/IGM are semi-stable as expected, the galactic ISM is thermally unstable and with marginal turbulent stability, until \(\lesssim 0.1\) pc, when stability leads to the cessation of star formation. _Top Right:_ Scale height \(H/R\) of the gas (mass-weighted), directly measured in each annulus as the median or rms \(|z|\) after rotating to the angular momentum axis in that annulus, compared to different velocities (thermal sound speed \(c_{s}\), Alfvén \(v_{\rm A}\), or vertical turbulent \(\delta v_{z}\)) relative to \(V_{c}\). At most scales, kinetic/turbulent support dominates, with trans-sonic thermal support in the CGM and magnetic support taking over at \(\lesssim 0.01\) pc. _Bottom Left:_ Sonic (\(\mathcal{M}_{\rm s}\equiv\delta v_{\rm turb}/c_{s}\)) and Alfvenic (\(\mathcal{M}_{\rm A}\equiv\delta v_{\rm turb}/v_{\rm A}\)) Mach numbers in each annulus (mass-weighted), for each component of the random motions. The velocity dispersions are broadly isotropic at most radii, but inflow/outflow leads to a mild radial bias at \(\gtrsim\) kpc scales. The CGM/IGM are trans-sonic (sub-sonic in the diffuse gas, but the average here is dominated by dense substructure), galactic and smaller scales highly super-sonic; we see a clear trend of \(\beta\) decreasing at small \(r\), so the accretion disk is modestly sub/trans-Alfvenic. _Bottom Right:_ Enclosed gas mass inside \(r\), versus characteristic maximum gravitational fragmentation (Hill/Toomre) mass \(\sim\Sigma_{\rm gas}\,H^{2}\), maximal turbulent fragmentation mass from Hopkins (2013b), and mass-weighted (so biased to much lower values than volume- or thermal-energy-weighted) thermal or magnetic Jeans masses. At galactic radii, the gas-rich galaxy produces massive clump complexes with masses \(\gtrsim 10^{9}\,M_{\odot}\). Between \(\sim 0.1-1\) pc, the thermal Jeans mass approaches \(\Sigma_{\rm gas}\,H^{2}\) (equivalent to \(Q_{\rm thermal}\gtrsim 1\)), and the magnetic Jeans mass exceeds \(M_{\rm gas}(<r)\) (equivalent to all possible scales being magnetically sub-critical).

#### 3.5.3 The Cessation of Star Formation at Small Radii

At smaller radii inside the BHROI, we see (1) \(Q\) begins to rise for all components, owing to the steep rise in \(\Omega\), and in particular the thermal \(Q_{\rm therm}\gtrsim 1\); (2) the disk begins to become thinner (as, for relatively slowly-varying \(v_{\rm A}\), \(V_{\rm c}\) begins to rise); (3) correspondingly the turbulence becomes somewhat weaker (more sub-Alfvenic);
(4) as the optical depth increases and gas becomes more thermally homogeneous at warm temperatures (Fig. 10), the minimum Jeans mass stabilizes, while the characteristic upper fragmentation mass decreases into the stellar-mass range and becomes comparable to the minimum thermal Jeans mass (implying all scales are thermally stable); (5) the magnetic field becomes more ordered and dominated by a coherent toroidal component as the disk becomes more organized; (6) the "magnetic Jeans mass" becomes larger than the enclosed gas mass, so all scales are magnetically sub-critical. The combination of these effects leads to the cessation of star formation.

Figure 13.— As Fig. 2, but showing the edge-on projection to the inner disk at the same time. We can more clearly see the formation of the thin disk at sub-pc scales from capture via filamentary, misaligned inflow from a highly chaotic ISM/CGM at larger radii.

When \(Q_{\rm therm}\gtrsim 1\), the system is nominally locally "stable" in a formal sense. Still, since the cooling time is short compared to the free-fall time at these radii, we see \(Q_{\rm therm}\) remains modest (not \(\gg 10\), for example) at all but the smallest radii \(\lesssim 0.01\) pc. As a result, one might expect an intermediate gravitoturbulent regime (e.g. Gammie, 2001), and indeed the gas morphology in Fig. 2 appears consistent with this. While fragmentation in the gravitoturbulent regime is not "catastrophic" in the same sense as in the galactic regime (where \(Q_{\rm thermal}\ll 1\) and gas locally fragments on its free-fall time), it could in principle still produce efficient fragmentation if we neglected some of the other effects above (Meru & Bate, 2011a,b; Paardekooper et al., 2011; Meru & Bate, 2012; Hopkins & Christiansen, 2013; Deng et al., 2017; Fletcher et al., 2019; Zier & Springel, 2022). Most importantly, we are in a regime with thermal \(Q_{\rm therm}\gtrsim 1\) but magnetic \(Q_{\rm mag}\gg 1\), i.e. \(\beta\ll 1\). Idealized experiments have shown that this strongly stabilizes gravitoturbulence against fragmentation (even a modest \(\beta\lesssim 1\) is usually sufficient for this, let alone the extremely small values of \(\beta\) we see here; as argued analytically in Lizano et al. 2010; Lin 2014; Jafari 2019 and in simulations in e.g. Riols & Latter 2016, 2018; Forgan et al. 2017). Moreover, the field geometry being toroidal is essentially the "most stable" against local self-gravitational fragmentation. (Footnote: In a pure radial field, the pure radial modes assumed in \(Q\) would not feel \(M_{\rm gas}(<r)\).) And the combined action of gravitoturbulence with these fields can create a dynamo or locally mix/re-order the field lines in a manner which further suppresses local collapse (Deng et al., 2020, 2021).

While the magnetic Jeans mass in and of itself is not a strong determinant of star formation in the same way as the thermal Jeans mass, and the "magnetic \(Q\) parameter" \(Q_{\rm mag}\gg 1\) likewise does not alone formally ensure local stability (Lynden-Bell, 1966), when \(M_{\rm J}^{\rm B}\) exceeds \(M_{\rm gas}(<r)\) (which occurs here at \(\ll 1\) pc), it is equivalent to the statement (for a homogeneous disk or spheroid) that any perturbation of any wavelength/size \(\leq r\) is magnetically sub-critical (has a mass-to-flux ratio sufficiently low that it cannot collapse), i.e. that fragmentation is strongly suppressed (Armitage, 2015).
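A minimal sketch of this sub-criticality comparison, using the Jeans-type mass from § 3.5.1 with \(\sigma\to v_{\rm A}\) (all numbers below are assumed, illustrative stand-ins for sub-pc disk conditions, not values read from the simulation):

```python
import numpy as np

# M_J^B ~ (pi/6) v_A^3 G^(-3/2) rho^(-1/2); if M_J^B > M_gas(<r), then (for a
# homogeneous disk/spheroid) no perturbation of size <= r is supercritical.
G = 6.674e-8                       # cgs
Msun = 1.989e33

def M_jeans(sigma_cms, rho_cgs):
    return (np.pi / 6.0) * sigma_cms**3 * G**-1.5 * rho_cgs**-0.5 / Msun

v_A = 100e5                        # cm/s: v_A ~ 100 km/s (assumed)
rho = 1e8 * 1.67e-24               # g/cm^3: n_H ~ 1e8 cm^-3 (assumed)
M_gas_enclosed = 1e5               # Msun inside ~0.1 pc (assumed)

MJB = M_jeans(v_A, rho)
verdict = "sub-critical everywhere" if MJB > M_gas_enclosed else "supercritical scales exist"
print(f"M_J^B ~ {MJB:.1e} Msun vs M_gas(<r) ~ {M_gas_enclosed:.1e} Msun -> {verdict}")
```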
A second, less important but still non-negligible barrier to fragmentation at these radii is the strong torques, which we see are producing angular momentum loss and inspiral on a timescale of order the orbital time (discussed in detail below, but this can be read directly off from Fig. 8, or inferred from Fig. 7 by simply noting \(\dot{M}_{\rm in}\sim 0.1\,\Sigma_{\rm gas}\,R^{2}\,\Omega\)). Akin to the problem of giant planet formation, which was the focus of many historical gravito-turbulence experiments above (see e.g. Armitage, 2015; Kratter & Lodato, 2016), even if we neglect the magnetic fields, vigorous gravitoturbulence would lead to an initial collapse of a perturbation by a factor of a few in density, at which point (since the cooling time, while shorter than dynamical, is not completely negligible, and the clumps of order the most unstable wavelength in size are optically thick to their cooling radiation) its cooling would proceed more slowly and it would contract quasi-adiabatically on a cooling time - but this must occur before inspiral. For planet formation one might have an inspiral time of millions of orbits in the gas-rich disk; here, one has only a few orbits. As a result, we see that with magnetic fields present (so fragmentation is already suppressed), most of the mildly-overdense clumps that do form (e.g. one evident in Fig. 2) spiral inwards and are tidally sheared out upon reaching smaller radii (or simply accrete into our sink particle SMBH) before they can reach even order-of-magnitude overdensities (let alone become anywhere near dense enough to approach star formation).

It is worth noting that none of the above effects _completely_ eliminate all fragmentation and star formation. We only see that occur on even smaller scales, \(r\ll 0.01\) pc, where the thermal Toomre \(Q_{\rm thermal}\) rises extremely rapidly to values \(\gtrsim 1000\) by \(r\lesssim 0.001\) pc (much more strongly suppressing gravitoturbulence). However, it is sufficient to ensure that the star formation rate and total gas accretion rates onto stars are negligible compared to the gas inflow rates, and therefore that star formation (as well as stellar feedback, at least for the duration of this simulation at highest resolution) plays an essentially negligible role in the global dynamics of the system on \(\ll\) pc scales. More detailed properties of the star formation at these innermost radii, exploring how the rare stars that do form are influenced by their environment (and how their feedback does or does not influence that environment locally), will be studied in Hopkins (2023b) (henceforth Paper III).

### Torques and Inflow Driving at Different Radii

#### 3.6.1 Different Contributions to the Torques

Figure 14.— Angular momentum direction of the gas (\(\cos\theta\equiv\hat{j}_{z}\equiv\hat{\mathbf{j}}\cdot\hat{\mathbf{j}}_{\rm inner}\)), relative to the mean angular momentum direction of the inner disk \(\hat{\mathbf{j}}_{\rm inner}\) (averaged within \(<0.01\) pc), as a function of BH-centric radius \(r\). We clearly see the aligned, dynamically cold disk (\(\cos\theta\approx 1\), with relatively little variation) in the central \(\lesssim 0.1\) pc, with un-aligned, much more spherical/kinematically hot gas (broad distribution of \(\cos\theta\)) at larger radii.

In Fig. 15, we now turn to understanding the torques driving gas inflows in more detail. We first simply plot the actual torques in the simulation. For every gas cell, we calculate the specific torque vector \(\mathbf{\tau}\equiv\mathbf{r}\times\mathbf{a}\), where \(\mathbf{a}\) is the acceleration from various sources (recording the value directly computed in-code), and we consider the component along the existing specific angular momentum direction \(\hat{\mathbf{j}}\), with \(\mathbf{j}\equiv\mathbf{r}\times\mathbf{v}\) (where \(\mathbf{r}\) is defined as the vector distance to the SMBH), and plot this in units of \(V_{c}^{2}=|\mathbf{r}|\,V_{c}(r)\,\Omega(r)\), so that a value of \(\mathbf{\tau}\cdot\hat{\mathbf{j}}=\epsilon\,V_{c}^{2}\) corresponds to the torque removing all of the angular momentum from an initially circular orbit in a time \(\Delta t\approx(\epsilon\,\Omega)^{-1}\). We separately quantify this for the acceleration from MHD forces (the Riemann problem in the code), cosmic ray forces (using the full expressions from Hopkins et al. 2022, which allow for both the tight-coupling and free-streaming limits), radiation forces (likewise allowing for both limits and anisotropic radiation tensors following Hopkins et al. 2020), and gravitational forces.
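A schematic sketch of this per-cell bookkeeping (the actual accelerations are recorded in-code per force solver; here they are just input arrays):

```python
import numpy as np

def torque_efficiency(r, v, a, Vc):
    """eps = (tau . j_hat)/Vc^2 per cell, with tau = r x a and
    j_hat = (r x v)/|r x v|; |eps| ~ 0.1 means the angular momentum of a
    circular orbit is removed in ~ (eps*Omega)^-1."""
    j = np.cross(r, v)
    j_hat = j / np.linalg.norm(j, axis=1, keepdims=True)
    tau = np.cross(r, a)
    return np.einsum("ij,ij->i", tau, j_hat) / Vc**2

# Tiny usage example with made-up cells:
rng = np.random.default_rng(0)
r = rng.normal(size=(5, 3))
v = rng.normal(size=(5, 3))
a = 0.1 * rng.normal(size=(5, 3))
print(torque_efficiency(r, v, a, Vc=1.0))
```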
Closely related to this, we quantify different components of the stress tensor in the code. The momentum equation solved in the simulation can be written \(\partial(\rho\mathbf{v})/\partial t+\nabla\cdot\mathbf{\Pi}^{*}=\mathbf{S}\), where the source term \(\mathbf{S}\) includes e.g. non-hyperbolic terms from radiation and cosmic rays (see references above) and other terms relevant in the weak-coupling limit, and \(\mathbf{\Pi}^{*}\) is the stress tensor, which can be decomposed into

\[\mathbf{\Pi}^{*}\equiv\mathbf{\Pi}_{\rm internal}+\mathbf{\Pi}_{\rm grav}=\mathbf{\Pi}_{\rm kin}+\mathbf{\Pi}_{\rm mag}+\mathbf{\Pi}_{\rm therm}+\mathbf{\Pi}_{\rm visc}+\mathbf{\Pi}_{\rm cr}+\mathbf{\Pi}_{\rm rad}+\mathbf{\Pi}_{\rm grav}\,, \tag{1}\]

representing the sum of kinetic (including turbulent) \(\mathbf{\Pi}_{\rm kin}\), magnetic \(\mathbf{\Pi}_{\rm mag}\), thermal \(\mathbf{\Pi}_{\rm therm}\), viscous \(\mathbf{\Pi}_{\rm visc}\), cosmic ray \(\mathbf{\Pi}_{\rm cr}\), and radiation \(\mathbf{\Pi}_{\rm rad}\) stress tensors constituting the usual "total stress tensor" \(\mathbf{\Pi}=\mathbf{\Pi}_{\rm internal}\), plus gravitational \(\mathbf{\Pi}_{\rm grav}\) forces.

Figure 15.— _Top Left:_ Instantaneous (gas mass-weighted) mean torques \(\tau\) in the direction of the mean angular momentum vector \(\hat{\mathbf{j}}\) within each radial annulus (normalized to \(r\,V_{c}\,\Omega=V_{c}^{2}\)) in shells as a function of distance \(r\). We restrict to cool gas with \(T<10^{4.5}\,\)K, but the trends are qualitatively similar regardless. The shaded region shows the \(\sim 90\%\) inclusion interval. We plot the torques directly from the simulation from gravitational forces, radiation pressure forces, and MHD (magnetic, thermal, turbulent) forces. At all radii torques are efficient, implying angular momentum loss in a few orbital times. At large radii gravitational torques from stars on gas dominate. Where there are few stars, magnetic and Reynolds torques take over. _Top Right:_ Fractional (Fourier) mode amplitudes of asymmetric modes in the face-on projected gas surface density \(\Sigma(R,\phi)\equiv\Sigma_{0}\,(1+\sum_{m}a_{m}\cos\,(m\,\phi+\phi_{0,\,m}))\) in cylindrical annuli. At radii \(\gtrsim 0.1\,\)pc there are order-unity asymmetries, often dominated by the lowest-\(m\) modes (global asymmetries rather than just small-scale clumping). At small, increasingly Keplerian radii \(|a_{m}|\) decreases, but only modestly, to \(\sim 0.1\) at \(\lesssim 10^{-3}\,\)pc. _Bottom Left:_ Profile of different (volume-weighted) mean components of the magnetic stress tensor \(\mathbf{\Pi}^{\rm mag}\equiv(1/4\pi)\,\left(|\mathbf{B}|^{2}\,\mathbf{I}/2-\mathbf{B}\mathbf{B}\right)\), in spherical coordinates, versus radius. We normalize to the mean value of the total stress tensor \(\mathbf{\Pi}\) at each radius. Solid (_dotted_) lines correspond to \(\mathbf{\Pi}>0\) (\(\mathbf{\Pi}<0\)). _Bottom Right:_ Same, for the (total) kinetic stress \(\mathbf{\Pi}^{\rm kin}\equiv\rho\,\mathbf{v}\,\mathbf{v}\) (note this is distinct from e.g. a Reynolds stress).
These terms are defined as:

\[\mathbf{\Pi}_{\rm kin}\equiv\rho\,\mathbf{v}\mathbf{v}\,, \tag{2}\]
\[\mathbf{\Pi}_{\rm rad}\equiv\int\frac{e_{{\rm rad},\,\nu}}{3}\,\mathbb{D}_{\nu}\,{\rm d}\nu\,, \tag{3}\]
\[\mathbf{\Pi}_{\rm therm}\equiv P_{\rm therm}\,\mathbf{I}\equiv n\,k_{\rm B}\,T\,\mathbf{I}\,, \tag{4}\]
\[\mathbf{\Pi}_{\rm mag}\equiv\mathbf{\Pi}_{\rm B,\,pressure}+\mathbf{\Pi}_{\rm B,\,tension}\equiv\frac{\mathbf{B}\cdot\mathbf{B}}{8\pi}\,\mathbf{I}-\frac{\mathbf{B}\mathbf{B}}{4\pi}\,, \tag{5}\]
\[\mathbf{\Pi}_{\rm visc}\equiv\frac{\nu_{\rm visc}}{3}\,\left(3\hat{\mathbf{B}}\hat{\mathbf{B}}-\mathbf{I}\right)\left(3\hat{\mathbf{B}}\hat{\mathbf{B}}-\mathbf{I}\right):\left(\nabla\mathbf{v}\right)\,, \tag{6}\]
\[\mathbf{\Pi}_{\rm cr}\equiv\int\mathbf{p}_{\rm cr}\,\mathbf{v}_{\rm cr}(\mathbf{p}_{\rm cr})\,f_{\rm cr}(\mathbf{p}_{\rm cr})\,d^{3}\mathbf{p}_{\rm cr}\,, \tag{7}\]
\[\mathbf{\Pi}_{\rm grav}\equiv\frac{1}{4\pi\,G}\,\left(\mathbf{g}\mathbf{g}-\frac{\mathbf{g}\cdot\mathbf{g}}{2}\,\mathbf{I}\right) \tag{8}\]

(with \(\mathbf{g}\equiv-\nabla\Phi_{\rm grav}\)).

In order to better understand the origin of the gravitational and kinetic stresses in the disk plane in particular, it is also helpful to quantify the degree of non-axisymmetry of the system. Noting that the total surface mass density within each cylindrical annulus \(R\) can be Fourier decomposed into \(\Sigma_{\rm tot}(R,\phi)=\left\langle\Sigma_{\rm tot}(R)\right\rangle\,\left[1+\sum_{m=1}^{\infty}\,a_{m}(R)\,\cos\,(m\,\left[\phi-\phi_{0,\,m}(R)\right])\right]\), we extract the coefficients \(|a_{m}(R)|\), and plot the first few coefficients \(a_{m}\) as a measure of the global asymmetry. The behavior is similar for higher-\(m\) modes, but \(m=1\) is most relevant for linear global gravitational instabilities interior to the BHROI (see e.g. Hopkins & Quataert 2010, 2011; Hopkins et al. 2009).
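A minimal sketch of this azimuthal mode extraction (illustrative only; the binning and weighting choices in the analysis pipeline may differ):

```python
import numpy as np

def mode_amplitudes(mass, phi, m_max=4):
    """Given cell masses and azimuthal angles phi in one cylindrical annulus,
    estimate |a_m| in Sigma(R,phi) = <Sigma>[1 + sum_m a_m cos(m[phi-phi_0m])]
    from the discrete Fourier sums."""
    total = mass.sum()
    amps = {}
    for m in range(1, m_max + 1):
        c = np.sum(mass * np.cos(m * phi)) / total
        s = np.sum(mass * np.sin(m * phi)) / total
        amps[m] = 2.0 * np.hypot(c, s)   # factor 2 from the cosine normalization
    return amps

# Usage: a pure m=1 lopsided annulus should return |a_1| ~ its input amplitude.
phi = np.random.default_rng(1).uniform(0, 2 * np.pi, 200000)
mass = 1.0 + 0.5 * np.cos(phi)           # inject a_1 = 0.5
print(mode_amplitudes(mass, phi, 2))     # expect {1: ~0.5, 2: ~0}
```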
The first thing to note is that the torques are large, in a dimensionless sense, \(|\mathbf{\tau}\cdot\hat{\mathbf{j}}|\sim 0.1\,V_{c}^{2}\), i.e. the timescale for angular momentum loss of an initially-circular orbit is just a couple of orbital times (\(t_{\rm orbit}=2\pi/\Omega\)). This is expected from the very large \(\dot{M}_{\rm in}\) in Fig. 7 and shown in Fig. 8: we anticipate \(\dot{M}_{\rm in}\sim M_{\rm gas}(r)\,|\mathbf{\tau}\cdot\hat{\mathbf{j}}|/(r\,V_{c})\sim\left(|\mathbf{\tau}\cdot\hat{\mathbf{j}}|/V_{c}^{2}\right)\pi\,\Sigma_{\rm gas}\,r\,V_{c}\sim 10-100\,M_{\odot}\,\mathrm{yr}^{-1}\) at these radii (inserting typical values from Fig. 7 & Fig. 15 in the final evaluation). This means that accretion is fundamentally _dynamical_ here, occurring on of order the dynamical time, as opposed to a slow, secular, viscous-type process as often assumed for much lower accretion-rate systems.

From a cursory examination of the components of \(\mathbf{\Pi}^{*}\) extracted directly from the simulation, or from the torques in Fig. 15 (where various torques fall below the plotted range), or our discussion above of relevant physics and scalings, it is easy to confirm that the physical viscosity (\(\mathbf{\Pi}_{\rm visc}\)), cosmic ray (\(\mathbf{\Pi}_{\rm cr}\)), radiation (\(\mathbf{\Pi}_{\rm rad}\)), and pure-thermal (isotropic by definition, \(\mathbf{\Pi}_{\rm therm}\)) terms in the stress tensor contribute negligibly to the torques at essentially all radii modeled here. There are really only three contributions of broad importance: the "gravitational torque" (\(\mathbf{r}\times\mathbf{g}\)), and the MHD torques arising from a combination of magnetic (\(\mathbf{\Pi}_{\rm mag}\)) and kinetic or Reynolds-like (\(\mathbf{\Pi}_{\rm kin}\)) stresses.

#### 3.6.2 The Gravitational & "Stellar Feedback" Torques

On scales \(\gtrsim\) pc, we see that gravitational torques are important (even if not always dominant) for the dynamics of angular momentum exchange, with large-amplitude \(|a_{m}|\sim\mathcal{O}(1)\) asymmetries (obvious in the visual morphology of gas and stars) producing strong torques. This is studied on these scales in much greater detail in Angles-Alcazar et al. (2021), who show that such torques act most efficiently with gas forced into shocks and dissipation (allowing gas orbits to decay rapidly) via asymmetries in the _stellar_ distribution, which dominates the local mass density (hence the actual masses _exerting_ the torques in Fig. 15). This does mean that even when gravitational torques are dominant on these scales, the MHD torque is generally order-of-magnitude comparable (the torques induce shocks which have comparable amplitude and [usually] opposite sign, as we see). As shown in Angles-Alcazar et al. (2021), phenomena like the sign flip we see in Fig. 15 at \(\sim 0.5-10\,\mathrm{kpc}\) are often transient and can flip back-and-forth, with the time-averaged effect of these torques on these scales being to reduce gas angular momentum. This also agrees with the results of previous "nuclear zoom-in" simulations (Levine et al., 2008; Prieto & Escala, 2016; Prieto et al., 2017; Angles-Alcazar et al., 2021), idealized simulations of small scales in gas-stellar nuclear disks (Hopkins & Quataert, 2010; Hopkins et al., 2016; Williamson et al., 2022), observations of nearby nuclear disks (Lauer et al., 2002; Hopkins & Quataert, 2010; Querejeta et al., 2016), as well as galaxy-scale simulations of galaxy mergers, strong bars, and large clump-type perturbations, which were the first to describe this gravitational torque process as uniquely efficient in mixed (collisional+collisionless) systems (Barnes & Hernquist, 1991, 1996; Hopkins et al., 2009, 2010).

On galactic scales \(\gtrsim\) kpc, we also see MHD torques comparable to (and sometimes dominant over) the gravitational torques; these relate to strong shocks sometimes (as noted above) driven by gravitational motions (e.g. infall/accretion, bar- or clump-induced shocks), but sometimes also due to e.g. strong shocks owing to stellar feedback events (motions that can be traced directly back to e.g. superbubbles and outflows). This is again consistent with previous studies and very similar to the results in e.g. Prieto & Escala (2016); Prieto et al. (2017).
If feedback is "self-regulating" on these scales (e.g. on-average balances gravitational collapse and maintains a super-sonic turbulent \(Q_{\rm turb}\sim 1\)), then it is (by definition) true that the non-circular (or non-hydrostatic) motions generated by such feedback should be comparable to those generated by gravity (and generally have the opposite sign). This is closely related to the question of "what powers the turbulence" in the ISM (gravity or stellar feedback), where self-regulating models necessarily predict that if inflow is balanced by outflow and star formation (as it is here on super-kpc scales; see Figs. 7 & 8) the two should be comparable (Orr et al., 2020).

However, at smaller radii \(\ll\) pc, with star formation efficiently shut down, we see a transition around \(\sim 0.3-0.6\,\mathrm{pc}\) (Fig. 7) from the local mass density and gravitational field being dominated by stars at larger radii, to entirely gas-dominated at smaller radii. This dramatically reduces the efficacy of gravitational torques and (of course) any torques owing directly or indirectly to stellar feedback. As noted above and shown more formally in Hopkins & Quataert (2011), the leading-order gravitational torque arises when one has a two-component system with a collisional, dissipative gas component being acted upon by a dominant collisionless component (e.g. stars). When the collisionless component becomes small, so the gas disk is effectively "one-component," the strength of the torques (averaged over the orbit of some gas parcel) drops dramatically, until it is eventually reduced to the small and higher-order resonant-only contributions given by Kalnajs (1971). The analytic prediction from Hopkins & Quataert (2011) is that the torque drops \(\propto f_{*}\equiv\Sigma_{*}/(\Sigma_{*}+\Sigma_{\rm gas})\) (in terms of the mean gas and stellar mass densities in an annulus of some radius \(r\) from the BH) when \(\Sigma_{*}\) becomes smaller, which agrees well with the trend we see in the "transition region" (comparing Fig. 15 & Fig. 7). Note that this does not mean that the \(m=1\) modes themselves cease; as shown in Fig. 15, they propagate towards small \(r\), albeit with decreasing amplitude. Indeed, Hopkins & Quataert (2010); Hopkins (2010) showed that this lopsided disk mode, if excited at large radius where the disk is marginally self-gravitating, can excite a response at all radii \(r\to 0\). However, the point in Hopkins & Quataert (2011) is that _for a given asymmetric mode amplitude_ \(|a_{m=1}|\), the effective net torque on gas is weaker if \(f_{*}\) is smaller. Moreover, as shown in Hopkins (2010), if the mass profile of the collisionless component ceases to rise sufficiently steeply towards \(r\to 0\) (e.g. for \(\Sigma_{*}\propto r^{-\eta}\) with \(\eta\lesssim 0.5\)), then there is a refraction barrier (akin to an inner Lindblad resonance in many ways) across which the torque switches sign, exactly as we see in Fig. 15.

Footnote 20: As shown in the linear analysis in Hopkins & Quataert (2011b) and standard Solar-system texts (e.g. Murray & Dermott, 2000), the exact rate at which the amplitude of the sign-flipped gravitational torque declines as \(r\to 0\) is partly an artifact of our definitions, as it relates to defining \(\mathbf{r}\) relative to the BH position, but this is not important for our analysis here.
If gravitational torques were the _only_ mechanism for angular momentum transfer, then as speculated in Hopkins & Quataert (2011b), this barrier would create a trap and "pileup" of gas between \(\sim 0.1-1\,\mathrm{pc}\), until it became so dense it would necessarily fragment and form stars, until sufficient stars formed (assuming a large gas supply continued to flow in) to steepen the profile and reverse the barrier (making the stars-on-gas gravitational torque strong again), moving the barrier gradually inwards and building a steep stellar cusp. And this is what appears to happen in the simulations in Angles-Alcazar et al. (2021), as discussed below. However, here we see something at first apparently rather remarkable (though perhaps not on further reflection): at the radii where the gravitational torque becomes highly inefficient, the MHD torques "take over," with the same sign and broadly similar magnitude.

#### 3.6.3 The MHD Torques

In Paper II, we will study the torques in the inner accretion disk in much greater detail, in order to understand the origins of the strong toroidal field, the relative role of the Maxwell versus Reynolds torques, their dominant components, physical origin and ultimate energy sources, and how they dynamically maintain accretion. Here, we simply wish to identify and summarize some basic properties of the "MHD torques" across a broad range of radii.

As noted above, we see in Fig. 15 that at radii \(\sim 1-1000\,\mathrm{pc}\), the MHD torques are generally sub-dominant to gravitational torques, consistent with previous work (§ 6 below). There are occasional, usually transient, exceptions, when e.g. strong shocks are induced in mergers and the shocks instantaneously dominate the torque, or strong feedback events occur - though as discussed above, in such a situation the angular momentum exchange in the shock can itself ultimately be driven/determined by the gravitational forces, so the "labeling" can be somewhat ambiguous (Hopkins & Quataert, 2011b). But to better understand the structure of these torques in either case, we plot the different components (in spherical coordinates centered on the BH) of the magnetic \(\mathbf{\Pi}^{\mathrm{mag}}\) and kinetic \(\mathbf{\Pi}^{\mathrm{kin}}\) stress tensors versus radius in Fig. 15. We normalize the components to the magnitude (Frobenius norm) of the summed internal stress tensor \(\|\mathbf{\Pi}^{\mathrm{internal}}\|\equiv\|\mathbf{\Pi}_{\mathrm{kin}}+\mathbf{\Pi}_{\mathrm{mag}}+\mathbf{\Pi}_{\mathrm{therm}}+\mathbf{\Pi}_{\mathrm{visc}}+\mathbf{\Pi}_{\mathrm{cr}}+\mathbf{\Pi}_{\mathrm{rad}}\|\) (i.e. the total stress ignoring the source terms, so assuming the tight-coupling limit for cosmic rays and radiation, and excluding gravitational forces, so representing the internal forces from the gas).

At all radii, we see the kinetic components sum close to unity, i.e. represent a dominant term in the total (see the footnote below). Other than a small range of radii in the CGM, where the thermal pressure contribution to the stress is comparable to the kinetic (as we showed in Fig. 12, where the turbulence is trans-sonic), the other (non-kinetic, non-magnetic) terms in the stress are generally fractionally small. At large radii \(\gtrsim\) pc, the kinetic terms are quasi-isotropic (with a mild radial bias from inflow/outflow motion), and dominated by random motions (e.g.
\(|\mathbf{\Pi}_{rr}^{\mathrm{kin}}|\equiv\langle\rho\,v_{r}^{2}\rangle\gg\langle\rho\rangle\langle v_{r}\rangle^{2}\)), with the mixed/off-diagonal terms (\(\mathbf{\Pi}_{r\theta}^{\mathrm{kin}}\), \(\mathbf{\Pi}_{r\phi}^{\mathrm{kin}}\), \(\mathbf{\Pi}_{\theta\phi}^{\mathrm{kin}}\)) having essentially random (rapidly alternating) signs as they average out to smaller values than the diagonal terms. All of this is consistent with incoherent motions at a non-negligible fraction of the circular velocity - i.e. roughly as expected from the virial theorem - at similar Mach numbers for the components shown in Fig. 12. At \(\ll 1\,\mathrm{pc}\) we clearly see the ordered disk form: the azimuthal (rotational) component \(\mathbf{\Pi}_{\phi\phi}\) dominates the total stress, this component is itself strongly dominated by its mean/coherent part (\(\mathbf{\Pi}_{\phi\phi}\approx\langle\rho\rangle\langle v_{\phi}\rangle^{2}\)), and the radial and polar terms become sub-dominant by a factor of \(\sim 100\) (implying turbulent/incoherent velocities more like \(\sim 0.1\,V_{\mathrm{c}}\)).

Footnote 2: Note that, as defined here, components can have fractional values \(>1\) if there are other components of similar magnitude with opposite sign. We will analyze these terms to study e.g. the Reynolds stresses within the accretion disk in Paper II, but note that what is plotted here, for the sake of comparing the entire stress tensor and radial range, is not the Reynolds stress. Specifically, components like \(\mathbf{\Pi}_{r\phi}^{\mathrm{kin}}=\langle\rho\,v_{r}\,v_{\phi}\rangle\) are defined as the average of the total values of the relevant velocity components like \(v_{\phi}\), whereas the Reynolds stress is defined in terms of the incoherent components \(\delta v_{\phi}=v_{\phi}-\langle v_{\phi}\rangle\). So the fact that, for example, \(\mathbf{\Pi}_{r\phi}^{\mathrm{kin}}<0\) here at all radii \(\lesssim 1\,\mathrm{pc}\) simply means that there is, at this snapshot in time, net inflow through all radii \(\lesssim\,\mathrm{pc}\) in the disk, because it is dominated by its coherent components \(\mathbf{\Pi}_{r\phi}^{\mathrm{kin}}\sim\langle\rho\rangle\langle v_{r}\rangle\langle v_{\phi}\rangle\) (and \(\langle v_{\phi}\rangle>0\) by definition of our coordinate convention for the inner, rotating disk, while \(\langle v_{r}\rangle<0\) denotes inflow).

For the magnetic stresses \(\mathbf{\Pi}^{\mathrm{mag}}\), we see the corresponding expected behavior: overall, \(\|\mathbf{\Pi}^{\mathrm{mag}}\|\) is a fractionally small contribution to \(\|\mathbf{\Pi}^{\mathrm{internal}}\|\) at large radii \(\gg\,\mathrm{pc}\), where magnetic field effects on the dynamics are small and \(\beta\) is increasingly large, and at radii \(\gtrsim\,\mathrm{pc}\) the magnetic fields are quasi-isotropic/tangled (with again a mild radial bias), with magnitudes relative to velocity consistent with the Alfven Mach numbers in Fig. 12. At sub-pc scales, we see the toroidal magnetic field become dominant, and at \(\lesssim 0.01\,\mathrm{pc}\) it begins to contribute at up to an order-unity level to the total stress, though it is still sub-dominant to rotational support of the disk. This component is dominated by the coherent field, while the others remain dominated by largely incoherent fields (more detailed analysis in Paper II).
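To make the distinction in the footnote above concrete, a schematic sketch (the simulation's averages are taken in radial annuli; here the inputs are just arrays for one annulus):

```python
import numpy as np

def kinetic_vs_reynolds(rho, v_r, v_phi):
    """Compare the 'total' kinetic stress ~ <rho v_r v_phi> with the Reynolds
    stress built from incoherent residuals dv = v - <v> (density-weighted)."""
    w = rho / rho.sum()
    vr_mean, vphi_mean = np.sum(w * v_r), np.sum(w * v_phi)
    total = np.sum(w * v_r * v_phi) * rho.mean()
    reynolds = np.sum(w * (v_r - vr_mean) * (v_phi - vphi_mean)) * rho.mean()
    return total, reynolds

# Example: coherent inflow (v_r < 0) plus rotation gives a large negative
# "total" r-phi stress even when the incoherent (Reynolds) part is tiny:
rng = np.random.default_rng(2)
rho = np.ones(10000)
v_r = -10.0 + rng.normal(0, 1, 10000)     # net inflow, small turbulence
v_phi = 100.0 + rng.normal(0, 1, 10000)   # rotation-dominated
print(kinetic_vs_reynolds(rho, v_r, v_phi))   # total ~ -1000, Reynolds ~ 0
```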
Here the \(r\phi\) component is a more traditional Maxwell stress, though it is dominated by a mix of coherent and incoherent components depending on exactly which radius we analyze; we will study this in detail in Paper II, but for our purposes here we can note (a) the sign is positive, which in the convention here means it is transporting angular momentum outwards and therefore promoting inflow; (b) the fractional magnitude (assuming the disk is orbiting at \(\sim V_{\rm c}\)) is comparable to the values needed to explain the fractional MHD torques and inflow rates in Figs. 15 & 7; and (c) at the smallest radii, the magnitude \(|\Pi_{r\phi}^{\rm mag}|\) is comparable to its kinetic counterpart even with the kinetic term defined in terms of the bulk/coherent components, meaning that the Maxwell stress must be at least comparable to the Reynolds stress (if not larger) at these radii.

Why do these torques appear to smoothly "take over" from the gravitational torques with broadly similar magnitude? This might at first appear to require some sort of "conspiracy," but closer examination of Fig. 15 suggests a more mundane explanation, namely that this is required for continuity/steady-state. First, upon examination of Fig. 15, we see that it is _not_ the case that there is some sort of "exact" boundary-condition matching occurring here. Both the gravitational torque and (especially) the MHD torque have huge fluctuations in magnitude, so the "transition" between one and the other dominating is actually spread out, in a local sense (e.g. considering different narrow annuli in solid angle from the SMBH), over at least an order of magnitude in radius. The transition only appears "narrow" because we follow and plot so many orders of magnitude in radius. Second, this large scatter also means there are large fluctuations in where one or the other dominates outside/inside this radius. Third, we see that even the instantaneous mass-weighted mean specific torque is very clearly not precisely constant as a function of radius, but fluctuates by an order of magnitude: the boundary between the mean torque being MHD-dominated and gravity-dominated is one such example (there is a factor \(\sim 10\) fluctuation in their sum between \(\sim 0.1-10\) pc). It is true that the nearest "peaks" in the mean specific torque on either side of \(\sim 1\) pc happen to have remarkably similar amplitude in this particular snapshot, but comparing other snapshots, even these peaks are only comparable in an order-of-magnitude sense.

With this in mind, it is much easier to understand. Continuity means that if there were a sudden change in the torque efficiency around \(\sim 1\) pc, mass would either "pile up" (or be evacuated), which would, for most reasonable models for the origin of the MHD torques (e.g. those discussed in Paper II in detail) and for the gravitational torque models, lead to a corresponding increase (decrease) in the torques, until \(\dot{M}_{\rm in}\sim\,\)constant with radius was in approximate steady-state. While it is true in principle that a "sharp" discontinuity in the torques could be balanced (for the same \(\dot{M}_{\rm in}\)) by a similar discontinuity in the gas surface density, the fact that the transition is "smeared out" in both space and time by large local fluctuations and turbulence means that this cannot reasonably be self-sustaining (so \(\Sigma_{\rm gas}\) must be smooth, hence the specific torques being smooth).
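A toy restatement of this steady-state argument (illustrative, not from the paper; the efficiencies, rotation curve, and target inflow rate below are all assumed):

```python
import numpy as np

# In steady state, Mdot(r) ~ eps(r) * pi * Sigma(r) * r * Vc(r) must be
# ~ constant with r, so a step in the torque efficiency eps must be
# compensated by an inverse step in Sigma (else mass piles up / evacuates).
r = np.geomspace(0.01, 10.0, 6)              # pc (illustrative annuli)
Vc = 100.0 * r ** -0.5                       # km/s, Keplerian-ish (assumed)
eps = np.where(r < 1.0, 0.1, 0.02)           # |tau.j|/Vc^2 step at 1 pc (assumed)
Mdot_target = 10.0                           # constant inflow rate (arb. units)

Sigma = Mdot_target / (eps * np.pi * r * Vc) # required steady-state Sigma (arb.)
for ri, ei, Si in zip(r, eps, Sigma):
    print(f"r = {ri:7.3f} pc  eps = {ei:.2f}  Sigma (arb) = {Si:9.2f}")
```

In practice the large local fluctuations smear out any such sharp compensating step, which is the sense in which \(\Sigma_{\rm gas}\) (and hence the specific torque) must be smooth.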
In summary, the "transition" being continuous is a statistical, order-of-magnitude statement over a fairly wide range of radii.

## 4. Simulation Without Magnetic Fields

In this section, we compare a simulation run without magnetic fields. We begin from the same initial condition/snapshot used for "zooming in" as our fiducial simulation, at the same time when we would normally begin our hyper-refinement process, but now we remove the magnetic fields. This technically means the "initial condition" for the zoom-in is slightly out-of-equilibrium, but recall, as shown above: (1) it is already a highly non-equilibrium system; (2) on large scales, outside of where the iterative hyper-refinement procedure begins, the magnetic fields are not as dynamically important; and (3) we evolve each hierarchical level of the hyper-refinement several dynamical times before allowing a subsequent level of refinement, so that each level can re-equilibrate, and this is given time to occur in these runs. Altogether, this experiment appears to support all of the statements above regarding the role of magnetic fields.

Figs. 16 & 17 repeat some of our earlier comparisons in e.g. Figs. 2-3 and Figs. 7-15, respectively, but for this simulation without magnetic fields. Unsurprisingly, on large scales (again, outside of where the iterative hyper-refinement procedure begins), the magnetic fields do relatively little to modify the system. Indeed, at all \(r\gg 1\) pc, the morphologies, star formation rates, gas and stellar densities, \(m=1\) mode amplitudes, scale-heights, thermochemical properties of the gas (phase distribution, temperatures, ionization states), and strength of gravitational torques are basically the same as in our "default" simulation with MHD.

But at sub-pc scales, we can immediately see some major _qualitative_ differences. Visually, we can see much stronger fragmentation setting in on scales \(\lesssim 0.1-0.5\) pc. This is also immediately evident in the surface density of star formation, which in the runs with MHD "cuts off" and is strongly suppressed at \(\ll 1\) pc, while without MHD it continues to rise monotonically to small radii. The integrated SFR (averaged over a few dynamical times) inside of \(<\,(0.1,\,1,\,10)\) pc rises from \(\sim(0.1,\,10,\,30)\) M\({}_{\odot}\) yr\({}^{-1}\) with MHD to \(\sim(5,\,120,\,300)\) M\({}_{\odot}\) yr\({}^{-1}\) without. The surface density of stars begins to rapidly rise on small scales as gas is depleted: whereas with MHD we saw gas dominate the local density over stars within \(r\lesssim 0.1\) pc, we now see stellar densities increase rapidly, beginning to dominate after a few tens of dynamical times at all radii \(r\to 0\).

Explaining this rapid enhancement of fragmentation, we see that without magnetic support, the disk \(H/R\) becomes smaller/thinner inside of \(r\ll\) pc, by a factor of \(\sim 10-30\). There is still highly super-sonic turbulence supporting it, but it is well-established that, absent magnetic fields, a disk supported by stronger super-sonic turbulence will actually have more rapid fragmentation (Hopkins, 2012). We see this manifest in much larger \(a_{m}\), especially for \(m\gg 1\), without MHD, as gravo-turbulent fragmentation runs away (boosting the negative gravitational torque at small radii).
We also see that without magnetic fields, the hydrodynamic torques on sub-pc scales are much weaker than the magnetized "MHD torques" - moreover, the sign of the hydrodynamic torques without MHD is actually opposite (they are net moving material outward). Thus, while the stellar densities are still relatively low at \(r\ll\) pc, this creates a "bottleneck" or "pileup" of material at these radii, which further assists the runaway fragmentation. We also show in Paper III that this leads to an even more top-heavy stellar IMF at these radii. The inefficient hydrodynamic torques without magnetic fields, coupled to efficient fragmentation, mean that the inflow rate to the BH is strongly suppressed at \(\ll\) pc radii (especially at the smallest radii we follow, \(\sim 10^{-3}\) pc). The _net_ inflow rate into our central resolution element - i.e. the total mass growth rate of the sink interior to \(<80\) au - is lower by a factor of \(\sim 200-300\) over the duration of the more limited no-MHD run. So while still non-zero owing to some transient and non-spherically-homogeneous structure (and indeed still fairly large in an absolute sense), the accretion rates without MHD are dramatically suppressed relative to those with magnetic fields present, at least until a much larger stellar density is able to build up (beyond the duration of our simulation to explore). We will study the more detailed consequences for the disk at \(\ll\) pc scales in this simulation in Paper II.

## 5. Scales Where Different Simulation Physics "Ingredients" Become Important

From the above, we can refer back to Table 1 and review the major physical "ingredients" in these simulations, in order to discuss where each plays a crucial role in determining the _dynamics_ of the system. We emphasize this because, of course, certain physics will always be important by definition if one is interested in them for their own sake, or for their influence on certain observations (e.g. one could have dynamically negligible magnetic fields, but they would still be "important" to predict Zeeman observations). This is summarized in Table 2 and Fig. 18.

### Gravity & Collisional vs. Collisionless Dynamics

The importance of gravitational dynamics is self-evident. At radii \(\gtrsim 0.01\) pc, self-gravity is essential to follow the formation of the galaxy, inflows, feedback, and (especially crucial, even in idealized simulations of a "patch" of this medium) fragmentation to form multi-phase ISM structure and stars. Without this, no meaningful predictions for inflow rates to the SMBH can be made, since fragmentation and star formation are the primary "competitors" with inflow in determining whether or not gas can actually reach the BH (not to mention how they qualitatively change much of the dynamics). The presence of stars (and at larger radii, dark matter) also means one must be able to integrate collisional+collisionless systems simultaneously. At smaller radii, where star formation has ceased and the potential is dominated by the SMBH, accurate gravitational orbit integration is obviously necessary: certain numerical methods, for example, cannot accurately integrate warped or precessing disks, or nearly-Keplerian cold disks, for many orbits before spurious numerical torques or "grid alignment effects" (in e.g.
fixed-mesh codes or many smoothed-particle hydrodynamics methods) will destroy or artificially grid-align the disks (for extensive discussion of this, and validation of the methods here in test problems, see Gaburov & Nitadori, 2011; Hopkins, 2015; Zhu & Li, 2016; Deng et al., 2017, 2020, 2021; Deng & Ogilvie, 2022; Hubber et al., 2018; Fletcher et al., 2019; Bonetti et al., 2020; Yamamoto et al., 2021; Franchini et al., 2022; Bortolas et al., 2022). Here our high-order Hermite integrator provides the ability to, for example, reasonably integrate a hard stellar binary for \(\gg 10^{5}\) orbital times in a strong tidal field (Grudic et al., 2021), much longer than necessary given the duration that we actually run our simulations at their highest refinement level.

Figure 16.— Images of a re-run of our fiducial simulation _without_ magnetic fields (§ 4). We show gas face-on (_top_) and edge-on (_bottom_) as in Figs. 2-13. We overlay the star particles which form (blue points) to emphasize that the extreme "clumpiness" in the gas is a real effect: these are collapsing dense gas clouds which rapidly form stars and produce runaway star formation on sub-pc scales. A much smaller (spatially and in mass/surface density) inner non-star-forming disk remains, but it is truncated at the radii where the _thermal-only_ Toomre \(Q\) parameter falls below \(Q\ll 10\) (\(\sim 0.01\) pc; see Fig. 12).

Figure 17.— Radial profiles as Figs. 7-15, for the re-simulation without magnetic fields (§ 4). We specifically compare the inflow/outflow/SF rates (_top-left_) and surface densities (_top-right_) as Fig. 7; characteristic fragmentation scales (_middle-left_) and scale-heights (_middle-right_) as Fig. 12; and non-axisymmetric mode amplitudes (_bottom-left_) and torques (_bottom-right_) as Fig. 15. Consistent with the morphology in Fig. 16, we see that fragmentation and star formation proceed much more rapidly at sub-pc scales, without magnetic fields to resist gravito-turbulent/Jeans fragmentation or to support a thicker (higher \(H/R\)), lower-density disk. The SFR density rises monotonically to \(r\to 0\), and the total SFR in the last dynamical time exceeds the inflow rate at all radii \(\gtrsim 1\) pc. Meanwhile, the MHD torques are much weaker (and have the opposite sign from that required for accretion) at \(\ll 1\) pc, so we actually see net outflow (a decretion disk), with only small episodes of accretion of clumps of gas, in the disk at \(\lesssim 0.01\) pc. Note the "peak" in inflow-outflow, with the two nearly identical at \(\sim 0.01\) pc, arises owing to coherent eccentric motion from the obvious large lopsided disk mode (large \(m=1\) mode at small \(r\)). Together this reduces the total accretion rate (over the duration of this test) into the accretion disk at \(R\lesssim 10^{-3}\) pc by a factor of \(\gg 100\), and produces runaway fragmentation and star formation at radii \(\lesssim\) pc. More details of the disk structure are contrasted in Paper II.

But more importantly, we see that self-gravity is not negligible even at radii \(\sim 1000\,R_{\rm g}\). The spiral arms and \(m=1\) modes seen plainly in Fig. 2 and discussed above can play an important role in the dynamics even for a gaseous disk-to-BH mass ratio \(M_{\rm disk}(<r)/M_{\rm BH}\ll 1\).
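For readers unfamiliar with the class of integrator referenced above, here is a minimal sketch of a 4th-order Hermite predictor-corrector step (a schematic toy, not the production implementation; softening, block timesteps, and regularization are omitted):

```python
import numpy as np

def accel_jerk(x, v, m, eps2=1e-12):
    """Pairwise Newtonian accelerations and jerks (G = 1)."""
    n = len(m)
    a = np.zeros_like(x); j = np.zeros_like(x)
    for i in range(n):
        dx = x - x[i]; dv = v - v[i]
        r2 = np.einsum("kd,kd->k", dx, dx) + eps2
        r2[i] = 1.0                       # avoid divide-by-zero on self term
        r3 = r2 ** 1.5
        rv = np.einsum("kd,kd->k", dx, dv)
        w = m / r3; w[i] = 0.0            # zero out self-interaction
        a[i] = np.sum(w[:, None] * dx, axis=0)
        j[i] = np.sum(w[:, None] * (dv - 3.0 * (rv / r2)[:, None] * dx), axis=0)
    return a, j

def hermite_step(x, v, m, dt):
    a0, j0 = accel_jerk(x, v, m)
    # predict positions/velocities with a Taylor expansion through the jerk
    xp = x + v * dt + a0 * dt**2 / 2 + j0 * dt**3 / 6
    vp = v + a0 * dt + j0 * dt**2 / 2
    a1, j1 = accel_jerk(xp, vp, m)
    # standard Hermite corrector
    v_new = v + (a0 + a1) * dt / 2 + (j0 - j1) * dt**2 / 12
    x_new = x + (v + v_new) * dt / 2 + (a0 - a1) * dt**2 / 12
    return x_new, v_new

# Usage: a circular binary (G = 1, total mass 1, separation 1, period 2*pi)
# should hold its separation to high accuracy over an orbit:
x = np.array([[-0.5, 0, 0], [0.5, 0, 0]], dtype=float)
v = np.array([[0, -0.5, 0], [0, 0.5, 0]], dtype=float)
m = np.array([0.5, 0.5])
for _ in range(6283):                     # ~1 orbit with dt = 1e-3
    x, v = hermite_step(x, v, m, 1e-3)
print(np.linalg.norm(x[0] - x[1]))        # stays ~ 1
```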
### Magnetic Fields

As expected based on most previous studies on galactic scales, at \(\gtrsim 100\,\)pc magnetic fields play a relatively minimal role in the dynamics or gas thermodynamics (Kim & Ostriker, 2015; Su et al., 2017, 2018, 2019; Hopkins et al., 2020; Ji et al., 2020; Steinwandel et al., 2019, 2022; Martin-Alvarez et al., 2021; Ponnada et al., 2022; Whitworth et al., 2022). Even on scales from \(\sim 1\,\)pc to \(\sim 100\,\)pc, we see no evidence that the magnetic fields play a major role in the overall gas dynamics, and re-starting our simulation and re-running for \(\sim 100\) dynamical times on these scales without magnetic fields (without refining down to \(\ll 1\) pc scales) produces no major qualitative differences in our predictions for these scales, despite a plasma \(\beta\ll 1\). This is also expected, based on many previous studies of magnetic fields in the cold, neutral ISM on similar scales, where despite \(\beta\ll 1\) (because the gas is thermally cold), the magnetic pressure is still sub-dominant to other forms of pressure such as the "turbulent pressure" or (in the outer ISM) cosmic ray pressure (Federrath et al., 2014; Martin-Alvarez et al., 2018, 2022; Guszejnov et al., 2020, 2022; Hopkins et al., 2020, 2022; Grudic et al., 2022; Seta & Federrath, 2022). Equivalently, we see above that the turbulence is still super-Alfvenic on these scales. In such situations the magnetic fields largely "passively" trace the local gas dynamics, rather than controlling it. There can of course still be indirect effects via smaller-scale dynamics (e.g. magnetic fields modifying the IMF, which in turn modifies feedback; see Guszejnov et al., 2020, 2022 and references therein). On smaller scales, however, we clearly see a reversal of this situation: the magnetic field strengths continue to grow, magnetic pressure dominates the vertical disk support and torques (discussed in greater detail in Paper II), the turbulence becomes trans- or sub-Alfvenic, and so magnetic fields become essential to the dynamics.

The role of non-ideal effects is more minor. Though formally included, atomic & molecular viscosities and conductivities are everywhere negligible compared to numerical diffusion (and other physical processes), as expected. Anisotropic Braginskii viscosity and conductivity, given their strong temperature dependence, are only expected to be important in the most diffuse, hot phases of the CGM and ISM, and even there have relatively small effects (Su et al., 2017), so while included here, we do not expect our conclusions to change if they were excluded (and we see the viscous stress tensor is almost always relatively small). In highly-neutral gas, we have re-run our simulation briefly turning on and off Ohmic resistivity, the Hall effect, and ambipolar diffusion in turn, to examine their relative importance. A conservative estimate of this comes from comparing the relevant timescale or timestep \(\propto\lambda^{2}/\eta_{i}\) for process \(i\) on scale \(\lambda\) to the other code timescales/timesteps (for e.g. other diffusion processes, sound-crossing, Alfven-wave crossing, etc.). We see that Ohmic resistivity is never dominant given the densities and ionization fractions we resolve. Ambipolar diffusion can be the most important of these three non-ideal terms in the least-dense but still overwhelmingly-neutral phases of gas (e.g.
ISM-like molecular phases), but we see that including or excluding it has almost no effect on the global dynamics in the simulation, because even in the regions where it dominates over the Hall and Ohmic terms, the ambipolar diffusion time is almost always much longer (often by orders of magnitude) than other transport process timescales, such as the turbulent dissipation/reconnection timescale (\(\mathcal{O}(\lambda/v_{\rm turb})\)), as seen in most modern idealized simulations of protostellar core collapse (Chen & Ostriker, 2014; Wurster et al., 2021) and studies of individual GMCs with realistic star formation and feedback (Mac Low & Klessen, 2004; Vazquez-Semadeni et al., 2011; Sadanari et al., 2022). The Hall term is most potentially interesting: we do see a regime where the Hall term is dominant among non-ideal MHD effects _and_ where the relevant timescale is shorter than other resolved timescales. This specifically occurs in extremely-dense gas forming protostellar disks at our highest resolution level (\(\sim 10\,\)au distance from sink particles) in the star-forming disk (e.g. mostly at \(\gtrsim 1\,\)pc), where the local densities are \(\gg 10^{12}\,\)cm\({}^{-3}\) (about a million times higher than the average at those radii; Fig. 7) and the ion fractions can (locally) become extremely small (typically \(x_{i}\equiv n_{\rm ion}/n_{\rm neutral}\sim 10^{-17}\)). This is not surprising: indeed, studies have shown that Hall effects can be important for the dynamics of proto-stellar disks and planetary disks on these spatial and ionization-fraction scales (Bai & Stone, 2017; Zhao et al., 2020; Lee et al., 2021; Tsukamoto et al., 2017). However, this is clearly not directly important for the global, quasar accretion disk-scale dynamics (the fraction of the total gas mass or volume in such protostellar disks is, as we showed above, negligibly small at these radii). Though the Hall effect could in principle indirectly alter e.g. the IMF of stars if it helps regulate accretion through individual protostellar disks onto the proto-stars themselves via the Hall MRI, it would require much higher resolution (compared to our simulation) within the individual protostellar disks to resolve this. Crucially, within the global quasar accretion disk on sub-pc scales, even though the densities are very large, the warmer temperatures mean that the ionization fractions are vastly larger, \(\sim 0.01\), as shown in Fig. 10. This means that the characteristic timescales for Hall MHD effects within the quasar accretion disk and ISM as a whole are typically \(\sim 11-15\) orders of magnitude longer than the disk dynamical time at the radii we model here, so the term can be safely neglected.

Footnote 24: Given the timestep penalties involved (which come from resolving fast whistler waves in the small number of cells in the \(\sim 10\,\)au protostellar disks at \(\gtrsim 1\,\)pc) and related numerical integration challenges (Marchand et al., 2018), and the fact that the coefficients are extremely uncertain in the regime of greatest interest owing to their sensitivity to the detailed assumptions of the grain chemistry and size distribution (see Tsukamoto & Okuzumi, 2022, for a review), we therefore neglect the Hall term in our default simulation and only include it in these tests run for a shorter time period.
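A sketch of the conservative timescale comparison described above (the diffusivities below are placeholders, not the simulation's values, which depend on the local chemistry and grain physics):

```python
# Compare the diffusion time t_i ~ lambda^2 / eta_i for each non-ideal term
# against e.g. the turbulent crossing time ~ lambda / v_turb on the same scale;
# the shortest timescale identifies the locally dominant transport process.

def dominant_process(lam_cm, v_turb_cms, etas_cm2_s):
    t = {"turbulent": lam_cm / v_turb_cms}
    t.update({name: lam_cm**2 / eta for name, eta in etas_cm2_s.items()})
    return sorted(t.items(), key=lambda kv: kv[1])

# Illustrative numbers for a resolved ~10 au patch (all assumed):
lam = 10 * 1.496e13                 # cm
v_turb = 1e5                        # cm/s (~1 km/s)
etas = {"ohmic": 1e16, "hall": 1e18, "ambipolar": 1e17}   # cm^2/s, placeholders
for name, t_sec in dominant_process(lam, v_turb, etas):
    print(f"{name:10s} t ~ {t_sec/3.156e7:.2e} yr")
```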
### Cosmic Rays

We see fairly minor effects from turning on/off explicit cosmic ray transport, from switching between the simpler sub-grid model from Hopkins et al. (2022) and the more detailed, physically-derived explicit cosmic ray dynamics models developed in Hopkins (2022); Hopkins et al. (2022), or even from simply assuming a uniform cosmic ray background for purposes of the ionization rate calculations. This is expected given the detailed studies of cosmic ray dynamics in e.g. Su et al. (2019, 2020, 2021); Chan et al. (2019, 2022); Hopkins et al. (2020, 2021); Ji et al. (2020, 2021); Buck et al. (2020); Peschken et al. (2022); Martin-Alvarez et al. (2022), as well as observational constraints from starburst galaxies (Lacki et al., 2011; Griffin et al., 2016; Zhang et al., 2019; Heesen, 2021), which show that for starburst systems and massive high-redshift galaxies the cosmic ray energy is lost via catastrophic and Coulomb+ionization interactions on a timescale short compared to other timescales of interest (i.e. the galaxies are approximate proton calorimeters). It is precisely the opposite regime - tenuous CGM/IGM gas around low-redshift dwarf and \(\sim L^{*}\) galaxies - where CRs are seen in the studies above to have the largest effects (and where they are observed to escape from galaxies into the CGM efficiently; see Lacki et al., 2011; Rojas-Bravo & Araya, 2016; Lopez et al., 2018; Persic & Rephaeli, 2022; Butsky et al., 2022).

This is not surprising, based both on the more detailed studies above and on simple analytic considerations. If we assume CRs are injected in the midplane and treat the gas as a uniform slab with tangled magnetic fields, then, assuming a typical scattering rate similar to the constraints from the Solar system (see Hopkins et al., 2022, for details), given the density profiles in Fig. 7, at initial injection radii \(R_{\rm inj}\lesssim 10\,\)kpc catastrophic losses would remove all of the proton energy before propagation to a distance \(\lesssim 0.3\,R_{\rm inj}\). If we further assume the injection is proportional to the SNe rate, taking a steady-state SNe rate proportional to the star formation rate, itself scaling as some efficiency \(\epsilon_{\rm SF}\) per free-fall time, this would produce a steady-state CR energy density in the ISM (again, taking the profiles from Fig. 7) of \(\sim 30\,\)eV\(\,\)cm\({}^{-3}\,(\epsilon_{\rm SF}/0.01)\,\)(kpc/\(R_{\rm inj}\)) - reasonably similar to what we measure in the simulation, and much less than any of the dominant energy densities that we plot at any radii \(r\lesssim 100\,\)kpc in Fig. 10. And this is essentially an upper limit to the CR energy density, as it ignores other loss terms from trapping, denser sub-structure, advection or streaming.
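A quick numerical restatement of that scaling (treating the quoted \(\sim 30\,\)eV cm\({}^{-3}\) normalization from the text as given, with the coefficients folded in):

```python
# Steady-state CR energy density estimate ~ 30 eV/cm^3 (eps_SF/0.01)(kpc/R_inj):
def u_cr_eV_cm3(eps_SF=0.01, R_inj_kpc=1.0):
    return 30.0 * (eps_SF / 0.01) * (1.0 / R_inj_kpc)

for R in [0.3, 1.0, 10.0]:
    print(f"R_inj = {R:5.1f} kpc -> u_CR ~ {u_cr_eV_cm3(R_inj_kpc=R):6.1f} eV/cm^3")
```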
### Radiation Transport & Thermo-Chemistry

Figure 18.— Cartoon illustrating the hierarchy of scales as Fig. 5, with a heuristic description of the process driving the fastest angular momentum loss on each scale. We show the image from our simulation, size scale, and descriptor as in Fig. 5, together with a list of characteristic processes that drive angular momentum loss on these scales in a "slow" or "secular" fashion (timescales much longer than the dynamical time) or in a "fast" or "dynamical" fashion (timescales of order dynamical times). See discussion in § 5-6. Illustrations of numerical simulations for each "fast" scale/process are shown, taken from simulations first presented in Hopkins & Quataert (2010); Hopkins et al. (2014); Torrey et al. (2017) for \(\gtrsim\) pc scales and from the simulations here on sub-pc scales.

Clearly _some_ cooling physics is important on all scales we simulate, but again the nature of that cooling and the role of radiation changes with scale. On large scales \(\gtrsim 10\,\)pc, the cooling can be well-approximated as optically-thin (the usual approximation in galaxy formation simulations), although the actual chemistry can be enormously complex (e.g. the models here account for a huge range of atomic, molecular, ionization, dust and other processes, with a multi-band radiation background, interactions with cosmic rays, non-equilibrium photo-chemistry, etc.). On these scales radiation is important for determining self-consistent ionization, photo-heating, and radiation pressure dynamics from stars within the galaxy, but its global effects can be captured reasonably well by simple approximations such as the LEBRON method (see Hopkins et al., 2020), and the gas cooling radiation itself can be largely neglected in the dynamics. On the smallest scales we resolve, cooling is still important - in fact the disk has a cooling time short compared to its dynamical time even at the smallest scales we resolve (discussed in more detail in Paper II), so one cannot simply approximate the disk as strictly adiabatic. But the chemistry becomes substantially less complex as dust is sublimated, molecules dissociated, and in general the system becomes more and more locally black-body-like (and eventually at sufficiently small radii in the accretion disk the medium will be largely ionized, with chemistry relevant for second-order [though still potentially important] effects like metal line absorption, see e.g. Proga, 2007; Jiang et al., 2016 and references therein). In this regime radiation is dominated by the cooling radiation itself, although it can increasingly be approximated via simple black-body or gray-body approximations and the photon mean-free paths become short, so methods like flux-limited diffusion or other even simpler analytic radiation treatments may be valid (as in e.g. Thompson et al., 2005; Rafikov, 2007; Derdzinski and Mayer, 2022), and one could even approximate the disk as e.g. locally isothermal (or following some effective adiabatic index, with a mean temperature that depends on the distance from the BH, as is common in e.g. shearing-box simulations). The complexity is maximized "in between," here from radii \(\sim 0.01-10\,\)pc (or more generally, given how this should depend on the opacities, from surface densities \(\Sigma_{\rm gas}\sim 10^{4}-10^{7}\,\mathrm{M}_{\odot}\,\mathrm{pc}^{-2}\)). Here the system is optically thick to its cooling radiation, but not so optically thick that one can treat the radiation/dust/gas temperatures as in strict local thermodynamic equilibrium (LTE); more complex species such as dust, atoms, and molecules are all present in varying abundances (also not in LTE); and processes such as H\({}^{-}\) Kramers opacity - often neglected in _both_ galaxy-scale simulations and accretion disk simulations - can dominate the opacities (this is dominant over a significant fraction of the dynamic range at \(\ll 1\,\)pc, given the high atomic abundance and relatively high free electron fraction producing significant H\({}^{-}\)).
Notably, the opacities would be orders-of-magnitude incorrect if we simply ignored dust destruction or neglected H\({}^{-}\) or detailed gas-phase opacities, and the temperature and cooling rates would also be orders-of-magnitude incorrect if we simply assumed LTE and that all (radiation/dust/gas) temperatures were in equilibrium.25 Above, we discuss how this produces some substantial differences (most notably, shutting down star formation) compared to previous studies which treated the transition region more simply. Footnote 25: Notably, Derdzinski and Mayer (2022) do point out the importance of these opacity terms in quasar accretion disks at broadly similar radii, though the parameter space they explore is rather distinct from that here, and they adopt a simpler analytic disk model with opacity fitting functions calibrated for proto-stellar disks in Lin and Papaloizou (1985); Bell and Lin (1994). However, direct comparison with e.g. their Table 1 or Figures 1-3 shows that the explicit chemical network and non-equilibrium dynamics evolved in our simulation – important for capturing conditions in the sub-pc AGN disk that are _not_ analogous to protoplanetary disks at a similar density and temperature (e.g. much shorter dynamical times, stronger radiation and cosmic ray fields, sublimation of dust grains, higher accretion rates) – can produce order-of-magnitude quantitative differences in the detailed opacities.

### Star Formation (Sink Particle and Unresolved) & Stellar Feedback

Star formation is clearly important on scales \(\sim 0.1-10^{4}\,\)pc. On much larger CGM/IGM scales we do not expect star formation to occur given the low densities; on much smaller scales we self-consistently see it suppressed, so it can again be neglected. On scales \(\gg\) pc, we see that the characteristic "fragmentation mass" \(\sim\Sigma_{\rm gas}\,H_{\rm gas}^{2}\sim\Sigma_{\rm gas}\,(\delta v/\Omega)^{2}\) expected in any turbulent fragmentation cascade (Hopkins, 2012) (akin to the "Toomre mass" for a marginally-stable disk) is \(\gg 10^{4}\,\)M\({}_{\odot}\), so the stellar IMF should be well-sampled by the typical fragmenting clouds containing most of the mass and most of the star formation (Evans, 1999; Vazquez-Semadeni et al., 2003; Hopkins, 2012, 2013; Grudic and Hopkins, 2019); a rough numerical illustration of this scaling is given below. This means on these scales, approximating the star formation via "galactic-type" models, wherein one simply seeks to identify fragmenting sub-regions and, from them, statistically samples some IMF, is reasonable. Of course, one could always imagine "zooming in" to some sub-region (e.g. an individual GMC) on these large scales and studying it with resolved star formation models as in STARFORGE, but this would be more akin to standard studies of star formation in isolated clouds in the typical ISM (Guszejnov et al., 2021; Grudic et al., 2021; Guszejnov et al., 2022) and is not strictly necessary for recovery of the large-scale dynamics (though it could of course be indirectly important via calibration of the "correct" IMF to use and other related properties of the stars themselves). "Resolved" star formation physics is therefore strictly necessary over a relatively narrow range of intermediate radii, here primarily from \(\sim 0.1-1\,\)pc. Still, it plays a crucial role in the simulation here by actually allowing us to _validate_ that SF should indeed cease at \(\ll 0.1\,\)pc.
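As promised above, here is a rough numerical illustration of the fragmentation-mass scaling \(M_{\rm frag}\sim\Sigma_{\rm gas}\,(\delta v/\Omega)^{2}\); all input values below are illustrative placeholders chosen only to contrast the \(\gg\) pc and sub-pc regimes, not measurements from the simulation.

```python
import numpy as np

# Rough illustration of the turbulent "fragmentation mass"
# M_frag ~ Sigma_gas * (delta_v / Omega)^2 discussed above.
# All inputs are illustrative placeholders, not simulation outputs.

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

def fragmentation_mass(sigma_gas, delta_v, m_enc, r):
    """M_frag ~ Sigma_gas * (delta_v / Omega)^2 with Omega = sqrt(G M_enc / r^3).
    sigma_gas in M_sun/pc^2, delta_v in km/s, m_enc in M_sun, r in pc."""
    omega = np.sqrt(G * m_enc / r**3)  # (km/s)/pc
    return sigma_gas * (delta_v / omega) ** 2  # M_sun

# Galactic-disk-like patch (~kpc) vs. a nuclear patch (~0.3 pc):
print(fragmentation_mass(1e3, 50.0, 1e10, 1e3))  # ~6e4 M_sun: IMF well-sampled
print(fragmentation_mass(1e5, 40.0, 1e7, 0.3))   # ~1e2 M_sun: IMF not well-sampled
```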
The galactic-type models cannot truly self-consistently predict this, since the SFR for a small "patch" of the ISM with certain properties is assumed, not self-consistently resolved. On these scales the characteristic upper limit of the fragmentation mass is still relatively large, \(\gtrsim 100\,M_{\odot}\), but not so massive that a well-sampled IMF can be assumed. But the increasingly warm thermal conditions of the gas and its magnetic support mean that the sites of individual star formation are highly constrained, as we discuss above and in more detail in Paper III.

## 6 Comparison to Previous Results

### Galactic Scales (\(\gtrsim 100\,\)pc)

As noted in § 5, on relatively large scales \(\gtrsim 100\,\)pc, which one could reasonably call "galactic," our results are broadly consistent with previous FIRE studies of massive, high-redshift galaxies. But it is worth reiterating some basic conclusions (some of which are summarized in Fig. 18): (1) the galaxies are very much not in steady-state or equilibrium, with large clumps, mergers, and feedback-driven perturbations to the potential of order unity (Ma et al., 2015, 2018; Oklopčić et al., 2017; Price et al., 2017); (2) they feature prominent "cold flows" in the halo contributing substantial "cold mode" accretion onto the galaxy (Feldmann et al., 2016; Faucher-Giguere et al., 2015, 2016; Sravan et al., 2016; Chen et al., 2020); (3) "gravitational torques" play a key role in the dynamics of angular momentum exchange, with the gas predominantly being forced into shocks and dissipation by asymmetries in the stars (Angles-Alcazar et al., 2017; Ma et al., 2021; Trapp et al., 2022); (4) the galactic ISM is highly multi-phase and unstable, with relatively short-lived structures (Orr et al., 2017; Kim et al., 2018; Ma et al., 2020; Smith et al., 2019); (5) the plasma \(\beta\gg 1\) except in the cold-phase ISM, where magnetic pressure is larger than thermal but still order-of-magnitude sub-dominant to turbulent energy densities (so magnetic fields are not dynamically dominant; Su et al., 2017, 2018, 2019; Guszejnov et al., 2017, 2019, 2020; Hopkins et al., 2020); (6) stellar feedback rapidly becomes less efficient above a critical acceleration scale \(\sim(p_{*}/m_{*})\sim 10^{-8}\,\rm{cm\,s^{-2}}\) (corresponding to a total-enclosed-mass effective surface density \(M_{\rm enc}/\pi\,r^{2}\gtrsim 10^{3}\,\rm{M_{\odot}\,pc^{-2}}\); Grudic et al., 2018, 2019, 2022a; Ma et al., 2020; Shi et al., 2021; Byrne et al., 2023); (7) star formation is rapid, but inflows are _dynamical_, with \(\dot{M}\sim M_{\rm gas}\,\Omega\), and co-exist with outflows (as there is no spherical symmetry), so inflows can still out-pace star formation (Sparre et al., 2017; Orr et al., 2021; Flores Velazquez et al., 2021; Ma et al., 2018). There are novel advantages of the simulation here. For one, it includes a variety of physics not included in all previous FIRE studies: e.g. magnetic fields with non-ideal MHD, detailed thermochemical treatments of non-equilibrium chemistry and opacities for the highly optically-thick and/or dust-free regimes, and explicit M1 radiation hydrodynamics (as compared to simpler RHD treatments). For another, it reaches significantly higher resolution than some previous FIRE studies cited above, with mass resolution \(\sim 10^{3}-10^{4}\,\rm{M_{\odot}}\) throughout the galaxy.
This allows us to confirm that, at least in the simulation run up to the time of "hyper-refinement" in the nucleus, these additional physics and numerics improvements do not appear to have a major qualitative effect on any of the conclusions of those previous papers regarding global galaxy properties. The weak effects of these physics on gross properties at large scales had been noted before (Hopkins et al., 2018, 2020, 2022; Su et al., 2017, 2019; Wheeler et al., 2019), but those studies focused on lower-mass systems, so we extend them here. However, the simulation here also has a serious and obvious disadvantage compared to previous studies: we only simulate one case study, and once we turn on hyper-refinement, only simulate it for a very short cosmic time. So studies of galaxy-scale properties are generally better-served by dedicated simulations without hyper-refinement (using e.g. sub-grid models for BH accretion and feedback) that can evolve for longer times. The primary purpose of our including these scales in our simulation here is to generate self-consistent initial and boundary conditions for smaller scales, and of course to inform and refine such sub-grid models for future studies. ### Galactic Nuclei Scales (\(\sim 1-100\,pc\)) On scales \(\sim 1-100\,\rm{pc}\), our conclusions are largely similar to those in the dedicated hyper-refinement experiments in Angles-Alcazar et al. (2021) (themselves in key respects similar to previous studies such as Levine et al., 2008, 2010; Wada et al., 2009; Hopkins & Quataert, 2010; Hopkins et al., 2016; Torrey et al., 2017; Prieto & Escala, 2016; Prieto et al., 2017 or other more idealized nuclear simulations in e.g. Emsellem et al., 2015; Beckmann et al., 2019; Sivasankaran et al., 2022; see Angles-Alcazar et al., 2021 SS 5.1 for a summary). For example (see also Fig. 18): (1) gravitational torques between gas and stars (largely stars at similar radii to the gas) again dominate the accretion physics (even more strongly than on galactic scales); (2) angular momentum support is the key "barrier" to inflows and accretion; the accretion is qualitatively distinct from a radial or Bondi or turbulent accretion problem, and application of Bondi-Hoyle-Lyttleton-type accretion-rate estimators based on the gas properties (as opposed to those which account for effects like gravitational torques, star formation, and stellar feedback; see Wada et al., 2009; Hopkins & Quataert, 2011, 2022) at these scales gives an accretion rate which is typically incorrect by \(\sim 4-8\) orders of magnitude; (3) the ISM is highly multi-phase and unstable and rapidly star-forming, with most of the gas mass at these radii in cold/warm neutral phases, but (4) accretion is dynamical owing to said gravitational torques. The much more detailed physics (MHD both ideal and non-ideal, multi-band explicit RHD, expanded opacities, individual star formation/evolution) and higher resolution (\(\sim 3\) orders-of-magnitude improved) here do lead to some differences of potential importance for observational diagnostics on these scales, but do not change the key qualitative physics of accretion and star formation above. 
Given the prominence of the cold ISM, we see \(\beta\ll 1\), but magnetic fields do not dominate the torques and are sub-dominant to turbulent and "gravitational" pressure, and so do not strongly alter the dynamics on these scales (much like in GMCs and HI filaments in the "normal" ISM of \(z\sim 0\) galaxies; see Su et al., 2017; Martin-Alvarez et al., 2018, 2021, 2022; Guszejnov et al., 2020; Benincasa et al., 2020). The inner galaxy begins to become optically thick to its own cooling radiation as we reach \(\Sigma_{\rm gas}\gtrsim 10^{3}-10^{4}\,\rm{M_{\odot}\,pc^{-2}}\) inside of \(\sim 10-100\,\rm{pc}\), and this does modify the phase structure: there is a large amount of warm molecular gas at \(\sim 10^{3}\,\rm{K}\), the dust temperature rises to \(\sim 100\,\rm{K}\) (well above the CMB temperature at this redshift), and the ISRF becomes strongly dominated by the re-radiated/IR radiation. These are notably similar to observed properties of gas at similar densities in the nuclei of local starburst galaxies such as Arp 220, NGC 6240 and others (Tacconi et al., 1999; Lonsdale et al., 2003; Evans et al., 2006; Iono et al., 2007; Mason et al., 2006; Greve et al., 2009; Ott et al., 2011; Scoville et al., 2017) as well as inferences in high-redshift quasar hosts (Casey et al., 2009; Riechers et al., 2009; Younger et al., 2009; Wang et al., 2008; Izumi et al., 2016; Lelli et al., 2022). In future work, we will investigate their consequences for observables as well as whether they have any impact on the stellar IMF, but for now, they do not appear to qualitatively change the most important conclusions from Angles-Alcazar et al. (2021) regarding accretion. Given this, the primary purpose of our extended physics and resolution on these scales is to (1) test and validate the conclusions of Angles-Alcazar et al. (2021) with more detailed simulations accounting for a range of physics neglected therein; (2) make more accurate predictions for observables and future sub-grid models on these scales; (3) enable exploration of more detailed quantities like the IMF; and (4) provide self-consistent initial and boundary conditions for even smaller scales.

### Approaching The Accretion Disk (\(\sim 0.01-1\,\)pc)

On scales \(\ll 1\,\rm{pc}\), however, we see significant deviations from the behavior seen in Angles-Alcazar et al. (2021) and other similar studies described above (and see also Kawakatu & Wada, 2008; Hopkins & Quataert, 2010a,b; Hopkins et al., 2016; Wada et al., 2009; Schartmann et al., 2010; Hobbs et al., 2011; Izumi et al., 2016; Williamson et al., 2020; Kawakatu et al., 2020). There are several closely-related key qualitative differences. Perhaps most importantly, we see star formation shut down, as the magnetic \(Q_{\rm mag}\gg 1\) and thermal \(Q_{\rm thermal}\gtrsim 1\) as the system becomes more optically-thick and magnetically-dominated. This obviously involves both the MHD and RHD physics (coupled explicitly to the thermo-chemistry) here, as well as a resolved individual-star model which can detect and mass-resolve individual stellar-mass patches that might be collapsing (or not). In contrast, in simulations like Angles-Alcazar et al.
(2021) and almost all the other simulation examples in § 6.2 above, a simple "galaxy-scale" sub-grid star formation prescription was adopted at all scales and magnetic fields were neglected, which meant star formation could not "cease" in this manner, and as a result, continued to be efficient (even growing in efficiency at smaller and smaller scales) at all resolved scales therein. This immediately produces important consequences. With star formation suppressed, we see a transition at \(\lesssim 0.5\,\)pc where the local mass density becomes gas-dominated, and gravitational torques (whose efficiency depends strongly on there being a dominant _collisionless_ component of the local mass density to drive shocks in the gas) become less efficient (and even reverse sign). However, Maxwell and Reynolds torques from the strongly-magnetized, gravito-turbulent disk take over and continue efficient inflow (Fig. 18). The detailed structure of the disk, its turbulence and magnetic fields, their origins, and how they drive accretion, will be the subject of detailed study in Paper II, but depend directly on magnetic fields. Strong \(m=1\) modes persist and a lopsided disk with clear spiral structure forms, and some star formation does occur, which will be studied in Paper III, but the SFR inside these radii is small compared to inflow rates. As noted above, there is a body of work with overlapping physics and results here in historical idealized simulations of nuclear "torus" scales around AGN, such as those in Kawakatu & Wada (2008); Hopkins & Quataert (2010a,b); Hopkins et al. (2016); Wada et al. (2009); Schartmann et al. (2010); Hobbs et al. (2011); Izumi et al. (2016); Williamson et al. (2020); Kawakatu et al. (2020). These studies generally were more akin to Angles-Alcazar et al. (2021) in that they included a more limited range of physics (often, but not always, neglecting MHD and RHD, and in all cases using a much simpler thermo-chemical network compared to that here). But most crucially, these were like Angles-Alcazar et al. (2021) in using idealized star formation prescriptions, with some statistically-averaged star formation rate per free-fall time above some density put in "by hand," as opposed to explicitly resolving individual stars and star formation. As such, their conclusions, like those of Angles-Alcazar et al. (2021) as summarized above, have a great deal in common with ours on larger scales but diverge from ours when star formation shuts down. There has however been some work specifically using simulations designed to resolve individual star formation to predict e.g. the IMF of stars forming in circum-nuclear disks (Nayakshin & Sunyaev, 2005; Nayakshin et al., 2007; Klessen et al., 2007; Hopkins, 2013; Bonnell & Rice, 2008; Alexander et al., 2008; Hobbs & Nayakshin, 2009; Frazer & Heitsch, 2019), to which we will compare in more detail in Paper III. These studies have found some conclusions similar to those here, e.g. that coherent lopsided modes in disks are ubiquitous, as expected from analytic considerations as discussed above (and other conclusions specific to the IMF, see Paper III). But again, these studies usually neglected physics such as magnetic fields and self-consistent radiation hydrodynamics tied to the thermochemistry of molecular/atomic/neutral phases - all crucial, as we argued above, to following the self-consistent suppression of star formation and the transition in structure on these scales.
Even more importantly, these simulations in the past have been much more limited in the dynamic range of scales around the SMBH which they could probe, so they needed to adopt somewhat ad-hoc initial and outer boundary conditions at \(\sim\,\)pc, and therefore could not self-consistently predict the transition between the star-forming region and the accretion disk. Of equal importance, all of those IMF studies referenced above explored parameter space orders-of-magnitude distinct from that here, with much lower gas masses and densities (in most cases because they were designed to specifically understand the sub-pc stellar disk around Sgr A\({}^{*}\), rather than the most luminous quasar environments like our study here). Alternatively, some recent studies have extended accretion disk models "outwards" to these scales (Namekata & Umemura, 2016; Chen et al., 2022) to explore fragmentation. But again, these simulations necessarily focused on a small dynamic range with specific initial/boundary conditions and physics, so were not attempting to link different scales in the same way as we do here. As such, we stress that these different types of intermediate-scale simulations are highly complementary to studies like those here. Our hope is that a study like this provides additional motivation and improved understanding of the necessary initial and boundary conditions and choice of "physics included" in these sorts of idealized, more-restricted-in-scale nuclear simulations in the future.

### Within the Accretion Disk (\(\lesssim 0.01\,\)pc)

By the time we get to the more traditional "accretion disk" scales at \(\lesssim 0.01\,\)pc, star formation is inefficient, so the dominant _physical ingredients_ are broadly similar to those traditionally invoked in AGN accretion disk simulations: ideal MHD and radiation-hydrodynamics in a nearly-Keplerian potential. Still, we see several key _qualitative_ differences between our predictions and the assumptions of the vast majority of the existing quasar accretion disk simulation literature (although some recent work shows striking similarity, see e.g. Kudoh et al., 2020), which will be studied in detail in Paper II, so we only briefly review them here. Most of these have to do with the initial/boundary conditions. A strong \(m=1\) coherent eccentric gas disk mode persists, induced (and propagating inwards) by the asymmetry from large radii (Hopkins, 2010). Self-gravity is not completely negligible and some non-zero star formation persists (Paper III), along with some gravito-turbulence (Paper II). The disk is still predominantly neutral even at these outer radii (though this will change when the disk temperature rises to \(\gg 10^{4}\,\)K at smaller radii) and can still efficiently cool, with a cooling time still less than its dynamical time, so the gas cannot be treated as adiabatic, and the opacities include important contributions from mostly-neutral gas contributors like H\({}^{-}\), usually ignored in accretion disk simulations (which generally, if they follow explicit RHD, assume just some combination of free-free/Compton and metal line opacities). As a result, the turbulence is vigorous: trans-Alfvenic and highly super-sonic. And perhaps most importantly, the disk is strongly magnetized as a result of flux-freezing from the magnetic flux being fed to it from the ISM (as detailed in Paper II), sustaining a "flux-frozen" or "flux-fed" disk with plasma \(\beta\ll 1\), even in the midplane.
Again, the hope is that the simulations here will provide additional motivation for new generations of accretion-disk simulations exploring these rather distinct portions of parameter space.

## 7 Conclusions

We present the first simulation to simultaneously incorporate the galaxy-scale cosmological physics of the FIRE simulations and the small-scale physics of individual star formation and stellar evolution of STARFORGE. This allows us to run a cosmological simulation employing a super-Lagrangian refinement technique to reach \(\sim 10^{-4}\,\)pc resolution in a \(\sim(100\,\)cMpc\()^{3}\) box, i.e. the equivalent of a \(\gtrsim(10^{12})^{3}\) uniform-resolution simulation. More importantly, we incorporate a range of physics including (non-ideal and kinetic) magnetohydrodynamics; self-gravity, star formation, stellar evolution, and (proto)stellar feedback (including jets, main-sequence mass-loss, multi-band radiation, core-collapse and Ia supernovae); explicit multi-band radiation-MHD (with separately evolved dust, gas, and radiation field temperatures/bands); and detailed thermo-chemistry accounting for a huge range of processes and opacities including dust-gas coupling, sublimation, non-equilibrium atomic and molecular chemistry, metal lines, H\({}^{-}\) and others directly coupled to the RMHD solver. This allows us, for the first time, to self-consistently treat _both_ the limits of "traditional" accretion disk simulations and traditional "ISM-scale" simulations and the transition in between, all in the same simulation. Our most notable conclusions from this particular study include:

* Magnetic fields play a key role: We show that magnetic fields are critical for a wide range of effects on sub-pc scales within the accretion disk, ranging from maintaining efficient torques and high inflow rates, to explaining the scale heights and vertical profiles of the disk structure and the outer size/boundary of the accretion disk, to (perhaps most importantly) the suppression of star formation at sub-pc scales. Without magnetic fields, a disk still forms, but it is an order of magnitude or more smaller in spatial scale and mass, and produces accretion rates into the SMBH that are a factor of \(\gtrsim 100\) lower, with runaway fragmentation and orders-of-magnitude larger nuclear SFRs on \(\sim 0.1-10\,\)pc scales. The accretion disk that forms also has qualitatively different structures as a result of magnetic flux-freezing and flux-feeding from the ISM, to be studied in detail in Paper II.

* Quasar-Level Inflow Rates are Plausible and Can be Maintained: Extending previous studies like those in Angles-Alcazar et al. (2021) with both much higher resolution and more detailed micro-physics relevant on small scales, we confirm that strong torques on sub-kpc scales can maintain inflow rates as large as \(\gtrsim 10\,\)M\({}_{\odot}\) yr\({}^{-1}\) into a QSO accretion disk at \(<80\,\)au, for extended periods of time (hundreds of thousands of dynamical times at the smallest radii simulated here). On scales \(\sim\,\)pc-kpc these are dominated by "gravitational torques" in a multi-component (gas+stellar) disk, inducing strong shocks and inflow. On sub-pc scales where star formation becomes inefficient, these become weak, but strong MHD torques (Maxwell+Reynolds stress) in a turbulent, strongly-magnetized outer flux-fed accretion disk take over and are able to sustain such large inflow rates down to the smallest resolved radii here, well within the "traditional accretion disk" range of scales.
* Suppression of star formation: On sub-pc scales, star formation is strongly suppressed. Models which simply assume some fixed star formation efficiency per free-fall time in sufficiently dense and/or self-gravitating gas (the standard on galactic scales and in our FIRE simulations) would not necessarily fully capture or be able to predict this effect _a priori_, but on these scales the simulations here explicitly resolve individual proto-stellar cores (at \(<0.01\,M_{\odot}\) mass resolution) with models for resolved single-star formation from STARFORGE that would, if star formation were incorrectly "missed," allow the gas to collapse to infinitely high densities. Just as important, for the first time this prediction is made using physics and numerical methods which have been explicitly shown to reproduce reasonable observed star formation efficiencies, stellar masses/IMFs, and stellar multiplicity distributions under typical Solar-neighborhood ISM/GMC conditions. With these physics, we show that a combination of increasing optical depths producing warmer gas in the galactic nucleus, plus (crucially) strong toroidal magnetic fields raising the magnetic critical mass to be larger than the disk mass and strongly suppressing gravito-turbulent fragmentation, leads to a dramatic, almost complete suppression of star formation at distances \(\ll\,\)pc from the SMBH.

There are many properties which could and should be studied in more detail in the simulations here. In Paper II, we explore the structure of the strongly-magnetized and flux-frozen/fed accretion disk, the nature of the MHD (Maxwell/Reynolds) torques, the origin and dynamics of these strong fields on \(\ll\,\)pc scales, and their consequences for accretion disk theory. In Paper III we explore the detailed predictions for the dynamics and process of star formation and the resulting IMF of stars in the hyper-resolved inner region around the QSO accretion disk (the "circum-quasar medium"). There are obvious extensions of the work here and therein: modeling many different observables, and contrasting how, for example, the much more detailed radiation-magneto-thermochemistry models produce different predictions for dust and atomic and molecular gas properties (and hence observables) from galactic nuclei and the quasar "torus" region, compared to previous-generation simulations with more simplified physics. In future work, there are also multiple ways one might extend the actual simulation work here. The most obvious is to consider other initial conditions of different galaxies at different times, to explore other regimes. Perhaps the biggest caveat here is that (owing to the computational expense of these simulations) we have studied just one case, so it is not obvious how far our conclusions can be generalized to very different conditions with e.g. much lower accretion rates, let alone extremely low-accretion-rate systems like M87 or Sgr A\({}^{*}\). Another obvious limitation of the work here is that we do not include any "outward" fluxes from the un-resolved accretion disk at \(<80\,\)au, e.g. radiation or jets from the inner disk. Our hope is that, given the unexpected properties of the disks which form here, this work will first motivate smaller-scale simulations of accretion disks (going down to the ISCO) with outer boundary conditions broadly similar to our inner boundary conditions, and those can provide some motivation for including such "inner disk feedback" prescriptions in a subsequent generation of simulations.
In principle, one could also attempt to improve the simulations here even further in resolution or run-time. However, the simulations here already push the boundaries of what is possible, and it is not obvious such a strategy would be most efficient. Instead, we argued that conclusions on large (galactic) scales (where one would ideally like to run the simulations for much longer) are better studied in simulations without "hyper-refinement" (but using simulations like those here to inform the sub-grid models for BH accretion and feedback). Meanwhile, one can much more efficiently explore the physics and parameter space of accretion disk physics with classical, dedicated accretion disk simulations. But in those simulations, the initial and boundary conditions are largely arbitrary, so our goal here is to provide predictive values for those, motivating qualitatively new parameter space to be studied in future work.

Support for PFH was provided by NSF Research Grants 1911233, 20009234, 2108318, NSF CAREER grant 1455342, NASA grants 80NSSC18K0562, HST-AR-15800. DAA acknowledges support by NSF grants AST-2009687 and AST-2108944, CXO grant TM2-23006X, Simons Foundation Award CCA-1018464, and Cottrell Scholar Award CS-CSA-2023-028 by the Research Corporation for Science Advancement. CAFG was supported by NSF through grants AST-2108230 and CAREER award AST-1652522; by NASA through grants 17-ATP17-0067 and 21-ATP21-0036; by STScI through grant HST-GO-16730.016-A; and by CXO through grant TM2-23005X. Support for MYG was provided by NASA through the NASA Hubble Fellowship grant #HST-HF2-51479 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. Numerical calculations were run on the Caltech compute cluster "Wheeler," allocations AST21010 and AST20016 supported by the NSF and TACC, and NASA HEC SMD-16-7592. This research is part of the Frontera computing project at the Texas Advanced Computing Center. Frontera is made possible by National Science Foundation award OAC-1818253.
2309.04469
Multi-contact Stochastic Predictive Control for Legged Robots with Contact Locations Uncertainty
Trajectory optimization under uncertainties is a challenging problem for robots in contact with the environment. Such uncertainties are inevitable due to estimation errors, control imperfections, and model mismatches between planning models used for control and the real robot dynamics. This induces control policies that could violate the contact location constraints by making contact at unintended locations, and as a consequence leading to unsafe motion plans. This work addresses the problem of robust kino-dynamic whole-body trajectory optimization using stochastic nonlinear model predictive control (SNMPC) by considering additive uncertainties on the model dynamics subject to contact location chance-constraints as a function of robot's full kinematics. We demonstrate the benefit of using SNMPC over classic nonlinear MPC (NMPC) for whole-body trajectory optimization in terms of contact location constraint satisfaction (safety). We run extensive Monte-Carlo simulations for a quadruped robot performing agile trotting and bounding motions over small stepping stones, where contact location satisfaction becomes critical. Our results show that SNMPC is able to perform all motions safely with 100% success rate, while NMPC failed 48.3% of all motions.
Ahmad Gazar, Majid Khadiv, Andrea Del Prete, Ludovic Righetti
2023-09-08T17:55:35Z
http://arxiv.org/abs/2309.04469v2
# Multi-contact Stochastic Predictive Control for Legged Robots with Contact Locations Uncertainty

###### Abstract

Trajectory optimization under uncertainties is a challenging problem for robots in contact with the environment. Such uncertainties are inevitable due to estimation errors, control imperfections, and model mismatches between planning models used for control and the real robot dynamics. This induces control policies that could violate the contact location constraints by making contact at unintended locations, and as a consequence leading to unsafe motion plans. This work addresses the problem of robust kino-dynamic whole-body trajectory optimization using stochastic nonlinear model predictive control (SNMPC) by considering additive uncertainties on the model dynamics subject to contact location chance-constraints as a function of robot's full kinematics. We demonstrate the benefit of using SNMPC over classic nonlinear MPC (NMPC) for whole-body trajectory optimization in terms of contact location constraint satisfaction (safety). We run extensive Monte-Carlo simulations for a quadruped robot performing agile trotting and bounding motions over small stepping stones, where contact location satisfaction becomes critical. Our results show that SNMPC is able to perform all motions safely with \(100\%\) success rate, while NMPC failed \(48.3\%\) of all motions.

## I Introduction

Trajectory optimization for robots in contact-rich scenarios poses several control challenges due to the hybrid nature of their under-actuated dynamics, which needs to be stabilized through constrained contact forces at desired contact locations with the environment [1][2]. Model Predictive Control (MPC) has been a favorable tool of choice for trajectory optimization, as it exploits the causal structure of the rolled-out dynamics while guaranteeing constraint satisfaction [3]. Despite the inherent robustness of MPC for disturbance rejection through high-frequency re-planning, dealing with persistent disturbances remains critical for the successful execution of agile motions for legged robots. Such disturbances might arise from estimation errors, model mismatches or imperfect controls. This induces control policies that misplace end-effectors at unintended contact locations, leading to failed motions. Thus, planning under uncertainties becomes a necessity for safe trajectory optimization. Recently, planning under uncertainties gained more attention in the legged robotics community. For example, Villa et al. [4] used a tube-based Robust MPC (RMPC) by taking into account additive uncertainties on the dynamics using a Linear Inverted Pendulum Model (LIPM) for bipedal walking, and designing robust Center of Pressure (CoP) constraints that accommodate all disturbance realizations inside of the CoP disturbance set. However, RMPC approaches are rather conservative and tend to sacrifice performance to guarantee robustness. To this end, we resorted to a less conservative chance-constrained stochastic MPC formulation by treating the additive uncertainties stochastically, and satisfying state constraints in a probabilistic sense known as _chance-constraints_ [5]. Although the LIPM allows applying linear SMPC approaches, it limits the range of agile motions and relevant uncertainties to be considered for legged robots, and their effect on contact location constraint satisfaction.
For optimizing whole-body motions, indirect methods like iLQR/Differential Dynamic Programming (DDP) [6] have become a popular choice in the robotics community [7][8]. To incorporate uncertainties in DDP formulations, Morimoto et al. considered a minimax DDP for simple bipedal walking dynamics subject to additive disturbances on the viscous friction of the robot joints [9]. Recently, [10, 11] used risk-sensitive DDP that accounts for process and measurement uncertainties. Other lines of work resorted to sampling-based methods to approximate the stochastic optimal control problem. For instance, Mordatch et al. used an ensemble of perturbed models that allowed them to transfer the control policy to a humanoid robot [12]. However, sampling-based approaches are computationally expensive for real-time MPC applications of high-dimensional robotic systems. Despite the different risk measures adopted in the above approaches, they do not include constraints in their formulations, which is essential for safe trajectory optimization. Other approaches incorporated uncertainties through contact by solving a Stochastic Linear Complementarity Problem (SLCP). For example, [13] solved a SLCP in order to avoid the discontinuities of the complementarity problem. This allowed them to optimize smoothly through contacts by offering a trade-off between contact complementarity accuracy and the feasibility of the problem. Despite the success of DDP approaches, they do not consider the effect of uncertainties on constraint satisfaction, which is crucial for robotic systems. Finally, Drnach et al. used a direct contact-implicit approach to solve a SLCP with chance-constraints [14]. Due to the nature of the non-smooth mixed-integer problem of contact-implicit approaches, they are hard to solve and are best suited for offline trajectory optimization. Some of the limitations of previous approaches are: 1) they do not consider explicitly the effect of uncertainty on constraint satisfaction, which is the case in most aforementioned DDP approaches. 2) Contact-implicit approaches are usually hard to tune and get easily stuck in local minima, which limits their applicability for MPC. 3) Unlike stochastic trajectory optimization, robust approaches are conservative as they sacrifice performance for safety. This work addresses the above limitations with the following contributions:

* We solve a stochastic kino-dynamic whole-body trajectory optimization subject to additive uncertainties on the dynamics. Contrary to our previous work on stochastic centroidal momentum trajectory optimization [15], we optimize both the centroidal dynamics and the full robot kinematics, which allows us to model uncertainties on the optimized contact locations in a receding horizon fashion, rather than on fixed contact locations with fixed parametric contact location uncertainties.

* We design contact location chance-constraints inside an approximate real-time SQP-type iteration. This is less conservative than considering worst-case disturbances in robust optimization, where constraints are to be satisfied for all possible realizations. Instead, we satisfy constraints in a probabilistic sense, while maintaining the same computational complexity as NMPC without degrading the performance.

* We compared SNMPC against NMPC by running extensive Monte-Carlo simulations of the quadruped robot Solo for dynamic trotting and bounding gaits on a challenging non-coplanar terrain.
Furthermore, we compared the robustness induced by SNMPC against a heuristic-based NMPC (HNMPC), where the contact location constraints were shrunk by hand using a heuristic safety margin. Our results show that SNMPC was able to perform all motions safely with \(100\%\) success rate, while NMPC and HNMPC failed \(48.3\%\) and \(47.6\%\) of the time, respectively.

## II Background

**Notation:** A random variable \(x\) following a distribution \(\mathcal{Q}\) is denoted as \(x\sim\mathcal{Q}\), with \(\mathbb{E}[x]\) being the expected value of \(x\), and \(\boldsymbol{\Sigma}_{x}\triangleq\mathbb{E}[(\boldsymbol{x}-\mathbb{E}[\boldsymbol{x}])(\boldsymbol{x}-\mathbb{E}[\boldsymbol{x}])^{\top}]\), and the weighted \(l_{2}\) norm is \(\|\boldsymbol{y}\|_{\boldsymbol{P}}\triangleq\boldsymbol{y}^{\top}\boldsymbol{P}\boldsymbol{y}\).

### _Multi-Contact Robot Dynamics_

The full-body dynamics of a floating-base robot can be derived using the Euler-Lagrange equations of motion \[\boldsymbol{M}(\boldsymbol{q})\ddot{\boldsymbol{q}}+\boldsymbol{h}(\boldsymbol{q},\dot{\boldsymbol{q}})=\sum_{i=0}^{n_{c}}\boldsymbol{J}_{i}^{\top}(\boldsymbol{q})\boldsymbol{\lambda}_{i}+\boldsymbol{S}^{\top}\boldsymbol{\tau}_{q}. \tag{1}\] The generalized robot position \(\boldsymbol{q}=\left[\boldsymbol{x}_{b}^{\top},\boldsymbol{\theta}_{j}^{\top}\right]^{\top}\in\mathbb{SE}(3)\times\mathbb{R}^{n_{j}}\) represents the robot's floating-base pose and joint positions, respectively. The inertia matrix is denoted as \(\boldsymbol{M}(\boldsymbol{q})\in\mathbb{R}^{(6+n_{j})\times(6+n_{j})}\), and \(\boldsymbol{h}(\boldsymbol{q},\dot{\boldsymbol{q}})\in\mathbb{R}^{6+n_{j}}\) is the vector capturing the Coriolis, centrifugal, gravity and joint friction forces. \(\boldsymbol{J}_{i}\) represents the associated Jacobian of the \(i\)-th end-effector contact force \(\boldsymbol{\lambda}_{i}\in\mathbb{R}^{3}\) acting on the environment for a point-foot robot. Finally, \(\boldsymbol{S}=\left[\boldsymbol{0}_{(n_{j}\times 6)},\boldsymbol{I}_{n_{j}}\right]\) is the selector matrix of the actuated joint torques \(\boldsymbol{\tau}_{q}\). By focusing on the under-actuated part of the dynamics in (1) (i.e. the first 6 equations), one can plan centroidal momentum trajectories by exploiting the relationship between the momenta about the CoM and the generalized robot velocities \(\dot{\boldsymbol{q}}\) as \(\dot{\boldsymbol{h}}_{\mathcal{G}}=\left[\dot{\boldsymbol{\kappa}}^{\top},\dot{\boldsymbol{l}}^{\top}\right]^{\top}=\boldsymbol{A}_{\mathcal{G}}(\boldsymbol{q})\ddot{\boldsymbol{q}}+\dot{\boldsymbol{A}}_{\mathcal{G}}(\boldsymbol{q})\dot{\boldsymbol{q}}\) via the _Centroidal Momentum Matrix_ (CMM) \(\boldsymbol{A}_{\mathcal{G}}\in\mathbb{R}^{6\times(6+n_{j})}\) [16].
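The CMM relation \(\boldsymbol{h}_{\mathcal{G}}=\boldsymbol{A}_{\mathcal{G}}(\boldsymbol{q})\dot{\boldsymbol{q}}\) is easy to check numerically with the Pinocchio library (used for kinematic derivatives in §IV). The following is a minimal sketch assuming Pinocchio's Python bindings and a built-in sample model (a placeholder, not the authors' code); note that Pinocchio orders the centroidal momentum as (linear, angular), whereas the text writes \((\boldsymbol{\kappa},\boldsymbol{l})\).

```python
# Minimal sanity check of the centroidal momentum relation h_G = A_G(q) * dq,
# assuming Pinocchio's Python API; the sample model is a placeholder.
import numpy as np
import pinocchio as pin

model = pin.buildSampleModelHumanoid()  # placeholder floating-base model
data = model.createData()

q = pin.neutral(model)                  # reference configuration
v = np.random.randn(model.nv)           # generalized velocity dq

# Centroidal Momentum Matrix A_G (6 x nv) via the CCRBA algorithm
A_G = pin.ccrba(model, data, q, v)

# Centroidal momentum computed directly, for comparison
h_G = pin.computeCentroidalMomentum(model, data, q, v)

# The two should agree: h_G = A_G * dq
assert np.allclose(A_G @ v, h_G.vector, atol=1e-10)
print("CoM:", pin.centerOfMass(model, data, q))
```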
With the same spirit as [17], we are interested in planning kino-dynamic whole-body motions using centroidal momentum dynamics and full robot kinematics (2nd-order kinematics) as follows: \[\frac{d}{dt}\begin{bmatrix}\boldsymbol{c}\\ \boldsymbol{l}\\ \boldsymbol{\kappa}\end{bmatrix}=\begin{bmatrix}\frac{1}{m}\boldsymbol{l}\\ m\boldsymbol{g}+\sum_{i=1}^{n_{c}}\boldsymbol{\lambda}_{i}\\ \sum_{i=1}^{n_{c}}\left(\mathbf{FK}_{i}(\tilde{\boldsymbol{q}})-\boldsymbol{c}\right)\times\boldsymbol{\lambda}_{i}\end{bmatrix}, \tag{2a}\] \[\frac{d}{dt}\begin{bmatrix}\boldsymbol{p}_{b}\\ \Delta\boldsymbol{q}_{b}\\ \boldsymbol{\theta}_{j}\\ \boldsymbol{v}_{b}\\ \boldsymbol{\omega}_{b}\\ \boldsymbol{v}_{j}\end{bmatrix}=\begin{bmatrix}\boldsymbol{v}_{b}\\ \boldsymbol{\omega}_{b}\\ \boldsymbol{v}_{j}\\ \boldsymbol{a}_{b}\\ \boldsymbol{\psi}_{b}\\ \boldsymbol{a}_{j}\end{bmatrix},\quad\tilde{\boldsymbol{q}}\triangleq\begin{bmatrix}\boldsymbol{p}_{b}\\ \boldsymbol{\theta}_{b}\\ \boldsymbol{\theta}_{j}\end{bmatrix}. \tag{2b}\] \(\boldsymbol{c}\in\mathbb{R}^{3}\) represents the CoM of the robot, with \(m\) being the total mass of the robot subject to the gravity vector \(\boldsymbol{g}\). The forward kinematics function \(\mathbf{FK}_{i}(.):\mathbb{Q}\mapsto\mathbb{R}^{3}\) computes the \(i\)-th end-effector's contact position for a given robot configuration. For the simplicity of dynamics integration and constraint linearization later, we choose to optimize for the relative base orientation \(\Delta\boldsymbol{q}_{b}\) w.r.t. an absolute base reference \(\boldsymbol{q}_{\mathrm{ref}_{b}}\) instead of \(\boldsymbol{q}_{b}\) directly. Finally, we _transcribe_ the above continuous dynamics using direct collocation into the following MPC problem with pre-specified contact mode and timing \(\Delta_{k}\). The state and control optimization variables at the \(k\)-th discretization step are \(\boldsymbol{x}_{k}=\left[\boldsymbol{c}_{k}^{\top},\boldsymbol{l}_{k}^{\top},\boldsymbol{\kappa}_{k}^{\top},\boldsymbol{p}_{b_{k}}^{\top},\Delta\boldsymbol{q}_{b_{k}}^{\top},\boldsymbol{\theta}_{j_{k}}^{\top},\boldsymbol{v}_{b_{k}}^{\top},\boldsymbol{\omega}_{b_{k}}^{\top},\boldsymbol{v}_{j_{k}}^{\top}\right]^{\top}\in\mathbb{R}^{n}\), and \(\boldsymbol{u}_{k}=\left[\boldsymbol{\lambda}_{1,k}^{\top},\ldots,\boldsymbol{\lambda}_{n_{c},k}^{\top},\boldsymbol{a}_{b_{k}}^{\top},\boldsymbol{\psi}_{b_{k}}^{\top},\boldsymbol{a}_{j_{k}}^{\top}\right]^{\top}\in\mathbb{R}^{m}\) with \(n=21+6n_{c}\) and \(m=6+6n_{c}\) for point-feet robots. **Problem 1**.: _Kino-Dynamic NMPC Problem_ \[\underset{\boldsymbol{X},\boldsymbol{U},\boldsymbol{S}}{\mathrm{minimize}}\quad\mathcal{L}_{\mathrm{total}}(\boldsymbol{X},\boldsymbol{U},\boldsymbol{S}) \tag{3a}\] \[\mathrm{s.t.}\quad\boldsymbol{f}_{\mathrm{impl}}(\boldsymbol{x}_{k},\boldsymbol{x}_{k+1},\boldsymbol{u}_{k})=\boldsymbol{0}, \tag{3b}\] \[\boldsymbol{h}(\boldsymbol{x}_{k},\boldsymbol{u}_{k})+\boldsymbol{J}_{\mathrm{sh}}\boldsymbol{s}_{k}\leq\boldsymbol{0}, \tag{3c}\] \[-\boldsymbol{s}_{k}\leq\boldsymbol{0}, \tag{3d}\] \[\boldsymbol{x}_{0}-\boldsymbol{x}(t)=\boldsymbol{0},\quad\forall k\in\{0,1,\ldots,N-1\}. \tag{3e}\] where \(\boldsymbol{X}\triangleq\{\boldsymbol{x}_{0},\ldots,\boldsymbol{x}_{N}\}\) and \(\boldsymbol{U}\triangleq\{\boldsymbol{u}_{0},\ldots,\boldsymbol{u}_{N-1}\}\) are the state and control variables along the control horizon \(N\).
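To make the dynamics constraint (3b), explained next, concrete, the following runnable sketch shows an implicit-Euler residual of the same form on a toy linear system; the kino-dynamic right-hand side (2a)-(2b) would replace the placeholder `f`.

```python
import numpy as np

# Sketch of the implicit-Euler collocation residual f_impl in (3b):
# f_impl(x_k, x_{k+1}, u_k) = x_{k+1} - x_k - dt * f(x_{k+1}, u_k) = 0.
# `f` below is a toy placeholder for the kino-dynamic model (2a)-(2b).

def f_impl(x_k, x_k1, u_k, f, dt=0.01):
    """Implicit-Euler residual: zero iff (x_k, x_{k+1}, u_k) satisfies the
    discretized dynamics."""
    return x_k1 - x_k - dt * f(x_k1, u_k)

# Toy example: linear dynamics xdot = A x + B u
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
f = lambda x, u: A @ x + B @ u

x0 = np.zeros(2)
u0 = np.ones(1)
x1 = np.linalg.solve(np.eye(2) - 0.01 * A, x0 + 0.01 * B @ u0)  # implicit step
print(f_impl(x0, x1, u0, f))  # ~[0, 0]
```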
The implicit discrete dynamics \(\boldsymbol{f}_{\mathrm{impl}}(.):\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{m}\mapsto\mathbb{R}^{n}\) (3b) captures the kino-dynamic equality path constraints in (2), discretized using a first-order implicit-Euler integration scheme and transcribed using the Gauss-Legendre collocation method. The remaining nonlinear path constraints \(\mathbf{h}(.)\) are implemented softly to avoid infeasibilities of the OCP by introducing extra slack variables \(\mathbf{S}\triangleq\{\mathbf{s}_{0},\ldots,\mathbf{s}_{N}\}\), where \(\mathbf{J}_{\mathrm{sh}}\) selects the slack variable attached to the respective constraint. At every receding horizon, the initial condition of the OCP is reset with the current measured state \(\mathbf{x}(t)\) using constraint (3e). We enforce kino-dynamic consistency between (2a) and (2b) with the following constraints: \[\mathbf{h}_{\mathrm{kindyn}}(\mathbf{c}_{k},\mathbf{h}_{\mathcal{G}_{k}},\tilde{\mathbf{q}}_{k})+\mathbf{s}_{\mathrm{kindyn}_{k}}=\mathbf{0}, \tag{4a}\] \[\mathbf{h}_{\mathrm{kindyn}}\triangleq\begin{bmatrix}\mathbf{c}_{k}-\mathbf{COM}(\tilde{\mathbf{q}}_{k})\\ \left[\mathbf{\kappa}_{k}^{\top},\mathbf{l}_{k}^{\top}\right]^{\top}-\mathbf{A}_{\mathcal{G}}(\tilde{\mathbf{q}}_{k})\dot{\tilde{\mathbf{q}}}_{k}\end{bmatrix}, \tag{4b}\] where the \(\mathbf{COM}(.):\mathbb{Q}\mapsto\mathbb{R}^{3}\) function computes the center of mass of the robot for a given configuration, and \(\mathbf{s}_{\mathrm{kindyn}_{k}}\in\mathbb{R}^{9}\) are the slack variables associated with those kino-dynamic constraints. In order to avoid contact slippage, the tangential contact forces in the end-effector frame (\(\tilde{\mathbf{f}}_{i,k}=\mathbf{R}_{i,k}^{\top}\mathbf{\lambda}_{i,k}\)) are constrained inside the friction cone \[\gamma_{i,k}\cdot\left[h_{\mathrm{cone}_{i,k}}(\mathbf{\lambda}_{i,k})+s_{\mathrm{cone}_{i,k}}\leq 0\right],\quad\gamma_{i,k}\in\mathcal{C}, \tag{5a}\] \[h_{\mathrm{cone}_{i,k}}\triangleq\sqrt{\tilde{f}_{x,i_{k}}^{2}+\tilde{f}_{y,i_{k}}^{2}}-\mu\tilde{f}_{z,i_{k}}, \tag{5b}\] where \(\mathcal{C}=\{0,1\}\). The contact mode (fixed a priori) \(\gamma_{i,k}=1\) when the \(i\)-th foot is in contact with the ground, and \(\gamma_{i,k}=0\) otherwise. The coefficient of friction is denoted by \(\mu\), and \(s_{\mathrm{cone}_{i,k}}\in\mathbb{R}\) is the slack variable associated to the friction cone constraint. During contact, the \(i\)-th end-effector position in the \(z\)-direction must be at the height of the contact surface \(\mathcal{S}_{i,k}^{z}\), and be within the contact surface boundaries \(\mathcal{S}_{i}^{x,y}\): \[\gamma_{i,k}\cdot\left[h_{\mathrm{pos}_{i,k}}^{z}(\tilde{\mathbf{q}}_{k})+s_{\mathrm{pos}_{i,k}}^{z}=\mathcal{S}_{i,k}^{z}\right],\quad\gamma_{i,k}\in\mathcal{C}, \tag{6a}\] \[\gamma_{i,k}\cdot\left[\mathbf{h}_{\mathrm{pos}_{i,k}}^{x,y}(\tilde{\mathbf{q}}_{k})+\mathbf{s}_{\mathrm{pos}_{i,k}}^{x,y}\in\mathcal{S}_{i}^{x,y}\right],\quad\gamma_{i,k}\in\mathcal{C}, \tag{6b}\] where \(\mathbf{h}_{\mathrm{pos}_{i,k}}\triangleq\mathbf{FK}_{i,k}(\tilde{\mathbf{q}}_{k})\), and \(\mathbf{s}_{\mathrm{pos}_{i,k}}\in\mathbb{R}^{3}\) are the slack variables associated with the contact position constraints. For simplicity, we assume that \(\mathcal{S}_{i,k}\subset\mathbb{R}^{3}\) is a rectangular polytope.
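For intuition, here is a small runnable sketch of the constraint residuals (5)-(6) for a single foot; the surface geometry and force values are our illustrative placeholders.

```python
import numpy as np

# Sketch of the contact constraint residuals (5)-(6) for one foot.
# Surface bounds and inputs are illustrative placeholders.

MU = 0.5  # friction coefficient used for the motion plans (Sec. IV)

def friction_cone_residual(f_local):
    """h_cone (5b): sqrt(f_x^2 + f_y^2) - mu * f_z <= 0 when inside the cone.
    f_local is the contact force rotated into the end-effector frame:
    f_local = R^T @ lam."""
    fx, fy, fz = f_local
    return np.hypot(fx, fy) - MU * fz

def contact_location_residuals(p_foot, surface_center, half_size):
    """(6a)-(6b) for a rectangular surface patch: foot z at the surface
    height (equality), and (x, y) within the surface bounds (inequality)."""
    eq_z = p_foot[2] - surface_center[2]                           # must be == 0
    ineq_xy = np.abs(p_foot[:2] - surface_center[:2]) - half_size  # must be <= 0
    return eq_z, ineq_xy

lam = np.array([1.0, 0.5, 10.0])   # world-frame contact force (placeholder)
R = np.eye(3)                      # contact frame rotation (placeholder)
print(friction_cone_residual(R.T @ lam))   # < 0: no slip
print(contact_location_residuals(np.array([0.02, -0.01, 0.0]),
                                 np.zeros(3), np.array([0.05, 0.05])))
```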
Finally, the end-effector velocities during contact are constrained to be zero by enforcing the holonomic constraint: \[\gamma_{i,k}\cdot\left[\mathbf{h}_{\mathrm{vel}_{i,k}}(\tilde{\mathbf{q}}_{k},\dot{\mathbf{q}}_{k})+\mathbf{s}_{\mathrm{vel}_{i,k}}=\mathbf{0}\right],\quad\gamma_{i,k}\in\mathcal{C}, \tag{7a}\] where \(\mathbf{h}_{\mathrm{vel}_{i,k}}\triangleq\mathbf{J}_{i,k}(\tilde{\mathbf{q}}_{k})\dot{\mathbf{q}}_{k}\), and \(\mathbf{s}_{\mathrm{vel}_{i,k}}\in\mathbb{R}^{3}\) are the associated slack variables.

### _Cost function and constraint penalties_

In the above optimal control problem, we track a whole-body reference trajectory \(\mathbf{x}_{\mathrm{r}}\) optimized offline a priori. The total cost in (3a) is split between the least-squares tracking cost \(\mathcal{L}_{\text{LS}}\) and the penalty cost \(\mathcal{L}_{\text{penalty}}\) penalizing the violations of the nonlinear constraints (3c), as \(\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{LS}}+\mathcal{L}_{\text{penalty}}\), with \[\mathcal{L}_{\text{LS}}\triangleq\sum_{k=0}^{N-1}\frac{1}{2}\left(\left\|\mathbf{x}_{k}-\mathbf{x}_{\mathrm{r}_{k}}\right\|_{\mathbf{Q}}^{2}+\left\|\mathbf{u}_{k}\right\|_{\mathbf{R}}^{2}\right)+\frac{1}{2}\left\|\mathbf{x}_{N}-\mathbf{x}_{\mathrm{r}_{N}}\right\|_{\mathbf{Q}_{N}}^{2}. \tag{8a}\]

### _Tractable formulation of Joint Chance-constraints_

The goal of the following subsections is to design safety margins/upper bounds, known as _back-offs_, on the contact location chance-constraints (10d) to accommodate the additive stochastic disturbances on the dynamics that are difficult for feedback alone to deal with. This is particularly crucial for legged robots, since making contact at unintended contact locations can lead to unsafe motions. We design such back-offs formally, based on the evolution of the statistical information along the horizon inside the optimization problem, such that we are able to provide probabilistic statements about constraint satisfaction without degrading the performance. Notice that those margins are not fixed, in contrast to designing them heuristically by hand (see the results section). In other words, if the variance of the uncertainty is large, or we want to satisfy the constraints with larger probability, then this reflects automatically on the back-off magnitude by increasing the safety margin accordingly to ensure the expected probability of constraint satisfaction.
In order to reduce the computational complexity of solving the contact-location joint chance-constraints (10d), we first linearize the nonlinear constraints around the \(j\)-th SQP iterate at \(\Delta\mathbf{x}_{k}\triangleq\mathbf{x}_{k}-\mathbf{x}_{k}^{j}\), then solve for each individual half-space chance-constraint forming the linearized feasible set. For a Gaussian-distributed state, each linearized half-space chance-constraint admits the deterministic reformulation \[\mathbf{G}_{i,k}^{l}(\bar{\mathbf{x}}_{k}^{j})\bar{\mathbf{x}}_{k}\leq g_{i,k}^{l}-\eta_{k}^{l},\qquad\eta_{k}^{l}\triangleq\Phi^{-1}(1-\epsilon_{i,k}^{l})\left\|\mathbf{G}_{i,k}^{l}(\bar{\mathbf{x}}_{k}^{j})\right\|_{\mathbf{\Sigma}_{\mathbf{x}_{k}}},\] where \(\Phi^{-1}(.)\) is the inverse cumulative distribution function of the standard normal distribution, and \(\epsilon_{i,k}^{l}\) is the allowed probability of violating the \(l\)-th half-space constraint.

### _Deterministic Reformulation of SNMPC_

Given the previous reformulation of the individual chance constraints, we can write down a deterministic reformulation of the original SNMPC problem (Problem 2) on the mean of the nonlinear dynamics. Despite the reformulated chance-constraints, this problem has the same number of optimization variables as the NMPC problem (Problem 1), which means that, with the gained robustness, we do not increase the computational complexity of the problem over NMPC. **Problem 3**.: _Kino-dynamic SNMPC problem with individual chance-constraints_ \[\operatorname*{minimize}_{\bar{\mathbf{X}},\bar{\mathbf{U}},\mathbf{S}}\ \mathcal{L}_{\rm total}(\bar{\mathbf{X}},\bar{\mathbf{U}},\mathbf{S}) \tag{18a}\] \[\mathrm{s.t.}\ \mathbf{F}(\bar{\mathbf{x}}_{k+1},\bar{\mathbf{x}}_{k},\bar{\mathbf{u}}_{k})\triangleq\mathbf{f}_{\rm impl}(\bar{\mathbf{x}}_{k},\bar{\mathbf{x}}_{k+1},\bar{\mathbf{u}}_{k})=\mathbf{0}, \tag{18b}\] \[\mathbf{E}(\bar{\mathbf{x}}_{k},\bar{\mathbf{u}}_{k})\triangleq\left\{\begin{array}{l}\mathbf{h}_{\rm eq}(\bar{\mathbf{x}}_{k},\bar{\mathbf{u}}_{k},\mathbf{s}_{k})=\mathbf{0},\\ \bar{\mathbf{x}}_{0}-\mathbf{x}(t)=\mathbf{0},\end{array}\right. \tag{18c}\] \[\mathbf{I}(\bar{\mathbf{x}}_{k},\bar{\mathbf{u}}_{k})\triangleq\left\{\begin{array}{l}\mathbf{h}_{\rm ineq}(\bar{\mathbf{x}}_{k},\bar{\mathbf{u}}_{k},\mathbf{s}_{k})\leq\mathbf{0},\\ \mathbf{G}(\bar{\mathbf{x}}_{k}^{j})\bar{\mathbf{x}}_{k}+\mathbf{J}_{\rm sg}\mathbf{s}_{k}\leq\mathbf{g}_{k}-\mathbf{\eta}_{k},\end{array}\right. \tag{18d}\] \[-\mathbf{s}_{k}\leq\mathbf{0},\quad\forall k\in\{0,1,\ldots,N-1\}. \tag{18e}\] This SMPC problem optimizes for the open-loop mean states \(\bar{\mathbf{X}}=\{\bar{\mathbf{x}}_{0},\ldots,\bar{\mathbf{x}}_{N}\}\) and feedforward controls \(\bar{\mathbf{U}}=\{\bar{\mathbf{u}}_{0},\ldots,\bar{\mathbf{u}}_{N}\}\). All nonlinear equality constraints are captured inside \(\mathbf{h}_{\rm eq}(.)=\big[(4)^{\top},(6a)^{\top},(7)^{\top}\big]^{\top}\), while \(\mathbf{h}_{\rm ineq}(.)=(5)\) captures the friction cone inequality constraints. Finally, the second row of the inequality constraints (18d) contains the backed-off contact location constraints in the \(x\)-\(y\) directions, where \(\boldsymbol{\eta}_{k}=\eta_{k}\cdot\boldsymbol{1}_{2n_{c}}\). These linearized constraints are implemented softly, with \(\mathbf{J}_{\rm sg}\) being the slack selector matrix. The above OCP is solved using Sequential Quadratic Programming (SQP) [18] by constructing a quadratic model of the cost objective subject to linearized constraints, which solves the Karush-Kuhn-Tucker (KKT) system associated with the following Lagrangian: \[\Psi(\mathbf{z})=\mathcal{L}_{\rm total}+\mathbf{\zeta}^{\top}\mathbf{F}+\mathbf{\beta}^{\top}\mathbf{E}+\mathbf{\gamma}^{\top}\mathbf{I}, \tag{19}\] where \(\mathbf{z}\triangleq[\bar{\mathbf{x}}^{\top},\bar{\mathbf{u}}^{\top}]^{\top}\) is the concatenated vector of states and controls, \(\mathbf{\zeta}\), \(\mathbf{\beta}\) are the Lagrange multipliers associated with the equality constraints, and \(\mathbf{\gamma}\) are the ones corresponding to the inequality constraints.
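The back-off computation itself is cheap. The sketch below (our illustration, not the paper's code) evaluates \(\eta=\Phi^{-1}(1-\epsilon)\sqrt{\mathbf{G}\,\mathbf{\Sigma}\,\mathbf{G}^{\top}}\) with SciPy, interpreting \(\|\mathbf{G}\|_{\mathbf{\Sigma}}\) as the standard Gaussian back-off; the plain linearized covariance recursion \(\mathbf{\Sigma}_{k+1}=\mathbf{A}_{k}\mathbf{\Sigma}_{k}\mathbf{A}_{k}^{\top}+\mathbf{\Sigma}_{w}\) is a stand-in assumption for the paper's uncertainty propagation, and all matrices are placeholders.

```python
import numpy as np
from scipy.stats import norm

# Sketch of the Gaussian chance-constraint back-off used in (18d):
# eta = Phi^{-1}(1 - eps) * sqrt(G Sigma G^T) for a half-space G x <= g.
# Covariance propagation assumes linearized dynamics with additive noise.

def propagate_covariance(A_k, Sigma_x, Sigma_w):
    """One step of Sigma_{k+1} = A_k Sigma_k A_k^T + Sigma_w."""
    return A_k @ Sigma_x @ A_k.T + Sigma_w

def backoff(G, Sigma_x, eps=0.01):
    """Back-off eta shrinking G x <= g so that it holds with probability
    at least 1 - eps under x ~ N(xbar, Sigma_x)."""
    return norm.ppf(1.0 - eps) * np.sqrt(G @ Sigma_x @ G)

n = 4
A = np.eye(n) + 0.01 * np.random.randn(n, n)   # placeholder linearization
Sigma_w = 0.001 * np.eye(n)                    # placeholder noise covariance
Sigma_x = np.zeros((n, n))
for _ in range(10):                            # propagate along the horizon
    Sigma_x = propagate_covariance(A, Sigma_x, Sigma_w)

G = np.zeros(n); G[0] = 1.0                    # constraint on the first state
print("back-off eta =", backoff(G, Sigma_x, eps=0.01))
```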
Given a perturbation \(\Delta\mathbf{z}_{k}\triangleq\mathbf{z}_{k}-\mathbf{z}_{k}^{j}\), where \(\mathbf{z}^{j}\) is the current initial guess along the control horizon, the following QP subproblem is solved: **Problem 4**.: _QP subproblem_ \[\operatorname*{minimize}_{\Delta\mathbf{Z},\mathbf{S}}\ \frac{1}{2}\Delta\mathbf{z}^{j^{\top}}\mathbf{H}\Delta\mathbf{z}^{j}+\mathbf{p}^{\top}\Delta\mathbf{z}^{j}\] (20a) \[\mathrm{s.t.}\ \mathbf{F}(\mathbf{z}^{j})+\nabla_{\mathbf{z}}\mathbf{F}(\mathbf{z}^{j})\Delta\mathbf{z}^{j}=\mathbf{0},\] (20b) \[\mathbf{E}(\mathbf{z}^{j})+\nabla_{\mathbf{z}}\mathbf{E}(\mathbf{z}^{j})\Delta\mathbf{z}^{j}=\mathbf{0},\] (20c) \[\mathbf{I}(\mathbf{z}^{j})+\nabla_{\mathbf{z}}\mathbf{I}(\mathbf{z}^{j})\Delta\mathbf{z}^{j}\leq\mathbf{0},\quad-\mathbf{s}^{j}\leq\mathbf{0}.\] (20d) The Hessian of the Lagrangian \(\mathbf{H}\triangleq\nabla_{\mathbf{z}}^{2}\Psi(\mathbf{z}^{j})\) is approximated using the Generalized Gauss-Newton (GGN) variant of SQP as \(\mathbf{H}\approx\nabla_{\mathbf{z}}\mathbf{r}(\mathbf{z}^{j})^{\top}\nabla_{\mathbf{z}}\mathbf{r}(\mathbf{z}^{j})\), where \(\mathbf{r}(\mathbf{z})\) is the residual vector of the least-squares cost \(\mathcal{L}_{\text{LS}}=\frac{1}{2}\left\|\mathbf{r}(\mathbf{z})\right\|^{2}\), and the corresponding gradient is \(\mathbf{p}\triangleq\nabla_{\mathbf{z}}\mathbf{r}(\mathbf{z}^{j})^{\top}\mathbf{r}(\mathbf{z}^{j})\). For an exact SQP iteration, the linearization of the backed-off contact location constraints included in the above inequality constraints would include the extra derivative \(\nabla_{\mathbf{z}}\eta(\mathbf{z}_{k}^{j})\): \[\underbrace{\mathbf{G}_{i,k}^{l}(\bar{\mathbf{x}}_{k}^{j})\,\bar{\mathbf{x}}_{k}\leq g_{i,k}^{l}-\eta_{k}^{l}}_{\text{linearized constraint}}-\underbrace{\Phi^{-1}(1-\epsilon_{i,k}^{l})\,\nabla_{\mathbf{z}}\big\|\mathbf{G}_{i,k}^{l}(\bar{\mathbf{x}}_{k}^{j})\big\|_{\mathbf{\Sigma}_{\mathbf{z}_{k}}}}_{\nabla_{\mathbf{z}}\eta(\mathbf{z}_{k}^{j})}(\mathbf{z}_{k}-\mathbf{z}_{k}^{j}),\] \[\nabla_{\mathbf{z}}\big\|\mathbf{G}_{i,k}^{l}(\bar{\mathbf{x}}_{k}^{j})\big\|_{\mathbf{\Sigma}_{\mathbf{z}_{k}}}\triangleq\Big(2\big\|\mathbf{G}_{i,k}^{l}\big\|_{\mathbf{\Sigma}_{\mathbf{z}_{k}}}\Big)^{-1}\Big(2\,\mathbf{G}_{i,k}^{l\,\top}\mathbf{\Sigma}_{\mathbf{z}_{k}}\nabla_{\mathbf{z}}\mathbf{G}_{i,k}^{l}+\sum_{p=0}^{n}\sum_{q=0}^{n}G_{p}^{l}G_{q}^{l}\,\nabla_{\mathbf{z}}\Sigma_{pq}\Big),\] (21a) \[\forall l\in\{1,\ldots,4\},\ \forall i\in\{1,\ldots,n_{c}\},\ \forall k\in\{0,\ldots,N\}.\] (21b) The above derivative involves the tensor derivative of the covariance matrix \(\nabla_{\mathbf{z}}\mathbf{\Sigma}_{\mathbf{z}_{k}}\), which is expensive to compute. **Remark 1**.: _For real-time computational tractability, we adopt an SQP-type iteration by approximating \(\nabla_{\mathbf{z}}\eta(\mathbf{z}_{k}^{j})=0\) as in [24]. This SQP approach is sub-optimal due to the fact that we don't compute the exact Jacobian of the contact location inequality constraint as in [15, 23]. Despite this sub-optimality, this scheme yields good results in practice without sacrificing computational complexity over NMPC._

Fig. 1: Effect of the equally distributed back-off design on the linearized contact location chance-constraints.

The OCP is implemented with real-time iteration [25], where one QP sub-problem is solved at a time using a full Newton-type step without a line search (see Algorithm 1).
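As a sketch of the GGN step above, the Hessian approximation and gradient can be assembled as follows; the `residual` and `jac` callables are illustrative stand-ins, not the paper's code. Note that \(\mathbf{J}^{\top}\mathbf{J}\) is positive semi-definite by construction, which keeps the QP (20) convex.

```python
import numpy as np

def ggn_terms(residual, jac, z):
    """Generalized Gauss-Newton terms for a least-squares cost
    0.5 * ||r(z)||^2: H ~ J^T J, p = J^T r."""
    r = residual(z)   # (m,) residual vector
    J = jac(z)        # (m, nz) residual Jacobian
    H = J.T @ J       # GGN Hessian approximation (PSD)
    p = J.T @ r       # exact gradient of 0.5*||r||^2
    return H, p
```

In a real-time iteration scheme, one such QP is built and solved per control cycle from the shifted previous solution, which is what keeps the per-cycle computation time bounded.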
## IV Simulation Results We report simulation results comparing SNMPC against NMPC for the quadruped robot Solo [26] performing dynamic trotting and bounding gaits on non-coplanar small stepping stones. The robustness of both controllers is tested in terms of contact location constraint satisfaction (safety) and performance, computed using the least-squares tracking cost (8a). Moreover, we test the safety margins induced by SNMPC against heuristic-based NMPC (HNMPC) and NMPC. For HNMPC, we shrank the contact-location constraints heuristically by hand by performing a grid search on an interval between 1 cm and 3 cm. A safety margin of 3 cm was selected, as it was the first value where the contact-location constraints became active and differed from NMPC. We conducted two sets of simulations: A) **Kino-dynamic Monte-Carlo simulations**, where we test the robustness of the kino-dynamic model against persistent disturbance realizations. B) **Whole-body simulations**, to test the effect of model mismatch between the kino-dynamic model and the whole-body model of the robot in the Pybullet simulator. All three MPC approaches follow a trajectory generated offline using whole-body DDP from the Crocoddyl solver [8] with pre-planned contact locations at the center of the contact surfaces. Also, the first MPC iteration is warm-started using this trajectory, while subsequent MPC iterations are warm-started from the previous MPC solution. The cost weights for the kino-dynamic MPC are summarized in Table I. All problems were discretized with a sampling time of \(\Delta_{k}=10\) ms for an MPC horizon length of \(N=40\) and \(N=55\) for the trot and bound motions, respectively. The motion plans were designed with a coefficient of friction \(\mu=0.5\). Finally, the real-time iteration scheme was performed using the optimal control solver ACADOS [27], exploiting Casadi's automatic differentiation [28] and Pinocchio's analytical derivatives of the rigid-body kinematic functions [29]. ### _Kino-dynamic Monte-Carlo Simulations_ We run 500 closed-loop kino-dynamic Monte-Carlo simulations for each motion (trotting and bounding). We sample additive kinematic disturbance realizations from a multivariate Gaussian distribution with zero mean and a covariance \(\mathbf{\Sigma}_{w}=\mathrm{DIAG}[\mathbf{0}_{6},0.3^{2},0.3^{2},0.2^{2},0.2^{2},0.2^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.8^{2},0.8^{2},0.8^{2},0.1^{2},0.1^{2},0.1^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2},0.7^{2}]^{\top}\). We tune the risk of violating the contact location constraints for SNMPC to be \(\epsilon=0.01\) for all feet and contact surfaces. The disturbances are applied on the base velocity at the time contacts are made, to mimic the effect of impacts on the kino-dynamic model, as well as on the swing leg joint velocities during take-off and landing, to simulate persistent disturbances and control imperfections on the swing legs. Finally, no disturbances are applied at the feet after impact, based on the assumption that the feet do not slip. The disturbance realizations are discretized and integrated on the dynamics (9) using the implicit-Euler integration scheme. We report the percentage of successful motions in Table IIa.
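A minimal sketch of the disturbance sampling used in such Monte-Carlo campaigns is given below; the closed-loop simulator is a stand-in callable (our own naming), and only the zero-mean Gaussian sampling mirrors the setup above.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_disturbance(Sigma_w):
    """Draw one additive kinematic disturbance realization
    w ~ N(0, Sigma_w) for a single control step."""
    return rng.multivariate_normal(np.zeros(Sigma_w.shape[0]), Sigma_w)

def monte_carlo_success_rate(run_closed_loop, Sigma_w, n_runs=500):
    """Fraction of closed-loop runs finishing without violating the
    contact-location constraints. `run_closed_loop` is a user-supplied
    simulator that returns True on success."""
    successes = sum(run_closed_loop(sample_disturbance, Sigma_w)
                    for _ in range(n_runs))
    return successes / n_runs
```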
As shown, SNMPC manages to perform all motions successfully without violating any of the contact location constraints despite the disturbances, which satisfies the expected probability of constraint satisfaction (\(99\%\)) thanks to the contact location constraint back-off design in (17). On the contrary, NMPC violated the contact location constraints in \(48.3\%\) of all motions. Finally, HNMPC violated fewer constraints than NMPC, but still performed worse than SNMPC despite the robustness induced by shrinking the constraint set by hand. We highlight that although this heuristic works fairly well for the trot case (success rate of \(85.4\%\)), the same margin performed worse for the more agile bounding motion, with a success rate of \(67\%\), which dictates that the user needs to keep re-tuning the controller blindly every time the OCP parameters change in order to attain the desired empirical results. To quantify the safety margin induced by all controllers, we plot the mean and the \(2\sigma\) distance between the end-effector positions and the center of the contact surface in Fig. 2, showing that SNMPC induces the best safety margin, keeping the feet closest to the center of the contact surfaces. Finally, we plot the performance of the three controllers in Table IIb based on open-loop MPC, where we plug in the predicted open-loop state instead of the measured state during re-planning. We highlight that we did not use Monte-Carlo simulations in Table IIb due to the presence of a large number of failed trajectories in both the NMPC and HNMPC cases. Although SNMPC sacrifices a bit of performance for safety, the performance of the three controllers is comparable. This is due to the fact that the swing foot tracking of the controllers is affected by the real-time iteration scheme (non-full convergence of the OCPs) and the slack penalties on the constraints.

\begin{table} \begin{tabular}{c c c} \hline \hline & \multicolumn{2}{c}{**Kin-dyn weights**} \\ \cline{2-3} **Task** & Trot & Bound \\ \hline CoM tracking & 2e3 & 2e3 \\ linear momentum tracking & 2e2 & 2e2 \\ angular momentum tracking & 2e4 & 2e4 \\ base position tracking & 2e1 & 2e1 \\ base relative orientation regulation & 2e2 & 2e2 \\ joint positions tracking & 1e3 & 1e3 \\ base linear velocity tracking & 2e2 & 2e2 \\ base angular velocity tracking & 6e2 & 2e2 \\ joint velocities tracking & 8e1 & 6e1 \\ force regulation (x-direction) & 1e1 & 1e2 \\ force regulation (y-direction) & 1e1 & 2e1 \\ force regulation (z-direction) & 2e0 & 2e0 \\ joints acceleration regulation & 6e-3 & 1e-2 \\ \hline \hline \multicolumn{3}{c}{**Slack L1/L2 weights**} \\ \cline{2-3} **Constraint** & Trot & Bound \\ \hline friction cones constraint & 5e0/5e-1 & 1e3/0e0 \\ foot velocity equality constraint & 5e2/0e0 & 1e4/0e0 \\ CoM kin-dyn equality constraint & 0e0/5e1 & 0e0/5e1 \\ lin. mom. kin-dyn equality constraint & 0e0/1e1 & 0e0/1e1 \\ ang. mom. kin-dyn equality constraint & 0e0/1e1 & 0e0/1e1 \\ contact location chance-constraints (x-y) & 1e4/0e0 & 1e4/0e0 \\ contact location chance-constraint (x) & 5e4/0e0 & 3e4/0e0 \\ \hline \hline \end{tabular} \end{table} TABLE I: MPC cost weights.

### _Whole-body Simulation_ In this subsection, we test the effect of model mismatch between the kino-dynamic model and the whole-body model (i.e., the dynamic effects of the legs). Although in practice the legs are assumed to be massless for quadruped robots, their effect cannot be neglected for agile motions. Moreover, impulsive dynamics during impacts are also ignored, since they are usually hard to model.
Finally, since we are running real-time iterations, neither solver achieves full convergence in one Newton/Newton-type step. As a consequence, the previous effects can hinder the satisfaction of the contact location constraints. To test those effects, we report whole-body simulations of the quadruped robot Solo [26] in the Pybullet simulation environment [30] for the dynamic trot and bound motions shown in Fig. 3 and Fig. 4, respectively. The whole-body simulation runs with a discretization time of \(\Delta_{sim_{k}}=1\) ms, where the feedforward MPC trajectories are linearly interpolated. We apply the following state feedback control law to all controllers: \[\boldsymbol{\tau}_{k}=\boldsymbol{\tau}_{k}^{*}+\boldsymbol{K}_{p}(\boldsymbol{q}_{j_{k}}^{*}-\boldsymbol{q}_{j_{k}})+\boldsymbol{K}_{d}(\dot{\boldsymbol{q}}_{j_{k}}^{*}-\dot{\boldsymbol{q}}_{j_{k}}),\] (22) \[\boldsymbol{\tau}_{k}^{*}\triangleq\mathrm{RNEA}(\tilde{\boldsymbol{q}}_{k}^{*},\dot{\boldsymbol{q}}_{k}^{*},\ddot{\boldsymbol{q}}_{k}^{*})-\sum_{i=0}^{n}\boldsymbol{J}_{i}^{\top}(\tilde{\boldsymbol{q}}_{k}^{*})\boldsymbol{\lambda}_{i}^{*}.\] (23) The feedforward torques \(\boldsymbol{\tau}_{k}^{*}\) are computed using the Recursive Newton-Euler Algorithm (\(\mathrm{RNEA}\)) [31]. The joint position and velocity feedback gains are set to \(\boldsymbol{K}_{p}=2\cdot\mathbb{I}_{n_{j}\times n_{j}}\) and \(\boldsymbol{K}_{d}=0.15\cdot\mathbb{I}_{n_{j}\times n_{j}}\), respectively. The superscript \({}^{*}\) represents the optimized quantities coming from the MPC. For the trot motion, NMPC and HNMPC failed to complete the motion by breaking contact in the second step, as shown in Fig. 3(a). On the contrary, SNMPC manages to complete the motion successfully until the end (see Fig. 3(b)). We tested the effect of leg inertia for an agile bounding motion, where NMPC and HNMPC failed again during the second bounding step (see Fig. 4(a)), while SNMPC successfully completed the motion as shown in Fig. 4(b) (check the submission video).

TABLE II: Robustness and performance.

Fig. 2: Norm of the contact location deviations from the contact surface center using NMPC, HNMPC, and SNMPC.

Fig. 3: Comparison of whole-body trotting motion on non-coplanar stepping stones using NMPC and SNMPC.

Fig. 4: Comparison of whole-body bounding motion on non-coplanar stepping stones using NMPC and SNMPC.

## V Discussion and Conclusions In this work, we tackled the problem of kino-dynamic stochastic trajectory optimization subject to additive uncertainties on the dynamics and contact location chance-constraints. We designed contact location safety constraints by computing upper bounds (back-offs) that take into account the linearized propagated uncertainties along the planning horizon, assuming a Gaussian distribution of those uncertainties. The final solution is an approximate solution of the original SNMPC problem with a real-time iteration scheme. We compared the robustness of SNMPC against NMPC by running 1000 Monte-Carlo kino-dynamic simulations for agile trotting and bounding motions for the quadruped robot Solo on a challenging non-coplanar environment with small stepping stones, as well as whole-body simulations. SNMPC completed all the motions successfully without violating the contact location constraints, while NMPC violated them \(48.3\%\) of the time.
Moreover, we ran whole-body simulations in Pybullet to study the effects of the mismatch between the kino-dynamic and whole-body models; SNMPC was able to complete both motions successfully, while NMPC failed in both cases, showing the benefit of SNMPC over deterministic planning in safety-critical scenarios. Finally, we also compared the robustness of SNMPC against HNMPC. Since the robustness of SNMPC is induced by designing proper back-offs, one might ask: why not design such safety margins heuristically by shrinking the constraint set by hand? We argue that although this approach might work in practice for some cases, it does not provide an automatic procedure for designing such safety margins, leaving this to a process of trial and error. For instance, what should the proper safety margin be for a different agile motion plan without degrading performance? As shown in our empirical results, using the same heuristic safety margin for both trotting and bounding motions yielded different rates of successful motions. Moreover, this heuristic-based approach does not relate the magnitude of the back-offs to the uncertainty statistics, which might be available from previously collected data about the system in simulation or on the real robot. On the contrary, SNMPC addresses those issues in a methodological fashion by computing such bounds automatically; they vary at each point in time based on the expected uncertainty propagation along the horizon, the time-varying closed-loop feedback gain, and the desired probability of satisfying such constraints (17). One limitation of the current work is that it does not take into account contact mode uncertainties, which are of a combinatorial nature. We would like to explore tractable SNMPC formulations that take into account contact time uncertainties induced by uncertainties in the discrete contact modes, which would be beneficial for sequential manipulation and locomotion tasks. Moreover, we intend to test the current SNMPC scheme on real robot experiments in future work.
2309.05011
Depth of powers of edge ideals of Cohen-Macaulay trees
Let $I$ be the edge ideal of a Cohen-Macaulay tree of dimension $d$ over a polynomial ring $S = \mathrm{k}[x_1,\ldots,x_{d},y_1,\ldots,y_d]$. We prove that for all $t \ge 1$, $$\operatorname{depth} (S/I^t) = \operatorname{max} \{d -t + 1, 1 \}.$$
Nguyen Thu Hang, Truong Thi Hien, Thanh Vu
2023-09-10T12:16:44Z
http://arxiv.org/abs/2309.05011v1
# Depth of powers of edge ideals of Cohen-Macaulay trees ###### Abstract. Let \(I\) be the edge ideal of a Cohen-Macaulay tree of dimension \(d\) over a polynomial ring \(S=\mathrm{k}[x_{1},\ldots,x_{d},y_{1},\ldots,y_{d}]\). We prove that for all \(t\geq 1\), \[\operatorname{depth}(S/I^{t})=\max\{d-t+1,1\}.\] Key words and phrases:depth of powers; Cohen-Macaulay trees; Cohen-Macaulay bipartite graphs 2020 Mathematics Subject Classification: 05E40, 13D02, 13F55 ## 1. Introduction Let \(S=\mathrm{k}[x_{1},\ldots,x_{n}]\) be a standard graded polynomial ring over a field \(\mathrm{k}\). For a homogeneous ideal \(I\subset S\), we denote by \(\operatorname{dstab}(I)\) the _index of depth stability_ of \(I\), i.e., the smallest positive natural number \(k\) such that \(\operatorname{depth}S/I^{\ell}=\operatorname{depth}S/I^{k}\) for all \(\ell\geq k\). Such a number exists due to the result of Brodmann [Br]. Let \(G\) be a simple graph on the vertex set \(V(G)=\{x_{1},\ldots,x_{n}\}\) and edge set \(E(G)\subseteq V(G)\times V(G)\). The edge ideal of \(G\), denoted by \(I(G)\), is the squarefree monomial ideal generated by \(x_{i}x_{j}\) where \(\{x_{i},x_{j}\}\) is an edge of \(G\). In a fundamental paper [T], Trung found a combinatorial formula for \(\operatorname{dstab}(I(G))\) for large classes of graphs, including unicyclic graphs. In particular, when \(G\) is a tree, \(\operatorname{dstab}(I(G))=n-\epsilon_{0}(G)\) where \(\epsilon_{0}(G)\) is the number of leaves of \(G\). Though we know the limit depth and its index of depth stability, intermediate values for depth of powers of edge ideals were only known in very few cases, e.g., for path graphs by Balanescu and Cimpoeas [BC] and cycles and starlike trees by Minh, Trung, and the last author [MTV1]. While the depth of powers of edge ideals of general trees is very mysterious [MTV1], in this paper, we compute the depth of powers of edge ideals of all Cohen-Macaulay trees. In [V], Villarreal classified all Cohen-Macaulay trees. It says that a tree \(G\) is Cohen-Macaulay if and only if it is the whisker graph of another tree \(T\). In other words, \(V(G)=V(T)\cup\{y_{1},\ldots,y_{d}\}\) and \(E(G)=E(T)\cup\{\{x_{i},y_{i}\}\mid i=1,\ldots,d\}\), where \(T\) is an arbitrary tree on \(d\) vertices. While the structure of \(T\) could be very complicated, surprisingly, the depth of powers of \(G\) does not depend on \(T\). Namely, **Theorem 1.1**.: _Let \(I(G)\subset S=\mathrm{k}[x_{1},\ldots,x_{d},y_{1},\ldots,y_{d}]\) be the edge ideal of a Cohen-Macaulay tree \(G\) of dimension \(d\). Then for all \(t\geq 1\),_ \[\operatorname{depth}S/I(G)^{t}=\max\{d-t+1,1\}.\] We now outline the ideas to carry out this computation. First, we show a general upper bound for the depth of powers of trees. For that purpose, we introduce some notation. Let \(\mathbf{e}=\{e_{1},\ldots,e_{t}\}\) be a set of \(t\) distinct edges of \(G\). We consider \(\mathbf{e}\) itself as a subgraph of \(G\) with edge set \(E(\mathbf{e})=\{e_{1},\ldots,e_{t}\}\). We denote by \(N[\mathbf{e}]\) the closed neighbourhood of \(\mathbf{e}\) in \(G\) (see Subsection 2.2 for the definition of closed neighbourhood). Furthermore, \(G[\mathbf{e}]\) denotes the induced subgraph of \(G\) on \(N[\mathbf{e}]\) and \(G[\bar{\mathbf{e}}]\) denotes the induced subgraph of \(G\) on \(V(G)\setminus N[\mathbf{e}]\). **Lemma 1.2**.: _Let \(\mathbf{e}=\{e_{1},\ldots,e_{t}\}\) be a set of \(t\) distinct edges of a simple graph \(G\). 
Assume that \(G[\mathbf{e}]\) is bipartite, \(G[\bar{\mathbf{e}}]\) is weakly chordal, and \(\mathbf{e}\), viewed as a subgraph of \(G\), is connected. Let \(R\) be the polynomial ring over the vertex set of \(G[\bar{\mathbf{e}}]\). Then_ \[\operatorname{depth}S/I(G)^{t+1}\leq 1+\operatorname{depth}R/I(G[\bar{\mathbf{e}}]).\] For the lower bound, we prove the following general bound for the depth of powers of edge ideals of whisker trees. **Lemma 1.3**.: _Let \(T\) be a forest on \(n\) vertices. Let \(U\subset V(T)\) be a subset of vertices of \(T\), and \(G\) be the new forest obtained by adding a whisker to each vertex in \(U\). Namely, \(V(G)=V(T)\cup\{y_{i}\mid x_{i}\in U\}\) and \(E(G)=E(T)\cup\{\{x_{i},y_{i}\}\mid x_{i}\in U\}\). Then for all \(t\geq 1\), we have_ \[\operatorname{depth}(R/I(G)^{t})\geq|U|-t+1,\] _where \(R=\operatorname{k}[x_{1},\ldots,x_{n},y_{j}\mid x_{j}\in U]\) is the polynomial ring over the variables corresponding to vertices of \(G\)._ Our method allows us to compute the depth of powers of the edge ideal of a Cohen-Macaulay bipartite graph constructed by Banerjee and Mukundan [BM]. **Theorem 1.4**.: _Let \(S=\mathrm{k}[x_{1},\ldots,x_{d},y_{1},\ldots,y_{d}]\). For a fixed integer \(j\) such that \(1\leq j\leq d\), let \(G_{d,j}\) be a bipartite graph whose edge ideal is \(I(G_{d,j})=(x_{i}y_{i},x_{1}y_{i},x_{k}y_{j}\mid 1\leq i\leq d,1\leq k\leq j)\). Then_ \[\operatorname{depth}S/I(G_{d,j})^{s}=\max(1,d-j-s+3),\] _for all \(s\geq 2\)._ We structure the paper as follows. In Section 2, we set up the notation and provide some background. In Section 3, we prove Theorem 1.1. In Section 4, we prove Theorem 1.4. ## 2. Preliminaries In this section, we recall some definitions and properties concerning the depth of monomial ideals and edge ideals of graphs. The interested reader is referred to [BH, D] for more details. Throughout this section, we denote by \(S=\operatorname{k}[x_{1},\ldots,x_{n}]\) a standard graded polynomial ring over a field \(\operatorname{k}\). Let \(\mathfrak{m}=(x_{1},\ldots,x_{n})\) be the maximal homogeneous ideal of \(S\). ### Depth For a finitely generated graded \(S\)-module \(L\), the depth of \(L\) is defined to be \[\operatorname{depth}(L)=\min\{i\mid H^{i}_{\mathfrak{m}}(L)\neq 0\},\] where \(H^{i}_{\mathfrak{m}}(L)\) denotes the \(i\)-th local cohomology module of \(L\) with respect to \(\mathfrak{m}\). **Definition 2.1**.: A finitely generated graded \(S\)-module \(L\) is called Cohen-Macaulay if \(\operatorname{depth}L=\dim L\). A homogeneous ideal \(I\subseteq S\) is said to be Cohen-Macaulay if \(S/I\) is a Cohen-Macaulay \(S\)-module. The following two results about the depth of monomial ideals will be used frequently in the sequel. The first one is [R, Corollary 1.3]. The second one is [CHHKTT, Theorem 4.3]. **Lemma 2.2**.: _Let \(I\) be a monomial ideal and \(f\) a monomial such that \(f\notin I\). Then_ \[\operatorname{depth}S/I\leq\operatorname{depth}S/(I:f).\] **Lemma 2.3**.: _Let \(I\) be a monomial ideal and \(f\) a monomial. Then_ \[\operatorname{depth}S/I\in\{\operatorname{depth}(S/I:f),\operatorname{depth}(S/(I,f))\}.\] In the ideals of the form \(I+(f)\) and \(I:f\), some variables will be part of the minimal generators, and some will not appear in any of the minimal generators. A variable that does not divide any minimal generators of a monomial ideal \(J\) will be called a free variable of \(J\).
We have **Lemma 2.4**.: _Assume that \(I=J+(x_{a},\ldots,x_{b})\) and \(x_{b+1},\ldots,x_{n}\) are free variables of \(I\) where \(J\) is a monomial ideal in \(R=\operatorname{k}[x_{1},\ldots,x_{a-1}]\). Then_ \[\operatorname{depth}S/I=\operatorname{depth}R/J+(n-b).\] ### Graphs and their edge ideals Let \(G\) denote a finite simple graph over the vertex set \(V(G)=\{x_{1},\ldots,x_{n}\}\) and the edge set \(E(G)\). For a vertex \(x\in V(G)\), let the neighbours of \(x\) be the subset \(N_{G}(x)=\{y\in V(G)\mid\{x,y\}\in E(G)\}\). The closed neighbourhood of \(x\) is \(N_{G}[x]=N_{G}(x)\cup\{x\}\). A vertex \(x\) is called a leaf if it has a unique neighbour. An edge that contains a leaf is called a leaf edge. For a subset \(U\subset V(G)\), \(N_{G}(U)=\cup(N_{G}(x)\mid x\in U)\) and \(N_{G}[U]=\cup(N_{G}[x]\mid x\in U)\). When it is clear from the context, we drop the subscript \(G\) from the notation \(N_{G}\). Let \(\mathbf{e}=\{e_{1},\ldots,e_{t}\}\) be a set of \(t\) distinct edges of \(G\). We denote by \(N[\mathbf{e}]\) the closed neighbourhood of \(\mathbf{e}\) in \(G\), \[N[\mathbf{e}]=\cup(N[x]\mid x\text{ is a vertex of }e_{j}\text{ for some }j=1,\ldots,t).\] Furthermore, \(G[\mathbf{e}]\) denotes the induced subgraph of \(G\) on \(N[\mathbf{e}]\) and \(G[\bar{\mathbf{e}}]\) denotes the induced subgraph of \(G\) on \(V(G)\setminus N[\mathbf{e}]\). A graph \(H\) is called a subgraph of \(G\) if \(V(H)\subseteq V(G)\) and \(E(H)\subseteq E(G)\). Let \(U\subset V(G)\) be a subset of vertices of \(G\). The induced subgraph of \(G\) on \(U\), denoted by \(G[U]\), is the graph such that \(V(G[U])=U\) and for any vertices \(u,v\in U\), \(\{u,v\}\in E(G[U])\) if and only if \(\{u,v\}\in E(G)\). A set \(\mathbf{e}=\{e_{1},\ldots,e_{t}\}\) of \(t\) distinct edges of \(G\) is an induced matching if \(e_{i}\cap e_{j}=\emptyset\) for all \(i\neq j\in\{1,\ldots,t\}\) and \(\mathbf{e}\) is an induced subgraph of \(G\). A tree is a connected acyclic graph. A cycle of length \(m\) in \(G\) is a sequence of distinct vertices \(x_{1},\ldots,x_{m}\) such that \(\{x_{1},x_{2}\},\ldots,\{x_{m-1},x_{m}\},\{x_{m},x_{1}\}\) are edges of \(G\). A graph \(G\) is bipartite if its vertex set has a decomposition \(V(G)=U\cup V\) such that \(E(G)\subset U\times V\). It is called a complete bipartite graph if \(E(G)=U\times V\), denoted by \(K_{U,V}\). A graph \(G\) is called weakly chordal if \(G\) and its complement do not contain an induced cycle of length at least \(5\). The edge ideal of \(G\) is defined to be \[I(G)=(x_{i}x_{j}\mid\{x_{i},x_{j}\}\in E(G))\subseteq S.\] A graph \(G\) is called Cohen-Macaulay if \(I(G)\) is Cohen-Macaulay. For simplicity, we often write \(x_{i}\in G\) (resp. \(x_{i}x_{j}\in G\)) instead of \(x_{i}\in V(G)\) (resp. \(\{x_{i},x_{j}\}\in E(G)\)). By abuse of notation, we also call \(x_{i}x_{j}\in I(G)\) an edge of \(G\). We have the following result [Mo, Lemma 2.10]. **Lemma 2.5**.: _Suppose that \(G\) is a graph and \(xy\) is a leaf edge of \(G\). Then for all \(t\geq 2\), we have_ \[I(G)^{t}:(xy)=I(G)^{t-1}.\] As a consequence, we have the following well-known result. **Lemma 2.6**.: _Let \(G\) be a graph. Assume that \(G\) has a leaf edge. Then the sequence \(\operatorname{depth}S/I(G)^{t}\) is non-increasing._ Proof.: Let \(xy\) be a leaf edge of \(G\). By Lemma 2.5 and Lemma 2.2, we have \[\operatorname{depth}S/I(G)^{t}\leq\operatorname{depth}S/(I(G)^{t}:xy)=\operatorname{depth}S/I(G)^{t-1},\] for all \(t\geq 2\). The conclusion follows.
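Lemma 2.5 is easy to check computationally on small examples, using the standard fact that for a monomial ideal \(I\) and a monomial \(f\), the colon ideal \(I:f\) is generated by the quotients \(m/\gcd(m,f)\) over the monomial generators \(m\) of \(I\). The following self-contained Python sketch (our own illustrative code, not from the paper) builds the edge ideal of the whisker tree over the path \(x_{1}x_{2}x_{3}\) and verifies \(I(G)^{t}:(x_{1}y_{1})=I(G)^{t-1}\) for \(t=2,3\) by comparing minimal generators.

```python
from functools import reduce
from itertools import combinations_with_replacement

def mono(*variables):
    """Monomial as an exponent dictionary, e.g. mono('x1', 'y1') = x1*y1."""
    m = {}
    for v in variables:
        m[v] = m.get(v, 0) + 1
    return m

def mul(a, b):
    """Product of two monomials."""
    out = dict(a)
    for v, e in b.items():
        out[v] = out.get(v, 0) + e
    return out

def quotient(m, f):
    """The monomial m : f = m / gcd(m, f), exponentwise max(e_m - e_f, 0)."""
    out = {}
    for v in set(m) | set(f):
        e = m.get(v, 0) - f.get(v, 0)
        if e > 0:
            out[v] = e
    return out

def divides(a, b):
    return all(b.get(v, 0) >= e for v, e in a.items())

def minimal_gens(gens):
    """Minimal monomial generators, as a set of hashable exponent records."""
    uniq = list({frozenset(g.items()): g for g in gens}.values())
    return {frozenset(g.items()) for g in uniq
            if not any(h != g and divides(h, g) for h in uniq)}

def power_gens(gens, t):
    """Generators of I^t: all products of t generators of I."""
    return [reduce(mul, combo)
            for combo in combinations_with_replacement(gens, t)]

# Whisker tree G over the path T: x1 - x2 - x3 (Villarreal's construction).
edges = [('x1', 'x2'), ('x2', 'x3'), ('x1', 'y1'), ('x2', 'y2'), ('x3', 'y3')]
I = [mono(u, v) for u, v in edges]
leaf_edge = mono('x1', 'y1')  # y1 is a leaf, so x1*y1 is a leaf edge

for t in (2, 3):
    lhs = minimal_gens([quotient(g, leaf_edge) for g in power_gens(I, t)])
    rhs = minimal_gens(power_gens(I, t - 1))
    print(t, lhs == rhs)  # Lemma 2.5 predicts: True, True
```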
### Bipartite completion and a colon ideal Let \(H\) be a connected bipartite graph with the partition \(V(H)=U\cup V\). The bipartite completion of \(H\), denoted by \(\widetilde{H}\), is the complete bipartite graph \(K_{U,V}\). We have **Lemma 2.7**.: _Let \(\mathbf{e}=\{e_{1},\ldots,e_{t}\}\) be a set of \(t\) distinct edges of \(G\). Assume that \(\mathbf{e}\), viewed as a subgraph of \(G\), is connected and \(G[\mathbf{e}]\) is bipartite. Then_ \[I(G)^{t+1}:(e_{1}\cdots e_{t})=I(H),\] _where \(H\) is a graph obtained from \(G\) by bipartite completing \(G[\mathbf{e}]\), i.e., \(V(H)=V(G)\) and \(E(H)=E(G)\cup E(\widetilde{G[\mathbf{e}]})\)._ Proof.: We prove by induction on \(t\). The base case \(t=1\) follows from [B, Theorem 6.7]. Now, assume that the statement holds for \(t-1\). There always exists an edge \(e_{j}\in\{e_{1},\ldots,e_{t}\}\) so that \(\mathbf{e}\setminus\{e_{j}\}\) is connected. We may assume that \(j=t\). Let \(\mathbf{e}^{\prime}=\{e_{1},\ldots,e_{t-1}\}\). By induction, we have \[I(G)^{t}:(e_{1}\cdots e_{t-1})=I(H^{\prime}), \tag{2.1}\] where \(E(H^{\prime})=E(G)\cup E(\widetilde{G[\mathbf{e}^{\prime}]})\). By [B, Theorem 6.7], \(I(G)^{t+1}:(e_{1}\cdots e_{t})\supseteq I(H^{\prime})^{2}:e_{t}\) (see also [MTV1, Lemma 2.9]). Let \(N_{G}[\mathbf{e}]=U\cup V\) be the partition of \(N_{G}[\mathbf{e}]\). Assume that \(e_{t}=uv\) with \(u\in U\) and \(v\in V\). Since \(\mathbf{e}\) is connected, \(\operatorname{supp}\mathbf{e}^{\prime}\cap\{u,v\}\neq\emptyset\). We may assume that \(u\in\operatorname{supp}\mathbf{e}^{\prime}\). Hence, \[v\in N_{G}[\mathbf{e}^{\prime}]\text{ and }N_{G}[\mathbf{e}]=N_{G}[\mathbf{e}^{\prime}]\cup N_{G}(v). \tag{2.2}\] In particular, \(N_{G}[\mathbf{e}^{\prime}]=U^{\prime}\cup V\) and \(U=U^{\prime}\cup N_{G}(v)\). Since the induced subgraph of \(H^{\prime}\) on \(N_{G}[\mathbf{e}^{\prime}]\) is the complete bipartite graph \(K_{U^{\prime},V}\), we have \(N_{H^{\prime}}(u)=V\) and \(N_{H^{\prime}}(v)=U\). Thus, \(I(H^{\prime})^{2}:e_{t}\supseteq I(K_{U,V})\). The conclusion follows from [B, Theorem 6.7] as any new edge of \(H\) must have support in \(N[\mathbf{e}]\). **Remark 2.8**.: The notion of bipartite completion of a bipartite subgraph of a simple graph \(G\) was introduced and studied in [MTV2]. It plays an important role in the study of depth of symbolic powers of \(I(G)\). ### Projective dimension of edge ideals of weakly chordal graphs We note that the colon ideals of powers of edge ideals of trees by products of edges are edge ideals of weakly chordal graphs. Their projective dimension can be computed via the notion of strongly disjoint families of complete bipartite subgraphs, introduced by Kimura [K]. For a graph \(G\), we consider all families of (non-induced) subgraphs \(B_{1},\ldots,B_{g}\) of \(G\) such that 1. each \(B_{i}\) is a complete bipartite graph for \(1\leq i\leq g\), 2. the graphs \(B_{1},\ldots,B_{g}\) have pairwise disjoint vertex sets, 3. there exists an induced matching \(e_{1},\ldots,e_{g}\) of \(G\) with \(e_{i}\in E(B_{i})\) for \(1\leq i\leq g\). Such a family is termed a strongly disjoint family of complete bipartite subgraphs. We define \[d(G)=\max(\sum_{i=1}^{g}|V(B_{i})|-g),\] where the maximum is taken over all the strongly disjoint families of complete bipartite subgraphs \(B_{1},\ldots,B_{g}\) of \(G\). We have the following [NV1, Theorem 7.7]. **Theorem 2.9**.: _Let \(G\) be a weakly chordal graph with at least one edge. Then_ \[\operatorname{pd}(S/I(G))=d(G).\]
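The bipartite completion in Lemma 2.7 is straightforward to compute. A small illustrative sketch using networkx (our own code) is given below; for the path \(x_{1}x_{2}x_{3}x_{4}\) and \(\mathbf{e}=\{x_{2}x_{3}\}\), the returned graph gains exactly the edge \(x_{1}x_{4}\), matching the even-connection description of \(I(G)^{2}:(x_{2}x_{3})\) in [B, Theorem 6.7].

```python
import networkx as nx

def closed_neighborhood(G, edge_set):
    """N[e]: the endpoints of the chosen edges and all their neighbors."""
    verts = {v for e in edge_set for v in e}
    return verts | {w for v in verts for w in G.neighbors(v)}

def bipartite_completion(G, edge_set):
    """Return H = G plus all edges of the complete bipartite graph on
    the two sides of the (assumed connected and bipartite) induced
    subgraph G[N[e]], as in Lemma 2.7."""
    Ge = G.subgraph(closed_neighborhood(G, edge_set))
    U, V = nx.bipartite.sets(Ge)  # raises if G[e] is not bipartite
    H = G.copy()
    H.add_edges_from((u, v) for u in U for v in V)
    return H

P4 = nx.path_graph(["x1", "x2", "x3", "x4"])
H = bipartite_completion(P4, [("x2", "x3")])
print(list(nx.difference(H, P4).edges()))  # [('x1', 'x4')]
```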
## 3. Depth of powers of edge ideals of Cohen-Macaulay trees In this section, we compute the depth of powers of edge ideals of Cohen-Macaulay trees. First, we prove a general bound for the depth of powers of edge ideals of graphs. Recall that for a set \(\mathbf{e}=\{e_{1},\ldots,e_{t}\}\) of edges of \(G\), \(G[\mathbf{e}]\) denotes the induced subgraph of \(G\) on \(N[\mathbf{e}]\) and \(G[\bar{\mathbf{e}}]\) denotes the induced subgraph of \(G\) on \(V(G)\setminus N[\mathbf{e}]\). **Lemma 3.1**.: _Let \(\mathbf{e}=\{e_{1},\ldots,e_{t}\}\) be a set of \(t\) distinct edges of a simple graph \(G\). Assume that \(G[\mathbf{e}]\) is bipartite, \(G[\bar{\mathbf{e}}]\) is weakly chordal, and \(\mathbf{e}\), viewed as a subgraph of \(G\), is connected. Let \(R\) be the polynomial ring over the vertex set of \(G[\bar{\mathbf{e}}]\). Then_ \[\operatorname{depth}S/I(G)^{t+1}\leq 1+\operatorname{depth}R/I(G[\bar{\mathbf{e}}]).\] Proof.: Let \(K=I(G[\bar{\mathbf{e}}])\) and \(f=e_{1}\cdots e_{t}\). By Lemma 2.7, \(I(G)^{t+1}:f=I(H)\) where \(E(H)=E(G)\cup E(\widetilde{G[\mathbf{e}]})\). If \(G[\bar{\mathbf{e}}]\) has no edges then \(K\) is the zero ideal and \(\operatorname{depth}R/K=|V(G[\bar{\mathbf{e}}])|\). The conclusion then follows from [K, Theorem 1.1]. Now assume that \(G[\bar{\mathbf{e}}]\) has at least one edge. Since \(G[\bar{\mathbf{e}}]\) is weakly chordal, by Theorem 2.9, there exists a strongly disjoint family \(\mathcal{B}=\{B_{1},\ldots,B_{g}\}\) of complete bipartite subgraphs of \(G[\bar{\mathbf{e}}]\) such that \(\operatorname{pd}(R/K)=\sum_{i=1}^{g}|V(B_{i})|-g\). Then \(\mathcal{B}\cup\widetilde{G[\mathbf{e}]}\) is a strongly disjoint family of complete bipartite subgraphs of \(H\), because \(e_{1}\in\widetilde{G[\mathbf{e}]}\) together with the induced matching \(e_{i}^{\prime}\in B_{i}\) form an induced matching of \(H\). By [K, Theorem 1.1], \[\operatorname{pd}S/I(H)\geq\operatorname{pd}(R/K)+|V(\widetilde{G[\mathbf{e}]})|-1.\] By the Auslander-Buchsbaum formula, we deduce that \(\operatorname{depth}S/I(H)\leq\operatorname{depth}R/K+1\). The conclusion then follows from Lemma 2.2. As a corollary, we deduce an upper bound for the depth of powers of edge ideals of Cohen-Macaulay trees. By the result of Villarreal [V], we may assume that \[V(G)=\{x_{1},\ldots,x_{d},y_{1},\ldots,y_{d}\}\text{ and }E(G)=E(T)\cup\{\{x_{1},y_{1}\},\ldots,\{x_{d},y_{d}\}\},\] where \(T\) is the induced subgraph of \(G\) on \(\{x_{1},\ldots,x_{d}\}\), which is a tree on \(d\) vertices. **Corollary 3.2**.: _Let \(G\) be a Cohen-Macaulay tree of dimension \(d\). Then for any \(t\) such that \(2\leq t\leq d-2\),_ \[\operatorname{depth}S/I(G)^{t+1}\leq d-t.\] Proof.: Let \(T_{1}\) be any connected subtree of \(T\) with \(|E(T_{1})|=t\). We may assume that \(V(T_{1})=\{x_{1},\ldots,x_{t+1}\}\) and \[N_{G}[T_{1}]=\{x_{1},\ldots,x_{a},y_{1},\ldots,y_{t+1}\}\] for some \(a\geq t+1\). Let \(H_{2}\) be the induced subgraph of \(G\) on \(V(G)\setminus N_{G}[V(T_{1})]=\{x_{a+1},\ldots,x_{d},y_{t+2},\ldots,y_{d}\}\). Then \(H_{2}\) is the whisker graph of \(T_{2}\), the induced subgraph of \(T\) on \(\{x_{a+1},\ldots,x_{d}\}\), and \(\{y_{t+2},\ldots,y_{a}\}\) are isolated vertices of \(H_{2}\). By [V], \(I(H_{2})\) is Cohen-Macaulay of dimension \(d-t-1\). By Lemma 3.1, we have \[\operatorname{depth}S/I(G)^{t+1}\leq 1+\operatorname{depth}R/I(H_{2})=d-t.\] The conclusion follows. We now prove the following lower bound for the depth of powers of edge ideals of general whisker trees.
**Lemma 3.3**.: _Let \(T\) be a forest on \(n\) vertices. Let \(U\subset V(T)\) be a subset of vertices of \(T\), and \(G\) be the new forest obtained by adding a whisker to each vertex in \(U\). Namely, \(V(G)=V(T)\cup\{y_{i}\mid x_{i}\in U\}\) and \(E(G)=E(T)\cup\{\{x_{i},y_{i}\}\mid x_{i}\in U\}\). Then for all \(t\geq 1\), we have_ \[\operatorname{depth}(R/I(G)^{t})\geq|U|-t+1,\] _where \(R=\operatorname{k}[x_{1},\ldots,x_{n},y_{j}\mid x_{j}\in U]\) is the polynomial ring over the variables corresponding to vertices of \(G\)._ Proof.: By [23, Theorem 1.1], we may assume that \(T\) is a tree on \(n\) vertices. Furthermore, we may assume that \(U=\{x_{1},\ldots,x_{d}\}\). Hence, \(V(G)=\{x_{1},\ldots,x_{n},y_{1},\ldots,y_{d}\}\) and \(E(G)=E(T)\cup\{x_{i}y_{i}\mid i=1,\ldots,d\}\). Since \(G\) is a forest, \(\operatorname{depth}R/I(G)^{t}\geq 1\) for all \(t\geq 1\). Hence, the statement holds trivially when \(t\geq d\). Thus, we may assume that \(1\leq t\leq d\). We prove by induction on the triple \((n,d,t)\) ordered by the lexicographic order the following \[\operatorname{depth}R/(I(G)^{t}+I(H))\geq d-t+1, \tag{3.1}\] for all \(t\leq d\) and all subgraphs \(H\) of \(G\) with \(E(H)\subseteq\{x_{1}y_{1},\ldots,x_{d}y_{d}\}\). For ease of reading, we divide the proof into several steps. **Step 1.** The base case \(d=1\) is clear, as \(\mathfrak{m}\), the maximal homogeneous ideal of \(R\), is not an associated prime of \(I(G)^{t}+I(H)\). **Step 2.** The base case \(t=1\). The statement follows from [26, Theorem 4.1] as a maximal independent set of \(G\) must contain either \(x_{i}\) or \(y_{i}\) for each \(i=1,\ldots,d\). **Step 3.** Reduction to the case \(E(H)=\{x_{1}y_{1},\ldots,x_{d}y_{d}\}\). Assume that \(E(H)\) is a proper subset of \(\{x_{1}y_{1},\ldots,x_{d}y_{d}\}\), say \(x_{d}y_{d}\notin E(H)\). Let \(J=I(G)^{t}+I(H)\). By Lemma 2.3, Lemma 2.5, and induction, it suffices to prove that \(\operatorname{depth}R/(J+(x_{d}y_{d}))\geq d-t+1\). Thus, we may assume that \(E(H)=\{x_{1}y_{1},\ldots,x_{d}y_{d}\}\). Eq. (3.1) becomes \[\operatorname{depth}R/(I(T)^{t}+(x_{1}y_{1},\ldots,x_{d}y_{d}))\geq d-t+1. \tag{3.2}\] **Step 4.** Induction step. Assume that Eq. (3.2) holds for all tuples \((n^{\prime},d^{\prime},t^{\prime})\) strictly smaller than \((n,d,t)\) in the lexicographic order. Let \(J=I(T)^{t}+(x_{1}y_{1},\ldots,x_{d}y_{d})\). Let \(u\) be a leaf of \(T\) and \(v\) the unique neighbour of \(u\) in \(T\). There are two cases. **Case 1.** \(u\notin\{x_{1},\ldots,x_{d}\}\). Since \(J+(u)\) is of the same form but in a smaller ring, by induction, we have \(\operatorname{depth}R/(J+(u))\geq d-t+1\). By Lemma 2.3, it suffices to prove that \[\operatorname{depth}R/(J:u)\geq d-t+1.\] But \(K=J:u=vI(T)^{t-1}+(x_{1}y_{1},\ldots,x_{d}y_{d})\). Since \(\operatorname{depth}R/(K+(v))\geq d\), by Lemma 2.3, it suffices to prove that \(\operatorname{depth}R/(K:v)\geq d-t+1\). There are two subcases. Subcase 1.a. \(v\notin\{x_{1},\ldots,x_{d}\}\). Then \(K:v=I(T)^{t-1}+(x_{1}y_{1},\ldots,x_{d}y_{d})\). Hence, by induction on \(t\), we have \(\operatorname{depth}R/(K:v)\geq d-(t-1)+1\). Subcase 1.b. \(v\in\{x_{1},\ldots,x_{d}\}\), say \(v=x_{d}\). Then \[K:v=I(T)^{t-1}+(x_{1}y_{1},\ldots,x_{d-1}y_{d-1})+(y_{d}).\] Hence, by induction, we have \(\operatorname{depth}R/(K:v)\geq(d-1)-(t-1)+1=d-t+1\). **Case 2.** \(u\in\{x_{1},\ldots,x_{d}\}\), say \(u=x_{d}\). Let \(T_{1}\) be the subtree of \(T\) restricted to \(V(T)\setminus\{u\}\). Then \(J+(u)=I(T_{1})^{t}+(x_{1}y_{1},\ldots,x_{d-1}y_{d-1})+(u)\).
In particular, \(y_{d}\) is a free variable. Hence, \(\operatorname{depth}R/(J+(u))=1+\operatorname{depth}R_{1}/(I(T_{1})^{t}+(x_{1}y_{1},\ldots,x_{d-1}y_{d-1}))\), where \(R_{1}=\operatorname{k}[x_{i},y_{j}\mid i,j\neq d]\). By induction, we have \(\operatorname{depth}R/(J+(u))\geq 1+(d-1)-t+1=d-t+1\). By Lemma 2.3, it suffices to prove that \(\operatorname{depth}R/(J:u)\geq d-t+1\). We have \[J:u=vI(T)^{t-1}+(x_{1}y_{1},\ldots,x_{d-1}y_{d-1})+(y_{d}).\] Let \(R_{1}=\operatorname{k}[x_{1},\ldots,x_{n},y_{1},\ldots,y_{d-1}]\) and \(K=vI(T)^{t-1}+(x_{1}y_{1},\ldots,x_{d-1}y_{d-1})\). Then \(\operatorname{depth}R/(J:u)=\operatorname{depth}R_{1}/K\). By Lemma 2.3, it suffices to prove that \(\operatorname{depth}R_{1}/(K:v)\geq d-t+1\). Again, we have two subcases. Subcase 2.a. \(v\notin\{x_{1},\ldots,x_{d-1}\}\). Then \(K:v=I(T)^{t-1}+(x_{1}y_{1},\ldots,x_{d-1}y_{d-1})\). Hence, by induction, \(\operatorname{depth}R_{1}/(K:v)\geq(d-1)-(t-1)+1\). Subcase 2.b. \(v\in\{x_{1},\ldots,x_{d-1}\}\), say \(v=x_{d-1}\). Then \(K:v=I(T)^{t-1}+(x_{1}y_{1},\ldots,x_{d-2}y_{d-2})+(y_{d-1}).\) Let \(T^{\prime}\) be the induced subgraph of \(T\) on \(V(T)\setminus\{x_{d}\}\). If \(t=2\) then \[K:v=I(T^{\prime})+(x_{1}y_{1},\ldots,x_{d-2}y_{d-2},x_{d-1}x_{d})+(y_{d-1}).\] The conclusion follows from [11, Theorem 4.1]. Now, assume that \(t\geq 3\). Let \(R_{2}=\operatorname{k}[x_{1},\ldots,x_{n},y_{1},\ldots,y_{d-2}]\) and \(L=I(T)^{t-1}+(x_{1}y_{1},\ldots,x_{d-2}y_{d-2})\). We need to prove that \(\operatorname{depth}R_{2}/L\geq d-t+1\). Note that \(I(T)=I(T^{\prime})+(x_{d-1}x_{d})\) and \(x_{d}\) can be considered as a whisker at \(x_{d-1}\). By Lemma 2.3, we have \[\operatorname{depth}R_{2}/L\in\{\operatorname{depth}(R_{2}/L:(x_{d-1}x_{d})),\operatorname{depth}(R_{2}/L+(x_{d-1}x_{d}))\}.\] By Lemma 2.5, \(L:x_{d-1}x_{d}=I(T)^{t-2}+(x_{1}y_{1},\ldots,x_{d-2}y_{d-2})\). Hence, by induction, we have \(\operatorname{depth}R_{2}/(L:x_{d-1}x_{d})\geq(d-2)-(t-2)+1=d-t+1\). Finally, we have \(L+(x_{d-1}x_{d})=I(T^{\prime})^{t-1}+(x_{1}y_{1},\ldots,x_{d-2}y_{d-2},x_{d-1}x_{d})\), with \(x_{d}\) now playing the role of \(y_{d-1}\). By induction, we also have \(\operatorname{depth}R_{2}/(L+(x_{d-1}x_{d}))\geq d-1-(t-1)+1=d-t+1\). That concludes the proof of the Lemma. We are ready for the main result of this section. **Theorem 3.4**.: _Let \(I(G)\subset S=\operatorname{k}[x_{1},\ldots,x_{d},y_{1},\ldots,y_{d}]\) be the edge ideal of a Cohen-Macaulay tree \(G\) of dimension \(d\). Then for all \(t\geq 1\),_ \[\operatorname{depth}S/I(G)^{t}=\max\{d-t+1,1\}.\] Proof.: The conclusion follows from the result of Villarreal [V], Corollary 3.2, and Lemma 3.3. ## 4. Depth of powers of edge ideals of some Cohen-Macaulay bipartite graphs In this section, we study the depth of powers of some Cohen-Macaulay bipartite graphs. First, we consider a graph constructed by Banerjee and Mukundan [BM]. **Theorem 4.1**.: _Let \(S=\mathrm{k}[x_{1},\ldots,x_{d},y_{1},\ldots,y_{d}]\). For a fixed integer \(j\) such that \(1\leq j\leq d\), let \(G_{d,j}\) be a bipartite graph whose edge ideal is \(I(G_{d,j})=(x_{i}y_{i},x_{1}y_{i},x_{k}y_{j}\mid 1\leq i\leq d,1\leq k\leq j)\). Then_ \[\operatorname{depth}S/I(G_{d,j})^{s}=\max(1,d-j-s+3),\] _for all \(s\geq 2\)._ Proof.: Fix \(j\); we proceed by induction on \(d\). The case \(d=j\) follows from [BM, Example 4.2]. Now assume that the statement holds for \(d-1\). Let \(s\) be an exponent such that \(2\leq s\leq d-j+2\).
Let \(e_{1}=x_{1}y_{j},e_{2}=x_{1}y_{j+1},\ldots,e_{s-1}=x_{1}y_{j+s-2}\) and \(\mathbf{e}=\{e_{1},\ldots,e_{s-1}\}\). We have that \(G_{d,j}[\bar{\mathbf{e}}]\) is the disjoint union of \(d-j-s+2\) edges. By Lemma 3.1, \[\operatorname{depth}S/I(G_{d,j})^{s}\leq 1+\operatorname{depth}R/I(G_{d,j}[\bar{\mathbf{e}}])=d-j-s+3.\] In particular, \(\operatorname{depth}S/I(G_{d,j})^{d-j+2}\leq 1\). By [T, Theorem 4.4], we deduce that \(\operatorname{depth}S/I(G_{d,j})^{t}=1\) for all \(t\geq d-j+2\). For the lower bound, the proof is similar to that of Lemma 3.3. Let \(H\) be the induced subgraph of \(G_{d,j}\) on \(\{x_{1},\ldots,x_{j},y_{1},\ldots,y_{d}\}\). Then \[I(H) =(x_{1}y_{i},x_{k}y_{k},x_{k}y_{j}\mid 1\leq i\leq d,2\leq k\leq j)\] \[I(G_{d,j}) =I(H)+(x_{i}y_{i}\mid i\geq j+1).\] For ease of reading, we divide the proof into several steps. **Step 1.** With an argument similar to Step 3 of the proof of Lemma 3.3, we reduce to proving the following lower bound \[\operatorname{depth}S/(I(H)^{s}+(x_{i}y_{i}\mid i\geq j+1))\geq d-j-s+3. \tag{4.1}\] **Step 2.** Let \(J=I(H)^{s}+(x_{i}y_{i}\mid i\geq j+1)\). By Lemma 2.3, we will prove the bound for \(\operatorname{depth}S/(J+(y_{d}))\) and \(\operatorname{depth}S/(J:y_{d})\). We have \[J+(y_{d})=I(H)^{s}+(x_{1}y_{1},x_{i}y_{i}\mid j+1\leq i\leq d-1)+(y_{d}).\] Hence, \(x_{d}\) is a free variable of \(J+(y_{d})\). By induction, we have \[\operatorname{depth}S/(J+(y_{d}))\geq 1+((d-1)-j-s+3)=d-j-s+3.\] Furthermore, \[J:y_{d}=x_{1}I(H)^{s-1}+(x_{i}y_{i}\mid j+1\leq i\leq d-1)+(x_{d}).\] Let \(K=x_{1}I(H)^{s-1}+(x_{i}y_{i}\mid j+1\leq i\leq d-1)\) and \(R=\operatorname{k}[x_{1},\ldots,x_{d-1},y_{1},\ldots,y_{d-1}]\). Then, \(\operatorname{depth}S/(J:y_{d})=\operatorname{depth}R/K\). Since \(K+(x_{1})=(x_{i}y_{i}\mid j+1\leq i\leq d-1)+(x_{1})\), by Lemma 2.3, it remains to bound \(\operatorname{depth}R/(K:x_{1})\). We have \[K:x_{1}=I(H)^{s-1}+(x_{i}y_{i}\mid j+1\leq i\leq d-1).\] Hence, by induction, \[\operatorname{depth}R/(K:x_{1})\geq d-1-(s-1)-j+3=d-j-s+3.\] That concludes the proof of the Theorem. In [BM], Banerjee and Mukundan said that the depth sequence of powers of the edge ideal of a graph \(G\) has a drop at \(k\) if \(\operatorname{depth}S/I^{k}-\operatorname{depth}S/I^{k+1}>1\). They then constructed a Cohen-Macaulay bipartite graph with an arbitrary number of drops in its depth sequence. Nonetheless, their construction is via sums of ideals, and hence, the resulting Cohen-Macaulay bipartite graph \(G\) with \(k\) drops has \(k\) connected components. By the result of Nguyen and the last author [NV2], we may reduce the computation of depth of powers of edge ideals of disconnected graphs to that of their connected components. By our computational experiments, we believe that we may construct a connected Cohen-Macaulay bipartite graph with an arbitrary number of drops. To conclude, we provide an example of a connected Cohen-Macaulay bipartite graph with two drops in its depth sequence of powers. **Theorem 4.2**.: _Assume that \(d\geq 5\). Let \(S=\mathrm{k}[x_{1},\dots,x_{d},y_{1},\dots,y_{d}]\). For each \(a\) such that \(2\leq a\leq d-1\), let \(G_{d,a}\) be a bipartite graph whose edge ideal is_ \[I(G_{d,a})=x_{1}(y_{1},\dots,y_{d})+(x_{i}y_{j}\mid 2\leq i\leq j\leq a)+(x_{i}y_{j}\mid a+1\leq i\leq j\leq d).\] _Then_ \[\operatorname{depth}S/I(G_{d,a})^{t}=\begin{cases}d&\text{if }t=1\\ \min(a,d-a+1)&\text{if }t=2\\ 1&\text{if }t\geq 3.\end{cases}\] Proof.: Note that \(G_{d,a}\) is a Cohen-Macaulay bipartite graph of dimension \(d\) by [EV, HH].
By Lemma 2.7, \[I(G_{d,a})^{3}:(x_{1}y_{a}x_{1}y_{d})=I(K_{d,d}).\] By Lemma 2.2 and Lemma 2.6, \(\operatorname{depth}S/I(G_{d,a})^{t}=1\) for all \(t\geq 3\). Thus, it remains to determine \(\operatorname{depth}S/I(G_{d,a})^{2}\). When \(\mathbf{e}=\{x_{1}y_{a}\}\), \(G_{d,a}[\bar{\mathbf{e}}]\) is the empty graph on \(d-a\) vertices \(\{x_{a+1},\dots,x_{d}\}\). When \(\mathbf{e}=\{x_{1}y_{d}\}\), \(G_{d,a}[\bar{\mathbf{e}}]\) is the empty graph on \(a-1\) vertices \(\{x_{2},\dots,x_{a}\}\). Hence, by Lemma 3.1, \[\operatorname{depth}S/I(G_{d,a})^{2}\leq\min(a,d-a+1).\] For the lower bound, we prove by induction on the tuple \((d,a)\). We divide the proof into several steps. **Step 1.** \(\operatorname{depth}S/(I(G_{d,a})^{2}+(y_{d}))\geq\min(a,d-a+1)\). Since \(I(G_{d,a})^{2}+(y_{d})=I(G_{d-1,a})^{2}+(y_{d})\) and \(x_{d}\) is a free variable of \(I(G_{d,a})^{2}+(y_{d})\), by induction we have \[\operatorname{depth}S/(I(G_{d,a})^{2}+(y_{d}))\geq 1+\min(a,d-1-a+1)\geq\min(a,d-a+1).\] **Step 2.** \(\operatorname{depth}S/(I(G_{d,a})^{2}+(x_{1}))\geq\min(a,d-a+1)\). We have \(I(G_{d,a})^{2}+(x_{1})=(I(H_{1})+I(H_{2}))^{2}+(x_{1})\), where \[I(H_{1}) =(x_{i}y_{j}\mid 2\leq i\leq j\leq a),\] \[I(H_{2}) =(x_{i}y_{j}\mid a+1\leq i\leq j\leq d)\] are Cohen-Macaulay ideals of dimensions \(a-1\) and \(d-a\), respectively. Since \(y_{1}\) is a free variable of \(I(G_{d,a})^{2}+(x_{1})\), by [15, Theorem 1.1], \[\operatorname{depth}S/(I(G_{d,a})^{2}+(x_{1}))=1+\min( \operatorname{depth}R_{1}/I(H_{1})+\operatorname{depth}R_{2}/I(H_{2})+1,\] \[\operatorname{depth}R_{1}/I(H_{1})^{2}+\operatorname{depth}R_{2}/I(H_{2}),\] \[\operatorname{depth}R_{1}/I(H_{1})+\operatorname{depth}R_{2}/I(H_{2})^{2})\] \[=1+\min(a,d-a+1),\] where \(R_{1}=\operatorname{k}[x_{i},y_{i}\mid i=2,\ldots,a]\) and \(R_{2}=\operatorname{k}[x_{i},y_{i}\mid i=a+1,\ldots,d]\). **Step 3.** \(\operatorname{depth}S/(I(G_{d,a})^{2}+(x_{1},y_{d}))\geq\min(a,d-a+1)\). Note that \(I(G_{d,a})+(x_{1},y_{d})\) is the mixed sum of two Cohen-Macaulay ideals of dimensions \(a-1\) and \(d-a-1\), respectively, with free variables \(y_{1},x_{d}\) in \(S\). With an argument similar to Step 2, we deduce the desired lower bound. **Step 4.** \(\operatorname{depth}S/(I(G_{d,a})^{2}+x_{1}y_{d})\geq\min(a,d-a+1)\). We have \(I(G_{d,a})^{2}+(x_{1}y_{d})=(I(G_{d,a})^{2}+(x_{1}))\cap(I(G_{d,a})^{2}+(y_{d}))\). The conclusion follows from Step 1, Step 2, Step 3, and a standard lemma on depth [BH, Proposition 1.2.9]. **Step 5.** \(\operatorname{depth}S/(I(G_{d,a})^{2}:(x_{1}y_{d}))\geq\min(a,d-a+1)\). By Lemma 2.7, \[I(G_{d,a})^{2}:(x_{1}y_{d})=I(G_{d,a})+(x_{a+1},\ldots,x_{d})\cdot(y_{1},\ldots,y_{d}). \tag{4.2}\] Let \(L=I(G_{d,a})^{2}:(x_{1}y_{d})\). Then \[L=(I(G_{d,a})+(x_{a+1},\ldots,x_{d}))\cap(y_{1},\ldots,y_{d}).\] Since \(\operatorname{depth}S/(y_{1},\ldots,y_{d})=d\) and \(\operatorname{depth}S/(x_{a+1},\ldots,x_{d},y_{1},\ldots,y_{d})=a\), it remains to show that \[\operatorname{depth}S/M\geq\min(a,d-a+1), \tag{4.3}\] where \(M=I(G_{d,a})+(x_{a+1},\ldots,x_{d})\). We have \(M:x_{1}=(x_{a+1},\ldots,x_{d},y_{1},\ldots,y_{d})\), \(M+(x_{1})\) is a Cohen-Macaulay ideal of dimension \(a-1\), and \(y_{1},y_{a+1},\ldots,y_{d}\) are free variables of \(M+(x_{1})\). By Lemma 2.3, \(\operatorname{depth}S/M\geq\min(a,d-a+1)\) as required. Thus, we deduce that \(\operatorname{depth}S/I(G_{d,a})^{2}\geq\min(a,d-a+1)\) by Step 4, Step 5, and Lemma 2.3. That concludes the proof of the Theorem.
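To illustrate Theorem 4.2 concretely, the following sketch (our own code, using networkx) builds \(G_{d,a}\) for \(d=7\), \(a=3\), confirms that it is connected and bipartite, and prints the depth sequence predicted by the theorem, which indeed exhibits two drops.

```python
import networkx as nx

def G_da(d, a):
    """The bipartite graph G_{d,a} of Theorem 4.2 (our own helper)."""
    E = [("x1", f"y{i}") for i in range(1, d + 1)]
    E += [(f"x{i}", f"y{j}") for i in range(2, a + 1) for j in range(i, a + 1)]
    E += [(f"x{i}", f"y{j}") for i in range(a + 1, d + 1) for j in range(i, d + 1)]
    return nx.Graph(E)

d, a = 7, 3
G = G_da(d, a)
print(nx.is_connected(G), nx.is_bipartite(G))  # True True
depths = [d, min(a, d - a + 1), 1]             # t = 1, 2, >= 3 per Theorem 4.2
print(depths)                                  # [7, 3, 1]: drops of size 4 and 2
```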
## Acknowledgments Nguyen Thu Hang is partially supported by the Thai Nguyen University of Sciences (TNUS) under the grant number CS2021-TN06-16.
2308.16422
Dilated convolutional neural network for detecting extreme-mass-ratio inspirals
The detection of Extreme Mass Ratio Inspirals (EMRIs) is intricate due to their complex waveforms, extended duration, and low signal-to-noise ratio (SNR), making them more challenging to be identified compared to compact binary coalescences. While matched filtering-based techniques are known for their computational demands, existing deep learning-based methods primarily handle time-domain data and are often constrained by data duration and SNR. In addition, most existing work ignores time-delay interferometry (TDI) and applies the long-wavelength approximation in detector response calculations, thus limiting their ability to handle laser frequency noise. In this study, we introduce DECODE, an end-to-end model focusing on EMRI signal detection by sequence modeling in the frequency domain. Centered around a dilated causal convolutional neural network, trained on synthetic data considering TDI-1.5 detector response, DECODE can efficiently process a year's worth of multichannel TDI data with an SNR of around 50. We evaluate our model on 1-year data with accumulated SNR ranging from 50 to 120 and achieve a true positive rate of 96.3% at a false positive rate of 1%, keeping an inference time of less than 0.01 seconds. With the visualization of three showcased EMRI signals for interpretability and generalization, DECODE exhibits strong potential for future space-based gravitational wave data analyses.
Tianyu Zhao, Yue Zhou, Ruijun Shi, Zhoujian Cao, Zhixiang Ren
2023-08-31T03:16:38Z
http://arxiv.org/abs/2308.16422v3
# DECODE: DilatEd COnvolutional neural network for Detecting Extreme-mass-ratio inspirals ###### Abstract The detection of Extreme Mass Ratio Inspirals (EMRIs) is intricate due to their complex waveforms, extended duration, and low signal-to-noise ratio (SNR), making them more challenging to be identified compared to compact binary coalescences. While matched filtering-based techniques are known for their computational demands, existing deep learning-based methods primarily handle time-domain data and are often constrained by data duration and SNR. In addition, most existing work ignores time-delay interferometry (TDI) and applies the long-wavelength approximation in detector response calculations, thus limiting their ability to handle laser frequency noise. In this study, we introduce DECODE, an end-to-end model focusing on EMRI signal detection by sequence modeling in the frequency domain. Centered around a dilated causal convolutional neural network, trained on synthetic data considering TDI-1.5 detector response, DECODE can efficiently process a year's worth of multichannel TDI data with an SNR of around 50. We evaluate our model on 1-year data with accumulated SNR ranging from 50 to 120 and achieve a true positive rate of 96.3% at a false positive rate of 1%, keeping an inference time of less than 0.01 seconds. With the visualization of three showcased EMRI signals for interpretability and generalization, DECODE exhibits strong potential for future space-based gravitational wave data analyses. Gravitational Wave Deep Learning EMRI ## 1 Introduction The groundbreaking detection of gravitational waves (GWs) in 2015, exemplified by the GW150914 event, has profoundly impacted the field of astrophysics [1]. Enabled by the Laser Interferometer Gravitational-Wave Observatory (LIGO) [2] and Virgo [3], this remarkable achievement unequivocally confirmed the existence of GWs, providing empirical validation of general relativity (GR) [4]. Beyond enriching our knowledge of the cosmos, this seminal discovery has ushered in a new era of astronomical observation [5]. With the spotlight now turning to space-based GW observatories [6, 7], the absence of terrestrial disturbances allows for a more dedicated exploration of the low-frequency GWs [8]. This exciting pursuit carries the potential to reveal hitherto unobserved phenomena, offering profound insights into the nature of our universe [5]. Space-based GW detection, a largely unexplored domain, marks the next epoch in astrophysics [6]. Pioneering this exciting venture are projects such as the Laser Interferometer Space Antenna (LISA) [9] by the European Space Agency (ESA), with NASA's participation, and Asian projects including Japan's DECi-hertz Interferometer Gravitational wave Observatory (DECIGO) and B-DECIGO [10; 11], as well as China's Taiji [12] and TianQin [13] missions. Targeting the millihertz frequency band, these endeavors offer a novel perspective for the exploration of diverse astrophysical and cosmological phenomena through the detection of low-frequency GWs [6; 14; 15]. The scientific goals are broad, with the intent to shed light on the enigmas of massive black hole binaries (MBHBs), extreme-mass-ratio inspirals (EMRIs), continuous waves from galactic binaries (GBs), and the stochastic GW backgrounds produced by the early universe's myriad of unresolved sources [16]. In the spectrum of potential discoveries, EMRIs hold a unique position.
Initiated when a compact stellar remnant spirals into a massive black hole (MBH), these events provide opportunities to investigate the MBH's characteristics and the nature of its surrounding environment [17]. EMRIs emit low-frequency GWs throughout their extended inspiral phase, serving as a rich source of information for understanding system physical parameters and the MBH's spacetime geometry [18]. The successful detection and parameter estimation of EMRI signals could provide novel insights into the astrophysics of MBHs and the foundational principles of gravity [19; 20]. Traditional methods for EMRI detection, which include both time-domain and time-frequency-domain techniques, have been widely studied in prior research [21; 22; 23; 24; 25]. These strategies mainly employ matched filtering [21; 24] and the Short Time Fourier Transform [22; 23; 25]. However, the inherent complexities of EMRI signals present significant obstacles. Characterized by their complex waveform templates, high-dimensional parameter space, and multiple modes within a single waveform, EMRI signals require over \(\sim 10^{35}\) templates for a matched filtering search [18], resulting in a computationally intensive and time-consuming procedure. An example of a single EMRI in both the time and frequency domains can be seen in Figure 1, showcasing the aforementioned challenges of signal detection.

Figure 1: **Visualization of a training data sample.** This depicts an EMRI signal from the TDI-A channel spanning 1 year with an SNR of 70. (a) Time-domain representation of the TDI-A strain, showcasing both the combined data (signal + noise) and the signal. The signal's amplitude is about 3 orders of magnitude lower than the noise, which makes the detection challenging. (b) Welch PSD of the combined data and the signal; the signal contains many modes (peaks), with some reaching the noise level, highlighting the suitability of the frequency-domain detection method. The designed detector noise PSD is also presented for reference.

Additionally, EMRI signals are typically faint and buried within detector and confusion noise, necessitating extended observation durations to achieve an adequate signal-to-noise ratio (SNR) for detection [18]. Time-frequency techniques, offering representations in both time and frequency domains, are frequently less sensitive than matched filtering, which limits their ability to identify weak signals [25]. Given these challenges, exploring alternative methods, such as deep learning, becomes crucial for potentially improving the efficiency of EMRI signal detection. Deep learning, an advanced branch of machine learning, employs neural networks with multiple layers for different types of data. By facilitating the extraction of intricate patterns and representations from large datasets, it has played a crucial role in advancing various fields, from image recognition [26] to natural language processing [27]. Among the numerous architectures, convolutional neural networks (CNNs) stand out for their proficiency in handling structured data, such as images and time series, by progressively learning features in a hierarchical manner. Starting with simple features like edges in the initial layers, they gradually combine these to recognize more complex patterns and structures in the deeper layers. This layered approach allows CNNs to automatically recognize and represent intricate details
The designed detector noise PSD is also presented for reference.** in the data, making them highly effective for tasks like object detection [28] and time-series classification [29]. In the area of GW data analysis, the potential of deep learning, especially CNNs, is becoming increasingly evident. A large amount of studies [30, 31, 32, 33, 34, 35, 36, 37] have demonstrated their effectiveness in ground-based GW detection. Beyond signal detection, deep learning methods have been applied to a variety of tasks, including parameter estimation [38, 39] and glitch classification [40, 41, 42]. However, the application of these methods to space-based GW detection is still in its early stages. While there have been some exploratory efforts, such as the adoption of MFCNN [32] to detect MBHB contaminated by confusion noise [43] and the application of dictionary learning to low-SNR space-based binary black hole (BBH) detection [44]. Notably, Zhang et al. [45] pioneered the detection of EMRIs using CNN, though without incorporating the time delay interferometry (TDI) technique. Therefore, further research is needed to harness the full capabilities of deep learning in space-based GW analysis. In this paper, we introduce the DECODE (DilatEd COnvolutional neural network for Detecting Extreme-mass-ratio inspirals), an end-to-end model designed for detecting EMRI signals in the frequency domain with an SNR of around 50. As showed in Figure 2, the model incorporates dilated causal convolutional layers, which expand its receptive field, allowing it to efficiently process data covering an entire year in one pass. We trained our model using synthetic data that considers the TDI-1.5 detector response, accounting for unequal arm lengths. The results are promising: the DECODE detects EMRI signals with a 1-year accumulated SNR between 50 and 120, achieving a true positive rate (TPR) of 96.3% with a false positive rate (FPR) of 1%. Notably, our model can evaluate one batch of data samples within Figure 2: **Comprehensive EMRI detection framework. (a), Depicts the entire EMRI detection process, from initial data preprocessing to the end-to-end DECODE model. (b), Highlights the mechanism of dilated causal convolution with dilation factors of \((1,2,4,8)\) and a kernel size of 2, emphasizing the exponential growth of the receptive field. (c), Detailed architecture of the residual block in DECODE, comprising two dilated causal convolutional layers, weight normalization, ReLU, and dropout layers. A \(1\times 1\) convolution is introduced to address any dimension discrepancies between the residual input and output.** seconds. Visualizations of the model's intermediate outputs highlight its interpretable feature extraction process and its ability to generalize beyond GR. These findings emphasize the potential of DECODE in future space-based GW data analyses. The remainder of this paper is organized as follows: Section 2 provides a detailed overview of the data generation procedure and outlines the architecture of our proposed model, the DECODE. In Section 3, we present the results of our EMRI detection experiments, demonstrating the effectiveness of our approach. Finally, Section 4 concludes the paper with a summary of our findings and a discussion on potential future work based on our findings. 
## 2 Method

### EMRI Waveform Modeling

Detecting EMRIs has the potential to reveal key astrophysical insights, but modeling their waveforms is challenging due to the delicate interplay of strong-field GR and gravitational radiation dynamics. Accurately describing EMRIs demands a solution of the self-force problem, which accounts for the gravitational impact of the smaller compact object on its own motion within the strong gravitational field of the central MBH [6]. Because the self-force problem is highly non-linear and defies analytical solutions, researchers have developed approximate waveform models, commonly referred to as kludge models [46, 47]. Two commonly used kludge models in EMRI modeling are the analytic kludge (AK) model [46] and the numerical kludge (NK) model [47]. The AK model relies on post-Newtonian expansions and perturbative calculations to evolve the orbital parameters and generate waveforms quickly. It provides computational efficiency but suffers from dephasing compared to more accurate models, leading to potential inaccuracies in parameter estimation. The NK model, on the other hand, incorporates the orbital trajectory computed in curved space using Kerr geodesics and includes radiation-reaction effects. Although more accurate, the NK model is computationally more expensive, making EMRI signal detection with this template computationally formidable. To address the limitations of both models, an augmented analytic kludge (AAK) model [48, 49, 50] has been proposed. The AAK model combines the computational efficiency of the AK model with improved phasing achieved through a mapping to Kerr geodesic frequencies and self-consistent post-Newtonian evolution. By incorporating self-force information and refining the phasing, the AAK model achieves higher waveform fidelity than the AK model while remaining computationally efficient. While its computational efficiency may not be adequate for matched filtering-based signal searches, it is suitable for producing training datasets for deep neural networks (DNNs). Despite the advancements in kludge waveform modeling, challenges remain. Incorporating second-order self-force effects into the models and refining them for orbits approaching plunge are ongoing areas of research [6]. Nonetheless, these waveform models are crucial for accurately representing the dynamics of EMRIs and enabling the detection, parameter estimation, and data analysis of these elusive astrophysical sources.

### Data Curation

The process of curating training and testing datasets for the identification of EMRI signals using a DNN is a multi-step procedure consisting of signal generation, detector response simulation, and pre-processing.

**Waveform Generation.** The first step involves the generation of signal templates. The AAK model used for generating these templates is based on [51]. The waveform, denoted as \(h(t)=h_{+}(t)-ih_{\times}(t)\), is typically characterized by 14 physical parameters. The parameter space used for sampling the training and testing datasets in this study is detailed in Table 1. Here, \(M\) and \(a\) represent the mass and the spin parameter of the MBH, respectively. The semi-latus rectum is denoted by \(p\), while \(e\) stands for orbital eccentricity, and \(\iota\) signifies the orbit's inclination angle from the equatorial plane. \(Y=\cos\iota\equiv L_{z}/\sqrt{L_{z}^{2}+Q}\), where \(Q\) is the Carter constant, and \(L_{z}\) is the \(z\) component of the specific angular momentum.
The polar and azimuthal sky location angles are represented by \(\theta_{S}\) and \(\phi_{S}\), respectively. The orientation of the spin angular momentum vector of the MBH is described by the azimuthal and polar angles \(\theta_{K}\) and \(\phi_{K}\). These parameters are uniformly sampled for our dataset. It is important to note that \(\Phi_{\varphi,0},\Phi_{\theta,0},\Phi_{r,0}\), which represent the initial phases of the azimuthal, polar, and radial modes, are all manually set to 0.

**TDI Response.** The next stage involves simulating the detector's response to these signals. The specific detector configurations utilized in this study are detailed in Table 2. To account for the breathing arm lengths, we employed the TDI-1.5 technique, which yields the GW strains of the TDI A and E channels, denoted as \(h_{A}(t)\) and \(h_{E}(t)\), respectively. A detailed derivation of this technique can be found in Ref. [52]. Its CUDA-based implementation enables us to compute the response in seconds. The signal is then rescaled according to the desired SNR using the formula: \[\mathrm{SNR}^{2}=(h_{A}\mid h_{A})+(h_{E}\mid h_{E}). \tag{1}\] Here, the inner product \((a\mid b)\) is defined as: \[(a\mid b)=2\int_{f_{min}}^{f_{max}}\frac{\tilde{a}^{*}(f)\tilde{b}(f)+\tilde{a}(f)\tilde{b}^{*}(f)}{S_{n}(f)}\ \mathrm{d}f. \tag{2}\] In this equation, \(f_{min}=\frac{1}{\text{Duration}}\simeq 3.17\times 10^{-8}\,\mathrm{Hz}\) and \(f_{max}=\frac{1}{2\cdot\text{Cadence}}=\frac{1}{30}\,\mathrm{Hz}\). \(\tilde{a}(f)\) and \(\tilde{b}(f)\) represent the frequency-domain signals, and the superscript \(*\) denotes the complex conjugate. \(S_{n}(f)\) is the one-sided noise power spectral density (PSD), which will be specified later.
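As an illustration of eqs. (1) and (2), the sketch below evaluates the inner product on a discrete frequency grid; `psd_fn` is a placeholder for the analytic noise PSD \(S_{n}(f)\) given in eq. (3) below, and the normalization conventions are illustrative rather than the authors' exact code:

```python
import numpy as np

def inner_product(a, b, psd_fn, dt):
    """Discrete approximation of the noise-weighted inner product of eq. (2)."""
    freqs = np.fft.rfftfreq(len(a), d=dt)[1:]   # drop the DC bin
    af = np.fft.rfft(a)[1:] * dt                # approximate continuous Fourier transforms
    bf = np.fft.rfft(b)[1:] * dt
    integrand = (np.conj(af) * bf + af * np.conj(bf)) / psd_fn(freqs)
    return 2.0 * np.real(np.sum(integrand) * (freqs[1] - freqs[0]))

def snr(h_A, h_E, psd_fn, dt=15.0):
    """Combined two-channel SNR of eq. (1), with dt the sampling cadence in seconds."""
    return np.sqrt(inner_product(h_A, h_A, psd_fn, dt)
                   + inner_product(h_E, h_E, psd_fn, dt))
```

Rescaling a simulated signal to a target SNR then amounts to multiplying both channels by the ratio of the desired SNR to the value returned by `snr`.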
\begin{table} \begin{tabular}{l r r} \hline \hline **Parameter** & **Lower bound** & **Upper bound** \\ \hline \hline \(\log_{10}(M/M_{\odot})\) & \(5\) & \(8\) \\ \(a\) & \(10^{-3}\) & \(0.99\) \\ \(e_{0}\) & \(10^{-3}\) & \(0.8\) \\ \(p_{0}/M\) & \(15\) & \(25\) \\ \(Y_{0}\) & \(-1\) & \(1\) \\ SNR & \(50\) & \(120\) \\ \(\theta_{S}\) & \(0\) & \(\pi\) \\ \(\phi_{S}\) & \(0\) & \(2\pi\) \\ \(\theta_{K}\) & \(0\) & \(\pi\) \\ \(\phi_{K}\) & \(0\) & \(2\pi\) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of parameter setups in EMRI signal simulation.

\begin{table} \begin{tabular}{l r} \hline \hline **Parameter** & **Configuration** \\ \hline \hline Size of training dataset & \(5000\) \\ Size of testing dataset & \(1000\) \\ Cadence & \(15\,\mathrm{s}\) \\ Duration & \(1\,\mathrm{year}\) \\ Re-sampled data length \(N\) & \(1024/2048/4096\) \\ \hline Arm length \(L\) & \(2.5\times 10^{9}\,\mathrm{m}\) \\ Detector orbit & 1st order Keplerian orbit \\ TDI & TDI-1.5 \\ Acceleration noise \(A_{\mathrm{acc}}\) & \(3\,\mathrm{fm\,s^{-2}}/\sqrt{\mathrm{Hz}}\) \\ OMS noise \(A_{\mathrm{oms}}\) & \(15\,\mathrm{pm}/\sqrt{\mathrm{Hz}}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of configurations of the training and testing datasets.

**Noise Generation.** The third step introduces noise into the data. This noise, \(n(t)\), is modeled as colored Gaussian noise with a PSD defined by \[\mathrm{S}_{n}(f)=16\sin^{2}(\omega L)\left(P_{\mathrm{oms}}(f)+(3+\cos(2\omega L))P_{\mathrm{acc}}(f)\right)\,, \tag{3}\] with \[\begin{split} P_{\text{oms}}(f)&=A_{\text{oms}}^{2}\left[1+\left(\frac{2\,\mathrm{mHz}}{f}\right)^{4}\right]\left(\frac{2\pi f}{c}\right)^{2}\,,\\ P_{\text{acc}}(f)&=A_{\text{acc}}^{2}\left[1+\left(\frac{0.4\,\mathrm{mHz}}{f}\right)^{2}\right]\left[1+\left(\frac{f}{8\,\mathrm{mHz}}\right)^{4}\right]\left(\frac{1}{2\pi fc}\right)^{2}\,,\end{split} \tag{4}\] where \(A_{\text{acc}}\) and \(A_{\text{oms}}\) are the noise budgets of the test-mass acceleration noise and the readout noise of the optical metrology system (OMS), \(L\) is the arm length of the LISA detector, and \(c\) is the speed of light. The signal is then injected into the noise, resulting in the synthetic data; Figure 1 shows a sample of the training data in the time and frequency domains.

**Whitening and PSD Estimation.** In the final stage of data curation, the data undergoes several pre-processing steps to prepare it for input into the DNN. The first of these steps is whitening, which removes the frequency-dependent variations in the noise. This allows the DNN to concentrate on the underlying signal patterns, simplifying the learning task and enhancing the network's ability to detect subtle patterns in the data, thereby improving the overall performance of EMRI signal identification. Following whitening, the PSD of the data is estimated using Welch's method. The data then undergoes sub-sampling, where it is re-sampled onto a log-uniform frequency grid. This step reduces the computational load of subsequent analyses by decreasing the number of data points; three different grid densities are used, as listed in Table 2. The final pre-processing step is standardization, which ensures that all input features are on a uniform scale, a fundamental requirement for most deep learning algorithms. This step is crucial in enhancing the learning efficiency of the neural network and improving the overall performance of the model.
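A minimal sketch of the pre-processing chain just described (whitening is assumed to have been applied upstream; the Welch segment length, grid bounds, and output length are illustrative choices, not the authors' exact settings):

```python
import numpy as np
from scipy.signal import welch

def preprocess(strain, fs=1.0 / 15.0, n_out=2048):
    """Welch PSD -> log-uniform frequency grid -> standardized network input."""
    # one-sided Welch PSD estimate of the (already whitened) strain
    freqs, psd = welch(strain, fs=fs, nperseg=4096)
    # re-sample the log-PSD onto a log-uniform frequency grid of n_out points
    grid = np.logspace(np.log10(freqs[1]), np.log10(freqs[-1]), n_out)
    log_psd = np.interp(grid, freqs[1:], np.log(psd[1:]))
    # standardize so all input features share a common scale
    return (log_psd - log_psd.mean()) / log_psd.std()
```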
### Decode

In this work, we introduce the DECODE, a novel architecture for sequence modeling tasks, as illustrated in Figure 2. The DECODE is inspired by the TCN architecture [53], which has been shown to outperform traditional recurrent architectures across a diverse range of tasks and datasets. The DECODE architecture leverages the strengths of convolutional networks, which have been proven to be highly effective for sequence modeling. It incorporates dilated convolutions, which are a powerful tool for capturing long-range dependencies in sequence data. The causal nature of the DECODE ensures that the model's output at each step is conditioned only on previous steps, making it suitable for tasks that require an understanding of sequential dependencies. While the TCN and other sequence modeling architectures have predominantly been applied to time-series data, the DECODE stands out in its application to frequency-domain data. Detecting EMRIs in the time domain is challenging due to the extended duration of the signals and their low SNR. As illustrated in Figure 1(a), the amplitude of the signal is typically three orders of magnitude lower than the noise, and the data spans a full year. However, as shown in Figure 1(b), in the frequency domain the signal's PSD has many peaks, some of which even reach the noise level. Despite this shift from the time to the frequency domain, the core principles of sequence modeling remain applicable. The DECODE effectively exploits these principles, achieving notable performance in EMRI signal detection.

Figure 3: **EMRI detection performance across SNR and \(N\).** All sub-plots depict receiver operating characteristic (ROC) curves for distinct input sample lengths \(N\) within specific SNR ranges, presented on a logarithmic scale. Each line style signifies the balance between TPR and FPR for a given sample length, with the area beneath each curve representing the model's efficacy. A yellow dashed reference line indicates random prediction. The use of logarithmic scales enhances the visibility of performance differences, especially at lower FPR levels. (a), Evaluation for \(\mathrm{SNR}\in[50,120]\). (b), Evaluation for \(\mathrm{SNR}\in[70,170]\). (c), Evaluation for \(\mathrm{SNR}\in[100,240]\).

**Causal Sequence Modeling.** The DECODE framework is designed for sequence modeling, with a focus on maintaining causality throughout its structure. Central to DECODE's design are two fundamental principles. Firstly, the architecture ensures that the output sequence's length matches that of the input sequence. This alignment is achieved via a 1D-convolutional network design, where each hidden layer matches the length of the input layer; to maintain this length consistency, causal zero padding of length \((\text{kernel size}-1)\times\text{dilation}\) is applied in each layer. Secondly, the architecture enforces causality of the sequence. This is achieved by using causal convolutions, which ensure that the output at a particular time step is convolved only with preceding elements in the previous layer.

**Dilated Convolution.** Incorporated into the DECODE architecture, dilated convolutions play a pivotal role in capturing long-range dependencies in sequence data. Drawing inspiration from WaveNet [54], the DECODE employs dilated convolutions to exponentially expand the receptive field without a significant increase in computational complexity or number of parameters. We provide an illustration in Figure 2(b). More formally, for a 1-D sequence input \(\mathbf{x}\in\mathbb{R}^{n}\) and a filter \(f:\{0,...,k-1\}\longrightarrow\mathbb{R}\), the dilated convolution operation \(F\) on element \(s\) of the sequence is defined as: \[F(s)=\left(\mathbf{x}\ast_{d}f\right)(s)=\sum_{i=0}^{k-1}f(i)\cdot\mathbf{x}_{s-d\cdot i}\,, \tag{5}\] where \(d\) is the dilation factor, \(k\) is the filter size (i.e. kernel size), and \(s-d\cdot i\) accounts for the direction of the past. When \(d=1\), a dilated convolution reduces to a regular convolution. By employing larger dilations, the receptive field of the DECODE is effectively expanded, allowing it to capture long-range dependencies within the sequence data more effectively.

**Residual Connections.** Residual connections, another key feature of the DECODE architecture, are designed to facilitate the training of deep networks. These connections, introduced by He et al. [55], allow the gradient to flow directly through the network, mitigating the vanishing-gradient problem that often hampers deep networks. In the DECODE, a residual block is composed of two dilated causal convolutional layers, with a residual connection skipping over them. If we denote the input to the residual block as \(\mathbf{x}\), the output of the block, \(\mathbf{y}\), can be computed as: \[\mathbf{y}=\mathrm{Activation}(\mathbf{x}+\mathcal{F}(\mathbf{x}))\,, \tag{6}\] where \(\mathcal{F}(\mathbf{x})\) represents the transformations performed by the dilated causal convolutional layers; a minimal sketch of such a block is given below.
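The following PyTorch sketch illustrates eq. (6) and the description above (causal left-padding of \((k-1)\cdot d\), weight normalization, ReLU, dropout, and a \(1\times 1\) convolution on the skip path); channel counts and dropout rate are placeholder values, not the authors' released implementation:

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Conv1d):
    """1-D convolution that only looks at past samples (left-only zero padding)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation)
        self.left_pad = (kernel_size - 1) * dilation

    def forward(self, x):
        # pad on the left only, so output length equals input length and no
        # output depends on future time steps
        x = nn.functional.pad(x, (self.left_pad, 0))
        return super().forward(x)

class ResidualBlock(nn.Module):
    """Two dilated causal convolutions with a skip connection, cf. eq. (6)."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.utils.weight_norm(CausalConv1d(in_ch, out_ch, kernel_size, dilation)),
            nn.ReLU(), nn.Dropout(dropout),
            nn.utils.weight_norm(CausalConv1d(out_ch, out_ch, kernel_size, dilation)),
            nn.ReLU(), nn.Dropout(dropout),
        )
        # 1x1 convolution fixes any channel mismatch on the residual path
        self.skip = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.relu = nn.ReLU()

    def forward(self, x):                       # x: (batch, channels, N)
        return self.relu(self.skip(x) + self.net(x))

block = ResidualBlock(1, 64, kernel_size=3, dilation=4)
out = block(torch.randn(8, 1, 4096))            # length preserved: (8, 64, 4096)
```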
This design choice has been shown to improve the performance of deep networks and is a key component of the DECODE architecture. The residual block used in the DECODE model is illustrated in Figure 2(c). Each block comprises two layers of dilated causal convolution, each followed by the rectified linear unit (ReLU) activation function. Weight normalization [56] and dropout [57] are incorporated after each dilated convolution within the residual block.

**Loss Function.** In our DECODE model, the output of the residual block has a shape of \((H,N)\), where \(H\) represents the hidden size of our model and \(N\) is the length of the input sequence. The last column of this output is passed through a linear layer to generate the predicted probability for EMRI signal detection. To train the model, we use the cross-entropy loss, a common choice for classification tasks. One advantage of the cross-entropy loss is its ability to accelerate convergence during training, especially when compared to other loss functions such as the mean squared error [58]. The cross-entropy loss for a binary classification problem is given by: \[\mathcal{L}=-\frac{1}{n}\sum_{i=1}^{n}\left[y_{i}\log(\mathcal{P}_{i})+(1-y_{i})\log(1-\mathcal{P}_{i})\right]\,. \tag{7}\] In this equation, \(y_{i}\) denotes the actual label, while \(\mathcal{P}_{i}\) is the predicted probability for the \(i\)-th sample, with \(n\) representing the total number of samples in the training dataset. The cross-entropy loss quantifies the divergence between the actual labels and the predicted probabilities.
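To illustrate how this classification head and loss are wired up, a minimal PyTorch sketch follows; the hidden size, sequence length, and batch size mirror the implementation details reported below, while the use of `BCEWithLogitsLoss` (a numerically stable equivalent of eq. (7) with the sigmoid folded in) is an assumption of this sketch rather than necessarily the authors' exact choice:

```python
import torch
import torch.nn as nn

H, N, batch = 128, 1024, 64             # assumed sizes, cf. Implementation Detail
head = nn.Linear(H, 1)                  # maps the last time step to one logit
criterion = nn.BCEWithLogitsLoss()      # stable form of the loss in eq. (7)

features = torch.randn(batch, H, N)     # stand-in for the residual-stack output
logits = head(features[:, :, -1]).squeeze(-1)    # use only the last column
labels = torch.randint(0, 2, (batch,)).float()   # 1 = EMRI present, 0 = noise only
loss = criterion(logits, labels)
loss.backward()
```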
### Implementation Detail

For waveform generation of the training data, we employed FastEMRIWaveforms [50, 51] for EMRI signal creation and lisa-on-gpu [52] for GPU-accelerated detector response simulations, including TDI. We also integrated additional functionalities from the SciPy library. Our DECODE architecture consists of 10 residual blocks, each with a kernel size of 3 and a hidden size of 128. The model was developed using the PyTorch framework, and computations were performed on a high-performance computing cluster equipped with NVIDIA Tesla V100 GPUs. The training utilized the Adam optimizer with a learning rate of \(2\times 10^{-4}\) and a batch size of 64.

## 3 Results

### EMRI Detection Proficiency

The receiver operating characteristic (ROC) curve and the area under the curve (AUC) are essential tools for evaluating the performance of models in binary classification tasks. In the context of our study, where the task is to detect EMRI signals buried in noise, these tools provide valuable insights. The ROC curve, which plots the TPR against the FPR, offers a visual representation of the model's performance across various threshold settings. The AUC, on the other hand, provides a single, overall measure of the model's performance across all thresholds. A model with perfect discrimination has an AUC of 1, while a model performing no better than random guessing has an AUC of 0.5. In our research, we employ ROC curves as the primary benchmark to quantify the performance of the DECODE. The test dataset used here is generated like the training datasets, i.e. the waveform parameters are uniformly distributed as shown in Table 1, but with different SNR ranges. As depicted in Figures 3(a) to 3(c), we show three separate ROC curves, each corresponding to a unique input sample length fed into the DECODE. For the specified input lengths of \(N=(1024,2048,4096)\), the SNR ranges are set at \([50,120]\), \([70,170]\), and \([100,240]\), respectively. The associated AUC values, detailed within the figures, offer quantitative insight into the model's sensitivity in detecting EMRI signals. For clarity in visual representation, especially at lower FPR values, Figure 3 adopts a logarithmic scale for its axes. It is noteworthy that our test dataset comprises signals with a duration of 1 year, achieving twice the SNR of the 3-month data scenario presented in Ref. [45]. While their study tested models on datasets with \(\mathrm{SNR}\in[50,120]\), we evaluated ours on datasets with \(\mathrm{SNR}\in[100,240]\). Both datasets, when rescaled to a 1-year duration, have equivalent SNR values, implying consistent signal amplitudes. Impressively, our model attains a TPR of 97.5% at an FPR of 1%, as showcased in Figure 3(c). One significant advantage of deep learning methods over matched filtering-based approaches is their speed. Once trained, the model can be rapidly deployed for inference. In our tests, conducted on a single NVIDIA Tesla V100 GPU, our model processed 2000 data samples in approximately 4 seconds, i.e., less than \(10^{-2}\) seconds per sample.

### EMRI Detection Efficacy

In Figure 4, we provide a detailed examination of the DECODE's performance across different physical parameters. Figure 4(a) illustrates the relationship between TPR and SNR: as the SNR increases, the TPR increases correspondingly, particularly at the specified FPR thresholds of 0.10 and 0.01.

Figure 4: **Detection capability of DECODE across various parameters.** (a), Illustrates the TPR as a function of SNR, highlighting the model's capability to detect signals with varying strengths. (b), Showcases the TPR plotted against the relative amplitude \(\mathcal{A}\) (defined in eq. (8)), emphasizing the model's ability to detect power excesses in the frequency domain and to detect signals even when they are submerged within the noise. (c), Explores the TPR in relation to the spin parameter \(a\), keeping the MBH mass fixed at \(10^{6}M_{\odot}\). This sub-figure is evaluated at three distinct SNR levels: 50, 70, and 100, shedding light on the relationship between spin parameters and detection capabilities.

To gain a deeper understanding of the sensitivity of our model, we introduce the relative amplitude, denoted as \(\mathcal{A}\). It is defined as: \[\mathcal{A}=\max_{i\in A,E}\sqrt{\frac{S_{h}^{i}(f)}{S_{n}(f)}}\,, \tag{8}\] where \(S_{h}^{i}\) represents the Welch PSD of the waveform \(h_{i}\). This metric effectively captures the signal's amplitude in the frequency domain. Figure 4(b) plots the TPR against the relative amplitude at FPRs of 0.1 and 0.01; this sub-figure presents the model's proficiency in discerning power excesses in the frequency domain. Notably, the DECODE can also detect signals that are entirely submerged within the noise. In Figure 4(c), we evaluate the DECODE's sensitivity to varying spin parameters, while keeping the MBH mass constant at \(10^{6}M_{\odot}\). The evaluation, performed at SNR levels of 50, 70, and 100 and FPR thresholds of 0.1 and 0.01, indicates that the model's detection performance is mainly influenced by the SNR. In contrast, the spin parameter appears to have a limited effect on detection, suggesting that its contribution to the overall strength of the EMRI signal is relatively minor.
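For reference, TPR-at-fixed-FPR figures like those quoted above can be read off an empirical ROC curve. A minimal scikit-learn sketch (the label and score arrays are placeholders for the test-set ground truth and the model's output probabilities):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def tpr_at_fpr(labels, scores, target_fpr=0.01):
    """TPR at a fixed FPR, interpolated along the empirical ROC curve."""
    fpr, tpr, _ = roc_curve(labels, scores)
    return np.interp(target_fpr, fpr, tpr), auc(fpr, tpr)

labels = np.array([1, 1, 0, 1, 0, 0, 1, 0])   # 1 = signal present, 0 = noise only
scores = np.array([0.92, 0.81, 0.32, 0.75, 0.11, 0.45, 0.66, 0.08])
tpr, roc_auc = tpr_at_fpr(labels, scores)
print(f"TPR at 1% FPR: {tpr:.3f}, AUC: {roc_auc:.3f}")
```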
### Interpretability

CNN-based models are powerful tools for pattern recognition and prediction. Their unique architecture and operational mechanism make them inherently interpretable, a feature that is particularly valuable in interdisciplinary research. CNN-based models learn hierarchical patterns in the data through their convolutional layers, with each layer extracting a set of high-level features from the input data. These features are then used by subsequent layers to capture more complex patterns. This transparent process of feature extraction can be visualized, providing insight into how the network interprets the data and makes predictions. Activation maps, often used in the context of neural networks, provide a visual representation of the features that the model identifies and emphasizes during its processing. Essentially, they capture the output values, or "activations", from various layers or blocks within the network when presented with an input. These maps offer insight into which parts of the input data the model finds significant or relevant for a particular task. In the case of the DECODE, the activation maps generated at the output of each residual block reveal how the model processes and interprets the frequency-domain data of EMRI signals. The activation maps illuminate the interpretability of the DECODE: by analyzing the outputs of multiple residual blocks, the process of feature extraction is made transparent. Figure 5 provides a detailed visualization of these maps, demonstrating the ability of the DECODE to distinguish EMRI signals from noise. Specifically, panel **i** of each sub-figure depicts activation maps for inputs containing an EMRI signal, while panel **iii** depicts the corresponding frequency-domain data. These maps show activated neurons in regions that correspond to the frequency components of the signal. In contrast, panel **ii** of each sub-figure depicts diminished activations for noise-only samples; the corresponding frequency-domain data for these samples is presented in panel **iv**. This validates the model's ability to identify EMRI signals.

### Generalization Ability

Generalization ability is the capacity of a model trained on a specific dataset to perform well on new, unseen data. It indicates how well a model can extrapolate from its training data to make accurate predictions on unknown data. In practical applications, a model will frequently be presented with data that differs from its training set, so this ability is crucial. A model that generalizes well is robust and flexible, ensuring that it does not simply memorize the training data but rather learns the inherent patterns and relationships. In Figures 5(b) and 5(c), we provide evidence of the generalization capabilities of our model. Even though the model was trained only on AAK waveform datasets, it accurately identified the AK waveform during evaluation, with an output probability equal to 1, demonstrating its ability to generalize across different waveform templates. Similarly, the model's successful detection of the XSPECG waveform [59, 60], which was formulated using the KRZ metric, demonstrates its generalization ability with respect to different theories of gravity. These results demonstrate the generalization ability of the model, suggesting that it is capable of handling scenarios beyond its training datasets.

## 4 Conclusion and Discussion

The detection of EMRIs in gravitational wave astronomy presents a formidable challenge.
In this paper, we introduce the DECODE, a state-of-the-art end-to-end DNN model designed for the detection of EMRI signals in the frequency domain. By leveraging dilated causal convolutional layers, the DECODE efficiently processes year-long data. Our evaluations on synthetic datasets have revealed the model's robustness and efficiency, achieving remarkable detection rates at varied SNR levels. Furthermore, the model offers rapid inference and the ability to generalize beyond its training parameters, although there is still room for future advancement.

Figure 5: **Interpretability and generalization ability showcase.** This figure provides an in-depth visualization of the intermediate outputs from each residual block, demonstrating the model's capability for feature extraction within the frequency domain and its ability to generalize to different waveform templates and gravitational theories. For each sub-figure, panels **i** and **ii** represent the intermediate results corresponding to the input data samples shown in panels **iii** and **iv**. In contrast to the faint activations in panel **ii**, the noticeably activated neurons in panel **i** indicate the extraction of essential characteristics when a signal is present in the input. (a), AAK waveform. (b), AK waveform. (c), XSPEC waveform.

The precision of the EMRI detection model is intrinsically related to the precision of the training data. While our current training dataset employs the TDI-1.5 detector response, future developments could benefit from the incorporation of more sophisticated simulations, such as the TDI-2.0 technique. This would provide a more accurate simulation of the detector's response, potentially enhancing the model's real-world applicability. Our current approach primarily focuses on the amplitude information of the EMRI signals. However, the phase information, which has been largely unused in this work, holds considerable potential. By integrating phase-related features into the model, we could capture more intricate patterns and details of the EMRI signals, which may lead to improved detection rates and lower false alarm rates. In conclusion, DECODE is a step forward in EMRI detection. Even though there are avenues for improvement, its foundational accomplishments demonstrate its potential as a tool for future space-based GW data analyses.

## 5 Acknowledgments

The research was supported by the Peng Cheng Laboratory and by Peng Cheng Laboratory Cloud-Brain. This work was also supported in part by the National Key Research and Development Program of China Grant No. 2021YFC2203001 and in part by the NSFC (No. 11920101003 and No. 12021003). Z.C was supported by the "Interdisciplinary Research Funds of Beijing Normal University" and CAS Project for Young Scientists in Basic Research YSBR-006.
2305.19935
Neural Network Approach to the Simulation of Entangled States with One Bit of Communication
Bell's theorem states that Local Hidden Variables (LHVs) cannot fully explain the statistics of measurements on some entangled quantum states. It is natural to ask how much supplementary classical communication would be needed to simulate them. We study two long-standing open questions in this field with neural network simulations and other tools. First, we present evidence that all projective measurements on partially entangled pure two-qubit states require only one bit of communication. We quantify the statistical distance between the exact quantum behaviour and the product of the trained network, or of a semianalytical model inspired by it. Second, while it is known on general grounds (and obvious) that one bit of communication cannot eventually reproduce all bipartite quantum correlation, explicit examples have proved evasive. Our search failed to find one for several bipartite Bell scenarios with up to 5 inputs and 4 outputs, highlighting the power of one bit of communication in reproducing quantum correlations.
Peter Sidajaya, Aloysius Dewen Lim, Baichu Yu, Valerio Scarani
2023-05-31T15:19:00Z
http://arxiv.org/abs/2305.19935v5
# Neural Network Approach to the Simulation of Entangled States with One Bit of Communication

###### Abstract

Bell's theorem states that Local Hidden Variables (LHVs) cannot fully explain the statistics of measurements on some entangled quantum states. It is natural to ask how much supplementary classical communication would be needed to simulate them. We study two long-standing open questions in this field with neural network simulations and other tools. First, we present evidence that all projective measurements on partially entangled pure two-qubit states require only one bit of communication. We quantify the statistical distance between the exact quantum behaviour and the product of the trained network, or of a semianalytical model inspired by it. Second, while it is known on general grounds (and obvious) that one bit of communication cannot eventually reproduce all bipartite quantum correlation, explicit examples have proved evasive. Our search failed to find one for several bipartite Bell scenarios with up to 5 inputs and 4 outputs, highlighting the power of one bit of communication in reproducing quantum correlations.

## I Introduction

Quantum Mechanics is famous for having randomness inherent in its predictions. Einstein, Podolsky and Rosen argued that this makes quantum mechanics incomplete, and suggested the existence of underlying Local Hidden Variables (LHVs) [1]. While this view was disproved by Bell's theorem [2; 3], it has nevertheless proved fruitful to approach quantum correlations, without committing to an ontology of the quantum world, by asking _which resources one would use to simulate them_. Though insufficient, LHVs provide an intuitive starting point; the question then becomes: _which additional resources, on top of the LHVs, are needed to simulate quantum correlations?_ Some works have considered nonlocal boxes as supplementary resources [4; 5; 6]: while appealing for their intrinsic no-signaling feature, these hypothetical resources are as counterintuitive as entanglement itself, if not more. Classical communication, on the other hand, is a resource that we use on a daily basis and of which we have therefore developed an intuitive understanding. Because we are thinking in terms of simulations and not of ontology, we are not impaired by the very problematic fact that communication would have to be instantaneous if taken as the real underlying physical mechanism. Therefore, we are interested in the question of how much classical communication must supplement LHVs to simulate the behaviour of a quantum state. For the maximally entangled state of two qubits, after some partial results [7; 8], Toner and Bacon provided a definitive solution by describing a protocol that simulates the statistics of all projective measurements using only one bit of communication, which we refer to as _LHV+1_ [9]. Subsequently, Degorre and coworkers used a different approach and found another protocol which also requires only one bit of communication [10]. The case of non-maximally entangled pure states proved harder. By invoking the Toner-Bacon model, two bits of communication are certainly sufficient [9], while Brunner and coworkers proved that one PR-box is not [5]. But the simulation of those states in LHV+1 remained open. Only recently, Renner and coworkers reported an LHV+1 protocol that exactly simulates weakly entangled pure states [11]. Our neural network will provide evidence that projective measurements on all two-qubit states can be very closely approximated in LHV+1.
The LHV+1 problem could, in principle, be approached systematically, since the behaviours that can be obtained with those resources are contained in a _polytope_. However, the size of this polytope grows very fast with the number of inputs and outputs: as of today, after some initial works [12; 13], the largest LHV+1 polytope to be completely characterized has three measurements per party and binary outcomes, and no quantum violation was found [14]. Addressing the problem for higher-dimensional systems has been challenging without going to the asymptotic limit. The only work we are aware of is that of Vertesi and Bene, who showed that a pair of maximally entangled four-dimensional quantum systems cannot be simulated with only one bit of communication, by presenting a scenario involving an infinite number of measurements [15]. In recent years, there have also been increasing attempts to study quantum correlations with machine learning. Many of them reveal the great potential neural networks have in tackling the complexities of detecting nonlocality and entanglement [16; 17; 18; 19; 20; 21]. The choice of tackling the LHV+1 problem with machine learning is prompted by the fact that there is no compact parametrisation of LHVs, nor of the dependence of the bit of communication on the parameters of the problem. Thus, we are looking for a solution to a problem whose variables are themselves poorly specified. Moreover, similar to an LHV model, everything inside a neural network has definite values. Thus, it seems natural to devise a machine learning tool, specifically an artificial neural network (ANN), to act as an LHV model. This work is organized into two parts. In Section II, we study the simulability of the correlations of entangled states with classical resources and one bit of communication using a neural network. We also present a semianalytical protocol which _approximates_ the behaviour of partially entangled two-qubit states with one bit of communication, and we study the errors of our protocols. In Section III, we search for a quantum behaviour in dimensions higher than two qubits that cannot be simulated by a single bit of communication.

## II Two-qubit entangled states using machine learning

### Using Neural Networks to generate protocols

Inspired by the use of a neural network as an oracle of locality [19], we approached the problem using an artificial neural network. The network takes in the measurement settings \(\hat{a}\) and \(\hat{b}\) as inputs and outputs an LHV+1 probability distribution, with an architecture that enforces the suitable locality constraints, which we discuss below. The output distribution is then compared against the target distribution using a suitable error function, namely the Kullback-Leibler divergence. The Local Hidden Variables (LHV) are described by a random variable \(\lambda\) shared among both parties. \(\lambda\) can be of any form; the model of Toner and Bacon uses a pair of uniformly distributed Bloch vectors, the model by Renner [11] uses a biased distribution on the Bloch sphere, and the neural network of [19] uses only a single number, distributed normally or uniformly, as the LHV. In theory, the choice is ultimately inconsequential, because the different LHV models can be made equivalent by some transformation. However, the neural network will perform differently, since it can only process a certain amount of complexity in the model.
From trial and error, we settled on Toner-Bacon's _uniformly distributed vector pair_ as the LHV model in our neural network. A probability distribution \(P(A,B)\) is _local_ if it can be written as \[P_{L}(A,B\mid\hat{a},\hat{b})=\int P(A\mid\hat{a},\lambda)\;P(B\mid\hat{b},\lambda)\;d\lambda. \tag{1}\] The network approximates a local distribution by the Monte Carlo method as \[P_{L}(A,B\mid\hat{a},\hat{b})=\frac{1}{N}\sum_{i=1}^{N}P(A\mid\hat{a},\lambda_{i})\;P(B\mid\hat{b},\lambda_{i}), \tag{2}\] where \(N\) is a sufficiently large number (\(\geq 1000\)). In the network, Alice and Bob are represented as a series of hidden layers. Each of the parties takes in its inputs according to the locality constraint and outputs its own local probability distribution. The activation functions used in the hidden layers are standard ones, such as the rectified linear unit (ReLU), with a softmax function used to normalise the probabilities. The forward propagation is done \(N\) times using varying values of \(\lambda_{i}\) sampled from a probability distribution; thereafter we take the average of the probabilities over \(N\) to get the probability distribution expressed in equation (2). To move from LHV to LHV+1, we notice that sending one bit of communication is equivalent to giving Alice the power to choose between one out of two local strategies. The recipe looks as follows:

* Alice and Bob pre-agree on _two_ local strategies \(P_{L,1}\) and \(P_{L,2}\), as well as on the \(\lambda\) to be used in each round. It seems to us that all previous works in LHV+1 assumed \(P_{1}(A\mid\hat{a},\lambda)=P_{2}(A\mid\hat{a},\lambda)\), but of course there is no need to impose such a constraint.
* Upon receiving her input \(\hat{a}\), Alice decides which of the two strategies should be used for that round, taking also \(\lambda\) into account. Although all previous LHV+1 models used a deterministic decision, there is no reason to impose that: Alice's decision could be stochastic. She informs Bob of her choice with one bit of communication \(c\), and Bob consequently keeps his outcome for the chosen strategy.

Thus, given a randomly sampled LHV \(\lambda_{i}\), the LHV+1 model is described by \[\begin{split}P(A,B\mid\hat{a},\hat{b},\lambda_{i})&=P(c=+1\mid\hat{a},\lambda_{i})P_{L,1}(A,B\mid\hat{a},\hat{b},\lambda_{i})+P(c=-1\mid\hat{a},\lambda_{i})P_{L,2}(A,B\mid\hat{a},\hat{b},\lambda_{i})\\ &=P(c=+1\mid\hat{a},\lambda_{i})P_{1}(A\mid\hat{a},\lambda_{i})P_{1}(B\mid\hat{b},\lambda_{i})+P(c=-1\mid\hat{a},\lambda_{i})P_{2}(A\mid\hat{a},\lambda_{i})P_{2}(B\mid\hat{b},\lambda_{i}),\end{split} \tag{3}\] where we label by \(c=+1\) (respectively \(c=-1\)) the value of the bit of communication when Alice decides for strategy 1 (resp. 2). The complete model thus consists of two local networks and one communication network. The communication network consists of a series of layers whose inputs are the same as Alice's and whose output, obtained with a sigmoid activation function, is a number between 0 and 1 representing \(P(c\mid\hat{a},\lambda_{i})\); this is then used to make a convex mixture of the two local strategies for the particular inputs and LHV. The final network architecture can be seen in Fig. 1. This approach of using a neural network to generate local strategies was originally used in a network setting [19]. In that work, the network was used to verify non-locality by looking for transitions in the behaviours of distributions when mixed with noise.
When a state is mixed with noise, it lies within a local set, up to a certain noise threshold; reducing the amount of noise in the state allows for the identification of sharp transitions in the network's error, indicating when the state exits the local set. Here, instead of such an oracle, we will use the network to generate a protocol to simulate the quantum state by analysing its outputs.

### Simulating Two-qubit States

For a two-qubit scenario, the joint measurements can be defined by two vectors on the Bloch sphere, i.e. \(\hat{a},\hat{b}\in S^{2}\). Thus, the behaviour is the set \(\mathcal{P}(\rho)=\{P_{\rho}(A,B\mid\hat{a},\hat{b})\mid\hat{a},\hat{b}\in S^{2}\}\).

Figure 1: **The architecture of the Artificial Neural Network (ANN).** The model consists of two local distributions and a communication network. In each distribution, the two parties are constrained by locality by routing the input accordingly. The communication network outputs a value between 0 and 1, representing the probability of Alice sending a certain bit to Bob. The output for a particular round is then simply the convex combination of the two local distributions.

#### ii.2.1 Maximally Entangled State

The maximally entangled state case has been solved analytically by Toner and Bacon [9]. Thus, we used this state as a test bed for our machine learning approach by training the machine to simulate the distribution of the maximally entangled state \(|\Psi^{-}\rangle\). A snapshot of the behaviour of the trained model can be seen in Fig. 5. By scrutinising similar figures for different LHVs, we can infer an analytical model of the machine.

Maximally entangled state protocol:
1. Alice sends to Bob \[c=\text{sgn}(\hat{a}\cdot\hat{\lambda}_{1})\,\text{sgn}(\hat{a}\cdot\hat{\lambda}_{2}).\]
2. Alice outputs \[A=-\text{sgn}(\hat{a}\cdot(\hat{\lambda}_{1}+c\hat{\lambda}_{2})).\]
3. Bob outputs \[B=\text{sgn}(\hat{b}\cdot(\hat{\lambda}_{1}+c\hat{\lambda}_{2})).\]

The protocol bears much resemblance to the original Toner-Bacon protocol, the only difference being the output of Alice, which is simply \(-\text{sgn}(\hat{a}\cdot\hat{\lambda}_{1})\) in the original protocol. However, one can check [22] that it indeed fulfills all the correct expectation values; a Monte Carlo check is sketched below.
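As a sanity check of the protocol above, one can estimate the singlet-state correlator by Monte Carlo sampling of the shared vector pair; the quantum prediction is \(\langle AB\rangle=-\hat{a}\cdot\hat{b}\) with uniform marginals. A minimal sketch (independent of the trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    """n vectors uniformly distributed on the sphere, as in the Toner-Bacon LHV."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def correlator(a, b, n=10**6):
    """Monte Carlo estimate of <AB> for the protocol above."""
    l1, l2 = random_unit_vectors(n), random_unit_vectors(n)
    c = np.sign(l1 @ a) * np.sign(l2 @ a)   # the communicated bit
    shared = l1 + c[:, None] * l2
    A = -np.sign(shared @ a)                # Alice's output
    B = np.sign(shared @ b)                 # Bob's output
    return np.mean(A * B)

a = np.array([0.0, 0.0, 1.0])
b = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
print(correlator(a, b), -a @ b)             # both close to -0.707
```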
#### ii.2.2 Non-maximally Entangled States

We now apply the same method to the non-maximally entangled two-qubit states. Without loss of generality, any pure two-qubit state can be written in the form \[\left|\psi(\theta)\right\rangle=\cos(\theta)\left|01\right\rangle-\sin(\theta)\left|10\right\rangle,\quad\theta\in\left[0,\frac{\pi}{4}\right]\] using a suitable choice of bases. The state is maximally entangled when \(\theta=\frac{\pi}{4}\) and separable when \(\theta=0\). We train the network to simulate the distribution of \(\left|\psi(\theta)\right\rangle\) with \(\theta\in\left[0,\frac{\pi}{4}\right]\). A selection of the resulting protocols is shown in Fig. 6. The error of the models for these states is lower than for the maximally entangled state (see Fig. 2). This does not necessarily mean that they simulate the states exactly rather than merely approximating them: if the behaviour were actually nonlocal, we should expect a transition in the error when we mix the state with noise, signifying the exit of the state from the local+1 bit set. However, we observe no clear transition when noise is added to the state, only a shallow gradient, suggesting that the behaviour is still inside the local+1 bit set.

While encouraging, this does not constitute a proof, and we would still need to write an analytical protocol. Unlike the case of maximally entangled states, the models we obtained for the non-maximally entangled states are more complex, and our attempt to infer a protocol begins by looking at figures similar to Fig. 6. We start from the parties' outputs. The outputs of Alice are of the form \[P(A_{1}=+1\mid\hat{a})=\frac{1}{2}(1-\operatorname{sgn}(\hat{a}\cdot\vec{\lambda}_{a1}+b_{a1})),\] where \(\vec{\lambda}_{a1}=u_{a1}\vec{\lambda}_{1}+\vec{\lambda}_{2}+v_{a1}\hat{z}\) determines the hemisphere direction and \(b_{a1}=w_{a1}+x_{a1}\vec{\lambda}_{1}\cdot\hat{z}+y_{a1}\vec{\lambda}_{2}\cdot\hat{z}\) determines the size of the hemisphere. Similarly, \[P(A_{2}=+1\mid\hat{a})=\frac{1}{2}(1+\operatorname{sgn}(\hat{a}\cdot\vec{\lambda}_{a2}+b_{a2})),\] \[P(B_{1}=+1\mid\hat{b})=\frac{1}{2}(1+\operatorname{sgn}(\hat{b}\cdot\vec{\lambda}_{b1}+b_{b1})),\] \[P(B_{2}=+1\mid\hat{b})=\frac{1}{2}(1-\operatorname{sgn}(\hat{b}\cdot\vec{\lambda}_{b2}+b_{b2})).\] Using numerical algorithms, we can approximately obtain the relevant coefficients, laid out in Appendix A for the different states. The (simplified) bit of communication is given by \[P(c=+1\mid\hat{a})=\frac{1}{2}(1-\operatorname{clip}(f_{c},-1,1)),\] where \[\begin{split}f_{c}&=\Theta(\hat{a}\cdot\vec{\lambda}_{1}+b_{c})\,\Theta(\hat{a}\cdot\vec{\lambda}_{2}+b_{c})+\Theta(-\hat{a}\cdot\vec{\lambda}_{1}+b_{c})\,\Theta(-\hat{a}\cdot\vec{\lambda}_{2}+b_{c})\\ &\quad-\Theta(-\hat{a}\cdot\vec{\lambda}_{1}-b_{c})\,\Theta(\hat{a}\cdot\vec{\lambda}_{2}-b_{c})-\Theta(\hat{a}\cdot\vec{\lambda}_{1}-b_{c})\,\Theta(-\hat{a}\cdot\vec{\lambda}_{2}-b_{c}),\end{split}\] with \(b_{c}=u_{c}+v_{c}(\vec{\lambda}_{2}\cdot\hat{z})(1-\vec{\lambda}_{1}\cdot\hat{z})\), and the clip function is defined as \[\operatorname{clip}(x,a,b)=\begin{cases}a&\text{if }x<a\\ b&\text{if }x>b\\ x&\text{otherwise}\end{cases}.\] Again, the relevant coefficients obtained using numerical methods are listed in Appendix A.

Figure 2: **The relative error between the neural network models' behaviours and the quantum behaviours.** The blue dots are for the original model described, while the red crosses are for the simplified model described in the text. The grey shaded region is the region in which an LHV+1 model is known [11].

### Statistical analysis of the simulations

Having presented our protocols, we can now consider their performance, both for the neural network protocol itself and for the semianalytical protocol we distilled from it. These LHV+1 protocols are not exact protocols but approximations, and we can describe their closeness to the quantum behaviour by providing statistical error values. To get a better intuition for the error values, let us consider a hypothesis testing scenario [23]. Suppose that we have an unknown sample of length \(n\), generated by the same measurement performed on \(n\) identical systems. Suppose also that we know that the systems are all either actual quantum systems (\(P_{Q}\)) or our LHV+1 models (\(P_{LHV+1}\)), but we do not know which. Let us take \(P_{LHV+1}\) as the null hypothesis, and let \(a\) be the Type I error (mistakenly rejecting a true null hypothesis). In our case, a Type I error would correspond to our machine learning model successfully passing itself off as a quantum system.
For any decision-making procedure, the probability of a Type I error is lower bounded by \[a\geq e^{-nD_{KL}(P_{Q}||P_{LHV+1})}.\] Thus, in order to have 95% confidence in rejecting a sample from the LHV+1 model, we would need a sample size of \[n_{95\%}\geq-\frac{\ln 0.05}{D_{KL}(P_{Q}||P_{LHV+1})}.\] The sample size \(n\) needed to distinguish the probability distributions varies with the measurement settings, some settings being more difficult to distinguish than others. The performance of our LHV+1 models (both the machine learning models and our semianalytical approximations) over the measurement settings is shown in Fig. 3.

Figure 3: **Violin plots for the neural network (blue) and the semianalytical protocol we presented (red), describing the following values:** **(a)** The Kullback-Leibler divergence between our protocols and the quantum behaviours. **(b)** The Total Variational Distance between our protocols and the quantum behaviours. **(c)** The minimum sample size needed to have at least 95% confidence in distinguishing the two behaviours, as described in the hypothesis testing scenario. In all three, the violin shapes illustrate the distributions of the values over the different projective measurements on the two-qubit state.

It can be seen that, going from the neural network's protocols to our semianalytical approximations, the Kullback-Leibler divergence grows by about two orders of magnitude. This is due to the limitations of the numerical methods used to obtain the optimal parameters, and to the fact that we inevitably missed some details of the network's behaviour when translating it into analytical expressions. Our semianalytical protocols require, on average, hundreds of measurements before they can be distinguished from real quantum behaviours, disregarding other noise present in an actual quantum system. Even better, for the neural networks themselves, it would take upwards of \(10^{4}\) samples to distinguish them from an actual quantum system. Ideally, one might try to see whether the semianalytical protocols, when integrated analytically to give the full behaviour, can be made into an exact protocol with the correct parameters. However, the communication function is very tricky to integrate analytically, and thus this approach might not work. On the other hand, considering that an exact protocol can already simulate some two-qubit states, these pieces of evidence suggest that all two-qubit states can be simulated with just a single bit of communication. Ultimately, however, the question of _exactly_ simulating partially entangled states with one bit of communication remains open.

## III Searching for a Bell violation of the one-bit of communication polytope

Since two-qubit states are simulatable up to very good precision, we now consider a different question: can we find an explicit quantum behaviour that is unsimulatable with one bit of communication? We try to go to higher-dimensional systems and look for a Bell-like inequality for the communication polytope. As far as we know, no violation of a Bell-like inequality for the one-bit of communication polytope has ever been described. For the rest of the section, let \(\mathcal{L}\) be the local set, \(\mathcal{Q}\) the quantum set, and \(\mathcal{C}\) the one-bit of communication set. We are interested in points inside \(\mathcal{Q}\) that lie outside \(\mathcal{C}\).

### Description of the polytope

Similar to \(\mathcal{L}\), \(\mathcal{C}\) is also a convex polytope.
However, unlike \(\mathcal{L}\), it does not lie inside the no-signalling space \(\mathcal{NS}\). Let \(\mathcal{A}\) (\(\mathcal{B}\)) be the output set of Alice (Bob) and \(\mathcal{X}\) (\(\mathcal{Y}\)) her (his) input set. The number of deterministic strategies that can be performed with a single bit of communication is \(|\mathcal{A}|^{|\mathcal{X}|}|\mathcal{B}|^{2|\mathcal{Y}|}2^{|\mathcal{X}|}\). However, due to duplicates, the number reduces to [14] \[|\mathcal{A}|^{|\mathcal{X}|}\left(|\mathcal{B}|^{|\mathcal{Y}|}+(2^{|\mathcal{X}|-1}-1)(|\mathcal{B}|^{2|\mathcal{Y}|}-|\mathcal{B}|^{|\mathcal{Y}|})\right).\] \(\mathcal{C}\) is the convex polytope formed by these vertices. In practice, we can only generate polytopes of up to around \(2\times 10^{7}\) points due to memory limitations. Since the number of extremal points of \(\mathcal{C}\) is much larger than that of \(\mathcal{L}\), we can quickly discard the possibility of performing a full facet enumeration. Hence, we have to resort to other methods for our search.
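For concreteness, the counting formula above can be evaluated directly; a minimal sketch, with scenario tuples following the \((|\mathcal{X}|,|\mathcal{Y}|,|\mathcal{A}|,|\mathcal{B}|)\) convention used below:

```python
def n_vertices(nX, nY, nA, nB):
    """Number of distinct deterministic one-bit strategies (formula above)."""
    return nA**nX * (nB**nY + (2**(nX - 1) - 1) * (nB**(2 * nY) - nB**nY))

for scenario in [(3, 3, 3, 3), (4, 4, 3, 3), (4, 2, 4, 4), (4, 3, 4, 4), (5, 2, 4, 4)]:
    print(scenario, n_vertices(*scenario))
```

The rapid growth of these counts with the number of inputs and outputs is what makes the memory limitation mentioned above binding.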
### Random sampling quantum behaviours in higher dimensions

We first tried to sample points from \(\mathcal{Q}\) by measuring the maximally entangled two-qutrit and two-ququart states with measurements sampled uniformly in the Haar measure, before using linear programming to solve the membership problem for \(\mathcal{C}\). However, this method proved ineffective: we did not find any behaviour lying outside \(\mathcal{C}\), and a significant fraction of the sampled behaviours even lies inside \(\mathcal{L}\). The statistics of this method can be seen in Table 1.

\begin{table} \begin{tabular}{c|c c} \((|\mathcal{X}|,|\mathcal{Y}|,|\mathcal{A}|,|\mathcal{B}|)\) & Points sampled & Proportion in \(\mathcal{L}\) \\ \hline \hline (3,3,3,3) & 10000 & 25.6\% \\ (3,4,3,3) & 300 & 10.0\% \\ (4,3,3,3) & 300 & 10.3\% \\ (4,4,3,3) & 100 & 1.0\% \\ (3,3,4,4) & 500 & 56.6\% \\ \hline \end{tabular} \end{table} Table 1: The statistics for the sampling approach. None of the points sampled fall outside \(\mathcal{C}\).

### Using non-signalling points

The next method we used was to take points in \(\mathcal{NS}\) and mix them with noise in order to find the threshold noise levels at which they exit the sets \(\mathcal{Q}\) and \(\mathcal{C}\). If we find a point \(P_{\mathcal{NS}}\) which exits \(\mathcal{Q}\) at a lower noise level \(w_{\mathcal{Q}}\) than the corresponding level \(w_{\mathcal{C}}\) for \(\mathcal{C}\), then all behaviours \[wP_{\mathcal{NS}}+(1-w)P_{noise}\] with \(w_{\mathcal{C}}<w<w_{\mathcal{Q}}\) are behaviours in \(\mathcal{Q}\) that are unsimulatable by one bit of communication. The membership problem for \(\mathcal{Q}\) is solved using the NPA hierarchy method [24] at level 2 of the hierarchy. A graphical illustration can be seen in Fig. 4.

Figure 4: \(w_{\mathcal{Q}}\) is the threshold weight for the quantum set \(\mathcal{Q}\), while \(w_{\mathcal{C}}\) is the threshold weight for the one-bit communication set \(\mathcal{C}\). Thus, \(w_{\mathcal{C}}<w_{\mathcal{Q}}\) would imply a violation and would give a quantum behaviour that could not be simulated by a single bit of communication.

Choosing a suitable \(P_{\mathcal{NS}}\), however, proved to be a challenge. The extremal points of the \(\mathcal{NS}\) space have only been characterised for binary inputs or binary outputs [25; 26]. Here, we mostly used nonlocal points which are locally unbiased, i.e. for all inputs, all local outputs are equally probable, and maximally correlated, i.e. for all input combinations there is a perfect correlation between Alice's and Bob's outputs: for a particular output of Alice, the output of Bob is determined, and vice versa. While we tried other non-signalling points, this particular class of points gave us the smallest gap between \(w_{\mathcal{Q}}\) and \(w_{\mathcal{C}}\). Similarly, there are numerous choices for \(P_{noise}\), but we find that white noise gives the smallest gap in most scenarios. The smallest gaps \((w_{\mathcal{C}}-w_{\mathcal{Q}})\) found in each scenario are listed in Table 3. In the case of \(|\mathcal{A}|=|\mathcal{B}|=3\), we did not find any violation; the smallest gap was observed in the \((|\mathcal{X}|,|\mathcal{Y}|,|\mathcal{A}|,|\mathcal{B}|)=(4,4,3,3)\) scenario, where there exists a point with \(w_{\mathcal{C}}=0.6612\) and \(w_{\mathcal{Q}}=0.6289\). However, for \(|\mathcal{A}|=|\mathcal{B}|=4\), specifically in the \((4,2,4,4)\) setting, we found a \(P_{\mathcal{NS}}\) which has \(w_{\mathcal{Q}}=w_{\mathcal{C}}=\frac{2}{3}\), described in Table 2. The table itself can also be interpreted as a Bell inequality, by taking the entries of the table as the coefficients of the correlation terms and adding all of them. When normalised into a Bell game, the value of the game is \(\frac{3}{4}\) for both \(\mathcal{Q}\) and \(\mathcal{C}\). This Bell facet is the hyperplane which has the line connecting \(P_{\mathcal{NS}}\) and \(P_{noise}\) as its normal. This point represents our closest attempt at finding a violation with this method. The number of extremal points of \(\mathcal{C}\) in the \((4,2,4,4)\) scenario is around \(1\times 10^{6}\), and it is possible to go one input higher, to \((4,3,4,4)\) or \((5,2,4,4)\). A violation might exist there, but our heuristic search proved unfruitful. In the end, contrary to the prepare-and-measure scenario [27], it remains an open problem to find a bipartite quantum behaviour that is provably unsimulatable with one bit of communication [28].

## IV Conclusion

In this work, we tried to further the works that have been done on characterising the communication complexity cost of quantum behaviours. We used a neural network to obtain a protocol that simulates partially entangled two-qubit states, and we presented a semianalytical LHV+1 protocol based on the protocols of the neural networks. While these protocols only approximate the quantum behaviours, on average one needs hundreds of measurement samples for the semianalytical protocols, and tens of thousands for the neural network protocols, to distinguish them from the quantum behaviour. We also tried to find quantum behaviours in higher dimensions that could not be simulated with one bit of communication. While we were able to find a Bell-like inequality that has the same maximum value in \(\mathcal{Q}\) and \(\mathcal{C}\), we were unable to find a violation. From this work and all the previous works done on the topic, it can be seen that evaluating the capabilities of entangled quantum states in terms of communication complexity is very difficult. While we are confident that a behaviour that cannot be simulated with a single bit could probably be found, extending the work to more bits and states would probably be too difficult, barring any new revolutionary techniques.
On the other hand, given our result that numerical protocols closely approximating the entangled two-qubit states can be found, the task of _exactly_ simulating partially entangled two-qubit states using one bit of communication is plausibly achievable, and a fully analytical protocol may well be found in the near future.

## V Code availability

The code is available at [https://github.com/PeterSidajava/neural-network-fp/](https://github.com/PeterSidajava/neural-network-fp/).

\begin{table} \begin{tabular}{c|c|c c c c|c c c c|} & & \multicolumn{4}{c|}{\(Y=1\)} & \multicolumn{4}{c|}{\(Y=2\)} \\ \cline{3-10} & & \(P(B=1)\) & \(P(B=2)\) & \(P(B=3)\) & \(P(B=4)\) & \(P(B=1)\) & \(P(B=2)\) & \(P(B=3)\) & \(P(B=4)\) \\ \hline \multirow{4}{*}{\(X=1\)} & \(P(A=1)\) & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ & \(P(A=2)\) & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\ & \(P(A=3)\) & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ & \(P(A=4)\) & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ \hline \multirow{4}{*}{\(X=2\)} & \(P(A=1)\) & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ & \(P(A=2)\) & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ & \(P(A=3)\) & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ & \(P(A=4)\) & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ \hline \multirow{4}{*}{\(X=3\)} & \(P(A=1)\) & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ & \(P(A=2)\) & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\ & \(P(A=3)\) & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ & \(P(A=4)\) & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ \hline \multirow{4}{*}{\(X=4\)} & \(P(A=1)\) & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ & \(P(A=2)\) & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 \\ & \(P(A=3)\) & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ & \(P(A=4)\) & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 \\ \hline \end{tabular} \end{table} Table 2: The point in \((4,2,4,4)\) which has \(w_{\mathcal{Q}}=w_{\mathcal{C}}=\frac{2}{3}\). Each of the smaller eight boxes corresponds to the output table for a particular combination of inputs, with Alice's input indexing the vertical dimension and Bob's the horizontal. In each of the boxes, the \(4\times 4\) table corresponds to the outputs of Alice and Bob, with Alice's in the vertical and Bob's in the horizontal. Note that a 1 here means \(\frac{1}{4}\), so that each of the boxes sums up to 1, as required.

\begin{table} \begin{tabular}{c|c c|c} \((|\mathcal{X}|,|\mathcal{Y}|,|\mathcal{A}|,|\mathcal{B}|)\) & \(w_{\mathcal{Q}}\) & \(w_{\mathcal{C}}\) & \(w_{\mathcal{C}}-w_{\mathcal{Q}}\) \\ \hline \hline (3,3,3,3) & 0.5995 & 0.7000 & 0.1005 \\ (3,4,3,3) & 0.6022 & 0.7000 & 0.0978 \\ (4,3,3,3) & 0.5856 & 0.6766 & 0.0910 \\ (4,4,3,3) & 0.6289 & 0.6612 & 0.0323 \\ (5,3,3,3) & 0.5768 & 0.6610 & 0.0842 \\ (3,3,4,4) & 0.6159 & 0.7143 & 0.0984 \\ (4,2,4,4) & 0.6666 & 0.6666 & 0.0000 \\ \hline \end{tabular} \end{table} Table 3: The smallest gap \((w_{\mathcal{C}}-w_{\mathcal{Q}})\) in each scenario we studied.

## Acknowledgments

This research is supported by the National Research Foundation, Singapore and A*STAR under its CQT Bridging Grant. We thank Maria Balanzo-Juando, Martin J. Renner, Marco Tulio Quintino, and Marco Tomamichel for discussions. We are also grateful to the authors of [19] for making their code public. We also thank the National University of Singapore Information Technology for the use of their high-performance computing resources.
2309.11064
Exploring the Relationship between LLM Hallucinations and Prompt Linguistic Nuances: Readability, Formality, and Concreteness
As Large Language Models (LLMs) have advanced, they have brought forth new challenges, with one of the prominent issues being LLM hallucination. While various mitigation techniques are emerging to address hallucination, it is equally crucial to delve into its underlying causes. Consequently, in this preliminary exploratory investigation, we examine how linguistic factors in prompts, specifically readability, formality, and concreteness, influence the occurrence of hallucinations. Our experimental results suggest that prompts characterized by greater formality and concreteness tend to result in reduced hallucination. However, the outcomes pertaining to readability are somewhat inconclusive, showing a mixed pattern.
Vipula Rawte, Prachi Priya, S. M Towhidul Islam Tonmoy, S M Mehedi Zaman, Amit Sheth, Amitava Das
2023-09-20T05:04:16Z
http://arxiv.org/abs/2309.11064v1
Exploring the Relationship between LLM Hallucinations and Prompt Linguistic Nuances: Readability, Formality, and Concreteness ###### Abstract As Large Language Models (LLMs) have advanced, they have brought forth new challenges, with one of the prominent issues being LLM hallucination. While various mitigation techniques are emerging to address hallucination, it is equally crucial to delve into its underlying causes. Consequently, in this preliminary exploratory investigation, we examine how linguistic factors in prompts, specifically readability, formality, and concreteness, influence the occurrence of hallucinations. Our experimental results suggest that prompts characterized by greater formality and concreteness tend to result in reduced hallucination. However, the outcomes pertaining to readability are somewhat inconclusive, showing a mixed pattern. ## 1 Hallucination in LLMs: An introduction The remarkable advantages offered by extensive generative AI models like GPT-4 (Brown et al., 2020; OpenAI, 2023), Stable Diffusion (Rombach et al., 2022), DALL-E (Ramesh et al., 2021, 2022), and Midjourney (Midjourney, 2022) are accompanied by a significant potential for misuse. Recent and rapid developments in the field of LLMs have gained significant attention and found use in various applications. These include natural language understanding and generation for chatbots, content generation, translation, summarization, and more. LLMs are also being applied in fields like healthcare, finance, and education. Nevertheless, these LLMs encounter significant hurdles, with one prominent issue being termed _hallucination_. This term describes a situation in which the LLM generates responses that contain factual inaccuracies or fabrications. Several mitigation techniques have emerged to address and reduce the occurrence of hallucinations. These techniques can be broadly categorized into two groups: i) Black-box (Mundler et al., 2023), which operates without depending on external grounded knowledge, and ii) Gray-box (Zhang et al., 2023; Peng et al., 2023; Li et al., 2023), which incorporates external knowledge to a certain extent. Prompt engineering can play a crucial role in mitigating hallucinations in generative AI models. By providing clear and specific prompts, users can steer the AI model toward generating content that aligns with their intended context or requirements. This can reduce the chances of the model producing hallucinated or inaccurate information. Prompts can include contextual cues that help the AI model understand the context of the request. This additional context can guide the model in generating responses that are more contextually accurate and less prone to hallucination. Complex prompts can be used to guide the model through a series of steps, ensuring that it follows a logical sequence of thought and produces coherent responses. Figure 1: An illustration of how a “reformulated prompt” can aid in addressing the hallucination issue by providing pertinent context. Here, the hallucinated text is highlighted in red. By introducing additional context highlighted in blue, such as “who” and “what”, we modify the prompt to be more formal and concrete. Thus, the newly generated response now incorporates the factually correct (dehallucinated) text, highlighted in green. The state-of-the-art LLMs have the capability to process lengthy prompts as input. However, findings in Liu et al. (2023) indicate (see Fig. 2)
that these models tend to perform best when pertinent information is located at the beginning or end of the input context; their performance significantly diminishes when they need to access relevant information in the middle of lengthy contexts. Moreover, as the input context becomes more extended, even models explicitly designed for longer contexts experience a substantial decrease in performance. In this paper, our primary objective is to explore the impact of the key linguistic attributes of prompts on hallucinations generated in LLMs. The contributions are as follows: 1) We delineate the broad categories of hallucinations observed in LLMs, as discussed in Section 2. 2) We construct and provide annotations for our dataset, which is derived from tweets related to New York Times events, as detailed in Section 3. 3) We analyze the relationship between the primary linguistic aspects of prompts, such as their readability, formality, and concreteness, and the occurrence of hallucinations in LLMs, as discussed in Section 4. ## 2 Types of Hallucination In this study, we explore the following _four_ different categories of hallucination. Additionally, we offer examples for each case, in which the hallucinated text is marked in red. 1. Person (P): The issue of generating fictional characters is discussed in Ladhak et al. (2023) and Table 1. 2. Location (L): The case of generating fictional places is addressed in Ladhak et al. (2023) and Table 1. 3. Number (N): Similarly, Varshney et al. (2023) delves into the generation of fabricated numbers, as shown in Table 2. 4. Acronym (A): Additionally, we investigate the potential role of acronyms in prompting the generation of inaccurate responses, as illustrated in Table 3. ## 3 Dataset and Annotation To conduct our empirical analysis, where we examine how linguistic properties affect hallucination, we create and annotate a hallucination dataset using the NYT tweets detailed in the following sections. \begin{table} \begin{tabular}{p{56.9pt}|p{284.5pt}} \hline \hline **Original** & Antoine Richard is a former athlete from France who mainly competed in the 100 metres. He was French 100 metre champion on 5 occasions, and also 200 metre winner in 1985. He also won the French 60 metres title 5 times as well. \\ \hline **AI-generated** & Athletic Naoki Tsukahara was born in Tokyo, Japan to a Japanese father and French mother. \\ \hline \hline \end{tabular} \end{table} Table 1: An example showing how imaginary places (such as Tokyo) and imaginary persons (such as the father and mother) are hallucinated (Ladhak et al., 2023). \begin{table} \begin{tabular}{p{56.9pt}|p{284.5pt}} \hline \hline **Original** & Freddie Frith. \\ \hline **AI-generated** & He was born in London in 1929 and began his racing career in 1951. \\ \hline **Fact** & He was born in Grimsby in 1909 and began his career in 1930. \\ \hline \hline \end{tabular} \end{table} Table 2: Both years, 1929 and 1951, are hallucinated. Figure 2: Empirical results in Liu et al. (2023) show that the models tend to excel at utilizing pertinent information found at the very start or end of their input context, but their performance notably declines when they need to access and utilize information situated in the middle of their input context. ### New York Times News Tweets We utilize a news dataset, specifically the New York Times (NYT) news events tweets (NYT). We select a total of 2,500 tweets.
These news tweets serve as our source of factually accurate prompts, which are then presented to the fifteen Large Language Models (LLMs) described in Section 3.2. ### Selection of LLMs We have selected 15 contemporary LLMs that have consistently demonstrated outstanding performance across a wide spectrum of NLP tasks. These models include: (i) GPT-4 (OpenAI, 2023) (ii) GPT-3.5 (OpenAI, 2022) (iii) GPT-3 (Brown et al., 2020) (iv) GPT-2 (Radford et al., 2019) (v) MPT (Wang et al., 2023) (vi) OPT (Zhang et al., 2022) (vii) LLaMA (Touvron et al., 2023) (viii) BLOOM (Scao et al., 2022) (ix) Alpaca (Taori et al., 2023) (x) Vicuna (Chiang et al., 2023) (xi) Dolly (databricks, 2023) (xii) StableLM (Liu et al., 2023) (xiii) XLNet (Yang et al., 2019) (xiv) T5 (Raffel et al., 2020) (xv) T0 (Deleu et al., 2022). ### Annotation guidelines For the purpose of annotating the 2,500 text snippets, we leveraged the services of Amazon Mechanical Turk (AMT) (Amazon). Through this platform, we obtained annotations at the sentence level to identify the four different categories of hallucination. ### Dataset statistics Following the annotation process, our dataset statistics for the hallucination categories are presented in Table 4. ## 4 Linguistic Properties of the Prompt Linguistic properties refer to the various characteristics and attributes of language and its components. These properties encompass a wide range of aspects that help define and understand a language. Some fundamental linguistic properties include: syntactic, semantic, pragmatic, and lexical. Considering these characteristics, we will delve more deeply into the three primary linguistic subtleties in the forthcoming Sections 4.1 to 4.3. ### Readability _Readability_ quantifies the ease with which a text can be comprehended. Several factors, including the text's complexity, familiarity, legibility, and typography, collectively contribute to its readability. The Flesch Reading Ease Score (FRES) (Flesch, 1948) (see Eq. (1)) is a measure of the readability of a text. It was developed to assess how easy or difficult a piece of text is to read and understand. The score is calculated based on two factors: (a) Sentence Length and (b) Word Complexity: \[\text{FRES}=206.835-1.015\left(\frac{\text{total words}}{\text{total sentences}}\right)-84.6\left(\frac{\text{total syllables}}{\text{total words}}\right) \tag{1}\] As shown in the following example, in the first sentence, the language is straightforward, and the sentence is easy to understand, resulting in a high readability score. In contrast, the second sentence contains complex vocabulary and lengthy phrasing, making it more challenging to comprehend, resulting in a lower readability score. \begin{table} \begin{tabular}{|l|l|} \hline **Category** & **Hallucinated sentences** \\ \hline **Person** & 14850 \\ \hline **Location** & 13050 \\ \hline **Number** & 7275 \\ \hline **Acronym** & 1225 \\ \hline \hline **Total** & 36910 \\ \hline \end{tabular} \end{table} Table 4: Hallucination dataset statistics \begin{table} \begin{tabular}{p{56.9pt}|p{284.5pt}} \hline \hline **Original** & Robert used PDF for his math homework. \\ \hline **AI-generated** & In the context you provided, “PDF” most likely refers to “Portable Document Format,” which is a common file format used for documents, including math homework assignments. PDF files are widely used for sharing documents because they preserve the formatting and layout across different devices and operating systems. Robert likely used a PDF file for his math homework to ensure that it could be easily shared and viewed by others without compatibility issues.
\\ \hline \hline \end{tabular} \end{table} Table 3: In the mathematical context, “PDF” stands for “Probability Density Function.” To investigate the impact of the readability of the prompt, we pose the following research questions: 1. How does the complexity of a prompt's language or vocabulary affect the likelihood of hallucination in LLM-generated responses? 2. Does the length of a prompt impact the potential for hallucination, and how does the readability of a long versus a short prompt affect LLM behavior? 3. How do different LLM architectures (e.g., GPT-3, GPT-4, etc.) respond to prompts of varying linguistic readability, and do they exhibit differences in hallucination tendencies? ### Formality The _formality_ of language refers to the degree of sophistication, decorum, or politeness conveyed by the choice of words, sentence structure, and overall tone in communication. It is a way to indicate the level of etiquette, respect, or professionalism in a given context. In the example given below, both sentences convey an identical message, yet the initial one carries significantly more formality. Such stylistic distinctions frequently exert a more significant influence on the reader's comprehension of the sentence than the literal meaning itself [11]. **Example of _formality_ in sentences** [10] * Those recommendations were unsolicited and undesirable. * that's the stupidest suggestion EVER. _Formality_ (defined in [1]) is calculated as given in Eq. (2): \[\begin{split}\text{F}=(\text{noun freq.}+\text{adjective freq.}+\text{preposition freq.}+\text{article freq.}\\ -\text{pronoun freq.}-\text{verb freq.}-\text{adverb freq.}-\text{interjection freq.}+100)/2\end{split} \tag{2}\] To examine how the formality of the prompt influences the outcome, we ask the following research inquiries: 1. How does the level of formality in prompts influence the likelihood of hallucination in responses generated by LLMs? 2. Are there specific categories of hallucination that are more prevalent in responses prompted with formal versus informal language? ### Concreteness _Concreteness_ assesses the extent to which a word represents a tangible or perceptible concept. As per the theory in [12], it is suggested that concrete words are easier to process compared to abstract words. The degree of concreteness associated with each word is expressed using a 5-point rating scale that ranges from abstract to concrete. A concrete word receives a higher rating and pertains to something that physically exists in reality, i.e., one can directly experience it through the senses (smell, taste, touch, hearing, sight) and actions. An abstract word receives a lower rating and refers to something that is not directly accessible through one's senses or actions. Its meaning is dependent on language and is usually elucidated by employing other words, since there is no straightforward method for direct demonstration. **Examples of _concrete_ words** Apple, Dog, Chair, Book, Water, Mountain, Car **Examples of _abstract_ words** Justice, Love, Happiness, Courage, Friendship, Wisdom, Equality, Democracy Concreteness ratings for 37,058 individual English words and 2,896 two-word expressions (i.e., a total of 39,954) are provided in [10].
Since these ratings are at the word level, we compute the concreteness of a sentence by taking an average, as described in Eq. (3): \[\text{Concreteness}=\frac{\sum_{i=1}^{n}\text{concreteness rating}_{i}}{n} \tag{3}\] In order to explore the influence of the prompt's concreteness, we present the following research questions. * How does the level of linguistic concreteness in a prompt impact the probability of hallucination in LLMs? * Do LLMs tend to hallucinate less when provided with prompts that include specific details and constraints? * Are LLMs more prone to hallucination when given abstract or vague prompts compared to concrete and specific prompts? ## 5 Our findings To investigate how the linguistic characteristics of prompts affect the generation of hallucinations in LLMs, we initially define the ranges for the three specific scores, as outlined in Table 5. A comprehensive analysis of these findings is presented in the following sections. ### Effects of _readability_ on hallucination in LLMs Fig. 3 illustrates our empirical findings, and the following are the main insights that address the research questions posed earlier in Section 4.1. * Prompts that are easier to read tend to have fewer instances of hallucinations. * Some prompts that are difficult to read but more formal also hallucinate less. * Hence, the results regarding readability are somewhat inconclusive, displaying a mix of findings. ### Effects of _formality_ on hallucination in LLMs Fig. 4 represents our empirical findings. The following points outline the primary insights that respond to the research queries introduced in Section 4.2. * Formal language prompts typically exhibit a lower propensity for generating hallucinatory content. * Our findings demonstrate how utilizing more formal prompts can address hallucinations in the **Person and Location categories.** * The linguistic impacts of the prompts become more evident in LLMs such as GPT-4, OPT, and subsequent versions. ### Effects of _concreteness_ on hallucination in LLMs Fig. 5 shows our experimental results, highlighting the core insights that address the research inquiries introduced in Section 4.3. ## 6 Conclusion In this preliminary research study, we begin by categorizing the primary types of hallucinations present in LLMs. Subsequently, we compile our dataset by utilizing New York Times news tweets, aligning with these established categories. Linguistic intricacies play a crucial role in the comprehension of language. Therefore, we delve into the examination of three significant linguistic dimensions, readability, formality, and concreteness, and their potential influence on the occurrence of hallucinations in LLMs.
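As a concrete illustration of how the three prompt features above could be computed in practice, here is a minimal Python sketch. It assumes the `textstat` package for the Flesch Reading Ease score of Eq. (1) and NLTK's part-of-speech tagger for the formality F-score of Eq. (2); the tag-to-word-class mapping is a coarse approximation of ours, and the small concreteness lookup table merely stands in for the 39,954 word-level ratings of [10].

```python
import textstat                      # readability: Flesch Reading Ease, Eq. (1)
import nltk                          # POS tags for the formality F-score, Eq. (2)
# requires: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

# Coarse mapping from Penn Treebank tag prefixes to the word classes of Eq. (2).
F_PLUS = ("NN", "JJ", "IN", "DT")    # nouns, adjectives, prepositions, articles
F_MINUS = ("PRP", "VB", "RB", "UH")  # pronouns, verbs, adverbs, interjections

def formality(text):
    """Heylighen-Dewaele F-score; frequencies are percentages of tagged words."""
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    freq = lambda prefixes: 100.0 * sum(t.startswith(prefixes) for t in tags) / len(tags)
    return (freq(F_PLUS) - freq(F_MINUS) + 100.0) / 2.0

# Hypothetical excerpt of the word-level ratings (1 = abstract, 5 = concrete).
CONCRETENESS = {"apple": 5.0, "dog": 4.9, "water": 4.9, "justice": 1.5, "love": 2.1}

def concreteness(text):
    """Eq. (3): average rating over the words that have a rating."""
    words = [w.strip(".,!?;:") for w in text.lower().split()]
    rated = [CONCRETENESS[w] for w in words if w in CONCRETENESS]
    return sum(rated) / len(rated) if rated else None

prompt = "The dog drank the water."
print(textstat.flesch_reading_ease(prompt), formality(prompt), concreteness(prompt))
```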
2309.17415
Intuitive or Dependent? Investigating LLMs' Behavior Style to Conflicting Prompts
This study investigates the behaviors of Large Language Models (LLMs) when faced with conflicting prompts versus their internal memory. This will not only help to understand LLMs' decision mechanism but also benefit real-world applications, such as retrieval-augmented generation (RAG). Drawing on cognitive theory, we target the first scenario of decision-making styles where there is no superiority in the conflict and categorize LLMs' preference into dependent, intuitive, and rational/irrational styles. Another scenario of factual robustness considers the correctness of prompt and memory in knowledge-intensive tasks, which can also distinguish if LLMs behave rationally or irrationally in the first scenario. To quantify them, we establish a complete benchmarking framework including a dataset, a robustness evaluation pipeline, and corresponding metrics. Extensive experiments with seven LLMs reveal their varying behaviors. And, with role play intervention, we can change the styles, but different models present distinct adaptivity and upper-bound. One of our key takeaways is to optimize models or the prompts according to the identified style. For instance, RAG models with high role play adaptability may dynamically adjust the interventions according to the quality of retrieval results -- being dependent to better leverage informative context; and, being intuitive when external prompt is noisy.
Jiahao Ying, Yixin Cao, Kai Xiong, Yidong He, Long Cui, Yongbin Liu
2023-09-29T17:26:03Z
http://arxiv.org/abs/2309.17415v3
# Intuitive or Dependent? Investigating LLMs' Robustness to Conflicting Prompts ###### Abstract This paper explores the robustness of LLMs' preference between their internal memory and the given prompt, which may contain contrasting information in real-world applications due to noise or task settings. To this end, we establish a quantitative benchmarking framework and conduct a role playing intervention to control LLMs' preference. Specifically, we define two types of robustness: factual robustness, targeting the ability to identify the correct fact from prompts or memory, and decision style, categorizing LLMs' behavior in making consistent choices -- assuming there is no definitive "right" answer -- as intuitive, dependent, or rational, based on cognitive theory. Our findings, derived from extensive experiments on seven open-source and closed-source LLMs, reveal that these models are highly susceptible to misleading prompts, especially when instructing commonsense knowledge. While detailed instructions can mitigate the selection of misleading answers, they also increase the incidence of invalid responses. After unraveling the preference, we intervene on different-sized LLMs through role instructions of a specific style, showing their varying upper bounds of robustness and adaptivity. ## 1 Introduction Large language models (LLMs) have become fundamental tools and achieved great success in the area of natural language processing (NLP) (Wei et al., 2022; Mirowski et al., 2023). They can solve various tasks in the same form of text generation simply by providing task-specific prompts (Mishra et al., 2022). However, LLMs sometimes fail to understand and follow the prompted instructions. Take the inverse scaling prize as an example: when the instruction goes against common sense or redefines well-known facts, the performance dramatically decreases even with increasing model scale. One of the main reasons is that LLMs may struggle between the memory and the conflicting prompt (McKenzie et al., 2022), leading to an unclear preference and a risky decision style in real-world applications. More interestingly, can we fix their preference by defining the role of a specific style through instructions? In this paper, we propose to systematically quantify the robustness of LLMs' preference in the conflicting situation (Longpre et al., 2021) from the following two perspectives. **Factual robustness** measures the ability of LLMs to discern the facts in conflicting situations. There are two scenarios: firstly, the model memorizes the correct facts while the prompt introduces a fake one; and secondly, the model's internal memory is inaccurate or lacks related knowledge, while the correct counterpart is provided in the external prompt. Thus, if a model has a higher factual robustness score, it should be able to robustly ignore the prompted noisy information and better utilize the given external knowledge. Such a robust model is invaluable for fact-centric tasks like fact-checking or factual question-answering (QA). **Decision style** measures whether the LLM has a consistent preference in situations where there isn't a definitive "right" answer. That is, regardless of correctness, can LLMs make consistent choices -- leaning towards the prompt or their own memory? Assessing models' decision styles empowers users with insights into the models' behavioral inclinations.
A higher score indicates that the model yields less random answers, making it more predictable and reliable in non-factual applications, such as personalized assistance or recommendation. To this end, we establish a complete benchmarking framework including a dataset, a robustness evaluation pipeline, and corresponding metrics. Furthermore, we intervene in the preference of LLMs by instructing a specific style of role. For the dataset, to ease the measurement and ensure high quality, we leverage existing knowledge-intensive datasets and standardize a unified form of Multi-Choice Questions (MCQ). Under this setting, the "conflict" arises where the knowledge presented in the prompt advocates for one answer, while the model's memory suggests another one. For the evaluation pipeline and metrics, we design five steps, from 1) memory assessment to 2) factual robustness in zero-shot and 3) few-shot in-context learning (ICL), to 4) decision style analysis, and finally 5) role playing intervention as well as leaderboard building. To measure factual and style robustness, on the one hand, we break factual robustness down into two aspects, Vulnerable Robustness (VR) and Resilient Robustness (RR), according to the two factual conflicting scenarios mentioned above. On the other hand, drawing from prior research (Harren, 1979; Phillips et al., 1984), we define three types of decision styles, intuitive, dependent, and rational, to categorize the models' behavior -- the extent to which they leverage internal memory or the external prompt only, or can rationally consider both. We have conducted extensive experiments on seven closed-source and open-source LLMs. The main findings are as follows: **(1)** Compared with utilizing correct prompted knowledge, LLMs are more vulnerable to misleading prompts; thus, enhancing robustness against noisy or fake prompts will be a pivotal focus in future research (Sec 4.1). **(2)** LLMs are more robust in using factual knowledge than commonsense knowledge via prompts. This suggests that we can leverage the retrieval-then-prompt strategy to remedy factual flaws while enhancing LLMs' inherent commonsense reasoning ability (Sec 4.1). **(3)** Detailed instructions are not magic. Although optimizing prompts with hints of possible noise does deter models from selecting misleading answers, the side effect is more invalid responses (Sec 4.2). **(4)** Medium-sized LLMs with instruction-tuning tend to exhibit a decision-making style that depends more on external prompts. Compared with them, GPT-4 and Bard are rational, considering both memory and prompt. We attribute this to their large model scale, which amplifies memory retention while maintaining instruction-following capabilities (Sec 4.4). **(5)** We can indeed change LLMs' robustness through role playing intervention, while different LLMs vary a lot in upper bound and adaptivity. Notably, although GPT-4 demonstrates the best performance and LLaMA2 is competitive in some aspects, the adaptivity reveals their large gap (Sec 4.5). ## 2 KRE Dataset Construction To ensure high quality, our Knowledge Robustness Evaluation (KRE) dataset extends existing machine reading comprehension (MRC) and commonsense reasoning (CR) datasets by automatically generating conflicting cases. We choose the tasks of MRC and CR, as LLMs have demonstrated good memorization of factual and commonsense knowledge, facilitating the robustness assessment.
Specifically, each sample in our KRE dataset consists of four components: 1) a question, 2) a set of answer choices, including an answer (**golden/correct answer**, \(a_{gol}\)) that conforms to facts or common sense and several misleading answers, 3) two types of contexts, a golden context providing the necessary facts or common sense, and a negative context supporting a misleading answer (**negative answer**, \(a_{neg}\)), and 4) instructions. Since the questions, answers, and golden contexts already exist in MRC and CR datasets, we design three steps for KRE construction: dataset filtering, conflict generation, and instruction design. Note that our pipeline can be easily extended to a broader range of tasks. **Dataset filtering** For data sources, we select and process four publicly available datasets as the foundation of our KRE dataset: two machine reading comprehension (MRC) datasets, MuSiQue (Trivedi et al., 2022) and SQuAD v2.0 (Rajpurkar et al., 2018), as well as two commonsense reasoning (CR) datasets, ECQA (Aggarwal et al., 2021) and e-CARE (Du et al., 2022). We take the MRC paragraph and the CR explanation as the golden context. We use these sources because they are based on either Wikipedia or human knowledge. This setup enables us to verify the scope of LLMs' memory by withholding the golden context. We only retain answerable examples for MRC and leverage the validation set. The KRE dataset comprises a total of 11,684 samples; more statistics of the KRE are shown in Table 7. **Conflict generation** This step involves the generation of misleading answer choices and the negative context. As the CR datasets already have misleading choices, we utilize ChatGPT (OpenAI, 2022) to supplement the MRC samples (details can be found in Appendix B.1.1). Subsequently, we randomly choose one misleading option as the negative answer (\(a_{neg}\)) and employ ChatGPT to generate a negative context. Specifically, for SQuAD and MuSiQue, we substitute the golden answer entity in the golden context with the negative answer (a case is shown in Appendix B.1.2). In the case of ECQA and e-CARE, we create an explanation tailored to the negative answer to serve as the negative context. **Instruction design** Since the instruction in the prompt tells LLMs what to do, it may have some potential impact (positive or negative) on the usage of the knowledge in the prompts (Shi et al., 2023), leading to inaccurate robustness evaluation results. Hence, we propose and select different kinds of instructions to alleviate this potential problem. Based on how the knowledge in the context or few-shot examples is to be used, we design two kinds of instructions: (1) **Instruction without hint**, which does not explicitly tell the LLMs how to use the knowledge or few-shot examples (if provided) to answer the question. (2) **Instruction with hint**, which tells LLMs there might be some noise in the knowledge context or few-shot examples (if provided), so they should judge the quality of the prompts carefully. For each kind of instruction, we engaged four individuals to draft a total of \(i=12\) distinct instructions. After that, to further enhance the diversity of the instructions, we asked ChatGPT (OpenAI, 2022), GPT-4 (OpenAI, 2023), and Claude (Anthropic, 2023) to rephrase the instructions, generating fresh variants. Consequently, we amassed a pool of 24 unique candidate instructions. All the instructions can be found in Appendices B.2 and B.3.
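Purely as an illustration of the resulting record structure (the field names below are hypothetical; the paper specifies only the four components listed above, and the example values are invented in the spirit of the ECQA fox example used later in Section 3):

```python
from dataclasses import dataclass, field

@dataclass
class KRESample:
    """One KRE record holding the four components described above.

    Field names are hypothetical; only the structure follows the paper."""
    question: str
    choices: list                    # the golden answer a_gol plus misleading options
    golden_answer: str               # a_gol, consistent with facts or common sense
    negative_answer: str             # a_neg, one randomly chosen misleading option
    golden_context: str              # supports a_gol (MRC paragraph / CR explanation)
    negative_context: str            # generated to support a_neg
    instructions: dict = field(default_factory=dict)  # {"no_hint": [...], "hint": [...]}

sample = KRESample(
    question="Where would I not want a fox?",
    choices=["hen house", "mountain", "english hunt"],
    golden_answer="hen house",
    negative_answer="mountain",
    golden_context="Foxes hunt chickens.",
    negative_context="Foxes are mostly found in the mountains.",  # invented, misleading
)
```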
**Human evaluation** We conduct a human evaluation of the quality of the negative context, involving four evaluators. We randomly select 100 questions from each corpus within the KRE dataset and provide the evaluators with the negative context, the associated question, and the set of answer choices. The evaluators are then tasked to assess the extent to which the negative context steers toward the negative answer option. The result shows that more than 98% of the sampled negative contexts are misleading. All the evaluation results can be found in Appendix A.2. ## 3 Method ### Framework **Preliminary:** Our evaluation focuses on the conflict situation, where the prompt we consider has four key components: **the instruction \(I\)**, **the testing question \(x\)**, **the knowledge context \(C\)** related to \(x\), and **the few-shot example set** \(E\) (removed in the zero-shot learning scenario). Specifically, we denote an example augmented with knowledge context by \(\hat{E}\). We define the prompt \(P\) as the concatenation of the above components: \(P=I\oplus E\oplus C\oplus x\), where \(\oplus\) denotes the concatenation operation. For example, \(P\) could be "_I_: Help me to answer the question. _E_: Question: Where can I find water? Answer: Lakes. _C_: Foxes hunt chickens. _x_: Question: Where would I not want a fox?". **Framework:** The overall framework is shown in Figure 1. The entire pipeline consists of 5 steps: (1) **Memory Assessment** (Sec 3.2) to check whether LLMs memorize the accurate knowledge for a question, partitioning the dataset for the following steps; (2) **Factual Robustness Evaluation** (Sec 3.3), targeting factual discernment in two conflict scenarios, vulnerable and resilient robustness, by supplementing the prompt with either golden or negative context according to the memory assessment results; (3) **Influence of Few-shot Examples** (Sec 3.4), which further considers the impact of noise in few-shot examples on the robustness, complementary to the above zero-shot settings; (4) **Decision-Making Style Analysis** (Sec 3.5) to reveal LLMs' preference between memory and prompt. Note that, different from the above steps, here we do not take world facts as the only correct answer. Instead, we highlight two ambiguous answer choices supported by memory and context, respectively. (5) **Role Play Intervention and Leaderboard** (Sec 3.6). Upon evaluating the models, we construct a leaderboard based on the Factual Robustness results. Additionally, we implement a Role Play Intervention to discern the upper bound of a model's capabilities. ### Memory Assessment Assessing memory in LLMs can be approached through two primary methods. **Direct Knowledge Evaluation** entails directly evaluating the LLM on text that is part of its training data. While this approach is direct and efficient, it is contingent upon access to the precise training datasets. **Question-Answering Assessment** employs question-answering tasks to gauge whether the LLM has the knowledge necessary for accurate response generation. This strategy's primary appeal is that it eliminates the need for access to pre-training data, enabling the creation of a unified evaluation framework suitable for an extensive array of both open-source and closed-source LLMs. For these reasons, we deploy the Question-Answering Assessment approach.
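A minimal sketch of this question-answering memory probe, reusing the hypothetical `KRESample` fields sketched earlier (the `model.answer` interface is likewise an assumption, not the authors' code); the \(D^{+}\)/\(D^{-}\) partition it produces is formalized in the next paragraph:

```python
def assess_memory(model, dataset):
    """Split samples by whether the bare question (no context, no few-shot
    examples) is answered correctly -- a proxy for the model holding the memory."""
    d_plus, d_minus = [], []          # correct memory / wrong or invalid answer
    for sample in dataset:
        answer = model.answer(sample.question, sample.choices)  # hypothetical API
        if answer == sample.golden_answer:
            d_plus.append(sample)
        else:                          # wrong choice or invalid output such as "None"
            d_minus.append(sample)
    return d_plus, d_minus
```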
We prompt the LLM with the question directly to conduct the memory assessment, and based on the answer we split the dataset into two parts, \(D^{+}\) and \(D^{-}\), for each LLM. Here, \(D^{+}\) signifies those samples where the LLM's predictions are accurate, while \(D^{-}\) represents the samples where the LLM either provides incorrect answers or outputs that are invalid (like "None"). That is, if the model is able to answer the question correctly in this setting, we assume that it possesses the corresponding memory, because no other information (i.e., knowledge context or few-shot examples) is provided. ### Factual Robustness Evaluation Given \(D^{+}\) and \(D^{-}\) from Section 3.2, we supplement each sample input with extra negative or golden knowledge context to evaluate Factual Robustness. In these scenarios, we break the overall factual robustness down into two aspects: 1) **Vulnerable Robustness (VR)**, which measures to what extent the model can trust its own correct memory even with a misleading prompt, and 2) **Resilient Robustness (RR)**, which quantifies the model's ability to harness accurate information from the prompt when its memory is insufficient or flawed. Formally, for each sample in \(D^{+}\), we change the prompt to \(P=I\oplus C^{-}\oplus x\), marked as \((D^{+},C^{-})\), to perform the Vulnerable Robustness evaluation. Conversely, for each sample in \(D^{-}\), we change the prompt to \(P=I\oplus C^{+}\oplus x\), marked as \((D^{-},C^{+})\), to measure the Resilient Robustness. We define the robustness metrics for VR and RR as follows: \[\text{VR}_{(D^{+},C^{-})}=\frac{1}{|D^{+}|}\sum_{x\in D^{+}}\mathbb{1}\left[f(x,c^{-};M)=a_{gol}\right],\qquad\text{RR}_{(D^{-},C^{+})}=\frac{1}{|D^{-}|}\sum_{x\in D^{-}}\mathbb{1}\left[f(x,c^{+};M)=a_{gol}\right], \tag{1}\] where \(f(x,c;M)\) denotes the answer of model \(M\) to question \(x\) given context \(c\). A higher VR score indicates a stronger ability of the model to trust its internal memory. Using these two scores together, we represent the Factual Robustness as \[\text{FR}=\frac{\text{VR}+\text{RR}}{2}. \tag{2}\] Before assessing the robustness, we undertook an instruction selection process to mitigate the potential biases introduced by specific instructions. We conduct preliminary experiments on each LLM using a smaller sampled KRE dataset to identify the most effective instruction (constructed in Sec 2). Based on the result, we chose the instruction that exhibited the highest robustness for the Factual Robustness (FR) score assessment. This selection process is also conducted for the few-shot setting. ### Few-shot Example Influence To delve deeper into the effects of noise within few-shot examples on factual robustness, in addition to the previously explored zero-shot settings (Section 3.3), we introduce few-shot examples denoted as \(\hat{E}\). Formally, the complete prompt is \(P=I\oplus\hat{E}\oplus C\oplus x\); Vulnerable Robustness is then marked as \((D^{+},C^{-},\hat{E})\) and Resilient Robustness as \((D^{-},C^{+},\hat{E})\).
Specifically, the few-shot examples \(\hat{E}\) used when testing VR have the form \(\hat{E}=C^{-}\oplus x\oplus A\), and when evaluating RR they are designed as \(\hat{E}=C^{+}\oplus x\oplus A\). In practice, the examples may also be noisy. We manually design golden and noisy examples that form the following three configurations. **All-positive**, where each few-shot example answers its question correctly. This setting guides the model to rely on the knowledge context when lacking correct knowledge and to overlook incorrect information when possessing the right knowledge. **All-negative** means the answer in each few-shot example is wrong for the corresponding question. This setting misleads the model to rely on the negative context and ignore the golden context. **Mixed** means each few-shot example is randomly constructed as a positive or negative example in the sense of the previous two configurations. In the experiments, each of the above configurations shares the same questions. The examples are written by human annotators. We manually sample \(m=3\) examples for each evaluation setting. The corresponding VR and RR metrics under the few-shot setting are shown below, where \(E_{x}\) is the set of few-shot example configurations (all-positive, all-negative, and mixed) corresponding to question \(x\): \[\text{VR}_{(D^{+},C^{-},\hat{E})}=\sum_{x\in D^{+}}\frac{\sum_{e\in E_{x}}\mathbb{1}\left[f(x,c^{-},e;M)=a_{gol}\right]}{|E_{x}||D^{+}|},\qquad\text{RR}_{(D^{-},C^{+},\hat{E})}=\sum_{x\in D^{-}}\frac{\sum_{e\in E_{x}}\mathbb{1}\left[f(x,c^{+},e;M)=a_{gol}\right]}{|E_{x}||D^{-}|}. \tag{3}\] ### Decision-Making Style Analysis Following the work of Harren (1979) and Phillips et al. (1984), there are three kinds of decision-making styles. **Rational Style**: Rational decision-makers employ strategic approaches, taking into account both their personal preferences and external information to make informed decisions. **Dependent Style**: These decision-makers heavily rely on external information and the advice of others. **Intuitive Style**: These decision-makers are driven primarily by their inner feelings and instincts. Based on these decision-making features, we conceptualize a model's reliance on its internal memory for responses as acting on inherent instincts, and its deference to prompts as relying on external information sources. Building on this analogy, we define a **Decision-Making Style Score (DMSS)** to measure the behavior of the LLM. With just one score, the DMSS, we can efficiently classify models into Rational, Dependent, or Intuitive categories: \[\begin{split}\text{DMSS}&=\frac{1}{|D|}\left(\sum_{x\in D^{+}}\mathbb{1}\left[f(x,c^{-};M)=a_{gol}\right]+\sum_{x\in D^{-}}\mathbb{1}\left[f(x,c^{+};M)=f(x;M)\right]\right)\\ &\quad-\frac{1}{|D|}\left(\sum_{x\in D^{+}}\mathbb{1}\left[f(x,c^{-};M)=a_{neg}\right]+\sum_{x\in D^{-}}\mathbb{1}\left[f(x,c^{+};M)=a_{gol}\right]\right). \end{split} \tag{4}\] The closer the DMSS is to 1, the more the model resembles an intuitive decision-maker who depends on its own memory to answer the question. Conversely, as the DMSS nears \(-1\), the model aligns more with the dependent style, leaning heavily on external prompts. A score around 0 denotes a rational style, implying the LLM considers the memory and the prompt together when making the decision. However, it is vital to note that a DMSS near 0 does not necessarily guarantee the model's capability to judiciously consider both the memory and the prompt.
Given the conflicting scenarios in this study, discerning whether the model genuinely integrates both sources or randomly selects an option becomes challenging. Thus, in such cases, the Factual Robustness score should also be examined as an auxiliary metric to provide a more comprehensive understanding. ### Role Play Intervention To delve deeper into the potential for modulating the decision-making tendencies of LLMs, we introduce an intervention method known as "Role Play". The core idea behind this intervention is to explicitly guide the model's behavior using carefully crafted role prompts. We designed two distinct role prompts to steer the models into specific decision-making pathways. **Dependent Role**: In this intervention, the model is furnished with a role prompt that asks it to prioritize information solely from the external prompt when generating answers. The aim is to see how a model behaves when explicitly told to disregard its internal knowledge and place full trust in the provided prompt. **Intuitive Role**: Contrary to the dependent role, this role prompt is designed to push the model towards relying predominantly on its intrinsic memory. The model is encouraged to harness its accumulated knowledge and insights, essentially making decisions that stem from its inner memory, irrespective of the external prompt (the prompts are shown in Appendix B.4). ## 4 Experiment Based on the evaluation pipeline, we initially selected two LLMs, ChatGPT (OpenAI, 2022) and Vicuna-13B (Chiang et al., 2023), for experiments with the complete KRE dataset and analyzed their behavior. Recognizing the importance of a broader analysis, we expanded our scope by incorporating five additional LLMs into our evaluation. However, due to computational constraints and the time-intensive nature of exhaustive tests, these models were assessed on a representative subset of the KRE dataset. Subsequently, we applied the Role Play intervention to these models and deployed the robustness leaderboard for comparisons. ### How factually robust are LLMs? Following the framework, we first conduct the memory assessment. The overall memory assessment results for ChatGPT and Vicuna-13B are shown in Table 1. The results show that the memory of ChatGPT possesses greater and more accurate factual and commonsense knowledge than that of Vicuna-13B. Interestingly, both ChatGPT and Vicuna-13B tend to perform better on commonsense knowledge datasets compared to factual ones. This might be because language models capture many co-occurrence relationships, and a lot of commonsense knowledge is an induction of these observed patterns. Subsequent to the memory assessment, we proceed with the factual robustness evaluation for every LLM. Prior to assessing the robustness of ChatGPT and Vicuna-13B, we conduct preliminary experiments on each LLM using a smaller sampled KRE dataset to identify the most effective instruction (constructed in Sec 2). Based on the results, we retain the top-performing instruction for both categories under evaluation, and use the best robustness score among these two as the final result. The selection results are shown in Appendix A.1. The factual robustness results are shown in Figure 2. ChatGPT and Vicuna exhibit similar behavior in terms of the two robustness measures. Specifically, a higher RR score relative to the VR score **indicates that LLMs already possess a stronger capability to utilize the correct knowledge from prompts. However, their robustness against negative context introduced by conflicting prompts remains suboptimal.
Consequently, as the field progresses, enhancing robustness against adversarial negative context is likely to emerge as a paramount research focus.** Moreover, the observed \(\text{RR}_{(D^{-},C^{+})}\) score on the two MRC datasets appears to be higher compared to the CR datasets, and the \(\text{VR}_{(D^{+},C^{-})}\) scores are lower on the MRC portion of the KRE dataset. **This result shows that LLMs prioritize prompts with factual knowledge more than prompts with commonsense knowledge.** Consequently, when equipped with accurate internal knowledge, models are more inclined to trust the prompts, leading to a decreased VR. Conversely, when their internal memory is lacking or incorrect, this results in an elevated RR. **Thus, to ensure better utilization of LLMs, there is a pressing need to enhance the precision of factual knowledge embedded in prompts. Meanwhile, when it comes to commonsense knowledge, the focus should be on amplifying the intrinsic memory of the model.** \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & \(\text{ECQA}_{\text{KRE}}\) & \(\text{e-CARE}_{\text{KRE}}\) & \(\text{MuSiQue}_{\text{KRE}}\) & \(\text{SQuAD}_{\text{KRE}}\) \\ \hline ChatGPT & 74.2 & 81.5 & 34.6 & 65.3 \\ Vicuna-13B & 39.5 & 70.1 & 17.7 & 32.3 \\ \hline \hline \end{tabular} \end{table} Table 1: The memory assessment results of ChatGPT and Vicuna-13B on the KRE dataset. In the graph, the combined lengths of the bars representing the VR and RR scores quantitatively illustrate the model's factual robustness. It is evident that ChatGPT's bar is longer than that of Vicuna-13B, signifying that ChatGPT possesses superior factual robustness. This higher performance can be attributed to ChatGPT's larger number of parameters, more extensive training dataset, and enhanced instruction comprehension capabilities. ### How does Instruction Influence Factual Robustness? In our factual robustness evaluation process, we carefully select instructions in a preliminary study (defined in Sec 3.3), following the defined process. In this section, we explore the influence of different configurations of the instructions on the robustness. The results in Figure 3 (full results in Figure 7) indicate that neither ChatGPT nor Vicuna showcases any substantial improvements, though there is a slight enhancement observed in ChatGPT's performance on the CR dataset. This outcome seems counter-intuitive. To gain deeper insights, we further investigated the models' responses. Specifically, we calculated the number of negative answers and invalid outputs generated by each model. Our observations reveal that the **inclusion of a hint indeed reduces the propensity of the model to choose the negative answer. However, it also introduces an increase in the frequency of invalid responses,** especially for Vicuna. Therefore, when taking both factors into account, the overall robustness does not exhibit any marked improvement. ### How do Few-shot Examples Affect Factual Robustness? As in the zero-shot setting (Sec 3.3), before assessing the robustness we select the best-performing instruction (detailed results are shown in Appendix A.1). As for the instruction influence in the few-shot setting, we observe a phenomenon consistent with that in the zero-shot scenario; more details are in Figure 7. The robustness scores for ChatGPT and Vicuna-13B under the few-shot setting can be found in Figure 4. The results demonstrate that for both ChatGPT and Vicuna, the "All-positive" configuration exhibits the highest RR and the highest VR.
However, when compared to the zero-shot setting (\(\text{VR}_{(D^{+},C^{-})}\) and \(\text{RR}_{(D^{-},C^{+})}\)), the "All-positive" setting does not always have a positive effect under the conflict situation. This phenomenon is counter-intuitive: conventionally, one would anticipate the "All-positive" approach to augment performance, "All-negative" to impede it, and "Mixed" to lie somewhere in between. **The result indicates that the few-shot approach doesn't consistently bolster performance, even in an "All-positive" versus zero-shot comparison.** Two potential explanations emerge for this phenomenon: 1) few-shot examples may act more to dictate the output pattern of the model, rather than its "thinking" pattern under the conflict situation; 2) the extended length of the context could obstruct the LLM's ability to effectively harness the implicit pattern information presented in the few-shot examples. Figure 2: The Vulnerable Robustness score (%) and the Resilient Robustness score (%) for the models ChatGPT and Vicuna-13B. Figure 3: RR and VR of ChatGPT (a) and Vicuna (b) under different instruction settings: with and without hint (Sec 2), together with the number of Negative Answer and Invalid responses under the different instruction settings. Figure 4: The VR and RR scores (%) under the influence of the three few-shot configurations. **Interestingly, we observe that under the mixed setting, Vicuna-13B's performance is notably subpar.** This suggests that the presence of mixed answer patterns induces confusion within the model, leading to its diminished performance. Notably, this phenomenon is absent in ChatGPT's performance, suggesting that ChatGPT possesses a more refined robustness to demonstrations. ### Decision-Making Style Analysis In our work, we incorporated seven prominent models, namely GPT-4 (OpenAI, 2023), Claude (Anthropic, 2023), Bard (Google, 2023), Vicuna-13B (Chiang et al., 2023), ChatGPT (OpenAI, 2022), LLaMA (Touvron et al., 2023a), and LLaMA2 (Touvron et al., 2023b). Adhering to the established framework, we computed the DMSS for each of these models using a subset of the KRE dataset. The comprehensive results are tabulated in Table 2. It is evident that the majority of the models, 4 out of the 7 examined, tend to exhibit a dependent decision-making style. Considering that they all underwent instruction-tuning during training, this inclination towards being dependent suggests that after instruction tuning, these models can be guided to utilize external knowledge more effectively. Interestingly, LLaMA is the only one that aligns with the Intuitive style, possibly due to its limited ability to utilize external golden contexts (evidenced by a lower RR score in Table 2). This behavior further corroborates our inference when considering that LLaMA did not undergo instruction-tuning. Furthermore, models with superior factual robustness (Table 2), such as GPT-4 and Bard, tend to exhibit a Rational decision-making style. This suggests they are adept at making judicious decisions by integrating both their internal memory and external prompts. **We hypothesize that when models reach a certain scale, they inherently amplify both their memory retention and instruction-following capabilities.** This enhancement allows them to balance between relying on stored knowledge and adapting to new information from prompts.
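A minimal Python sketch of the metrics of Eqs. (1), (2) and (4), continuing the hypothetical interfaces of the earlier sketches; the style cutoff used at the end is illustrative only, since the paper labels styles without stating a numeric threshold:

```python
def robustness_and_style(model, d_plus, d_minus):
    """Compute VR, RR, FR (Eqs. 1-2) and DMSS (Eq. 4) over the D+/D- split."""
    ask = lambda s, ctx: model.answer_with_context(s.question, s.choices, ctx)

    vr = sum(ask(s, s.negative_context) == s.golden_answer for s in d_plus) / len(d_plus)
    rr = sum(ask(s, s.golden_context) == s.golden_answer for s in d_minus) / len(d_minus)
    fr = (vr + rr) / 2

    # Intuitive behaviour: stick with the memory answer despite the context.
    intuitive = (sum(ask(s, s.negative_context) == s.golden_answer for s in d_plus)
                 + sum(ask(s, s.golden_context) == model.answer(s.question, s.choices)
                       for s in d_minus))
    # Dependent behaviour: follow the context despite the memory.
    dependent = (sum(ask(s, s.negative_context) == s.negative_answer for s in d_plus)
                 + sum(ask(s, s.golden_context) == s.golden_answer for s in d_minus))
    dmss = (intuitive - dependent) / (len(d_plus) + len(d_minus))

    # Illustrative cutoff; in practice FR should be inspected alongside DMSS.
    style = "rational" if abs(dmss) < 0.2 else ("intuitive" if dmss > 0 else "dependent")
    return vr, rr, fr, dmss, style
```

For clarity each query is written out separately here; in practice one would cache the model calls rather than repeating them.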
### Role Play Intervention and Leaderboard **Role Play Intervention.** Following the framework, we applied Role Play interventions to GPT-4 and Bard, which exhibit a Rational style, and to LLaMA-2, which leans towards the Dependent style, for illustration. As shown in Figure 5 (all results in Table 2), the span of the three bars on the vertical axis (blue for the intuitive role, yellow for the dependent role intervention, and green representing the initial, unaltered situation) reveals a conspicuous shift in the models' decision-making behavior post-intervention. **This result indicates that we can change LLMs' robustness through role playing intervention.** Depending on the assigned role, post-intervention models demonstrated a distinct bias: they either leaned more on their internal memory or favored the provided prompt more. The range between the highest DMSS score (intuitive role, blue bar) and the lowest (dependent role, yellow bar) gauges the **Adaptivity** of the model's decision-making style. Beyond understanding their decision-making tendencies under these role situations, we delved deeper into the models' VR and RR scores. All the tested models consistently exhibited a correlation between their assumed role and their robustness scores. Specifically, when operating under the intuitive role, each model achieved its peak VR score; conversely, under the dependent role, they all posted their highest RR scores. Furthermore, utilizing such "God's-eye view" instruction prompting, we were able to discern the **Upper-Bound** of the Factual Robustness (indicated by the red numbers in Figure 5). \begin{table} \begin{tabular}{l c c c c c c l c c c} \hline \hline **Model** & **VR** & **RR** & **FR** & \(\textbf{FR}_{upper}\) & \(\textbf{FR}_{rank}\) & **DMSS** & **Style** & **Adapt** & \(\textbf{Adap}_{rank}\) & **Overall** \\ \hline GPT-4 & 50 & 88 & 69 & 80 & 1 & -10 & Rational & 0.8 & 1 & 1 \\ Claude & 34 & 57 & 45 & 60 & 4 & -43 & Dependent & 0.39 & 4 & 4 \\ ChatGPT & 32 & 79 & 56 & 63 & 3 & -43 & Dependent & 0.45 & 3 & 3 \\ Vicuna-13B & 25 & 48 & 36 & 44 & 6 & -31 & Dependent & 0.27 & 6 & 6 \\ Bard & 54 & 68 & 61 & 74 & 2 & -1 & Rational & 0.68 & 2 & 2 \\ LLaMA-13B & 20 & 21 & 20 & 33 & 7 & 39 & Intuitive & 0.15 & 7 & 7 \\ LLaMA-2-13B-chat & 24 & 62 & 39 & 55 & 5 & -46 & Dependent & 0.31 & 5 & 5 \\ \hline \hline \end{tabular} \end{table} Table 2: The Robustness Leaderboard. The table shows the robustness scores (FR and DMSS) for the involved models, together with the ranks of the FR score (FR\({}_{rank}\)) and of Adaptivity (Adap\({}_{rank}\)). **Robustness Leaderboard.** At the last stage of the evaluation, we construct the leaderboard. Table 2 summarizes the robustness scores, encompassing FR and DMSS, for the seven involved models. Among the models, Bard stands out for its superior Vulnerable Robustness, effectively maintaining its core knowledge despite external disturbances. In contrast, GPT-4 has the highest Resilient Robustness, demonstrating its ability to capitalize on the accurate knowledge embedded in prompts. Furthermore, GPT-4 also displays unmatched factual robustness, properly relying on the prompt to discern accurate answers. LLaMA-2-13B-chat has the lowest DMSS score under the Role Play intervention. **This suggests that in specific scenarios, it can adhere to the given instructions even more rigorously than GPT-4.
However, when it comes to Adaptivity, it significantly falls behind GPT-4.** ## 5 Related Work **Prompts in LLMs**: Large language models (LLMs) have become increasingly popular due to their impressive performance in various downstream tasks (Wei et al., 2022; Mirowski et al., 2023). They can solve various tasks when simply conditioned on a few examples (few-shot) or on instructions describing the task (zero-shot). The method of conditioning the language model is called "prompting" (Liu et al., 2023), and designing prompts either manually (Schick and Schutze, 2021; Reynolds and McDonell, 2021) or automatically (Shin et al., 2020; Gao et al., 2021) has become a hot topic in NLP. Prompts serve as the interface between humans and LLMs, enabling in-context learning in an auto-regressive manner (Liu et al., 2023). However, LLMs are known to be highly sensitive to prompts (Turpin et al., 2023; Shi et al., 2023; Zheng et al., 2023; Zhao et al., 2021; Si et al., 2022), where minor variations, such as the order of few-shot examples, can substantially change the output. It is therefore crucial to examine the robustness of LLMs under the influence of the prompt. **LLM robustness**: Recent studies have shown that language models are vulnerable to adversarial attacks (Wang et al., 2023; Zuccon and Koopman, 2023). The work of Zhuo et al. (2023) shows that prompt-based semantic parsers built on large pre-trained language models are also susceptible to adversarial attacks (Bruna et al., 2014; Hosseini et al., 2017). Wang et al. (2023) evaluated the robustness of ChatGPT and other LLMs from an adversarial and out-of-distribution perspective. Another work, PromptBench (Zhu et al., 2023), developed a robustness benchmark to assess the resilience of LLMs to adversarial prompts. The works of Chen et al. (2022) and Longpre et al. (2021) focused on how the model acts when given conflicting evidence, and Longpre et al. (2021) proposed a method to mitigate over-reliance on parametric knowledge. Prior research (Zuccon and Koopman, 2023) has explored the impact of input knowledge in prompts on ChatGPT's performance when answering complex health information questions. Another recent study (Xie et al., 2023) investigated how the model behaves when encountering knowledge conflicts, with a focus on the model's answer consistency (Zhou et al., 2023). ## 6 Conclusion This comprehensive study provides pivotal insights into the robustness of LLMs' preference between their internal memory and external prompts. We have designed a quantitative benchmarking framework in terms of factual discernment and decision-making consistency. Based on that, we have conducted extensive experiments on seven widely used LLMs. The results underscore many critical revelations. Besides, we design a role playing intervention to bolster the robustness, which also reveals the varying upper bounds and adaptivity of different LLMs. Based on these insights, in the future we will explore strategies to improve LLMs' abilities in using factual knowledge via external prompts while enhancing commonsense reasoning via internal memory. Figure 5: Role Play Intervention results for the models GPT-4, Bard, and LLaMA-2. The results illustrate how, given each model's DMSS score, the VR and RR scores adjust post-intervention.
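To make the Role Play intervention concrete, here is a hypothetical pair of role prompts in the spirit of Section 3.6; these are illustrative stand-ins written by us, not the authors' actual prompts from Appendix B.4:

```python
# Illustrative role prompts (hypothetical; the paper's own are in its Appendix B.4).
ROLE_PROMPTS = {
    "dependent": ("You are an assistant that answers strictly based on the passage "
                  "given in the prompt. Ignore anything you believe you already know "
                  "and trust the passage completely."),
    "intuitive": ("You are an assistant that answers purely from your own knowledge. "
                  "Treat any passage given in the prompt as unreliable and rely only "
                  "on what you already know."),
}

def with_role(role, instruction, context, question):
    """Prepend a role prompt to the usual I + C + x prompt layout of Section 3.1."""
    return f"{ROLE_PROMPTS[role]}\n\n{instruction}\n{context}\nQuestion: {question}"
```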
2309.17414
QR TPM in Programmable Low-Power Devices
Trusted Platform Modules (TPMs), which serve as the root of trust in secure systems, are secure crypto-processors that carry out cryptographic primitives. Should large-scale quantum computing become a reality, the cryptographic primitives adopted in the TPM 2.0 standard will no longer be secure. Thus, the design of TPMs that provide Quantum Resistant (QR) primitives is of utmost importance, in particular with the restrictions imposed by embedded systems. In this paper, we investigate the deployment of QR primitives and protocols in the standard TPM 2.0. Cryptographic algorithms that are already in the NIST QR cryptography standardization process, as well as an Oblivious Transfer (OT), a fundamental cryptographic primitive, are the QR cryptographic schemes selected to extend TPM 2.0. In particular, the Kyber algorithm for key encapsulation, the Dilithium algorithm for digital signature, and a 3-round Random Oblivious Transfer (ROT) protocol, supporting protocols such as Multi-Party Computation and Private Set Intersection (PSI). The QR extended TPM 2.0 is implemented in ARM and RISC-V embedded processors, its computational requirements are analysed and experimentally evaluated in comparison to the standard TPM. It is shown that Kyber and Dilithium are faster at creating keys than RSA, due to the key size and secure random sampling required in RSA, while they meet the same performance level as ECC. For digital signatures, both in signature creation and verification, Dilithium is on par with RSA and ECC. The ROT protocol shows decent performance and its support required small modifications to the TPM. This paper also shows that it would be possible to backport the required code to already available TPMs to ensure that current TPMs remain secure against quantum adversaries.
Luís Fiolhais, Leonel Sousa
2023-09-29T17:21:46Z
http://arxiv.org/abs/2309.17414v1
# QR TPM in Programmable Low-Power Devices

###### Abstract.

Trusted Platform Modules (TPMs), which serve as the root of trust in secure systems, are secure crypto-processors that carry out cryptographic primitives. Should large-scale quantum computing become a reality, the cryptographic primitives adopted in the TPM 2.0 standard will no longer be secure. Thus, the design of TPMs that provide Quantum Resistant (QR) primitives is of utmost importance, in particular with the restrictions imposed by embedded systems. In this paper, we investigate the deployment of QR primitives and protocols in the standard TPM 2.0. Cryptographic algorithms that are already in the NIST QR cryptography standardization process, as well as an Oblivious Transfer (OT), a fundamental cryptographic primitive, are the QR cryptographic schemes selected to extend TPM 2.0: in particular, the Kyber algorithm for key encapsulation, the Dilithium algorithm for digital signature, and a 3-round Random Oblivious Transfer (ROT) protocol, supporting protocols such as Multi-Party Computation and Private Set Intersection (PSI). The QR extended TPM 2.0 is implemented in ARM and RISC-V embedded processors, and its computational requirements are analysed and experimentally evaluated in comparison to the standard TPM. It is shown that Kyber and Dilithium are faster at creating keys than RSA, due to the key size and secure random sampling required in RSA, while they meet the same performance level as ECC. For digital signatures, both in signature creation and verification, Dilithium is on par with RSA and ECC. The ROT protocol shows decent performance and its support required small modifications to the TPM. This paper also shows that it would be possible to backport the required code to already available TPMs to ensure that current TPMs remain secure against quantum adversaries.
Trusted Platform Module, Cryptography Quantum-Resistant, Embedded Systems

The change of paradigm to QR raises a new set of challenges, in particular on embedded devices, including ones related to performance and memory requirements, but also related to side-channel security. In this paper, we analyse the computational requirements of an updated QR TPM based on low-end range processors. To experimentally validate and evaluate a QR TPM, we have chosen an in-order double-issue RISC-V U74 processor and the in-order double-issue ARM Cortex-A7. The adoption of programmable processors is the right choice in this first stage of development, not only allowing adjustments and parametrization during the standardization phase but also assessing methods to face the challenges of side-channel attacks.
As far as we know, this is the first time this type of research, with significant practical interest, is performed, being a step forward for launching secure and efficient QR TPM devices in the near future. Moreover, our approach demonstrates that it would be possible to backport the required code to already available TPMs in order to ensure that old TPMs remain secure against quantum adversaries.

The organization of the paper is as follows. In Section 2 we provide the background information required to understand the material in this paper and briefly discuss the state-of-the-art. Section 3 analyses the memory and computational requirements of the Kyber, Dilithium, and ROT algorithms. Section 4 describes the TPM emulator and the ROT implementation. The experimental results presented in Section 5 were obtained by implementing the algorithms and protocol on ARM and RISC-V based embedded systems. Section 6 draws the conclusions.

## 2. Background and State of the Art

The main objective of this paper is to analyse the computational requirements of QR TPMs, and to experimentally evaluate in practice the feasibility of these new types of secure TPMs by using low-end range programmable microprocessors. In this section, we discuss the current TPM standard and the QR algorithms and primitives added to this standard classic TPM.

### Trusted Platform Module

The TPM is a specialized hardware component designed to provide security-related functions for computing systems. The standard TPM 2.0 library, from the Trusted Computing Group (TCG) (Kumar et al., 2017), builds upon the foundation of TPM 1.2 with enhanced capabilities. In relation to TPM 1.2, TPM 2.0 provides enhanced cryptography, for example, ECC and more advanced hash functions, allows for asymmetric key encryption directly within the TPM, provides multiple hierarchies of keys, and facilitates remote management and configuration of the TPM. Although TPM 2.0 is designed to meet the evolving security requirements of current computing platforms, such as laptops, servers, embedded systems, and Internet of Things (IoT) devices, these TPMs are not quantum resistant. For that purpose, QR cryptographic primitives and algorithms have to be implemented in hardware and integrated into the TPM.

### Kyber and Dilithium algorithms

Kyber (Kyber, 2017) and Dilithium (Dilithium, 2017) are QR cryptographic schemes based on lattice-based cryptography, which relies on the hardness of mathematical problems related to lattices. They have been identified by NIST as QR cryptographic algorithms for standardization. Both algorithms require the implementation of sampling, of the number theoretic transform (NTT) to speed up arithmetic, and of encode/decode functions. The Kyber algorithm provides secure key encapsulation mechanisms for exchanging encryption keys between two parties. It aims to achieve a balance between security and efficiency, having reasonable computational requirements. A standalone hardware design of the Kyber algorithm, which computes key-generation, encapsulation (encryption), and decapsulation (decryption and reencryption), can be found in (Kumar et al., 2017). The Dilithium digital signature scheme has been designed for digital signature generation and verification, with the aim of being secure and efficient in terms of computational and memory requirements. It allows multiple security levels through parametrization. The Dilithium family includes different parameter sets that offer varying levels of security and performance, which can be chosen according to the security requirements and available resources. An implementation of Dilithium, for a set of parameters, targeting Field Programmable Gate Arrays (FPGAs) is proposed in (Bach et al., 2017).
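As an aside, the ring arithmetic both schemes build on can be pictured with a small sketch. The following Python snippet is written purely for illustration and is not taken from any TPM code base; it multiplies two polynomials in \(\mathbb{Z}_q[x]/(x^n+1)\) using the Kyber-768 parameters quoted later in the paper, whereas real implementations replace this quadratic loop with the NTT mentioned above.

```python
# A minimal sketch of the ring arithmetic underlying Kyber and Dilithium:
# schoolbook multiplication in Z_q[x]/(x^n + 1) (negacyclic convolution).
# n and q follow the Kyber-768 values cited in Section 5; production code
# uses the NTT instead of this O(n^2) loop.

def polymul_negacyclic(a, b, n=256, q=3329):
    """Multiply two degree-(n-1) polynomials modulo x^n + 1 and modulo q."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                c[k] = (c[k] + a[i] * b[j]) % q
            else:
                # x^n = -1 in this ring, so wrap around with a sign flip.
                c[k - n] = (c[k - n] - a[i] * b[j]) % q
    return c
```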
### ROT Protocol

Random Oblivious Transfer (ROT) is a cryptographic protocol that allows one party (the sender) to send a set of messages to another party (the receiver), who can then select and receive a single message from the set without revealing which message was chosen to the sender. This protocol provides a form of private communication where the sender remains oblivious to the choice made by the receiver, and the receiver remains oblivious to the content of the unchosen messages. The concept of Oblivious Transfer (OT) was introduced in the field of cryptography to address scenarios where one party needs to transfer information to another party without revealing unnecessary details. ROT extends this concept by introducing an element of randomness, making it more suitable for certain applications. There are different variations of ROTs, including:

* 1-out-of-2 ROT (1-2 ROT): In this type of ROT, the sender has two messages, and the receiver chooses one of them without revealing the choice. This is similar to flipping a coin where the sender does not know the outcome, and the receiver only gets the chosen side.
* k-out-of-N ROT (k-N ROT): This is a more general version where the sender has N messages, and the receiver can choose k messages without revealing the choices or the content of the unchosen messages.

ROT has applications in secure multiparty computation, private database queries, and cryptographic protocols where privacy and confidentiality are essential. It is a fundamental building block in constructing secure protocols that allow parties to interact without revealing sensitive information. As a practical example, consider that the TPM is being used as a secure wallet and that the user wants to perform a payment anonymously. The usage of ROT within the TPM would be advantageous as the contents of the secure wallet would never have to leave the TPM when a transaction is processed, and the transfer can be anonymous (Dilithium, 2017). Moreover, consider the example of anonymous remote attestation. An enterprise wants to authenticate that a laptop belongs to its network such that it can give the laptop access to some confidential internal documents. Once again, the TPM, which already features functionality for attestation (Tururur et al., 2017), can be used to anonymize the attestation procedure (Bordes et al., 2017).

### Target Hardware: Microprocessor selection

Previous work on this topic has explored the architectural, performance, and memory requirements of QR algorithms in a TPM (Krishnan et al., 2017). However, the results presented therein used a laptop-class processor, with an out-of-order backend, to emulate the TPM hardware, so the objective was not to evaluate performance in a real-world scenario. Even though it would be beneficial for a TPM to have an out-of-order processor, this is not a realistic goal. TPMs are designed for low power and low price such that they can be used in any type of computing system. The goal of the TPM is to lower the barrier of entry for any computing system to have strong security guarantees. Therefore, this paper aims to perform a performance comparison using cheaper and low-power devices. In these experiments, two low-power processors are used: an ARM Cortex-A7 at 900 MHz and a RISC-V U74 core at 1.2 GHz.
Both of these processors were selected as they fit the power/performance/area requirements found in hardware TPMs. The ARM Cortex-A7 is a 32-bit dual-issue in-order processor with support for NEON instructions. It has 8 pipeline stages and can issue up to two instructions per cycle under certain conditions. The A7 core used in the experiments is part of the BCM2836 System-on-Chip (SoC), which features a cluster of four A7 cores. It is unclear what cache hierarchy is being used in the SoC (Bordes et al., 2017). The RISC-V U74 is a 64-bit RV64IMAFDC double-issue in-order processor with no vector unit. It has 8 pipeline stages and can issue up to two instructions per cycle under certain conditions. The U74 core used in the experiments is part of the FU740 SoC, which features a cluster of four U74 cores (Krishnan et al., 2017). This SoC has a private 32 KiB 4-way I$ and 32 KiB 8-way D$ for each U74 core, and a 2 MiB 16-way L2 cache shared between all cores in the cluster.

## 3. TPM Computational Requirements

Figure 1 shows the basic architecture of the TPM. The base architecture is composed of a cryptographic processor wherein a secure Random Number Generator (RNG), RSA and ECC cryptographic primitives, and a hashing engine are available; a small non-volatile memory module (64 kB) to store the TPM's state; and a volatile memory to keep short-lived data. The TPM is both a passive and an active agent in a system. It provides security services to itself and to the system it is embedded in. A SoC can use a TPM to enhance the security functionality it offers. An application processor can send commands to the TPM requesting certain operations to be completed securely. It is important to note that the TPM only uses its resources to provide functionality to itself or others. The TPM is independent of the system it is embedded in, both hardware- and software-wise, and provides dedicated circuitry to protect against physical attacks (Turur et al., 2017).

Through the TPM Command Transmission Interface (TCTI) layer, a client is able to interface with a TPM using the commands provided in the TPM Software Stack (TSS). Physically, the TCTI layer is a bus that connects the application processor to the TPM. This is generally achieved using SPI. Figure 2 shows a callgraph of the chain of functions executed when a command is received. A user will issue a command to the TPM using the TSS. Within this stack, the user's command will be serialized and sent to the TPM through the TCTI. The TPM, upon receiving the command, will deserialize it and check if the caller has sufficient privileges to execute this command. If the command passes the check, the TPM will execute the command and return the result back to the application processor. The TPM serializes the results and sends them through the TCTI to the application processor, which deserializes them upon reception using the TSS. Finally, the TSS returns the result of the command to the user.
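As a rough illustration of this round trip, the sketch below sends a command to a TPM server over a TCP socket, matching the software-TPM setup described in Section 4, where the TCTI layer is emulated over TCP. The length-prefixed framing here is illustrative only and is not the actual TPM 2.0 wire format.

```python
# A minimal sketch of the serialize/send/receive/deserialize round trip
# described above, assuming a TCP-emulated TCTI. The framing is an
# assumption for illustration, not the real TPM 2.0 command stream.
import socket
import struct

def send_tpm_command(host: str, port: int, command: bytes) -> bytes:
    with socket.create_connection((host, port)) as tcti:
        # Serialize: length-prefix the command body and ship it to the TPM.
        tcti.sendall(struct.pack(">I", len(command)) + command)
        # Deserialize: read the length-prefixed response from the TPM.
        (length,) = struct.unpack(">I", tcti.recv(4))
        response = b""
        while len(response) < length:
            response += tcti.recv(length - len(response))
    return response
```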
TPMs can be implemented in different formats (Turur et al., 2017). Hardware TPMs, besides strengthening the software Trusted Computing Base (TCB) of a system, also protect against physical attacks (Turur et al., 2017; Tur et al., 2017). Software TPMs, as is the case studied in this paper, are used for development and prototyping, since they have a faster implementation-debug cycle than a hardware TPM and also do not wear out the non-volatile memory, which has a limited lifecycle due to the number of writes (Tur et al., 2017). Moreover, TPMs can also be used in virtualization scenarios where a hypervisor offers the services of a TPM through a virtualization layer (Tur et al., 2017; Tur et al., 2017). This can be achieved through multiple Software TPMs running in the hypervisor for each virtual machine, or the hypervisor uses a hardware TPM in a multi-programmed manner, context switching part of the state of the TPM between each active virtual machine.

Figure 1. Example Number-Theoretic TPM Architecture.

Figure 2. Function flowgraph executed by the TPM when a valid command is received.

The general implementation of a TPM features a small in-order CPU, a small on-chip memory, and a cryptographic co-processor to accelerate cryptographic tasks. Number-Theoretic TPMs have a co-processor that specializes in big-number algebra due to the algebraic requirements of RSA and ECC. QR TPMs, using the soon-to-be standardized Kyber and Dilithium algorithms, will start moving into co-processors that specialize in lattice-based algebra. Lattice operations are over a polynomial ring of dimension \(n\) with coefficients modulo \(q\), where \(q\) is a prime number. Algebraic operations over the ring require adding, subtracting, multiplying, and dividing polynomials, where all operations are modulo \(q\). Therefore, QR TPMs will feature a specialized vector engine in their cryptographic co-processor to handle the ring algebra. Moreover, to improve performance and reduce the cost of particular algebraic operations, the co-processor will first convert the polynomials to the NTT domain. Previous work in this area has shown that a QR TPM will possess an architecture similar to Figure 3, with larger buffers. The median for the extra memory requirements is one order of magnitude, due to the larger key sizes present in lattice-based cryptography (Bauer et al., 2013; Bauer et al., 2014; Bauer et al., 2015). Figure 4 shows a public and secret key size comparison between RSA 2048 bits, ECC NISTP256, Kyber-768, and Dilithium III. However, the security strength provided by the QR algorithms offsets the increased memory cost. Note that the specialized accelerator for lattice-based cryptography will vary depending on the order of the polynomial ring and the modulus \(q\) supported. As a general rule of thumb, cryptographers increase the order of the polynomial and the modulus \(q\) to provide stronger security (Bauer et al., 2013; Bauer et al., 2015).

## 4. TPM Emulator and ROT Implementation

To emulate the TPM, we use a fork of the IBM Software TPM (SW-TPM) and TSS (Kane et al., 2015; Kane et al., 2015) that supports some QR algorithms (Bauer et al., 2014; Bauer et al., 2015). The SW-TPM emulates the TCTI layer through a TCP socket. A command sends its data to the socket for the TPM to process. Due to the nature of the emulation, the SW-TPM is also referred to as the TPM server. The SW-TPM attempts to closely emulate the memory limitations found in a real TPM. There is no reliance on dynamic memory management. The SW-TPM manages its memory by using buffers allocated either on the stack, or in the .bss and .data program segments. The main reason behind the choice of this fork is that it already contained the infrastructure to support QR algorithms, as it was used to evaluate the architectural implementation of QR algorithms such as Kyber (Bauer et al., 2013), Dilithium (Bauer et al., 2015), NTTRU (Kane et al., 2015), and L-DAA (Kane et al., 2015).
The TPM was also extended to add support for a ROT (Bauer et al., 2015) protocol to evaluate its efficiency in a TPM scenario. The ROT protocol is implemented in C++ and Assembly (for certain optimizations) (Kane et al., 2015). However, the SW-TPM and its TSS are written in C (Bauer et al., 2014; Bauer et al., 2015). C and C++ programs can be linked together if the C++ program exposes a C Application Binary Interface (ABI). Therefore, we have modified the ROT protocol such that it exposes functions for each of the required messages in the protocol. Moreover, we disabled all optimizations found in the ROT implementation, namely vector instructions for ARM, such that its computational model would fit the TPM. With these modifications, we compiled a static library that could be linked with the SW-TPM. Finally, to add support for the ROT algorithm, four new TPM commands were added, one for each message passed in the protocol. The CC_ROT_MSG1 command computes and transfers the receiver's first message. The CC_ROT_MSG2 command computes and transfers the sender's first message. The CC_ROT_MSG3 command computes and transfers the receiver's second, and final, message. Lastly, the CC_ROT_MSG4 command computes the sender's challenges. Furthermore, similar modifications were applied to the TSS, in the form of new C binaries and modifications to the TSS library, in order to create new commands to call the desired ROT functionality in the SW-TPM. The addition of this new algorithm to the SW-TPM and the TSS totaled 880 lines and 3111 lines modified, respectively. The implementation of ROT in the SW-TPM and the TSS shows that using a SW-TPM to prototype features that future TPM specifications may provide requires a small effort.

Figure 3. QR TPM Architecture.

Figure 4. Key size comparison between RSA 2048 bits, ECC NISTP256, Kyber-768, and Dilithium III.

## 5. Experimental Results

In the ARM core, the compiler used was GCC 8.3.0 and in the RISC-V core the compiler used was GCC 11.2.0, both with the -O0 optimization flag for the TPM and the TSS. Even though the ARM core possesses a vector unit (NEON), no vector instructions were used, either explicitly by the TPM server, by the TSS commands, or by the compiler. This choice was made because the current TPM 2.0 architecture does not contain a vector unit. To avoid cache misses due to core migration, the TPM server and the TSS commands were executed on different cores, where each process was pinned to the same core. The frequency was fixed at 900 MHz in the ARM core and at 1.2 GHz in the RISC-V core such that dynamic frequency scaling would not taint the performance results.

The experiment methodology used herein is the following. The security parameters used for each algorithm are: 2048 bits for RSA, the NISTP256 curve for ECC, the 768 mode in Kyber (\(n=256\) and \(q=3329\)) (Kyber, 2017), the level III mode in Dilithium (\(n=256\) and \(q=8380417\)) (Kyber, 2017), and the default security parameters in ROT (\(n=512\) and \(q=13313\)) (Kyber, 2017). The ASCII string "My super secret. Please don't share.\n" is used for encryption and signature; signed messages use the SHA3-256 hash; and all keys are created as non-primary with the fixedTPM and fixedParent properties. All the measured times result from taking the median over one hundred runs on each of the previously described processors.
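A minimal sketch of this timing methodology is shown below; the run_tpm_command argument is a hypothetical wrapper around one of the TSS command binaries and is not part of the actual TSS.

```python
# A minimal sketch of the measurement loop described above: each command is
# executed one hundred times and the median wall-clock time is reported.
import statistics
import time

def median_runtime(run_tpm_command, runs=100):
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        run_tpm_command()  # hypothetical wrapper around a TSS binary
        times.append(time.perf_counter() - start)
    return statistics.median(times)
```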
The used TSS commands are: CC_Create for key creation; CC_Sign for data signature; CC_VerifySignature for signature verification; CC_{Kyber, RSA}_Encrypt and CC_{Kyber, RSA}_Decrypt for Kyber and RSA encryption and decryption; CC_KYBER_Enc and CC_KYBER_Dec for Kyber encapsulation and decapsulation; and CC_{ROT_MSG1, ROT_MSG2, ROT_MSG3, ROT_MSG4} for the ROT operations.

The performance results for each QR and number-theoretic algorithm can be found in Table 1 for the ARM processor, and in Table 2 for the RISC-V processor. Furthermore, Figures 5 and 6 show the speedups for each core between QR and number-theoretic algorithms. Note that, in the ROT protocol, the time for the receiver is the sum of the times for MSG1 and MSG3, and the time for the sender is the sum of the times for MSG2 and MSG4.

| | **RSA 2048 bits** | **ECC NISTP256** | **Kyber 768** | **Dilithium III** | **ROT** |
|---|---|---|---|---|---|
| **Key Creation** | 3,54 | 1,59 | 1,59 | 1,55 | - |
| **Signature** | 1,59 | 1,54 | - | 1,59 | - |
| **Verify Signature** | 1,52 | 1,55 | - | 1,55 | - |
| **Encryption** | 1,54 | - | 1,56 | - | - |
| **Decryption** | 1,61 | - | 1,55 | - | - |
| **ROT Receiver** | - | - | - | - | 3,16 |
| **ROT Sender** | - | - | - | - | 3,12 |

Table 1. Execution time (s) for number-theoretic and post-quantum lattice-based algorithms in a TPM 2.0 running in an ARM Cortex-A7 processor.

| | **RSA 2048 bits** | **ECC NISTP256** | **Kyber 768** | **Dilithium III** | **ROT** |
|---|---|---|---|---|---|
| **Key Creation** | 4,00 | 3,28 | 3,14 | 3,51 | - |
| **Signature** | 3,22 | 3,17 | - | 2,98 | - |
| **Verify Signature** | 3,23 | 3,75 | - | 3,13 | - |
| **Encryption** | 3,12 | - | 3,05 | - | - |
| **Decryption** | 3,29 | - | 3,14 | - | - |
| **ROT Receiver** | - | - | - | - | 6,51 |
| **ROT Sender** | - | - | - | - | 7,1 |

Table 2. Execution time (s) for number-theoretic and post-quantum lattice-based algorithms in a TPM 2.0 running in a RISC-V U74 processor.

In the ARM core, the QR algorithms Kyber and Dilithium fully replace the functionality provided by RSA and ECC. Regarding performance, Kyber and Dilithium are faster at creating keys than RSA, due to the key size and secure random sampling required in RSA. However, they meet the same performance level as ECC. For digital signatures, both in signature creation and verification, Dilithium is on par with both RSA and ECC. In data encryption and decryption, Kyber is also on par with RSA. With respect to ROT, since the TPM standard does not yet specify such commands or require an implementation, there is no one-to-one comparison with a number-theoretic counterpart. However, it has been previously shown that QR OTs and ROTs significantly outperform number-theoretic OTs and ROTs (Kyber, 2017). Therefore, only the QR variants are analyzed. Still, the performance results for the ROT commands, both for a sender and a receiver, are approximately double those of Kyber and Dilithium, even though ROT involves more complex lattice arithmetic. The ROT results show that it would be possible for a QR TPM to support ROTs and OTs with minimal architectural effort and quite decent performance.

In the RISC-V U74 core, there are some improvements in the QR algorithms. Kyber and Dilithium create their keys faster than RSA but on par with ECC. However, in signature operations, Dilithium
shows a speedup of at least 6% when compared with ECC and RSA. The same is true for encryption operations: Kyber outperforms RSA by at least 3%. For ROT operations, the same conclusions can be drawn as from the ARM experiments: despite having more complex operations than Kyber and Dilithium, ROT still provides quite reasonable performance.

Figure 5. Encryption and Decryption Speedups between RSA and Kyber for the ARM Cortex-A7 and the RISC-V U74.

## 6. Conclusions

This paper showed how to extend the current Number-Theoretic TPMs to be QR. By substituting the RSA and ECC algorithms with the Kyber primitives, for key encapsulation and data encryption/decryption, and the Dilithium primitives, for digital signature, quantum resistance is ensured by the hardness of mathematical problems related to lattices. Moreover, the TPM was also extended, for the first time, to add support for a ROT protocol, useful for example for Multi-Party Computation, which is based on the same type of mathematical problems. Not only were the computational and memory requirements for this TPM extension analysed, but it was also experimentally evaluated through implementations on ARM and RISC-V low-power processors. It is shown that the security strength provided by the QR algorithms comes with increased memory requirements, while maintaining or even decreasing the execution time of the QR algorithms. The ROT protocol was implemented without significant changes to the architectural model of the TPM and possesses decent performance. Finally, we show that the usage of programmable microprocessors in future TPMs would allow a vendor to add new algorithms remotely to older hardware TPMs, in order to improve their security features, with minimal changes to the architectural model. This paper paves the way for the design of QR TPMs based on low-power programmable devices.
2309.11065
UniPCM: Universal Pre-trained Conversation Model with Task-aware Automatic Prompt
Recent research has shown that multi-task pre-training greatly improves the model's robustness and transfer ability, which is crucial for building a high-quality dialog system. However, most previous works on multi-task pre-training rely heavily on human-defined input format or prompt, which is not optimal in quality and quantity. In this work, we propose to use Task-based Automatic Prompt generation (TAP) to automatically generate high-quality prompts. Using the high-quality prompts generated, we scale the corpus of the pre-trained conversation model to 122 datasets from 15 dialog-related tasks, resulting in Universal Pre-trained Conversation Model (UniPCM), a powerful foundation model for various conversational tasks and different dialog systems. Extensive experiments have shown that UniPCM is robust to input prompts and capable of various dialog-related tasks. Moreover, UniPCM has strong transfer ability and excels at low resource scenarios, achieving SOTA results on 9 different datasets ranging from task-oriented dialog to open-domain conversation. Furthermore, we are amazed to find that TAP can generate prompts on par with those collected with crowdsourcing. The code is released with the paper.
Yucheng Cai, Wentao Ma, Yuchuan Wu, Shuzheng Si, Yuan Shao, Zhijian Ou, Yongbin Li
2023-09-20T05:05:40Z
http://arxiv.org/abs/2309.11065v1
# UniPCM: Universal Pre-trained Conversation Model with Task-aware Automatic Prompt

###### Abstract

Recent research has shown that multi-task pre-training greatly improves the model's robustness and transfer ability, which is crucial for building a high-quality dialog system. However, most previous works on multi-task pre-training rely heavily on human-defined input formats or prompts, which are not optimal in quality and quantity. In this work, we propose to use Task-based Automatic Prompt generation (TAP) to automatically generate high-quality prompts. Using the high-quality prompts generated, we scale the corpus of the pre-trained conversation model to 122 datasets from 15 dialog-related tasks, resulting in the Universal Pre-trained Conversation Model (**UniPCM**), a powerful foundation model for various conversational tasks and different dialog systems. Extensive experiments have shown that UniPCM is robust to input prompts and capable of various dialog-related tasks. Moreover, UniPCM has strong transfer ability and excels at low-resource scenarios, achieving SOTA results on 9 different datasets ranging from task-oriented dialog to open-domain conversation. Furthermore, we are amazed to find that TAP can generate prompts on par with those collected with crowdsourcing. The code is released with the paper.

## 1 Introduction

Recently, dialogue systems have been developing rapidly in various scenarios, such as personal assistants and customer service. The advancements in dialogue systems for those applications have been significantly boosted by the use of pre-trained language models (PLMs), including GPT-2 Radford et al. (2019), BERT Kenton and Toutanova (2019), and T5 Raffel et al. (2020), combined with task-specific fine-tuning on annotated data Hosseini-Asl et al. (2020); Yang et al. (2021); Heck et al. (2020); Lee (2021); Liu et al. (2022). However, most of the models trained under the 'pretrain-finetune' paradigm are limited to specific tasks or datasets, and the dialog systems built upon those models can only respond to certain input formats, which lacks robustness and transfer ability. To relieve such problems, multi-task pre-training, which has achieved great success in language model pre-training, has been introduced to pre-trained conversation models (PCMs). Recent progress in multi-task pre-training Ouyang et al. (2022); Sanh et al. (2022); Mishra et al. (2022); Wang et al. (2022) has shown that the robustness and transfer ability of language models are greatly improved by pre-training with multiple tasks. However, previous works on multi-task learning rely heavily on human-defined input formats or prompts. We find those artificially constructed prompts still have two obvious weaknesses, which can be relieved by our proposed task-aware automatic prompt generation method TAP: (1) **Human labor required and limited in quantity.** Previous works like Supernatural-instruction Wang et al. (2022) only have one task instruction for each dataset, which makes it difficult for the model to catch the essence of the tasks and transfer to other prompts. In contrast, our TAP method can generate numerous prompts given a task, and we show in our experiments that increasing the number of prompts not only scales the pre-training corpus, but makes the model understand the task better as well.
(2) **Hard to understand and limited in quality.** Human labelers easily incorporate their own understandings into the prompts or simply obtain the prompt by paraphrasing the dataset description, which makes the prompts quite long and unnatural, containing dataset-specific knowledge. Meanwhile, our TAP method leverages task-related information to generate task-centric prompts, and their quality is ensured by a scoring and filtering procedure. The superiority of the generated prompts in quality is proved by both automatic and human evaluation.

The TAP method can generate numerous high-quality prompts, which greatly helps to train a universal pre-trained conversation model by scaling the pre-training datasets and fusing different tasks using the proposed multi-prompt training mechanism. Using the 303 high-quality prompts automatically generated for the 15 tasks, we scale our pre-training corpus to 122 datasets and 26,625,486 instances, which, to our knowledge, is currently the largest annotated conversational dataset, covering almost all dialog-related tasks. Moreover, we propose a multi-prompt training mechanism to make use of the generated prompts to better fuse different tasks. The resulting model UniPCM is a powerful foundation model for various conversational tasks and different dialog systems.

Through comprehensive experiments on multiple conversational tasks, we find that UniPCM has strong ability on various dialog tasks: it outperforms the T5 baseline by 7.14% in the few-shot setting on DialoGLUE Mehri et al. (2020) and achieves SOTA results on 9 different datasets ranging from task-oriented dialog to open-domain conversation. Moreover, the model is robust to input format and can respond to different input prompts. Furthermore, to comprehensively evaluate the quality of the generated prompts, we conduct human evaluation and automatic evaluation. Our generated prompts achieve higher average scores than human-written prompts by 9.50% on three proposed metrics in human evaluation and improve results by 2.40% when used to finetune downstream tasks.

In summary, our main contributions are:

* We propose a task-aware automatic prompt generation method, TAP, to better fuse the datasets from different tasks in multi-task pre-training, which can generate numerous high-quality prompts based on extracted task information. The proposed method greatly reduces human effort in prompt engineering and improves the quality of generated prompts.
* Leveraging the high-quality prompts generated, we pre-train a unified conversation model, UniPCM, by scaling the pre-training datasets to 122 dialog-related datasets from 15 dialog-related tasks. The pre-trained model and the datasets collected will be released to the public.
* We conduct extensive experiments on 10 dialog-related benchmarks covering 6 types of tasks. Results on few-shot and full-data experiments show the superiority of our proposed method and model.

## 2 Related Work

### Multi-task Language Model Pre-training

Recent research has shown that multi-task language model pre-training or pre-finetuning can greatly improve the model's transfer ability, resulting in improved performance in few-shot and zero-shot settings Raffel et al. (2020); Wei et al. (2021); Sanh et al. (2022). Although negative transfer may occur when the number of tasks is limited, the model will still benefit from pre-training if the number of tasks is scaled up Aribandi et al. (2021).
To implement multi-task pre-training, some signals are given to the model to distinguish one task from another. Initially, researchers did multi-task pre-training using a unified text-to-text format directly Raffel et al. (2020); Lu et al. (2022). Simply adding the name of the task helps the model better understand the relations between tasks and reduces the negative transfer problem Zhang et al. (2018). Recent works use crowdsourced prompts and instructions to perform multi-task pre-training, which achieves great success Sanh et al. (2022); Wang et al. (2022); Ouyang et al. (2022). The resulting models show strong transfer ability, and can even chat with humans fluently in the open domain Ouyang et al. (2022). Our work improves over the previous works in that we use automatically generated prompts instead of crowdsourced ones to enable multi-tasking, which reduces human labor as well as improves the quality of the prompts. Furthermore, we propose and formulate a multi-prompt training mechanism, which relieves several problems in multi-task pre-training, including task imbalance, uneven data quality, and differences in the importance of tasks. Moreover, we prove that multi-prompt training can improve the model's performance on unseen prompts.

### Automatic Prompt Generation

It has been shown that prompt engineering can be of great benefit in reducing the gap between language model pre-training and finetuning on downstream tasks Gao et al. (2021); Zhong et al. (2021). To reduce human labor in prompt engineering, various approaches have been proposed to generate prompts automatically. AutoPrompt Shin et al. (2020) uses gradient-based prompt search to automatically generate prompts. However, the prompts generated are not coherent, and may confuse models in multi-task scenarios. Gao et al. (2021); Zhou et al. (2022) proposed to use T5 Raffel et al. (2020) or a large language model to fill in the blank between the input and output to automatically generate coherent prompts. However, the prompts generated do not necessarily contain task information and may be highly related to a certain input case or dataset. Different from previous works, our work aims at generating prompts for multi-task pre-training. Therefore, our method TAP models the task in automatic prompt generation to help the model understand the relation between the tasks and the prompts, as well as to improve the quality of the generated prompts.

### Pre-training for Dialog Systems

It has been shown that pre-training can greatly improve performance for dialog systems, improving the coherency of generated responses and transfer ability Roller et al. (2021); Zhang et al. (2020); Su et al. (2022). Models trained on large-scale online open-domain dialogues, for example, Blender-Bot Roller et al. (2021), DialoGPT Zhang et al. (2020) and Meena Adiwardana et al. (2020), can perform well on the chit-chat task, while models pre-trained on certain tasks can improve performance on the corresponding tasks. For example, in task-oriented dialog, works like TOD-BERT Wu et al. (2020), CONV-BERT Mehri et al. (2020), PPTOD Su et al. (2022), and GALAXY He et al. (2021) improve the performance on relevant datasets. However, to interact with humans fluently in the open domain Ouyang et al. (2022), the dialog system should not only be capable of various tasks, but also be robust to different input prompts.
Recent progress in building powerful open-domain dialog systems has mainly used crowdsourcing to annotate large-scale, multi-task datasets to improve the systems' performance Shuster et al. (2022); Ouyang et al. (2022). Different from their approaches, we propose to leverage the existing large-scale dialog-related datasets to perform multi-task pre-training. Chen et al. (2022) also train their dialog foundation model over large-scale dialog-related datasets. However, they do not aim at building a dialog system; therefore, they do not improve their model's robustness to input prompts, nor do they evaluate their model's transfer ability in few-shot or zero-shot scenarios. In contrast, we use generated prompts to perform multi-task prompt pre-training to improve the model's transfer ability and robustness to different input prompts.

### Exploit Prompts for Low Resource Setting

Prompts can reduce the gap between language model pre-training and finetuning, therefore improving the model's performance in downstream tasks, especially in few-shot and zero-shot settings Gao et al. (2021); Cui et al. (2021); Chen et al. (2022). Apart from that, pattern exploit training (PET), a self-training method leveraging multiple prompts, can greatly improve the model's performance in low-resource settings by performing semi-supervised training. Different prompts can be used as different views of a case, and models finetuned with different prompts are used to ensemble pseudo labels on unlabeled data (Schick and Schutze, 2021). There are a few works that improve over the original pattern exploit training: Schick and Schutze (2021) extend PET to deal with labels that have multiple tokens, while Tam et al. (2021) propose to provide more supervision and learn without task-specific unlabeled data. Our work contributes by reformulating PET to apply it to generative language models. Moreover, we combine PET with our multi-task prompt pre-trained model and apply multi-prompt training in the finetuning stage of PET, improving the accuracy of the generated pseudo labels.

Figure 1: An illustration of our proposed model UniPCM, which unifies all tasks into an 'input-prompt-output' format. Prompts are crucial as they help the model understand the task it should perform.

## 3 Method

To pre-train our UniPCM, we first unify all the dialog-related tasks into an 'input-prompt-output' format, as shown in Figure 1. Then we propose task-aware automatic prompt generation (TAP) to generate high-quality prompts for the pre-training. Finally, based on the prompts and corpus, we pre-train our UniPCM using the proposed multi-prompt training mechanism.

### Task-aware Automatic Prompt Generation

Our task-aware automatic prompt generation method TAP, as illustrated in Figure 2, mainly contains the following 3 parts:

#### 3.1.1 Finding Signals for Task Information

Before generating prompts, we extract task-related signals to help us find information about the task \(t\). In this work, we mainly focus on 3 kinds of signals that can be used as hints of the task for the model to generate prompts upon. We discuss their availability, limitations, and effectiveness for generating prompts.

**Instructions:** Task descriptions, or instructions, are usually available for datasets. Moreover, huge amounts of instructions are annotated or collected by researchers or crowdsourcing workers (Wang et al., 2022; Ouyang et al., 2022).
Instructions are usually long and difficult for a language model to understand directly; therefore, it is hard to use them directly as input to generate prompts in an unsupervised way. However, instructions contain almost all important information about the task, and it is not hard to extract key information from them. In our work, we use tf-idf methods to filter out irrelevant words. 1 Then we take all the 1-grams, 2-grams and 3-grams of the remaining words and score them with a BERT model (Devlin et al., 2019) by their similarity with the task name. The n-grams with a similarity score above a threshold are deemed keywords and used to generate prompts.

Footnote 1: We implemented tf-idf using the gensim package: [https://radimrehurek.com/gensim/](https://radimrehurek.com/gensim/)

**Task Name:** The task name is always available and very concise, which makes it ideal for automatic prompt generation. However, one task can only have one task name, making it difficult to generate diverse prompts. Therefore, we propose to use a thesaurus tool to paraphrase the task name to form diverse keywords. Also, the task name is used to select the keywords extracted from the instruction, as already discussed.

**Keywords:** Keywords are ideal input for automatic prompt generation, as they are both concise and representative of the task information. However, keywords are not readily available and must be inferred from other task-related information, such as instructions or the task name. If the quality or the number of the keywords generated from the instructions or the task name does not meet the researchers' needs, researchers can quickly summarize the task and write some high-quality keywords themselves.

#### 3.1.2 Automatic Prompt Generation Using Keywords

Given the task signals (in the form of keywords in this work), we can generate prompts automatically using a pre-trained language model, T5 (Raffel et al., 2020). T5 is pre-trained to fill missing spans of a sentence. For example, given the input "Thank you <X> me to your party <Y> week", T5 is trained to generate "<X> for inviting <Y> last <Z>", meaning that "for inviting" replaces the placeholder <X> and "last" replaces the placeholder <Y>. This is well suited for prompt generation, as we want to generate a prompt with the keywords that is coherent given the input and output. Given an instance of an input-output pair \((X_{t},Y_{t})\) in task \(t\), along with one of the keywords \(k_{t}^{i}\), we define a transform \(\mathcal{T}(X_{t},Y_{t},k_{t}^{i})\):

\[X_{t},Y_{t},k_{t}^{i}\to X_{t}\,\langle X\rangle\,k_{t}^{i}\,\langle Y\rangle\,Y_{t} \tag{1}\]

where \(\langle X\rangle,\langle Y\rangle\) are sentinel tokens for T5 generation. We generate the prompts according to the T5 generation probability \(P_{T5}(\mathcal{T}(X_{t},Y_{t},k_{t}^{i}))\) and harvest the prompts generated after the sentinel tokens. The final prompts are reorganized as \(P_{t}=x\oplus k_{t}^{i}\oplus y\), where \(\oplus\) denotes the concatenation of token sequences and \(x,y\) are the contents generated after the sentinel tokens \(\langle X\rangle,\langle Y\rangle\) by the T5 model. For one single input-output instance and one keyword, we keep the top 5 prompts according to generation probability. Using multiple instances and multiple keywords, we generate numerous prompts for selection. To avoid generating prompts that are specific to one single input instance and overlook the task information, we only retain the prompts that appear multiple times (empirically set as 2 in our work) across different instances.
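A minimal sketch of this span-filling step with the Hugging Face transformers library is given below; the sentinel tokens <extra_id_0> and <extra_id_1> play the roles of \(\langle X\rangle\) and \(\langle Y\rangle\), and parsing the generated spans back into \(x\oplus k_{t}^{i}\oplus y\) is omitted for brevity.

```python
# A minimal sketch of the T5 span filling used for prompt generation. The
# model and tokenizer identifiers are standard Hugging Face names; scoring
# and harvesting of the generated spans is omitted.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def generate_prompt_candidates(x_t, y_t, keyword, num_prompts=5):
    # Transform (1): input <X> keyword <Y> output.
    text = f"{x_t} <extra_id_0> {keyword} <extra_id_1> {y_t}"
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, num_beams=num_prompts,
                             num_return_sequences=num_prompts)
    # Each output contains the spans generated after the sentinel tokens.
    return [tokenizer.decode(o, skip_special_tokens=False) for o in outputs]
```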
Figure 2: An illustration of our proposed method TAP. (a) We collect existing signals to generate keywords, either by extracting from instructions or searching synonyms of the task name. (b) We automatically generate prompts based on input examples and task-related keywords using a T5 model, harvesting prompts by concatenating generated words after the sentinel token. (c) We select the prompts by average perplexity on task-related examples and filter out those prompts that may contain information about the labels.

#### 3.1.3 Scoring and Filtering of Generated Prompts

As discussed in Section A.1, given input \(X_{t}\) and prompt \(P_{t}\) in task \(t\), the probability of generating the correct answer \(Y_{t}\), \(p(Y_{t}|X_{t},P_{t})\), should be optimized; therefore, we evaluate the quality of the generated prompts according to this probability, as all of \(Y_{t},X_{t},P_{t}\) are available and the probability can be directly calculated. We choose those prompts \(P_{t}\) that have a higher average log probability \(\sum\log(p(Y_{t}|X_{t},P_{t}))\) among the datasets in the task. After that, we filter out prompts that may contain certain biased information about the output \(Y_{t}\), using a prohibited-words list extracted from the outputs. The prohibited words mainly arise in classification-type tasks, as the output \(Y_{t}\) is selected from certain labels. For example, in an emotion classification task, the word "positive" or "negative" is often generated by the model, as the output \(Y_{t}\) contains those two words with high frequency. However, using such prompts for output generation, the results will be biased, reducing generation performance. Therefore, we filter out those label-revealing prompts in favor of prompts that accurately reflect the task.
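A minimal sketch of this selection step is given below, assuming a hypothetical log_prob(y, x, prompt) helper that returns \(\log p(Y_{t}|X_{t},P_{t})\) under the language model; the number of prompts kept is an illustrative choice, not a value from the paper.

```python
def select_prompts(prompts, examples, log_prob, banned_words, keep=10):
    # Score each prompt by its average log p(Y_t | X_t, P_t) over the task's
    # examples; log_prob is a hypothetical wrapper around the language model.
    def avg_log_prob(prompt):
        return sum(log_prob(y, x, prompt) for x, y in examples) / len(examples)

    # Filter out prompts that leak label information, e.g. "positive".
    safe = [p for p in prompts if not any(w in p for w in banned_words)]
    # Keep the highest-scoring prompts; the cutoff is illustrative.
    return sorted(safe, key=avg_log_prob, reverse=True)[:keep]
```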
### Multi-task Prompt Pre-training

Using the generated prompts, as well as the collected corpus, we perform multi-task prompt pre-training. With the training instances \((X_{t}^{i},Y_{t}^{i})\,(i=1,2,\cdots,N_{t})\) from \(K\) different tasks and the generated prompts \(P_{t}^{j}\,(j=1,2,\cdots,M_{t})\), the objective function of the pre-training can be written as:

\[\mathcal{J}_{\theta}=\sum_{t=1}^{K}\sum_{i=1}^{N_{t}}\sum_{j=1}^{M_{t}}\log p(Y_{t}^{i}|X_{t}^{i},P_{t}^{j}) \tag{2}\]

Note that in Eq. 2, we propose to use multi-prompt training, which means applying multiple prompts to one single input instance: \(\sum_{j=1}^{M_{t}}\log p(Y_{t}^{i}|X_{t}^{i},P_{t}^{j})\); the benefits of this are discussed in Sec. A.2. However, it is not necessary to apply all the \(M_{t}\) available prompts to one single case, as the prompts \(P_{t}^{j}\,(j=1,2,\cdots,M_{t})\) are representations of the task \(t\) and have similar embeddings in the latent task space. Therefore, a subset of \(P_{t}^{j}\) can be randomly sampled, resulting in \(\tilde{P}_{t}^{j}\,(j=1,2,\cdots,\tilde{M}_{t})\). The loss \(\sum_{j=1}^{M_{t}}\log p(Y_{t}^{i}|X_{t}^{i},P_{t}^{j})\) can be approximated by \(\frac{M_{t}}{\tilde{M}_{t}}\sum_{j=1}^{\tilde{M}_{t}}\log p(Y_{t}^{i}|X_{t}^{i},\tilde{P}_{t}^{j})\) to save calculation time. If the ratio \(\frac{M_{t}}{\tilde{M}_{t}}\) is not added, we can simply adjust the weights of datasets or tasks in pre-training by adjusting the number of prompts applied. This is beneficial, as some tasks or datasets are deemed more important by the researchers, and adding more prompts to those tasks or datasets can make the model focus on them better.

### Prompts for Semi-Supervised Training: PET

To utilize the numerous and diverse generated prompts, as well as the pre-trained model that performs well on those prompts, we perform PET (Schick and Schutze, 2021) for semi-supervised training. We adapt the original PET method to better utilize multiple prompts, as well as to fit our pre-trained model. For the generated prompts \(P=P^{j}\,(j=1,2,\cdots,M)\), we use a partition of \(P\), \(P_{1},P_{2},\cdots,P_{k}\), to train \(k\) voting models for ensembling. The \(l\)th voting model \(M_{l}\) is finetuned from the pre-trained model on the annotated part of the data \((X^{i},Y^{i})\,(i=1,2,\cdots,N_{a})\) with the prompt set \(P_{l}\), with the following loss function:

\[\mathcal{J}_{\theta}^{l}=\sum_{i=1}^{N_{a}}\sum_{j=1}^{|P_{l}|}\log p_{M_{l}}(Y^{i}|X^{i},P_{l}^{j}) \tag{3}\]

To generate pseudo labels on unannotated data, we ensemble the outputs generated by the voting models given all input instances and prompts:

\[\tilde{Y}^{i}=\mathrm{ensemble}(\{\tilde{Y}^{i}_{j}\}),\qquad\tilde{Y}^{i}_{j}\sim p_{M_{l}}(\tilde{Y}^{i}|X^{i},P_{l}^{j}) \tag{4}\]

where we use the majority voting method to perform the ensembling of the generated labels. Sampling is used in (4) to increase the diversity of the generated labels, helping us to distinguish those instances and labels that are deemed uncertain by the model. Because we finetune the voting models from a model pre-trained over all prompts, and we use multi-prompt training to finetune the voting models in (3), the accuracy of the voting models is greatly improved, therefore advancing the quality of the generated pseudo labels. The \(N_{p}\) pseudo labels are used to train the model, along with the annotated data, to improve the model's performance:

\[\mathcal{J}_{\theta}=\sum_{i=1}^{N_{a}}\sum_{j=1}^{M}\log p(Y^{i}|X^{i},P^{j})+\sum_{k=1}^{N_{p}}\sum_{j=1}^{M}\log p(\tilde{Y}^{k}|X^{k},P^{j}) \tag{5}\]
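A minimal sketch of the majority-voting ensembling in Eq. (4) is given below; sample_label is a hypothetical helper that samples one output string from a finetuned voting model and is not part of the paper's released code.

```python
# A minimal sketch of the pseudo-label ensembling in Eq. (4).
from collections import Counter

def ensemble_pseudo_label(voting_models, prompt_partition, x, sample_label):
    # Collect one sampled output per (voting model, prompt) pair;
    # sample_label is a hypothetical helper around the generative model.
    votes = [sample_label(model, x, prompt)
             for model, prompts in zip(voting_models, prompt_partition)
             for prompt in prompts]
    # Majority voting over the sampled outputs yields the pseudo label.
    return Counter(votes).most_common(1)[0][0]
```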
## 4 Experiments

### Baseline & Benchmark

To comprehensively evaluate our UniPCM, we carefully choose ten downstream datasets in six tasks, mainly evaluating the model's ability in dialog understanding and response generation, as well as its comprehensive ability.

#### 4.1.1 Dialog understanding

Dialog understanding is crucial for building a high-quality dialog system, as it is impossible to generate high-quality responses without a good understanding of the context. DialoGLUE (Mehri et al., 2020) is a benchmark that comprehensively evaluates the dialogue understanding ability of a dialog system, which consists of four tasks: slot filling (REST8K (Coope et al., 2020), DSTC8 (Rastogi et al., 2020)), intent prediction (BANKING77 (Casanueva et al., 2020), CLINC150 (Larson et al., 2019), HWU64 (Liu et al., 2021)), semantic parsing (TOP (Gupta et al., 2018)), and dialog state tracking (MultiWOZ2.1 (Eric et al., 2020)). We follow the original preprocessing and evaluation scripts of Mehri et al. (2020), except that we modify the implementation to a sequence-to-sequence generation format to fit the model's pre-training. The evaluation metrics for slot filling, intent prediction and semantic parsing are F1, accuracy and exact match, respectively. For the dialog state tracking task of MultiWOZ, we apply our model to the SOTA generative baseline SDP-DST (Lee et al., 2021), and joint goal accuracy (JGA) is reported. Apart from T5 (Raffel et al., 2020) (we pre-trained our model upon a T5-base model), we choose SPACE-2 (He et al., 2022) and Flan-T5 (Chung et al., 2022) as our baselines, as SPACE-2 represents the SOTA pre-trained results targeting task-oriented dialog understanding, while Flan-T5 is a general-purpose pre-trained language model using instruction-tuning. The results of TOD-BERT (Wu et al., 2020) and the best variant of ConvBERT (Mehri et al., 2020) are also reported for comparison.

#### 4.1.2 Response Generation

Open-domain response generation, or chit-chat, is also an important skill for building a high-quality dialog system. We evaluate our model on two classic chit-chat datasets, PersonaChat (Zhang et al., 2018) and DailyDialog (Li et al., 2017). We follow the preprocessing and evaluation scripts of FSB (Madotto et al., 2021); BLEU (Papineni et al., 2002), word-level F1 and Rouge-L (Lin, 2004) are reported. We choose DialoGPT (Zhang et al., 2020) and PPTOD (Su et al., 2022) as our baselines.

#### 4.1.3 Comprehensive ability

We evaluate the comprehensive ability of a dialog system on the MultiWOZ end-to-end generation task (End2End) (Budzianowski et al., 2018). In the End2End task, the model needs to track the user's state, understand the user's intention, decide the best responding strategies, and generate coherent responses, which is quite challenging. Multiple dialog skills, such as intent prediction, dialog state tracking, policy optimization, and response generation, are necessary to complete the task. We apply our model to the SOTA method MTTOD (Lee, 2021) and use the official evaluation scripts 3 given by (Nekvinda and Dusek, 2021). We compare our results to LABES (Zhang et al., 2020), SOLOIST (Peng et al., 2021), UBAR (Yang et al., 2021) and PPTOD (Su et al., 2022).

Footnote 3: [https://github.com/budzianowski/multiwoz](https://github.com/budzianowski/multiwoz)

### Implementation

#### 4.2.1 Building pre-training corpus

To perform multi-task pre-training for a conversation model, we collect UniPreDial 4, which contains 122 dialog-related datasets from 15 dialog-related tasks. The tasks in UniPreDial mainly fall into three categories: task-oriented dialog related (intent prediction, dialog state tracking and grounded dialog), open-domain chit-chat, and other dialog-related datasets.

Footnote 4: We collect datasets from [https://huggingface.co/datasets](https://huggingface.co/datasets), [https://www.parl.ai/docs/tasks.html](https://www.parl.ai/docs/tasks.html) and GitHub repositories on [https://github.com/](https://github.com/).

Task-oriented dialog is extensively studied by previous researchers, resulting in abundant annotated datasets. We make full use of the annotated information as we leverage prompts to convert a turn in a dialog into multiple training instances, as shown in Figure 1 and sketched below.
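The sketch below illustrates this expansion of one annotated turn; the field names of the example turn are illustrative and do not reflect the actual corpus schema.

```python
# A minimal sketch of expanding one annotated dialog turn into multiple
# 'input-prompt-output' training instances, as in Figure 1. The annotation
# keys (e.g. "intent", "state", "response") are hypothetical.

def expand_turn(context, annotations, prompts_by_task):
    instances = []
    for task, label in annotations.items():
        for prompt in prompts_by_task.get(task, []):
            instances.append({"input": context,
                              "prompt": prompt,
                              "output": label})
    return instances
```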
Footnote 5: [https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/space-3](https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/space-3) To extend the model's abilities, we collect other datasets that can improve its skills. Emotion classification, summary, natural language generation, and text2sql are important skills for dialog systems in real-life scenarios, while question answering and multiple choice have a similar format to dialog and yield positive transfer in co-training (Aribandi et al., 2021). The statistics of the tasks and datasets, as well as the generated prompts, are shown in Table 4, and the details of the tasks and datasets can be found in Table 5. #### 4.2.2 Pre-training We pre-train our conversation model UniPCM on the collected corpus UniPreDial, the details of which are shown in Table 5. The maximum sequence length of the input context is set to 256. The batch size is set to 64, and an AdamW optimizer is employed for optimization with a constant learning rate of 2e-5. The pre-training is performed on eight 80GB Tesla-A100 GPUs and takes about 72 hours. #### 4.2.3 Downstream tasks For downstream tasks, we finetune UniPCM following the corresponding baseline scripts. For each few-shot and zero-shot experiment, we exclude from the pre-training datasets all training data other than the few-shot data, to avoid unfair data use. During testing, we test the model with 5 random prompts sampled from all the available prompts for each testing instance (if prompts are used). We view the results as 5 independent experiments, and the mean performance is reported as the final result (a minimal sketch of this protocol is given after the DialoGLUE results below). The variance of the experiment is reduced as we take the mean result of 5 experiments. Moreover, to achieve a high score under this testing setting, the model needs to perform well on all the available prompts; the resulting high performance demonstrates that our model is robust to input prompts. ### Main Results We conduct our experiments on the baselines and benchmarks mentioned above. Implementation details are given in Sec. 4.2. #### 4.3.1 DialoGLUE Results As shown in Table 1, our model UniPCM excels in the few-shot setting, improving the average score by **7.14%** over the T5 baseline, achieving SOTA results on all 7 datasets of DialoGLUE, and improving by **1.75%** over the previous SOTA SPACE-2 on the average score. For the full data setting, our model is competitive, achieving the best average score among the strong baselines and consistently outperforming Flan-T5 on all datasets, which demonstrates the efficacy of our pre-training methods. It is worth noting that SPACE-2 performs quite well on this benchmark, mainly because of its TOD-targeted modeling, which also restricts the model to understanding tasks on TOD datasets. 
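As referenced in Sec. 4.2.3, the multi-prompt testing protocol behind the reported numbers can be sketched as follows. This is a minimal illustrative sketch rather than our released evaluation code; `score_fn` is a hypothetical per-instance scorer (e.g. accuracy for intent prediction).

```python
import random
import statistics

def multi_prompt_eval(score_fn, instances, prompts, n_runs=5, seed=0):
    """Mean score over n_runs prompt-randomized runs (cf. Sec. 4.2.3).

    Each run samples a fresh random prompt per test instance, so a high
    mean requires the model to perform well on all available prompts.
    """
    rng = random.Random(seed)
    run_means = []
    for _ in range(n_runs):  # the runs are treated as independent experiments
        scores = [score_fn(x, rng.choice(prompts)) for x in instances]
        run_means.append(statistics.mean(scores))
    # Averaging over runs reduces the variance of the reported result.
    return statistics.mean(run_means)
```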
#### 4.3.2 MultiWOZ2.0 End2End Results As shown in Table 2, our model UniPCM improves over the previous SOTA model MTTOD in both full data and few-shot scenarios, by 1.6 and 2.2 \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline Setting & Model & avg & BANKING77 & HWU64 & CLINC150 & REST8K & DSTC8 \({}^{\dagger}\) & TOP & MULTIWOZ \\ \hline \multirow{6}{*}{10-shot data} & T5 & 76.52 & 76.01 & 81.77 & 88.36 & 85.31 & 74.72 & 76.03 & 51.63 \\ & TOD-BERT \({}^{\star}\) & 79.96 & 85.99 & 86.74 & 93.07 & 87.62 & 50.19 & 77.77 & 48.54 \\ & ConvBERT \({}^{\star}\) & 78.72 & 85.06 & 85.69 & 93.06 & 87.58 & 44.36 & 72.01 & 48.89 \\ & SPACE-2 \({}^{\star}\) & 81.91 & 88.31 & 88.85 & 95.22 & 88.85 & 54.41 & 79.55 & 50.70 \\ & Flan-T5 & 80.68 & 84.48 & 86.88 & 91.80 & 90.59 & 78.68 & 76.78 & 53.52 \\ & **UniPCM** & **83.66** & **90.16** & **90.05** & **95.78** & **92.62** & **83.27** & **79.63** & **53.73** \\ \hline \hline \multirow{6}{*}{Full data} & T5 & 85.70 & 92.60 & 91.07 & 96.49 & 95.95 & 93.60 & 81.41 & 56.66 \\ & TOD-BERT \({}^{\star}\) & 85.43 & 93.02 & 89.87 & 95.93 & 95.53 & 90.05 & 81.90 & 56.30 \\ & ConvBERT \({}^{\star}\) & 86.17 & 93.44 & 92.38 & 97.11 & 95.44 & 91.20 & 82.08 & 56.56 \\ & SPACE-2\({}^{\star}\) & 87.56 & **94.77** & **94.33** & **97.80** & 96.20 & 91.38 & 82.74 & **59.51** \\ & Flan-T5 & 86.99 & 93.47 & 92.37 & 96.71 & 96.41 & 94.51 & 84.32 & 58.68 \\ & **UniPCM** & **87.59** & 94.41 & 93.40 & 97.47 & **96.92** & **96.15** & **84.58** & 58.76 \\ \hline \hline \end{tabular} \end{table} Table 1: Results on seven datasets from the DialoGLUE benchmark in the low-resource and full data settings. \({}^{\star}\) denotes that the model is specialized for understanding tasks in TOD only. \({}^{\dagger}\) denotes that we fix a bug in the original scripts, resulting in a higher score on the DSTC8 dataset; we exclude this dataset from the avg score for fair comparison. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multicolumn{6}{c}{MultiWOZ2.0 End2End} \\ \hline Setting & Model & Inform & Success & BLEU & Combined score \\ \hline \multirow{6}{*}{Full data} & LABES & 68.5 & 58.1 & 18.9 & 82.2 \\ & SOLOIST & 82.3 & 72.4 & 13.6 & 90.9 \\ & UBAR & 83.4 & 70.3 & 17.6 & 94.4 \\ & PPTOD & 83.1 & 72.7 & 18.2 & 96.1 \\ & MTTOD & 85.9 & 76.5 & 19.0 & 100.2 \\ & UniPCM (ours) & **88.4** & **76.8** & **19.2** & **101.8** \\ \hline \multirow{2}{*}{Few shot(10\%)} & MTTOD & 66.8 & 52.8 & **15.7** & 75.5 \\ & UniPCM (ours) & **68.4** & **57.2** & 14.9 & **77.7** \\ \hline \hline \end{tabular} \end{table} Table 2: Full data and few-shot results on the MultiWOZ2.0 End2End task; Inform, Success, BLEU and combined score are reported. 
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multicolumn{2}{c}{} & \multicolumn{3}{c}{PersonaChat} & \multicolumn{3}{c}{DailyDialog} \\ \hline Setting & Model & BLEU & F1 & Rouge-L & BLEU & F1 & Rouge-L \\ \hline \multirow{4}{*}{Zero-shot} & T5 (Baseline) & 0.94 & 15.24 & 9.16 & 2.59 & 9.76 & 8.51 \\ & PPTOD & 0.7 & 13.3 & 10.3 & 0.9 & 10.44 & 10.14 \\ & DialoGPT & 0.57 & 9.61 & 11.33 & 0.85 & 11.8 & 9.0 \\ & UniPCM (ours) & **1.56** & **14.5** & **13.2** & **9.85** & **17.82** & **12.84** \\ \hline \multirow{4}{*}{Few-shot(10\%)} & T5 (Baseline) & 1.76 & 17.8 & 18.14 & 0.53 & 12.62 & 16.64 \\ & PPTOD & 1.85 & 17.4 & 17.35 & 0.97 & 14.58 & 17.65 \\ & DialoGPT & 1.25 & 14.9 & 18.95 & 0.77 & 10.55 & 18.60 \\ & UniPCM (ours) & **2.41** & **19.16** & **18.31** & **0.81** & **18.04** & **21.23** \\ \hline \hline \end{tabular} \end{table} Table 3: Few-shot and zero-shot results on the PersonaChat and DailyDialog datasets (task: chit-chat). BLEU, word-level F1 and Rouge-L are reported. on combined score, respectively. The model's improvements mainly fall in Inform and Success, implying that the pre-training improves the model's dialog understanding and decision-making abilities. Meanwhile, the few-shot improvements are not as remarkable as on the DialoGLUE datasets, probably because of the delexicalization preprocessing used in MultiWOZ (Zhang et al., 2020), which makes the language in this dataset slightly different from that of the other pre-training datasets. #### 4.3.3 Chit-chat Results As shown in Table 3, UniPCM consistently improves over all of the baseline results in the zero-shot and few-shot settings on PersonaChat and DailyDialog. The results imply that combining open-domain chat datasets in the multi-task pre-training procedure improves the model's open-domain chatting ability. Meanwhile, the performance of PPTOD, a model trained on task-oriented dialog datasets only, does not improve over the T5 baseline on chit-chat tasks, which shows the importance of including open-domain chit-chat tasks in pre-training. ### Analysis and Ablation Study #### 4.4.1 Ablation study for UniPCM in few-shot setting Table 1 shows that UniPCM excels in the few-shot setting; here we analyze why UniPCM achieves such strong few-shot performance. We draw three main conclusions from the ablation study shown in Table 6: **(1) Using multi-prompt training in the finetuning stage greatly helps the model's performance in the few-shot setting**, achieving a 2.98% gain. **(2) Using multi-prompt training in the pre-training stage helps the model learn better in the multi-task scenario.** Although using one human-written prompt in the pre-training stage improves the dialog understanding ability by 1.20%, using multi-prompt training in the pre-training stage improves the results by **3.08%**, which shows that multi-prompt training in the pre-training stage greatly benefits the model's performance on downstream tasks. **(3) PET (introduced in Sec. 3.3) helps in the low-resource setting.** Adding PET improves the results by 1.08% over this strong baseline, which shows that our generated prompts can help the model better utilize unlabeled data through PET. 
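For concreteness, the pseudo-labeling step behind these PET gains (Eqs. (3)-(4) in Sec. 3.3) can be sketched as follows. This is a minimal illustrative sketch rather than our released implementation; `sample_label` is a hypothetical wrapper around sampled decoding from a voting model.

```python
from collections import Counter

def pseudo_label(voting_models, prompt_sets, x, samples_per_prompt=1):
    """Majority-vote ensembling over voting models and their prompt
    subsets (Eq. 4); voting model l was finetuned on partition P_l (Eq. 3).
    """
    votes = Counter()
    for model, prompts in zip(voting_models, prompt_sets):
        for prompt in prompts:
            # Sampled (non-greedy) decoding increases label diversity and
            # exposes instances the ensemble is uncertain about.
            for _ in range(samples_per_prompt):
                votes[sample_label(model, x, prompt)] += 1
    label, count = votes.most_common(1)[0]
    confidence = count / sum(votes.values())
    return label, confidence  # low-confidence pseudo labels can be discarded
```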
\begin{table} \begin{tabular}{c|c c c c c c c c c c c} \hline \hline Task type & Intent & Dialog state tracking & Emotion & Summary & Question answering & Generation & Response & Multiple choice & Text2sql & Grounded dialog & Total \\ \hline Tasks & Intent & DST, slot filling & Emotion & Summary & DialQA, DiscQA & Generation & Response, Chat & Multiple choice & Text2sql & TOD, US, RO-chat & 15 \\ Number of prompts & 37 & 33 & 14 & 11 & 35 & 51 & 27 & 39 & 29 & 27 & 303 \\ Number of datasets & 22 & 21 & 7 & 5 & 12 & 4 & 23 & 3 & 2 & 23 & 122 \\ Number of instances & 1,822,413 & 4,822,314 & 171,353 & 489,995 & 400,881 & 198,999 & 16,353,894 & 44,902 & 19,009 & 2,959,798 & 26,652,486 \\ \hline \hline \end{tabular} \end{table} Table 4: Statistics of tasks, datasets, and prompts in UniPreDial. \begin{table} \begin{tabular}{c|l} \hline **Task** & **Datasets** \\ \hline **Natural language generation** & End-by Gen \end{tabular} \end{table} Table 5: Details of the tasks and datasets in UniPreDial. #### 4.4.2 Finetuning with multiple prompts. Although we have shown in Table 6 that multi-prompt training greatly improves the model's finetuning performance in the few-shot setting, it is not clear how the number of available prompts influences the final results. From Table 7, we can see that simply applying 1 prompt increases test accuracy by 2.306%. Moreover, applying a small number of prompts (7) greatly improves test accuracy (+4.643%). Manually selecting prompts deemed better by human experts does not help much (+0.323%), and using a large number of prompts (25) improves only slightly over the 7-prompt result (+0.811%). Therefore, in PET we use subsets of prompts to finetune the voting models, which yields the best performance. ### Automatically Generated Prompts Using the 494 keywords extracted from the Super-Instruction datasets (Wang et al., 2022), we generate 3423 prompts on 74 tasks. However, as our work mainly focuses on pre-training a conversation model, we mainly evaluate the 303 prompts used in pre-training. The rest of the generated prompts will be released with our code and can be studied further. #### 4.5.1 Visualization of Generated Prompts To better understand the distribution of the prompts in the latent space, we visualize the generated prompts with t-SNE (Van der Maaten and Hinton, 2008). As illustrated in Figure 3, we use embeddings from language models to approximate the latent space, as the true latent embeddings are not available (a minimal reproduction sketch is given after the prompt-quality definitions in Sec. 4.5.2). The results show that our generated prompts are task-centric, yet diverse. Moreover, comparing the embeddings from our pre-trained model and the T5-base model, we can see that pre-training makes the prompt embeddings of the same task cluster, meaning that the model understands the relation between tasks and prompts better after pre-training. #### 4.5.2 Human Evaluation We perform human evaluation to comprehensively evaluate the quality of the generated prompts. We sum up three key characteristics of good prompts, task-specificity, coherency and fluency, defined as follows: **Task-specificity**: whether the prompt accurately reflects the essence of the task. **Coherency**: whether the prompt can form coherent sequences with most of the inputs and outputs. **Fluency**: whether the prompt itself is grammatically correct and fluent. 
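Returning to the visualization in Sec. 4.5.1, the prompt-embedding projection of Figure 3 can be approximated as sketched below. Mean-pooled T5 encoder states are used here as the embedding function purely for illustration; the exact embedding used in our experiments may differ.

```python
import torch
from sklearn.manifold import TSNE
from transformers import AutoTokenizer, T5EncoderModel
import matplotlib.pyplot as plt

def embed_prompts(prompts, model_name="t5-base"):
    """Mean-pooled T5 encoder states as a proxy for the latent space."""
    tok = AutoTokenizer.from_pretrained(model_name)
    enc = T5EncoderModel.from_pretrained(model_name).eval()
    with torch.no_grad():
        batch = tok(prompts, padding=True, truncation=True, return_tensors="pt")
        hidden = enc(**batch).last_hidden_state        # (B, L, H)
        mask = batch["attention_mask"].unsqueeze(-1)   # (B, L, 1)
        return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

def plot_prompt_tsne(prompts, task_ids):
    """2-D t-SNE projection of prompt embeddings, colored by task."""
    emb = embed_prompts(prompts)
    # perplexity must be smaller than the number of prompts
    pts = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(emb)
    plt.scatter(pts[:, 0], pts[:, 1], c=task_ids, cmap="tab10", s=12)
    plt.title("Prompt embeddings (t-SNE)")
    plt.show()
```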
Experts in dialog systems are asked to assign scores of 0, 1 or 2 for the three metrics to the prompts generated by TAP and to crowdsourced human-written prompts randomly selected from Promptsource (Bach et al., 2022), with the average scores reported in Table 8. The results show that our generated prompts are superior to the crowdsourced human-written prompts, \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Method & avg & BANKING77 & HWU64 & CLINC150 & REST8K & DSTC8 & TOP \\ \hline T5 & 76.52 & 76.01 & 81.77 & 88.36 & 85.31 & 74.72 & 76.03 \\ +MP & 79.50 & 87.37 & 85.56 & 94.87 & 83.24 & 76.20 & 76.90 \\ +MP+PT & 82.30 & 83.50 & 85.66 & 92.73 & 90.96 & 94.01 & 76.64 \\ +MP+MPPT & 82.58 & 82.92 & 86.74 & 94.76 & 91.55 & 82.87 & 78.75 \\ +MP+MPPT+PET\({}^{*}\) & **83.66** & **90.16** & **90.05** & **95.78** & **92.62** & **83.27** & **79.63** \\ \hline \hline \end{tabular} \end{table} Table 6: Ablation study on six datasets from the DialoGLUE benchmark in the low-resource setting (10-shot data). MP means multi-prompt training in the finetuning stage, PT means pre-training, MPPT means multi-prompt training in the pre-training stage. \({}^{*}\) denotes the full UniPCM. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Number of Prompts & 0 & 1(avg) & 7(random) & 7(selected) & 25 \\ \hline Test Acc & 76.006 & 78.312 & 82.955 & 83.279 & **83.766** \\ \hline \hline \end{tabular} \end{table} Table 7: Few-shot(10%) results on the BANKING77 dataset using different numbers of prompts. For the 1-prompt setting, we report the average score over randomly selected prompts to reduce variance. Figure 3: Prompt embeddings in the latent space using t-SNE visualization. The T5-base model and our pre-trained model are used to approximate the latent space in (a) and (b), respectively. improving the task-specificity, coherency, and fluency by **9.95%**, **10.97%** and **7.59%**, respectively. Moreover, the results show that by modeling the task in TAP, the generated prompts focus on the task better, while using the input-output pairs in the automatic prompt generation procedure makes the generated prompts fit the context better, resulting in higher gains in task-specificity and coherency. #### 4.5.3 Results on Downstream Tasks Besides human evaluation, we measure the quality of the generated prompts using downstream finetuning results. A T5 model is finetuned on downstream tasks with the generated prompts using multi-prompt training (Eq. 3). We compare our automatically generated prompts with crowdsourced prompts from Promptsource (Bach et al., 2022). Moreover, to illustrate the importance of modeling the task in TAP, we also generate prompts without task-related information, i.e. without the keywords, as an ablation; this is the same as the method proposed in Gao et al. (2021). The results shown in Table 9 demonstrate the superiority of our automatically generated prompts over human-written prompts, improving test accuracy by **2.40%**. Meanwhile, modeling the task in TAP brings an improvement of 0.89%, which shows that task modeling is beneficial for generating higher-quality prompts. ## 5 Conclusion and Future Work This paper represents progress toward building high-quality dialog systems with multi-task prompt pre-training using automatically generated prompts. 
Based on a unified 'input-prompt-output' format, we generate high-quality prompts using the proposed automatic prompt generation method TAP and perform multi-task prompt pre-training using the proposed multi-prompt training mechanism, resulting in a powerful pre-trained conversation model, UniPCM. Extensive experiments demonstrate that UniPCM is robust to input prompts, capable of performing various dialog-related tasks, and has strong transfer ability, particularly in low-resource scenarios. We hope our pre-trained model UniPCM, as well as the collected datasets, will help researchers build better dialog systems. Furthermore, since multi-task prompt pre-training is now widely used, we hope our automatic prompt generation method TAP, as well as the high-quality prompts generated, will encourage the community to further explore the limits of multi-task prompt pre-training.
2309.05010
Absence of quantum optical coherence in high harmonic generation
The optical phase of the driving field in the process of high harmonic generation and the coherence properties of the harmonics are fundamental concepts in attosecond physics. Here, we consider driving the process with incoherent classical and non-classical light fields exhibiting an undetermined optical phase. With this we introduce the notion of quantum optical coherence into high harmonic generation, and show that high harmonics can be generated from incoherent radiation despite its vanishing mean electric field. We explicitly derive the quantum state of the harmonics when driven by carrier-envelope phase unstable fields and show that the generated harmonics are incoherent and exhibit zero electric field amplitudes. We find that the quantum state of each harmonic is diagonal in its photon number basis, but nevertheless has the exact same photon statistics as the widely considered coherent harmonics. From this we conclude that assuming coherent harmonic radiation can originate from a preferred ensemble fallacy. These findings have profound implications for attosecond experiments and for how to infer the harmonic radiation properties.
Philipp Stammer
2023-09-10T12:16:32Z
http://arxiv.org/abs/2309.05010v4
# On the role of the optical phase and quantum coherence in high harmonic generation ###### Abstract In this work we analyze the role of the optical phase and coherence of the driving field in the process of high harmonic generation. We consider driving the process of high harmonic generation with incoherent classical and non-classical intense light fields, and show that harmonic radiation can be generated even in cases where the phase of the driving field is completely undetermined, leading to vanishing mean electric field values. This implies that quantum optical coherence in the driving field is not necessary for generating high harmonic radiation, with the consequence that the emitted harmonic radiation in those cases does likewise not exhibit quantum optical coherence. We further show that the final quantum state of each harmonic is diagonal in the photon number basis, from which we conclude that the measurement of the high harmonic spectrum alone does not allow one to infer the coherence properties of the harmonic radiation. _Introduction._ High harmonic generation (HHG) is a parametric process in which an intense driving field is frequency up-converted, with the resulting harmonic spectrum extending towards very high non-linear orders ranging from the infrared to the extreme-ultraviolet regime. In conventional HHG experiments the process is driven by a classical light source provided by a laser, while the description has almost exclusively focused on semi-classical approaches [1]. Furthermore, full quantum optical methods show that the generated harmonic radiation is coherent, with the quantum state of the field modes given by product coherent states [2; 3; 4; 5; 6]. This result holds in the limit of vanishing dipole moment correlations [4; 7] and under the assumption that the experimental boundary condition for the initial state of the driving field is given by a coherent state. This assumption of an initial pure coherent state leads to a well defined phase in the associated classical driving field [8; 9] bridging the gap to the semi-classical picture [9]. Closely related to the optical phase is the concept of optical coherence, which is associated with the statistical properties of the fluctuations of the light field [10; 11]. Both of these concepts, the phase of the field and quantum coherence, will be scrutinized in this work. Of particular interest is the quantum optical coherence associated with the off-diagonal density matrix elements in the photon number basis of the corresponding field state. The discussion about the existence of optical coherence was initiated in [12; 13], with subsequent studies on the relevance of this optical coherence for quantum information processing protocols [14], and caused a debate about the proper description of the quantum state of a laser field [12; 14; 15; 16; 17]. Approaches going beyond the semi-classical perspective for the description of HHG considered the quantum optical analog of driving the process with classical laser radiation given by coherent states [2; 3; 4; 5; 6; 7; 18; 19], showing that the harmonic radiation is coherent as well. Even further, recent work on the quantum optical description of HHG studied the process when driving with non-classical states of light [20; 21], which allows one to consider light fields without a well defined phase. For instance, they considered light fields with a well defined photon number, making the phase of the field arbitrary. Further, this approach allows one to consider light states with vanishing quantum optical coherence, i.e. 
a diagonal density matrix in the photon number basis, leading to a vanishing mean electric field value [9]. However, the approach considered in the present work allows us to pose questions such as: _Can HHG be driven by light fields without quantum optical coherence, and if so, is the harmonic radiation coherent? For the experimental consequences, can we infer the coherence properties of the harmonic radiation from the measurement of the HHG spectrum?_ In the following we will give definite answers to those questions, which have not been posed before. This is particularly important for virtually all HHG experiments in which the spectrum is measured, in order to avoid a preferred ensemble fallacy in the interpretation of the measurement data, and it provides further insights into the radiation properties and the structure of the quantum state generated by HHG. Controlling the quantum state of the harmonic field modes is of current interest since the domain of strong field physics has recently become a tool for quantum state engineering [2; 6] of high photon number entangled states [4; 7] and coherent state superpositions in terms of optical cat states with photon numbers sufficient to induce non-linear processes [22]. Further, driving HHG in solid-state [23; 24] or strongly correlated materials [25] allows one to obtain possibly interesting field states. Understanding the coherence properties of the generated harmonic radiation and deriving the associated quantum state are essential for connecting strong field physics with quantum information science [26; 27; 28]. _HHG driven by coherent light._ Before analyzing the process of HHG driven by incoherent radiation, we first consider the case of driving the atom by classical coherent laser light. The quantum optical description of the experimental boundary condition of the coherent driving laser is given by an initial coherent state \(|\alpha\rangle\), while the harmonic field modes \(q\) are considered to be in the vacuum \(|\{0_{q}\}\rangle=\otimes_{q}|0_{q}\rangle\). The coupling of the optical field modes to the electron is taken into account within the dipole approximation with the interaction Hamiltonian \(H_{I}=-dE_{Q}(t)\), and electric field operator \[E_{Q}(t)=-i\kappa\sum_{q}\sqrt{q}\left(a_{q}^{\dagger}e^{i\omega_{q}t}-a_{q}e^{ -i\omega_{q}t}\right), \tag{1}\] where \(\kappa\propto 1/\sqrt{V}\) depends on the quantization volume \(V\). To solve the dynamics for the field modes a unitary transformation is performed [2; 6], which shifts the initial state of the driving field mode to the origin in phase space. This is done by using the displacement operator \(D(\alpha)\) such that the interaction Hamiltonian obtains an additional term \(H_{cl}(t)=-dE_{cl}(t)\), and the new initial state of the driving mode is given by the vacuum \(D^{\dagger}(\alpha)\left|\alpha\right\rangle=|0\rangle\). This new term takes into account the fact that the initial driving laser mode is given by a coherent state and leads to the semi-classical interaction of the electron dipole moment with the classical electric field \[E_{cl}(t)=\mathrm{Tr}[E_{Q}(t)\left|\alpha\right\rangle\!\!\left\langle\alpha \right|]=i\kappa\left(\alpha e^{-i\omega t}-\alpha^{*}e^{i\omega t}\right), \tag{2}\] associated with the driving laser. This unitary transformation defines a semi-classical reference frame, which is unique for a pure coherent state initial condition since the phase \(\phi=\mathrm{arg}(\alpha)\) of \(\left|\alpha\right\rangle\) is well defined [8; 9]. 
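For completeness, recall the standard displacement identity behind this frame change, stated here for the driven mode: \[D^{\dagger}(\alpha)\,a\,D(\alpha)=a+\alpha,\qquad D^{\dagger}(\alpha)\,a^{\dagger}\,D(\alpha)=a^{\dagger}+\alpha^{*},\] so that transforming the field operator (1) yields \(D^{\dagger}(\alpha)E_{Q}(t)D(\alpha)=E_{Q}(t)+E_{cl}(t)\) with \(E_{cl}(t)\) as in (2), which is precisely how the additional semi-classical term \(H_{cl}(t)=-dE_{cl}(t)\) arises from \(H_{I}=-dE_{Q}(t)\). 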
Within this frame the dynamics of the optical field modes conditioned on HHG can be solved such that the evolution is given by a multi-mode displacement operation [6]. The final state of the harmonic field modes after the interaction is thus given by product coherent states \[|\{0_{q}\}\rangle\rightarrow\prod_{q}D(\chi_{q})\left|\{0_{q}\}\right\rangle =|\{\chi_{q}\}\rangle\,, \tag{3}\] with the amplitudes proportional to the Fourier transform of the time-dependent dipole moment expectation value in the ground state \[\chi_{q}=-i\sqrt{q}\int dt\left\langle d(t)\right\rangle e^{i\omega_{q}t}, \tag{4}\] for the electron driven by the classical field (2). The fact that the final state is a pure state in terms of product coherent states comes from neglecting dipole moment correlations during the evolution [4; 7; 29]. This holds for small depletion of the electronic ground state, and it was shown that taking into account these dipole moment correlations leads to entanglement and squeezing of the optical field modes [30]. In the following we discuss how the description changes when considering driving fields without a well defined phase, such that the unitary transformation into the semi-classical frame is not uniquely defined anymore [9]. _Incoherent driving and the optical phase._ To describe the process of HHG driven by incoherent light we shall first consider a classical light field by means of the mixture of coherent states over all phases \[\rho_{|\alpha_{0}|}=\frac{1}{2\pi}\int_{0}^{2\pi}d\phi\left||\alpha_{0}|e^{i \phi}\rangle\!\!\left\langle|\alpha_{0}|e^{i\phi}\right|, \tag{5}\] which in contrast to a pure coherent state \(|\alpha\rangle\) has an arbitrary phase \(\phi\). Due to the totally undetermined phase of the field, this state does not allow one to uniquely define a semi-classical frame by means of the unitary displacement operation \(D(\alpha)\). A consequence is that this field has a vanishing mean electric field value at all times \[E_{cl}(t)=\mathrm{Tr}\big{[}E_{Q}(t)\rho_{|\alpha_{0}|}\big{]}=0, \tag{6}\] and the implications for the underlying semi-classical picture of HHG were discussed in [9]. However, despite the absence of a unique semi-classical frame one can express the initial state of the driving field in terms of phase-space distributions, which allows one to decompose the field in terms of coherent states. Here, we shall focus on the generalized \(P\)-distribution \(P(\alpha,\beta^{*})\), allowing one to write a quantum state in terms of a unique, positive and finite distribution function [31; 32; 33] \[\rho=\int d^{2}\alpha d^{2}\beta P(\alpha,\beta^{*})\frac{|\alpha\rangle\left\langle \beta\right|}{\left\langle\beta|\alpha\right\rangle}. \tag{7}\] This allows one to solve the HHG dynamics for an arbitrary initial light field [20], in close analogy to the approach used for a coherent state initial condition. The difference using the generalized \(P\)-representation is that there is not a single coherent state contribution, but due to the decomposition in (7) each contribution of the coherent states \(|\alpha\rangle\) and \(|\beta\rangle\) driving the electron can be solved separately under the same approximations as in [2; 3; 5]. 
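As an explicit check of the vanishing mean field (6), inserting the mixture (5) and using (2) for each phase gives \[\mathrm{Tr}\big[E_{Q}(t)\rho_{|\alpha_{0}|}\big]=\frac{i\kappa}{2\pi}\int_{0}^{2\pi}d\phi\left(|\alpha_{0}|e^{i\phi}e^{-i\omega t}-|\alpha_{0}|e^{-i\phi}e^{i\omega t}\right)=0,\] since \(\int_{0}^{2\pi}e^{\pm i\phi}d\phi=0\); the phase average annihilates the mean field, while quantities quadratic in the field amplitude, such as the intensity, survive. 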
To derive the final field state generated from the electron currents driven by the distribution of intense fields, we use the general relation [33; 34] \[P(\alpha,\beta^{*})=\frac{1}{4\pi}e^{-\frac{|\alpha-\beta^{*}|^{2}}{4}}Q\left( \frac{\alpha+\beta^{*}}{2}\right), \tag{8}\] where \(Q(\alpha)=\frac{1}{\pi}\left\langle\alpha\right|\rho\left|\alpha\right\rangle\) is the Husimi \(Q\) function of the driving field mode. Further, we take into account that the process is driven by light fields with sufficiently high intensities for generating harmonic radiation in a large enough quantization volume [35; 20]. Hence, we consider the limit \(\kappa\to 0\) and \(\alpha\rightarrow\infty\) such that the physical electric field amplitude \(\mathcal{E}_{\alpha}=2\kappa\alpha\) remains finite, and evaluate the limits of the product in (8) separately \[\lim_{\kappa\to 0}\frac{1}{4\pi\kappa^{2}}e^{-\frac{\left|\mathcal{E}_{ \alpha}-\mathcal{E}_{\beta^{*}}\right|^{2}}{16\kappa^{2}}}=\delta^{(2)}( \alpha-\beta^{*}). \tag{9}\] Solving the dynamics of the electron currents and using the aforementioned limit, we find that the final field state after the end of the pulse is given by \[\rho=\int d^{2}\alpha Q(\alpha)\prod_{q}|\chi_{q}(\alpha)\rangle\!\!\left\langle \chi_{q}(\alpha)\right|. \tag{10}\] This final state describes an incoherent mixture over the driving field distribution \(Q(\alpha)\), with product coherent states for each component of the driving field decomposition. The amplitudes are similar to before, \[\chi_{q}(\alpha)=-i\sqrt{q}\int dt\left\langle d_{\alpha}(t)\right\rangle e^{i \omega_{q}t}, \tag{11}\] where \(\left\langle d_{\alpha}(t)\right\rangle\) is the time-dependent dipole moment expectation value of the electron driven by the classical field of associated coherent state amplitude \(\alpha\) from the decomposition of the initial driving field via \(Q(\alpha)\). The coherent state amplitudes of the harmonic modes are the same as in the case of the pure coherent state driving field, except that the final state in (10) is incoherently mixed over the different coherent state contributions. With the final field state in (10) we can now compute the HHG spectrum \(S(\omega_{q})\propto\left\langle a_{q}^{\dagger}a_{q}\right\rangle\) for an arbitrary driving field \[\left\langle a_{q}^{\dagger}a_{q}\right\rangle=\int d^{2}\alpha Q(\alpha)| \chi_{q}(\alpha)|^{2}, \tag{12}\] which is an incoherent average over the amplitudes \(\left|\chi_{q}(\alpha)\right|^{2}\) weighted by the Husimi distribution \(Q(\alpha)\). Using that the Husimi distribution for the incoherent drive in (5) is given by \[Q_{|\alpha_{0}|}(\alpha)=\frac{1}{2\pi^{2}}\int_{0}^{2\pi}d\phi e^{-|\alpha- \alpha_{0}(\phi)|^{2}}, \tag{13}\] we have \[\left\langle a_{q}^{\dagger}a_{q}\right\rangle=\frac{1}{2\pi^{2}}\int_{0}^{2 \pi}d\phi\int d^{2}\alpha e^{-|\alpha-\alpha_{0}(\phi)|^{2}}|\chi_{q}(\alpha) |^{2}. \tag{14}\] Since both \(Q_{|\alpha_{0}|}(\alpha)\geq 0\) and \(\left|\chi_{q}(\alpha)\right|^{2}\geq 0\) for all \(\alpha\) we find, despite the averaging over the phase \(\phi\), that the spectrum is non-vanishing. This is particularly interesting because, in contrast to the vanishing mean electric field value (6), the spectrum does not vanish when averaging over all phases [9]. 
This is the case because we incoherently average over the positive distribution \(Q(\alpha)\), which does not allow for interference between the different contributions, and thus there is no possible cancellation of different dipole currents of opposite phase. This is in fact a consequence of the limit performed in (9), which holds for sufficiently intense fields and is necessary to drive the highly non-linear process of HHG. So far we have analyzed driving HHG by a classical field without optical coherence given by \(\rho_{|\alpha_{0}|}\). We shall now consider a genuinely non-classical field state without optical coherence by means of a photon number state \(\left|n\right\rangle\) with sufficient intensity (limit of large \(n\)). Since (10) is the general solution for an arbitrary intense light field, we can use that the \(Q\) function for the photon number state is given by \[Q_{n}(\alpha)=\frac{1}{\pi}\frac{\left|\alpha\right|^{2n}}{n!}e^{-|\alpha|^{2 }}, \tag{15}\] such that the final state reads \[\rho=\frac{1}{\pi}\int d^{2}\alpha\frac{\left|\alpha\right|^{2n}}{n!}e^{-| \alpha|^{2}}\prod_{q}\left|\chi_{q}(\alpha)\right\rangle\!\!\left\langle\chi_ {q}(\alpha)\right|. \tag{16}\] The HHG spectrum obtained from this state is proportional to \[\left\langle a_{q}^{\dagger}a_{q}\right\rangle_{n}=\frac{1}{\pi}\int d^{2} \alpha\frac{\left|\alpha\right|^{2n}}{n!}e^{-|\alpha|^{2}}|\chi_{q}(\alpha)|^ {2}, \tag{17}\] which suggests that intense photon number states can drive the process of HHG [20]. However, there is an interesting observation if one consistently considers the limit used to obtain (10), which is given by \(\kappa\to 0\) for constant \(\mathcal{E}_{\alpha}=2\kappa\alpha\). We can write the Husimi function \(Q_{n}(\alpha)\) in terms of the field amplitude \(\mathcal{E}_{\alpha}\) and take the respective limit such that \[\lim_{\kappa\to 0}Q_{n}(\mathcal{E}_{\alpha}/(2\kappa))\frac{d^{2}\mathcal{E}_{ \alpha}}{4\kappa^{2}}\propto\left|\mathcal{E}_{\alpha}\right|^{2n}\delta^{(2) }(\mathcal{E}_{\alpha})d^{2}\mathcal{E}_{\alpha}, \tag{18}\] and consequently the HHG spectrum would read \[\left\langle a_{q}^{\dagger}a_{q}\right\rangle_{n} \propto\int d^{2}\mathcal{E}_{\alpha}|\mathcal{E}_{\alpha}|^{2n} \delta^{(2)}(\mathcal{E}_{\alpha})|\chi_{q}(\mathcal{E}_{\alpha})|^{2}\] \[=\left[|\mathcal{E}_{\alpha}|^{2n}|\chi_{q}(\mathcal{E}_{\alpha}) |^{2}\right]\Big{|}_{\mathcal{E}_{\alpha}=0}. \tag{19}\] This corresponds to the harmonic amplitudes \(\chi_{q}(\mathcal{E}_{\alpha})\) and the physical electric field amplitude \(\mathcal{E}_{\alpha}\) evaluated at \(\mathcal{E}_{\alpha}=0\). However, the harmonic amplitudes (11), obtained from the semi-classical dipole moment expectation value driven by the classical field \(\mathcal{E}_{\alpha}=0\), already vanish, and thus the harmonic spectrum vanishes. This implies that photon number states are not capable of driving the process of HHG in the limit used to obtain the general result (10). _Optical coherence in HHG._ We have seen that driving the process of HHG with a mixture of coherent states over all phases \(\rho_{|\alpha_{0}|}\) is still possible despite the vanishing mean electric field amplitude. In the following we discuss another crucial consequence of this observation. Interestingly, the mixed driving state in (5) does not exhibit quantum optical coherence in the sense of non-vanishing off-diagonal density matrix elements in the photon number basis. 
This can be seen when rewriting the mixture \[\rho_{|\alpha_{0}|}=e^{-|\alpha_{0}|^{2}}\sum_{n}\frac{\left|\alpha_{0}\right| ^{2n}}{n!}\left|n\rangle\!\!\left\langle n\right|, \tag{20}\] which is diagonal in the Fock basis and therefore does not have quantum optical coherence [12; 16]. Since we have seen that this initial field state allows the generation of high harmonic radiation for sufficiently large field intensities, it is now interesting to analyze the coherence properties of the harmonic radiation in the case of driving the process by light fields without optical coherence. This allows us to answer the question of _whether the harmonic radiation is coherent when driven by incoherent radiation_. We therefore look at a single harmonic mode \(q\) by tracing the state (10) over the remaining modes \(q^{\prime}\neq q\). Since each state in the mixture is a product state we have \[\rho_{q}=\mathrm{Tr}_{q^{\prime}\neq q}[\rho]=\int d^{2}\alpha Q_{|\alpha_{0}|}( \alpha)\ket{\chi_{q}(\alpha)}\!\!\bra{\chi_{q}(\alpha)}. \tag{21}\] We can now use the \(Q\) function for the mixed initial state, and that in the limit of large field amplitudes \(\mathcal{E}_{\alpha}\) considered above each exponential can be written as a \(\delta\)-function \[\lim_{\kappa\to 0}\frac{d^{2}\mathcal{E}_{\alpha}}{4\pi\kappa^{2}}e^{-\frac{| \mathcal{E}_{\alpha}-\mathcal{E}_{\alpha_{0}}(\phi)|^{2}}{4\kappa^{2}}}=\delta ^{(2)}(\mathcal{E}_{\alpha}-\mathcal{E}_{\alpha_{0}}(\phi))d^{2}\mathcal{E}_{ \alpha}, \tag{22}\] such that we have \[\rho_{q}=\frac{1}{2\pi}\int_{0}^{2\pi}d\phi\ket{\chi_{q}(\mathcal{E}_{\alpha_{ 0}}(\phi))}\!\bra{\chi_{q}(\mathcal{E}_{\alpha_{0}}(\phi))}. \tag{23}\] Expressing the state in the photon number basis we find \[\rho_{q}=\frac{1}{2\pi^{2}}\int_{0}^{2\pi}d\phi e^{-\ket{\chi_{q} (\phi)}^{2}}\sum_{n,m}\frac{(\chi_{q}(\phi))^{n}(\chi_{q}^{*}(\phi))^{m}}{\sqrt {n!m!}}\ket{n}\!\!\bra{m}, \tag{24}\] where we have introduced the short hand notation \(\chi_{q}(\phi)=\chi_{q}(\mathcal{E}_{\alpha_{0}}(\phi))\). To further simplify the expression we use that, for pulses of more than just a few cycles, the phase of the driving field, i.e. the carrier-envelope phase (CEP), only alters the phase of the induced dipole moment expectation value. Further, a different phase in the driving field can be seen as a time-delay \(\Delta t=\phi/\omega\), such that for the harmonic amplitude we have \[\chi_{q}(\phi) =-i\sqrt{q}\int dt\left\langle d_{|\alpha_{0}|}(t+\Delta t)\right\rangle e^{i\omega_{q }t}\] \[=e^{-i\frac{\omega_{q}}{\omega}\phi}\chi_{q}(|\alpha_{0}|). \tag{25}\] And finally, the state of each harmonic field mode is given by \[\rho_{q}=e^{-\ket{\chi_{q}(|\alpha_{0}|)}^{2}}\sum_{n}\frac{\ket{ \chi_{q}(|\alpha_{0}|)}^{2n}}{n!}\ket{n}\!\!\bra{n}, \tag{26}\] where we have used that \[\int_{0}^{2\pi}d\phi\,e^{-i\frac{\omega_{q}}{\omega}(n-m)\phi}=2\pi\delta_{n,m}, \tag{27}\] since \(\omega_{q}/\omega=q\) is an integer. We observe that each harmonic field mode is diagonal in its respective photon number basis and does not have quantum optical coherence by means of non-vanishing off-diagonal elements (the same would hold true for the case of an incoherent Fock state drive [36]). The observation that optical coherence is not required to drive the process of high harmonic generation provides interesting insights into the underlying mechanism. This is because the harmonic field modes are still given by coherent states, which are generated by classical charge currents emitting coherent radiation [37]. 
In the case of HHG it is the electron current driven by the intense field which generates the coherent radiation. However, due to the incoherent averaging over all phases of the driving field, and consequently over all phases of the induced charge current, the final state of the harmonic field modes is incoherent, i.e. diagonal in the respective photon number basis. We emphasize that this incoherent state of each harmonic field mode arises despite the fact that the final state of all modes is a product state, see Eq. (10), and no entanglement between the field modes was considered. Note that the final field state can be entangled when taking into account dipole moment correlations [30], which would also lead to mixed final states for each mode. However, the effect considered here solely originates from the properties of the driving field and the role of the optical phase and coherence as discussed above. _Optical coherence and the HHG spectrum._ We now use this result to show explicitly that conclusions about the coherence properties of the harmonic radiation in experiments which solely measure the HHG spectrum are fallacious. This is particularly important because in virtually all HHG experiments the generated harmonic radiation is assumed to be coherent, and we shall now show that inferring the coherence of the harmonics from the spectrum is not allowed, and thus the commonly used assumption is not justified. This is because the observer perspective by means of the spectrum itself does not distinguish between coherent and incoherent harmonic radiation: the incoherent distribution in (26) and a pure coherent state with the same amplitude lead to the same spectrum. In more detail, this can be seen when computing the average photon number for the harmonic field mode in a pure coherent state, \(\bra{\chi_{q}}a_{q}^{\dagger}a_{q}\ket{\chi_{q}}=|\chi_{q}|^{2}\), in comparison to the average for the incoherent state (26) given by \[\left\langle a_{q}^{\dagger}a_{q}\right\rangle=\mathrm{Tr}[a_{q}^{\dagger}a_{q }\rho_{q}]=|\chi_{q}|^{2}. \tag{28}\] From the observation that the pure coherent state \(\ket{\chi_{q}}\) and the incoherent state \(\rho_{q}\) have the same mean photon number (and the same photon number distribution), we can conclude that the most commonly used observable in HHG experiments, i.e. the HHG spectrum, is insensitive to the quantum optical coherence in the radiation field. This implies that inferring the coherence properties of the harmonic radiation from the HHG spectrum alone can be fallacious, by assuming a preferred ensemble in the description of the field state itself [15]. This is particularly interesting when considering that the proper way of describing a CEP unstable driving field is by the mixture \(\rho_{|\alpha_{0}|}\) rather than the pure state \(\ket{\alpha}\) with a well defined phase. A consequence of this is that interpreting the observation of the HHG spectrum in terms of incoherent radiation is just as correct as using coherent radiation. This is because the process of HHG and detection of the spectrum alone is insensitive to quantum optical coherence. Extending this analysis to other processes in attosecond science [38] or non-linear optics [39; 40], such as harmonic generation driven by non-classical light [41; 42], in which the field properties are discussed, can lead to a new examination of those properties and their interpretation. 
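This insensitivity can be made explicit at the level of the full photon number distribution: for the pure coherent state and for the diagonal state (26) one finds \[|\langle n|\chi_{q}\rangle|^{2}=e^{-|\chi_{q}|^{2}}\frac{|\chi_{q}|^{2n}}{n!}=\langle n|\rho_{q}|n\rangle,\] so every observable that is diagonal in the photon number basis, and in particular the HHG spectrum, takes identical values in both states; only phase-sensitive measurements, such as homodyne detection, could distinguish them. 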
_Conclusions._ The insights obtained when driving the process of HHG by incoherent radiation show that quantum optical coherence, in terms of non-vanishing off-diagonal density matrix elements in the photon number basis, is not required to generate high-order harmonics. However, the considerable difference to a coherent drive is that the emitted harmonic radiation is incoherent as well. One reason why optical coherence is not required to drive HHG is that the different contributions of the driving field, by means of the distribution over coherent states, couple diagonally (incoherently) to the charge which emits the harmonic radiation. This can be seen from (10), where the distribution of the incoherent average is given by the Husimi \(Q\) function and performed over the coherent states into which the driving field is decomposed. This holds in the limit of intense fields with large amplitudes necessary for driving HHG, and hence the off-diagonal elements vanish. The process of HHG is only coherent, by means of the emitted radiation due to the oscillating charge current of the electron, for a driving field with a well defined phase. Averaging over all phases leads to vanishing quantum optical coherence. This suggests that further investigation of the role of the optical phase from a quantum optical perspective can provide insights into the properties of the generated harmonic radiation. In particular, the role of the carrier-envelope phase (CEP) for ultrashort few-cycle pulses is of interest. Furthermore, I am curious to see: _what is the proper description of the experimental boundary condition, i.e. the quantum state, of an ultrashort few-cycle (CEP-stable) intense laser pulse?_ Moreover, this work shows that concluding on the coherence properties of the harmonic radiation from the observation of the spectrum alone, which is insensitive to the coherence of the field, is not possible without falling into a preferred ensemble fallacy. Finally, we emphasize that it is not only a fallacy to draw conclusions about the coherence properties of the harmonic radiation from the spectrum, but also about the mean field amplitude. The analysis in this work further illustrates that harmonic radiation does not necessarily possess an electric field amplitude, and thus challenges the common beliefs about the radiation properties of high harmonic generation. Therefore, we emphasize again that properties such as optical coherence depend on the observer perspective, i.e. the specific experiment to be performed. The observer perspective should be the first thing to be defined before talking about the properties of interest. ###### Acknowledgements. P.S. acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 847517. 
ICFO group acknowledges support from: ERC AdG NOQIA; MICIN/AEI (PGC2018-0910.13039/501100011033, CEX2019-000910-S/10.13039/501100011033, Plan National FIDEUA PID2019-106901GB-I00, FPI; MICIN with funding from European Union NextGenerationEU (PRTR-C17.I1): QUANTERA MAQS PCI2019-111828-2); MCIN/AEI/ 10.13039/501100011033 and by the "European Union NextGeneration EU/PRTR" QUANTERA DYNAMITE PCI2022-132919 within the QuantERA II Programme that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017733; Proyectos de I+D+I "Retos Colaboracion" QUSPIN RTC2019-007196-7); Fundacio Cellex; Fundacio Mir-Puig; Generalitat de Catalunya (European Social Fund FEDER and CERCA program, AGAUR Grant No. 2021 SGR 01452, QuantumCAT U16-011424, co-funded by ERDF Operational Program of Catalonia 2014-2020); Barcelona Supercomputing Center MareNostrum (FI-2023-1-0013); EU (PASQuanS2.1, 101113690); EU Horizon 2020 FET-OPEN OPTologic (Grant No 899794); EU Horizon Europe Program (Grant Agreement 101080086 - NeQST), National Science Centre, Poland (Symfonia Grant No. 2016/20/W/ST4/00314); ICFO Internal "QuantumGaudi" project; European Union's Horizon 2020 research and innovation program under the Marie-Sklodowska-Curie grant agreement No 101029393 (STREDCH) and No 847648 ("La Caixa" Junior Leaders fellowships ID100010434: LCF/BQ/PI19/11690013, LCF/BQ/PI20/11770012, LCF/BQ/PR21/11840013). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union, European Commission, European Climate, Infrastructure and Environment Executive Agency (CINEA), nor any other granting authority. Neither the European Union nor any granting authority can be held responsible for them.
2309.15217
RAGAS: Automated Evaluation of Retrieval Augmented Generation
We introduce RAGAs (Retrieval Augmented Generation Assessment), a framework for reference-free evaluation of Retrieval Augmented Generation (RAG) pipelines. RAG systems are composed of a retrieval and an LLM based generation module, and provide LLMs with knowledge from a reference textual database, which enables them to act as a natural language layer between a user and textual databases, reducing the risk of hallucinations. Evaluating RAG architectures is, however, challenging because there are several dimensions to consider: the ability of the retrieval system to identify relevant and focused context passages, the ability of the LLM to exploit such passages in a faithful way, or the quality of the generation itself. With RAGAs, we put forward a suite of metrics which can be used to evaluate these different dimensions \textit{without having to rely on ground truth human annotations}. We posit that such a framework can crucially contribute to faster evaluation cycles of RAG architectures, which is especially important given the fast adoption of LLMs.
Shahul Es, Jithin James, Luis Espinosa-Anke, Steven Schockaert
2023-09-26T19:23:54Z
http://arxiv.org/abs/2309.15217v1
# RAGAS: Automated Evaluation of Retrieval Augmented Generation ###### Abstract We introduce **RAGAS** (**R**etrieval **A**ugmented **G**eneration **A**ssessment), a framework for reference-free evaluation of Retrieval Augmented Generation (RAG) pipelines. RAG systems are composed of a retrieval and an LLM based generation module, and provide LLMs with knowledge from a reference textual database, which enables them to act as a natural language layer between a user and textual databases, reducing the risk of hallucinations. Evaluating RAG architectures is, however, challenging because there are several dimensions to consider: the ability of the retrieval system to identify relevant and focused context passages, the ability of the LLM to exploit such passages in a faithful way, or the quality of the generation itself. With RAGAs, we put forward a suite of metrics which can be used to evaluate these different dimensions _without having to rely on ground truth human annotations_. We posit that such a framework can crucially contribute to faster evaluation cycles of RAG architectures, which is especially important given the fast adoption of LLMs. ## 1 Introduction Language Models (LMs) capture a vast amount of knowledge about the world, which allows them to answer questions without accessing any external sources. This idea of LMs as repositories of knowledge emerged shortly after the introduction of BERT Devlin et al. (2019) and became more firmly established with the introduction of ever larger LMs Roberts et al. (2020). While the most recent Large Language Models (LLMs) capture enough knowledge to rival human performance across a wide variety of question answering benchmarks Bubeck et al. (2023), the idea of using LLMs as knowledge bases still has two fundamental limitations. First, LLMs are not able to answer questions about events that have happened after they were trained. Second, even the largest models struggle to memorise knowledge that is only rarely mentioned in the training corpus Kandpal et al. (2022); Mallen et al. (2023). The standard solution to these issues is to rely on _Retrieval Augmented Generation (RAG)_Lee et al. (2019); Lewis et al. (2020); Guu et al. (2020). Answering a question then essentially involves retrieving relevant passages from a corpus and feeding these passages, along with the original question, to the LM. While initial approaches relied on specialised LMs for retrieval-augmented language modelling Khandelwal et al. (2020); Borgeaud et al. (2022), recent work has suggested that simply adding retrieved documents to the input of a standard LM can also work well Khattab et al. (2022); Ram et al. (2023); Shi et al. (2023), thus making it possible to use retrieval-augmented strategies in combination with LLMs that are only available through APIs. While the usefulness of retrieval-augmented strategies is clear, their implementation requires a significant amount of tuning, as the overall performance will be affected by the retrieval model, the considered corpus, the LM, or the prompt formulation, among others. Automated evaluation of retrieval-augmented systems is thus paramount. In practice, RAG systems are often evaluated in terms of the language modelling task itself, i.e. by measuring perplexity on some reference corpus. However, such evaluations are not always predictive of downstream performance Wang et al. (2023). Moreover, this evaluation strategy relies on the LM probabilities, which are not accessible for some closed models (e.g. ChatGPT and GPT-4). 
Question answering is another common evaluation task, but usually only datasets with short extractive answers are considered, which may not be representative of how the system will be used. To address these issues, in this paper we present **RAGAs1**, a framework for the automated assessment of retrieval augmented generation systems. We focus on settings where reference answers may not be available, and where we want to estimate different proxies for correctness, in addition to the usefulness of the retrieved passages. The RAGAs framework provides an integration with both llama-index and Langchain, the most widely used frameworks for building RAG solutions, thus enabling developers to easily integrate RAGAs into their standard workflow. ## 2 Related Work **Estimating faithfulness using LLMs.** The problem of detecting hallucinations in LLM generated responses has been extensively studied (Ji et al., 2023). Several authors have suggested the idea of predicting factuality using a few-shot prompting strategy (Zhang et al., 2023). Recent analyses, however, suggest that existing models struggle with detecting hallucination when using standard prompting strategies (Li et al., 2023; Azaria and Mitchell, 2023). Other approaches rely on linking the generated responses to facts from an external knowledge base (Min et al., 2023), but this is not always possible. Yet another strategy is to inspect the probabilities assigned to individual tokens, where we would expect the model to be less confident in hallucinated answers than in factual ones. For instance, BARTScore (Yuan et al., 2021) estimates factuality by looking at the conditional probability of the generated text given the input. Kadavath et al. (2022) use a variation of this idea. Starting from the observation that LLMs provide well-calibrated probabilities when answering multiple-choice questions, they essentially convert the problem of validating model generated answers into a multiple-choice question which asks whether the answer is true or false. Rather than looking at the output probabilities, Azaria and Mitchell (2023) propose to train a supervised classifier on the weights from one of the hidden layers of the LLM, to predict whether a given statement is true or not. While the approach performs well, the need to access the hidden states of the model makes it unsuitable for systems that access LLMs through an API. For models that do not provide access to token probabilities, such as ChatGPT and GPT-4, different methods are needed. SelfCheckGPT (Manakul et al., 2023) addresses this problem by instead sampling multiple answers. Their core idea is that factual answers are more stable: when an answer is factual, we can expect that different samples will tend to be semantically similar, whereas this is less likely to be the case for hallucinated answers. **Automated evaluation of text generation systems.** LLMs have also been leveraged to automatically evaluate other aspects of generated text fragments, beyond factuality. For instance, GPTScore (Fu et al., 2023) uses a prompt that specifies the considered aspect (e.g. fluency) and then scores passages based on the average probability of the generated tokens, according to a given autoregressive LM. This idea of using prompts was previously also considered by Yuan et al. (2021), although they used a smaller fine-tuned LM (i.e. BART) and did not observe a clear benefit from using prompts. 
Another approach directly asks ChatGPT to evaluate a particular aspect of the given answer by providing a score between 0 and 100, or by providing a rating on a 5-star scale (Wang et al., 2023). Remarkably, strong results can be obtained in this way, although it comes with the limitation of being sensitive to the design of the prompt. Rather than scoring individual answers, some authors have also focused on using an LLM to select the best answer among a number of candidates (Wang et al., 2023), typically to compare the performance of different LLMs. However, care is needed with this approach, as the order in which the answers are presented can influence the result (Wang et al., 2023). In terms of how ground truth answers or, more generally, generations, have been typically used in the literature, most approaches have relied on the availability of one or more reference answers. For instance, BERTScore (Zhang et al., 2020) and MoverScore (Zhao et al., 2019) use contextualised embeddings, produced by a pre-trained BERT model, to compare the similarity between the generated answer and the reference answers. BARTScore (Yuan et al., 2021) similarly uses reference answers to compute aspects such as precision (estimated as the probability of generating the generated answer given the reference) and recall (estimated as the probability of generating the reference given the generated answer). ## 3 Evaluation Strategies We consider a standard RAG setting, where given a question \(q\), the system first retrieves some context \(c(q)\) and then uses the retrieved context to generate an answer \(a_{s}(q)\). When building a RAG system, we usually do not have access to human-annotated datasets or reference answers. We therefore focus on metrics that are fully self-contained and reference-free. We focus in particular on three quality aspects, which we argue are of central importance. First, **Faithfulness** refers to the idea that the answer should be grounded in the given context. This is important to avoid hallucinations, and to ensure that the retrieved context can act as a justification for the generated answer. Indeed, RAG systems are often used in applications where the factual consistency of the generated text w.r.t. the grounded sources is highly important, e.g. in domains such as law, where information is constantly evolving. Second, **Answer Relevance** refers to the idea that the generated answer should address the actual question that was provided. Finally, **Context Relevance** refers to the idea that the retrieved context should be focused, containing as little irrelevant information as possible. This is important given the cost associated with feeding long context passages to LLMs. Moreover, when context passages are too long, LLMs are often less effective in exploiting that context, especially for information that is provided in the middle of the context passage (Liu et al., 2023). We now explain how these three quality aspects can be measured in a fully automated way, by prompting an LLM. In our implementation and experiments, all prompts are evaluated using the gpt-3.5-turbo-16k model, which is available through the OpenAI API2. Footnote 2: [https://platform.openai.com](https://platform.openai.com) **Faithfulness.** We say that the answer \(a_{s}(q)\) is faithful to the context \(c(q)\) if the claims that are made in the answer can be inferred from the context. To estimate faithfulness, we first use an LLM to extract a set of statements, \(S(a_{s}(q))\). 
The aim of this step is to decompose longer sentences into shorter and more focused assertions. We use the following prompt for this step (Footnote 3):

Footnote 3: To help clarify the task, we include a demonstration as part of the prompt. This demonstration is not explicitly shown in the listing of the prompts throughout this paper.

_Given a question and answer, create one or more statements from each sentence in the given answer._
_question:_ [question]
_answer:_ [answer]

where [question] and [answer] refer to the given question and answer. For each statement \(s_{i}\) in \(S\), the LLM determines if \(s_{i}\) can be inferred from \(c(q)\) using a verification function \(v(s_{i},c(q))\). This verification step is carried out using the following prompt:

_Consider the given context and following statements, then determine whether they are supported by the information present in the context. Provide a brief explanation for each statement before arriving at the verdict (Yes/No). Provide a final verdict for each statement in order at the end in the given format. Do not deviate from the specified format._
_statement:_ [statement 1]
_..._
_statement:_ [statement n]

The final faithfulness score, \(F\), is then computed as \(F=\frac{|V|}{|S|}\), where \(|V|\) is the number of statements that were supported according to the LLM and \(|S|\) is the total number of statements.

**Answer relevance.** We say that the answer \(a_{s}(q)\) is relevant if it directly addresses the question in an appropriate way. In particular, our assessment of answer relevance does not take into account factuality, but penalises cases where the answer is incomplete or where it contains redundant information. To estimate answer relevance, for the given answer \(a_{s}(q)\), we prompt the LLM to generate \(n\) potential questions \(q_{i}\) based on \(a_{s}(q)\), as follows:

_Generate a question for the given answer._
_answer:_ [answer]

We then obtain embeddings for all questions using the text-embedding-ada-002 model, available from the OpenAI API. For each \(q_{i}\), we calculate the similarity \(\text{sim}(q,q_{i})\) with the original question \(q\), as the cosine between the corresponding embeddings. The answer relevance score, AR, for question \(q\) is then computed as:

\[\text{AR}=\frac{1}{n}\sum_{i=1}^{n}\text{sim}(q,q_{i}) \tag{1}\]

This metric evaluates how closely the generated answer aligns with the initial question or instruction.

**Context relevance.** The context \(c(q)\) is considered relevant to the extent that it exclusively contains information that is needed to answer the question. In particular, this metric aims to penalise the inclusion of redundant information. To estimate context relevance, given a question \(q\) and its context \(c(q)\), the LLM extracts a subset of sentences, \(S_{ext}\), from \(c(q)\) that are crucial to answer \(q\), using the following prompt:

_Please extract relevant sentences from the provided context that can potentially help answer the following question. If no relevant sentences are found, or if you believe the question cannot be answered from the given context, return the phrase "Insufficient Information". While extracting candidate sentences you're not allowed to make any changes to sentences from given context._

The context relevance score is then computed as:

\[\text{CR}=\frac{\text{number of extracted sentences}}{\text{total number of sentences in }c(q)} \tag{2}\]
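The three metrics can be condensed into a short reference sketch. Here `llm` and `embed` are placeholder callables standing in for the gpt-3.5-turbo-16k and text-embedding-ada-002 calls, the prompts are abbreviated, and the response parsing is simplified, so this illustrates the computation rather than the exact RAGAs implementation:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def faithfulness(question, answer, context, llm):
    # Step 1: decompose the answer into simple statements (prompt abbreviated).
    statements = [s for s in llm(
        "Given a question and answer, create one or more statements from each "
        f"sentence in the given answer.\nquestion: {question}\nanswer: {answer}"
    ).splitlines() if s.strip()]
    # Step 2: one Yes/No verdict per statement; F = |V| / |S|.
    verdicts = llm(
        "Consider the given context and following statements, then determine "
        "whether they are supported by the information present in the context. "
        "Answer Yes or No for each statement, one verdict per line.\n"
        f"context: {context}\n"
        + "\n".join(f"statement: {s}" for s in statements))
    supported = sum(line.strip().lower().startswith("yes")
                    for line in verdicts.splitlines())
    return supported / max(len(statements), 1)

def answer_relevance(question, answer, llm, embed, n=3):
    # Generate n candidate questions from the answer and average the cosine
    # similarity of their embeddings with the original question, cf. (1).
    generated = [llm(f"Generate a question for the given answer.\nanswer: {answer}")
                 for _ in range(n)]
    q_emb = embed(question)
    return sum(cosine(q_emb, embed(qi)) for qi in generated) / n

def context_relevance(question, context, llm):
    # Ratio of extracted (crucial) sentences to all context sentences, cf. (2).
    extracted = llm(
        "Please extract relevant sentences from the provided context that can "
        f"potentially help answer the following question.\nquestion: {question}\n"
        f"context: {context}")
    count = lambda text: len([s for s in text.split(".") if s.strip()])
    return count(extracted) / max(count(context), 1)
```

Splitting on full stops is a crude sentence segmentation; any tokenizer could be substituted without changing the definition of the scores.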
## 4 The WikiEval Dataset

To evaluate the proposed framework, we ideally need examples of question-context-answer triples which are annotated with human judgments. We can then verify to what extent our metrics agree with human assessments of faithfulness, answer relevance and context relevance. Since we are not aware of any publicly available datasets that could be used for this purpose, we created a new dataset, which we refer to as _WikiEval_ (Footnote 4). To construct the dataset, we first selected 50 Wikipedia pages covering events that have happened since the start of 2022 (Footnote 5). In selecting these pages, we prioritised those with recent edits. For each of the 50 pages, we then asked ChatGPT to suggest a question that can be answered based on the introductory section of the page, using the following prompt:

Footnote 4: [https://huggingface.co/datasets/explodinggradients/wikiEval](https://huggingface.co/datasets/explodinggradients/wikiEval)

Footnote 5: That is, beyond the reported training cutoff of the model we used in our experiments.

_Your task is to formulate a question from given context satisfying the rules given below:_
_1. The question should be fully answered from the given context._
_2. The question should be framed from a part that contains non-trivial information._
_3. The answer should not contain any links._
_4. The question should be of moderate difficulty._
_5. The question must be reasonable and must be understood and responded to by humans._
_6. Do not use phrases like 'provided context', etc. in the question._
_context:_ [context]

We also used ChatGPT to answer the generated question, when given the corresponding introductory section as context, using the following prompt:

_Answer the question using the information from the given context._
_question:_ [question]
_context:_ [context]

All questions were annotated along the three considered quality dimensions by two annotators. Both annotators were fluent in English and were given clear instructions about the meaning of the three considered quality dimensions. For faithfulness and context relevance, the two annotators agreed in around 95% of cases. For answer relevance, they agreed in around 90% of the cases. Disagreements were resolved after a discussion between the annotators.

**Faithfulness.** To obtain human judgements about faithfulness, we first used ChatGPT to answer the question without access to any additional context. We then asked the annotators to judge which of the two answers was the more faithful (i.e. the standard one or the one generated without context), given the question and the corresponding Wikipedia page.

**Answer relevance.** We first used ChatGPT to obtain candidate answers with lower answer relevance, using the following prompt:

_Answer the given question in an incomplete manner._
_question:_ [question]

We then asked the human annotators to compare this incomplete answer with the standard answer, and to indicate which of the two had the higher answer relevance.

**Context relevance.** To measure this aspect, we first added additional sentences to the context by scraping back-links to the corresponding Wikipedia page. In this way, we were able to add information to the context that was related but less relevant for answering the question. For the few pages without any back-links, we instead used ChatGPT to complete the given context.
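The construction of one WikiEval record, together with its perturbed candidates for the three quality dimensions, can be sketched as follows (again with a placeholder `llm` call; the field names are our own and purely illustrative):

```python
def build_wikieval_instance(intro_section, backlink_sentences, llm):
    """Assemble one WikiEval-style record with the perturbed candidates
    used to elicit human preference judgements (illustrative sketch)."""
    question = llm(
        "Your task is to formulate a question from given context satisfying "
        "the rules given below: ...\ncontext: " + intro_section)
    answer = llm("Answer the question using the information from the given "
                 f"context.\nquestion: {question}\ncontext: {intro_section}")
    return {
        "question": question,
        "context": intro_section,
        "answer": answer,
        # Lower-faithfulness candidate: answered without any context.
        "answer_no_context": llm(question),
        # Lower-relevance candidate: deliberately incomplete answer.
        "answer_incomplete": llm("Answer the given question in an incomplete "
                                 f"manner.\nquestion: {question}"),
        # Lower context relevance: pad the context with back-link material.
        "context_padded": intro_section + " " + " ".join(backlink_sentences),
    }
```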
## 5 Experiments

Table 1 analyses the agreement between the metrics proposed in Section 3 and the human assessments from the proposed WikiEval dataset. Each WikiEval instance requires the model to compare two answers or two context fragments. We count how often the answer/context preferred by the model (i.e. with the highest estimated faithfulness, answer relevance, or context relevance) coincides with the answer/context preferred by the human annotators. We report the results in terms of accuracy (i.e. the fraction of instances on which the model agrees with the annotators). To put the results in context, we compare our proposed metrics (shown as _RAGAs_ in Table 1) with two baseline methods. For the first method, shown as _GPT Score_, we ask ChatGPT to assign a score between 0 and 10 for the three quality dimensions. To this end, we use a prompt that describes the meaning of the quality metric and then asks to score the given answer/context in line with that definition. For instance, for evaluating faithfulness, we used the following prompt:

_Faithfulness measures the information consistency of the answer against the given context. Any claims that are made in the answer that cannot be deduced from context should be penalized._
_Given an answer and context, assign a score for faithfulness in the range 0-10._
_context:_ [context]
_answer:_ [answer]

Ties, where the same score is assigned by the LLM to both answer candidates, were broken randomly. The second baseline, shown as _GPT Ranking_, instead asks ChatGPT to select the preferred answer/context. In this case, the prompt again includes a definition of the considered quality metric. For instance, for evaluating answer relevance, we used the following prompt:

_Answer Relevancy measures the degree to which a response directly addresses and is appropriate for a given question. It penalizes the presence of redundant information or incomplete answers given a question. Given a question and answer, rank each answer based on Answer Relevancy._
_question:_ [question]
_answer 1:_ [answer 1]
_answer 2:_ [answer 2]

The results in Table 1 show that our proposed metrics are much more closely aligned with the human judgements than the predictions from the two baselines. For faithfulness, the RAGAs predictions are in general highly accurate. For answer relevance, the agreement is lower, but this is largely due to the fact that the differences between the two candidate answers are often very subtle. We found context relevance to be the hardest quality dimension to evaluate. In particular, we observed that ChatGPT often struggles with the task of selecting the sentences from the context that are crucial, especially for longer contexts.
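The agreement statistics in Table 1 correspond to the following simple computation, shown here as a sketch with hypothetical record fields `preferred`/`other` holding the human-preferred and the alternative candidate:

```python
import random

def agreement_accuracy(instances, score_fn, seed=0):
    """Fraction of pairwise-comparison instances on which the automatic
    metric picks the same candidate as the human annotators
    (ties broken randomly, as for the GPT Score baseline)."""
    rng = random.Random(seed)
    hits = 0
    for inst in instances:
        s_good = score_fn(inst["preferred"])  # human-preferred candidate
        s_bad = score_fn(inst["other"])       # the other candidate
        if s_good > s_bad:
            hits += 1
        elif s_good == s_bad and rng.random() < 0.5:
            hits += 1  # a random tie-break is correct half the time on average
    return hits / len(instances)
```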
## 6 Conclusions

We have highlighted the need for automated reference-free evaluation of RAG systems. In particular, we have argued the need for an evaluation framework that can assess faithfulness (i.e. is the answer grounded in the retrieved context), answer relevance (i.e. does the answer address the question) and context relevance (i.e. is the retrieved context sufficiently focused). To support the development of such a framework, we have introduced _WikiEval_, a dataset with human judgements of these three different aspects. Finally, we have also described RAGAs, our implementation of the three considered quality aspects. This framework is easy to use and can provide developers of RAG systems with valuable insights, even in the absence of any ground truth. Our evaluation on WikiEval has shown that the predictions from RAGAs are closely aligned with human judgements, especially for faithfulness and answer relevance.

\begin{table} \begin{tabular}{l c c c} \hline \hline & **Faith.** & **Ans. Rel.** & **Cont. Rel.** \\ \hline RAGAs & **0.95** & **0.78** & **0.70** \\ GPT Score & 0.72 & 0.52 & 0.63 \\ GPT Ranking & 0.54 & 0.40 & 0.52 \\ \hline \hline \end{tabular} \end{table} Table 1: Agreement with human annotators in pairwise comparisons of faithfulness, answer relevance and context relevance, using the WikiEval dataset (accuracy).
2309.05004
Reconstructing the kinetic chemotaxis kernel using macroscopic data: well-posedness and ill-posedness
Bacterial motion is steered by external stimuli (chemotaxis), and the motion described on the mesoscopic scale is uniquely determined by a parameter $K$ that models the velocity-change response of the bacteria. This parameter is called the chemotaxis kernel. In a practical setting, it is inferred from experimental data. We deploy a PDE-constrained optimization framework to perform this reconstruction using velocity-averaged, localized data taken in the interior of the domain. The problem can be well-posed or ill-posed depending on the data preparation and the experimental setup. In particular, we propose one specific design that guarantees numerical reconstructability and local convergence. This design is adapted to the discretization of $K$ in space and decouples the reconstruction of local values of $K$ into smaller cell problems, opening up parallelization opportunities. Numerical evidence supports the theoretical findings.
Kathrin Hellmuth, Christian Klingenberg, Qin Li, Min Tang
2023-09-10T11:36:01Z
http://arxiv.org/abs/2309.05004v4
# Numerical reconstruction of the kinetic chemotaxis kernel from macroscopic measurements: well-posedness and ill-posedness

###### Abstract

Directed bacterial motion due to external stimuli (chemotaxis) can, on the mesoscopic phase space, be described by a velocity change parameter \(K\). The numerical reconstruction of \(K\) from experimental data provides useful insights and plays a crucial role in model fitting, verification and prediction. In this article, the PDE-constrained optimization framework is deployed to perform the reconstruction of \(K\) from velocity-averaged, localized data taken in the interior of a 1D domain. Depending on the data preparation and experimental setup, this problem can either be well- or ill-posed. We analyze these situations, and propose a very specific design that guarantees local convergence. The design is adapted to the discretization of \(K\) and decouples the reconstruction of local values into smaller cell problems, opening up opportunities for parallelization. We further provide numerical evidence as a showcase for the theoretical results.

**Keywords**: inverse problems in PDEs; numerical methods for inverse problems with PDEs; PDE-constrained optimization; kinetic chemotaxis equation; numerical analysis; well- and ill-posedness; mathematical biology

## Funding

K.H. acknowledges support by the German Academic Scholarship Foundation (Studienstiftung des deutschen Volkes) and the Marianne-Plehn-Program. Q.L. is partially supported by Vice Chancellor for Research and Graduate Education, DMS-2308440 and ONR-N000142112140. M.T. is partially supported by the Strategic Priority Research Program of Chinese Academy of Sciences, XDA25010401 and NSFC12031013.

## 1 Introduction

The kinetic chemotaxis equation is one of the classical equations describing the collective behavior of bacterial motion. Posed on the phase space, the equation describes the "run-and-tumble" motion of bacteria. The solution \(f(t,x,v)\) represents the density of bacteria at any given time \(t\), at any location \(x\), moving with velocity \(v\). Since it contains more detailed phase-space information than macroscopic models at the population level, such as the Keller-Segel model, the equation has greater potential to capture the fine motion of the bacteria. Indeed, it has been observed that the dynamics predicted by the model is in high agreement with real measurements, see Berg (1993); Emako et al. (2016); Saragosti et al. (2011, 2010). It is noteworthy that these comparisons are conducted in the forward-simulation setting: guesses are made about parameters, and simulations are run and compared with experimental measurements. To fully reveal the bacteria's motion and its interaction with the environment, an inverse perspective has to be taken. The measurement data can be at the individual or the population level, i.e., biophysicists can use a high-resolution camera and trace each single bacterium for a long time, or they can take photos and record the evolution of the density of bacteria on a cell culture dish. These collected data should be used to unveil the true interaction between particles Li et al. (2019). This framework necessitates the application of numerical inversion algorithms. To be specific, we frame this problem as a PDE-constrained optimization and study the well-posedness and the ill-posedness of the numerical reconstruction when different types of initial conditions and measurement schemes are provided.
As more first-principles-based physics gets involved in applications, kinetic models are becoming increasingly important in scientific domains; see the modeling of neutrons Davison and Sykes (1958), photons or electrons Rybicki and Lightman (1986) and rarefied gas Cercignani (2012). Applications in the biological and social sciences have also been put forward, in Othmer et al. (1988) for cell motion, in Taylor-King et al. (2014) for animal (bird) migration, and in Albi et al. (2023); Carrillo et al. (2009); Chu et al. (2022); Motsch and Tadmor (2014); Toscani (2006) for opinion formation. In most, if not all, of these models, parameters are included to characterize the interactions between agents or with the media. Applications in which the interactions are hard to measure experimentally naturally prompt the use of inverse solvers. The most prominent inverse problem confined to the domain of kinetic-equation-governed systems is optical tomography in medical imaging, where non-intrusive boundary data map out the relation between the optical properties of interior bio-tissue and the measured light intensity on the surface of the domain. Mathematically, the problem is framed as evaluating the richness of the data in the albedo operator. Singular decomposition is deployed as a specific mathematical technique to conduct such investigations Bal et al. (2008); Choulli and Stefanov (1996); Lai et al. (2019); Li and Sun (2020), and these studies have their numerical counterparts in Arridge and Schotland (2009); Chen et al. (2018); Egger and Schlottbom (2013); Prieto and Dorn (2016); Ren (2010), just to mention a few references. Since tracing every single bacterium is much more difficult than measuring the density evolution, and is sometimes impossible in extreme environments, one natural question is whether it is possible to unveil how the bacteria interact with the environment from measurements at the population level. Due to the specific biological question at hand, the biggest difference between our problem setup and the previous ones is the fact that our measurements are taken in the interior of the domain, but are macroscopic. This kind of data preparation is intrusive in the sense that photos are taken over the entire culture dish and not only on the boundary (domain surface), so it enriches the available dataset. While optical tomography equipment can read off velocity information, photos usually only provide density information, except in very special cases Jeckel et al. (2019); Zhang et al. (2010) that place very high requirements on the lab equipment. Since the measurement is macroscopic, this reduces the richness of the data. In Hellmuth et al. (2022) the authors examined the theoretical aspect of this reconstruction problem with macroscopic interior data. It was shown that trading off the microscopic information for the interior data still gives us sufficient information to recover the transition kernel, but the experiments need to be designed accordingly. However, in the theoretical paper we assumed that the transition kernel is an unknown function, and thus an infinite-dimensional object, and that the available data is the full map (from initial condition to density for all time and space), and thus an infinite-dimensional object as well. This infinite-to-infinite setup is hard to implement numerically, so the theoretical results only provide guidance, not a direct guarantee. The current paper can be seen as the numerical counterpart of Hellmuth et al. (2022).
In particular, we study, on the discrete level, whether one can still successfully recover the unknowns when the measurement data are finite in size and the to-be-reconstructed transition kernel is also represented by a finite-dimensional vector. It turns out that the numerical issue is significantly more convoluted. When the dimension of \(K\), the transition kernel, is changed from infinite to finite, we expect the amount of data needed to recover this finite-dimensional parameter to be reduced as well. However, by how much and in what way is far from clear. We will present below two different scenarios to argue that when the data is prepared well, a stable reconstruction is expected, but when the data "degenerates," it loses the information required for a full recovery. This well-posedness and ill-posedness are separately presented in two subsections of Section 3. Then, in Section 5, we present numerical evidence to showcase the theoretical predictions. It is to be expected that different data preparations give different conditioning for the parameter reconstruction. This further prompts the study of experimental design. In the context of reconstructing the transition kernel in the chemotaxis equation, in Section 4 we will design a particular experimental setup that guarantees a unique reconstruction. We should further note that reconstructing parameters for bacterial motion from the inversion perspective is not entirely new. In the literature, there exist two different approaches: the first involves the utilization of statistical information at the individual level to extrapolate the microscopic transition kernel, whereas the second entails employing density data at a macroscopic scale to reconstruct certain parameters associated with a parametrized model through an optimization framework Ford and Lauffenburger (1991); Giometto et al. (2015); Salek et al. (2019); Tranquillo et al. (1988). To our knowledge, these available studies focus on either microscopic or macroscopic models with a very limited number of unknowns to be recovered, and data of the corresponding scale are used to construct model parameters of the corresponding scale. For instance, in Pohl et al. (2017); Seyrich et al. (2018), the tumbling behavior is inferred statistically on a microscopic level, i.e. the tumbling, as an individual random process, is described by a few moments of its probability distribution that are recovered from data. In Egger et al. (2015); Fister and McCarthy (2008), the macroscopic problem was considered, where the parameterization emerged from discretization, and regularization was used to counter the noise. Moreover, the viewpoint taken in constructing the optimization problem in this article differs significantly from the existing literature. Similarly to Egger et al. (2015); Fister and McCarthy (2008), we recover a discretized version of the kinetic parameter, as this framework brings more flexibility. Our focus, however, lies on the study of the well- and ill-posedness of the optimization problem related to the parameter reconstruction. To observe these effects, no regularization is applied and the numerical examples are presented in a noise-free setting. This demonstrates the necessity of well-designed experimental setups, adapted to the fineness of the parameter discretization.
## 2 Framing a PDE-constrained optimization problem

We frame the problem as a PDE-constrained optimization: we reconstruct the \(K\) that fits the data as well as possible, subject to the constraint that the kinetic chemotaxis model is satisfied. To start off, we first present the kinetic chemotaxis model. Denoting by \(f(t,x,v)\) the probability density distribution of bacteria in space \(x\in\mathbb{R}^{1}\), time \(t>0\) and velocity \(v\in V\), the equation reads:

\[\partial_{t}f+v\cdot\nabla_{x}f=\mathcal{K}(f):=\int_{V}K(x,v,v^{\prime})f(x,t,v^{\prime})-K(x,v^{\prime},v)f(x,t,v)\,\text{d}v^{\prime}, \tag{1}\]
\[f(t=0,x,v)=\phi(x,v)\in L^{\infty}_{+,c}(\mathbb{R}\times V), \tag{2}\]

where \(v\cdot\nabla_{x}f\) characterizes the "run" part, in which bacteria move straight forward with velocity \(v\), and the terms on the right characterize the "tumble" part, with bacteria changing from velocity \(v^{\prime}\) to \(v\) with transition rate \(K(x,v,v^{\prime})\geq 0\); \(K(x,v,v^{\prime})\) is called the tumbling kernel. Initial data is given at \(t=0\) and is denoted by \(\phi(x,v)\). We reduce the original problem for \((x,v)\in\mathbb{R}^{3}\otimes\mathbb{S}^{2}\) to \((x,v)\in\mathbb{R}^{1}\otimes\{\pm 1\}\) Giometto et al. (2015); Saragosti et al. (2011, 2010), i.e. the bacteria move either to the left or to the right, and \(x\) is one-dimensional. This simple setting, on the one hand, applies to the case when experiments are conducted in a bacterial culture tube and is thus biologically meaningful; on the other hand, it retains the difficulties of our inversion setting. More details will be discussed in the subsequent part. Moreover, in some applications the environment changes with time, and then the tumbling kernel \(K\) may depend on time as well; we focus here on the time-independent case, in which the outside signal does not change. To understand the particles' interaction with the environment, one needs to determine \(K\), and data is collected to infer it. Typically, it is unnecessary to recover it as a function; some fine discretization of it suffices. To do so, we assume that \(K\) can be well represented by a list of finitely many parameters:

\[K(x,v,v^{\prime})=\sum_{r=1}^{R}K_{r}(v,v^{\prime})\mathds{1}_{I_{r}}(x)\,, \tag{3}\]

meaning that on the interval \(I_{r}=[a_{r-1},a_{r})\), \(r=2,...,R-1\) (with \(a_{r-1}<a_{r}\) and \(I_{1}=(-\infty,a_{1})\), \(I_{R}=[a_{R-1},\infty)\)), \(K(x,v,v^{\prime})\) can be well approximated by a function independent of the spatial variable \(x\). Since \(V=\{\pm 1\}\), there are only two choices for the velocity change encoded by \(K_{r}(v,v^{\prime})\): \(K_{r}(1,-1)\) or \(K_{r}(-1,1)\), and thus there are in total \(2R\) free values for \(K\). Throughout the paper we abuse notation and denote by \(K\in\mathbb{R}^{2R}\) the unknown vector to be reconstructed. Moreover, we set:

\[K_{r}=[K_{r,1},K_{r,2}]\,,\quad\text{with}\quad K_{r,1}=K_{r}(1,-1)\,,\quad K_{r,2}=K_{r}(-1,1)\,. \tag{4}\]

The dataset is also finite in size. In particular, we mathematically represent the local pixel reading of a photo by a test function \(\mu_{l}\in L^{1}(\mathbb{R})\) for some \(l\); the data then take the form

\[M_{l}(K)=\int_{\mathbb{R}}\int_{V}f_{K}(x,T,v)\,\text{d}v\;\mu_{l}(x)\,\text{d}x,\qquad l=1,...,L\,, \tag{5}\]

where \(f_{K}\) denotes the solution to (1) with kernel \(K\). Denoting the ground-truth transition kernel by \(K_{\star}\), the true data are:

\[y_{l}=M_{l}(K_{\star})\,,\qquad l=1,...,L\,. \tag{6}\]
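For intuition, a minimal forward solver for this two-velocity model can be written in a few lines. Writing \(f^{\pm}\) for \(f(\cdot,\cdot,\pm 1)\) and \(K_{1}=K(\cdot,1,-1)\), \(K_{2}=K(\cdot,-1,1)\) as in (4), equation (1) becomes \(\partial_{t}f^{+}+\partial_{x}f^{+}=K_{1}(x)f^{-}-K_{2}(x)f^{+}\) and \(\partial_{t}f^{-}-\partial_{x}f^{-}=K_{2}(x)f^{+}-K_{1}(x)f^{-}\). The following sketch (our own illustration; the paper's numerical scheme is not specified here) uses first-order upwind transport and approximates the measurements (5) by grid sums:

```python
import numpy as np

def solve_kinetic(K1, K2, phi_plus, phi_minus, x, T, cfl=0.9):
    """Upwind finite-difference solver for the two-velocity model, with
    K1, K2, phi_plus, phi_minus sampled on the uniform grid x."""
    dx = x[1] - x[0]
    steps = int(np.ceil(T / (cfl * dx)))
    dt = T / steps
    fp, fm = phi_plus.astype(float).copy(), phi_minus.astype(float).copy()
    for _ in range(steps):
        # Upwind transport derivatives (zero inflow at the boundaries).
        dfp = np.empty_like(fp)
        dfm = np.empty_like(fm)
        dfp[1:] = (fp[1:] - fp[:-1]) / dx   # f+ moves right: backward diff
        dfp[0] = fp[0] / dx
        dfm[:-1] = (fm[1:] - fm[:-1]) / dx  # f- moves left: forward diff
        dfm[-1] = -fm[-1] / dx
        gain = K1 * fm - K2 * fp            # tumbling exchange term
        fp = fp + dt * (-dfp + gain)
        fm = fm + dt * (dfm - gain)
    return fp, fm

def measurements(fp, fm, mus, dx):
    """Discrete analogue of (5): M_l = dx * sum_j (f+ + f-)(x_j) mu_l(x_j)."""
    rho = fp + fm
    return np.array([dx * np.sum(rho * mu) for mu in mus])
```

Any higher-order scheme could be substituted; the only requirements for the experiments below are stability (CFL condition) and a consistent discretization of the velocity average.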
As discussed in Section 1, when \(K\) is reduced to a finite-dimensional vector, we expect the amount of data needed to be finite as well, but how to perform the reduction so as to obtain a stable reconstruction is still unknown. Mathematically, this amounts to studying the _intricate relation_ between \(R\), \(L\) and \(\{\mu_{l}\}\). The numerical inversion is presented as a PDE-constrained optimization. We aim to minimize the squared loss between the simulated data \(M(K)\) and the data \(y\):

\[\min_{K}\;\mathcal{C}(K)=\min_{K}\;\frac{1}{2L}\sum_{l=1}^{L}\left(M_{l}(K)-y_{l}\right)^{2}\quad\text{subject to (1) and (2)}. \tag{7}\]

There are many algorithms that can be deployed to solve this minimization problem; we are particularly interested in applying the simple gradient descent (GD) algorithm. The update is given by:

\[K^{(n+1)}=K^{(n)}-\eta_{n}\nabla_{K}\mathcal{C}(K^{(n)})\,, \tag{8}\]

with a suitable step size \(\eta_{n}\in\mathbb{R}_{+}\). A standard application of the calculus of variations, detailed in Appendix A, yields the \((r,i)\)-th (\(i=1,2\), \(r=1,\cdots,R\)) entry of the gradient \(\nabla_{K}\mathcal{C}\):

\[\frac{\partial\mathcal{C}}{\partial K_{r,i}}=\int_{0}^{T}\int_{I_{r}}f(t,x,v_{i}^{\prime})(g(t,x,v_{i}^{\prime})-g(t,x,v_{i}))\,\text{d}x\,\text{d}t\,, \tag{9}\]

where \((v_{i},v_{i}^{\prime})=\big((-1)^{i},(-1)^{i+1}\big)\) in analogy to notation (4) for \(K\), and \(g\) is the adjoint state that solves the adjoint equation

\[-\partial_{t}g-v\cdot\nabla g=\tilde{\mathcal{K}}(g):=\int_{V}K(x,v^{\prime},v)\big(g(x,t,v^{\prime})-g(x,t,v)\big)\,\text{d}v^{\prime}, \tag{10}\]
\[g(x,t=T,v)=-\frac{1}{L}\sum_{l=1}^{L}\mu_{l}(x)\left(M_{l}(K)-y_{l}\right). \tag{11}\]

Notice that, by definition of the measurement procedure (5), the final condition for \(g\) in (11) is independent of \(v\) and contains the spatial test functions \(\mu_{l}\). The convergence of GD in (8) is guaranteed for a suitable step size if the objective function is convex. Denoting by \(H_{K}\mathcal{C}\) the Hessian of the loss function, we need \(H_{K}\mathcal{C}>0\) at least in a small neighborhood of \(K_{\star}\). If so, a constant step size \(\eta_{n}=\eta=\frac{2\lambda_{\min}}{\lambda_{\max}^{2}}\) approximates the step size suggested in Wright and Recht (2022) for optimal convergence. Here \(\lambda_{\min},\lambda_{\max}\) denote the smallest and largest eigenvalues of \(H_{K}\mathcal{C}(K_{\star})\). More sophisticated methods, such as a line search for the step size or higher-order update rules, are also possible; see e.g. Ren (2010); Wright and Recht (2022).
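As an illustration of the GD loop (8), the following sketch approximates \(\nabla_{K}\mathcal{C}\) by central finite differences instead of the adjoint formula (9); it is a stand-in for readers who want to test the optimization without implementing the adjoint solver, and `forward_and_measure` is assumed to map a flattened \(K\in\mathbb{R}^{2R}\) to the vector of simulated measurements:

```python
import numpy as np

def loss(K_flat, forward_and_measure, y):
    """C(K) = (1/2L) sum_l (M_l(K) - y_l)^2, cf. (7)."""
    M = forward_and_measure(K_flat)
    return 0.5 * np.mean((M - y) ** 2)

def gradient_descent(K0, forward_and_measure, y, eta=0.1, iters=200, h=1e-6):
    """Plain GD update (8); the gradient is approximated by central finite
    differences as a stand-in for the adjoint formula (9)."""
    K = K0.astype(float).copy()
    for _ in range(iters):
        grad = np.zeros_like(K)
        for j in range(K.size):
            e = np.zeros_like(K)
            e[j] = h
            grad[j] = (loss(K + e, forward_and_measure, y)
                       - loss(K - e, forward_and_measure, y)) / (2 * h)
        K -= eta * grad
    return K
```

The finite-difference gradient costs \(4R\) forward solves per iteration, which is why the adjoint formulation (9)-(11), requiring only one forward and one backward solve, is preferred in practice.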
To properly set up the problem, we make some general assumptions and fix some notation.

_Assumption 1_.: We make assumptions that ensure the well-posedness of the forward problem in a feasible set; in particular:

* We will work locally in \(K\), so we assume that in a neighbourhood \(\mathcal{U}_{K_{\star}}\) of \(K_{\star}\) there is a constant \(C_{K}\) such that for all \(K\in\mathcal{U}_{K_{\star}}\): \[0<\|K\|_{\infty}\leq C_{K}\,.\] (12)
* We assume the initial data \(\phi\) to be in the space \(L^{\infty}_{+,c}(\mathbb{R}\times V)\) of non-negative, compactly supported functions with essential bound \[\big\|\phi\big\|_{L^{\infty}(\mathbb{R}\times V)}=:C_{\phi}\,.\]
* Likewise, we assume the test functions \(\mu_{l}\), \(l=1,...,L\), to be in the space \(L^{1}(\mathbb{R})\) with uniform \(L^{1}\) bound \[\int_{\mathbb{R}}|\mu_{l}|\,\text{d}x\leq C_{\mu},\quad l=1,...,L\,.\]

These assumptions allow us to work with \(f\) and \(g\) in the right spaces. In particular, we can give an upper bound for both the forward and the adjoint solution in the \(L^{\infty}\) sense; see Lemmas B.1 and B.2 in Appendix B. In fact, these assumptions are in line with realistic modelling: the boundedness of the parameter \(K\) emerges from its interpretation as a probability of changing direction. Non-negativity and boundedness of the initial bacteria density are physical, as bacteria cannot aggregate indefinitely due to volume-filling effects.

## 3 Well-posedness vs. ill-posedness

The well-posedness of the inversion heavily depends on the data preparation. If a suitable experimental setting is arranged, the optimization problem is expected to be locally well-posed around the ground-truth parameter \(K_{\star}\), so that classical GD can reconstruct the ground truth. However, if the data become degenerate, we expect ill-conditioning, and GD will find it hard to converge to the global minimum. We spell out the two scenarios in the two theorems below.

**Theorem 3.1**.: _Assume the Hessian matrix of the cost function is positive definite at \(K_{\star}\) and let the remaining assumptions of Proposition 3.1 hold. Then there exists a neighbourhood \(U\) of \(K_{\star}\) in which the optimization problem (7) is Tykhonov well-posed. In particular, the gradient descent algorithm (8) with initial value \(K_{0}\in U\) converges._

This theorem provides the well-posedness of the problem. To be specific, it spells out a sufficient condition for GD to find the global minimizer \(K_{\star}\). The condition that the Hessian be positive definite at \(K_{\star}\) may seem strong, but, paying attention to certain restrictions such as a minimal number of measurements \(L\geq 2R\), we can carefully craft an experiment that makes it hold true. This line of study is in essence experimental design, and we will be more specific in Section 4. In contrast to the preceding well-posedness discussion, we also provide a negative result below on ill-conditioning.

**Theorem 3.2**.: _Let \(L=2R\) and let Assumption 1 hold for all considered quantities. Consider a sequence \((\mu_{1}^{(m)})_{m}\) of test functions for the first measurement \(M_{1}(K)\) for which one of the following scenarios holds:_

1. \(\mu_{1}^{(m)}\to\mu_{2}\) _in_ \(L^{1}\) _as_ \(m\to\infty\)_._
2. \((\mu_{1}^{(m)})_{m}\) _and_ \(\mu_{2}\) _are mollifications of singular point-measurements in measurement points_ \(\{(x_{1}^{(m)})_{m},x_{2}\}\) _such that_ \(x_{1}^{(m)}\to x_{2}\) _as_ \(m\to\infty\)_.
_Furthermore, let the assumptions of Proposition 3.3 hold._

_Then, as \(m\to\infty\), the loss function cannot be strongly convex, and the convergence of the gradient descent algorithm (8) to \(K_{\star}\) cannot be guaranteed. In scenario 2, this holds independently of the mollification parameter._

The two theorems, proved in detail in Sections 3.1 and 3.2 respectively, stand in stark contrast to each other. The core of the difference between them is the data selection, with the former guaranteeing convexity of the objective function and the latter not. Evaluating the convexity of the loss function amounts to studying the Hessian, a \(2R\times 2R\) matrix:

\[H_{K}\mathcal{C}(K)=\frac{1}{L}\sum_{l=1}^{L}\left(\nabla_{K}M_{l}(K)\otimes\nabla_{K}M_{l}(K)+(M_{l}(K)-y_{l})H_{K}M_{l}(K)\right)\,. \tag{13}\]

It is a well-known fact (Polyak and Shcherbakov, 2017) that a positive definite Hessian provides strong convexity of the loss function and is a sufficient criterion for convergence in the parameter space. If \(H_{K}\mathcal{C}(K_{\star})\) is known to be positive definite then, since the Hessian matrix does not change much in a small neighborhood, convexity is guaranteed there. This boundedness of the perturbation of the Hessian is spelled out in Proposition 3.1, and Theorem 3.1 naturally follows. Theorem 3.2 looks at the opposite side of the problem. In particular, it examines the degeneracy that occurs when two data collection points get very close. The degeneracy is reflected mathematically by a deficient rank structure in the Hessian (13), prompting the collapse of the landscape of the objective function. The two scenarios of rank deficiency are presented in Propositions 3.2 and 3.3, respectively, and Theorem 3.2 then naturally follows.

### 3.1 Local well-posedness of the optimization problem

Generally speaking, it is not easy to characterize the landscape of the objective function, and thus hard to prescribe conditions for obtaining global convergence. However, suppose the data are prepared well enough to guarantee positive definiteness of the Hessian \(H_{K}\mathcal{C}(K_{\star})\) evaluated at the ground truth \(K_{\star}\); then in a small neighborhood of this ground truth positive definiteness persists, and GD, if started within this neighborhood, finds the global minimum of (7). This gives us local well-posedness, characterized in the following proposition.

**Proposition 3.1**.: _Let Assumption 1 hold. Assume the Hessian \(H_{K}\mathcal{C}(K_{\star})\) is positive definite at \(K_{\star}\), and that there is a uniform bound for the Hessian of the measurements in the neighborhood \(\mathcal{U}_{K_{\star}}\), in the sense that \(\|H_{K}M_{l}(K)\|_{F}\leq C_{H_{K}M}\) in the Frobenius norm for all \(l=1,...,L\) and \(K\in\mathcal{U}_{K_{\star}}\). Then there exists a (bounded) neighbourhood \(U\subset\mathcal{U}_{K_{\star}}\) of \(K_{\star}\) where \(H_{K}\mathcal{C}(K)\) is positive definite for all \(K\in U\). Moreover, the minimal eigenvalue \(\lambda_{\min}(H_{K}\mathcal{C})\) satisfies_

\[|\lambda_{\min}(H_{K}\mathcal{C}(K_{\star}))-\lambda_{\min}(H_{K}\mathcal{C}(K))|\leq\|K_{\star}-K\|_{\infty}C^{\prime}, \tag{14}\]

_where the constant \(C^{\prime}\) depends on the measurement time \(T\), on \(R\), on the bounds \(C_{\mu}\), \(C_{\phi}\), \(C_{K}\) in Assumption 1 and on \(C_{H_{K}M}\).
As a consequence, the radius of \(U\) can be chosen as \(\lambda_{\min}(H_{K}\mathcal{C}(K_{\star}))/C^{\prime}\)._

The proposition is hardly surprising. Essentially, it states that the Hessian is Lipschitz continuous with respect to its argument, which is expected if the solution of the equation is somewhat smooth. This strategy is spelled out in detail in the proof. With this proposition in hand, Theorem 3.1 is immediate.

Proof for Theorem 3.1.: By Proposition 3.1, there exists a neighbourhood \(U\) in which the Hessian is positive definite, \(H_{K}\mathcal{C}(K)>0\) for all \(K\in U\). Without loss of generality, we can assume that \(U\) is a convex set. By the strong convexity of \(\mathcal{C}\) in \(U\), the minimizer \(K_{\star}\in U\) of \(\mathcal{C}\) is unique, and the finite dimension of the parameter space \(K\in\mathbb{R}^{2R}\) thus guarantees Tykhonov well-posedness of the optimization problem (7) (Ferrentino and Boniello, 2019, Prop. 3.1).

Now we give the proof of Proposition 3.1. It mostly relies on matrix perturbation theory (Horn and Johnson, 1985, Cor. 6.3.8) and the continuity of the equation.

Proof for Proposition 3.1.: According to matrix perturbation theory, the minimal eigenvalue is continuous with respect to perturbations of the matrix, so we have

\[\begin{split}&|\lambda_{\min}(H_{K}\mathcal{C}(K_{\star}))-\lambda_{\min}(H_{K}\mathcal{C}(K))|\leq\|H_{K}\mathcal{C}(K_{\star})-H_{K}\mathcal{C}(K)\|_{F}\\ &\leq\frac{1}{L}\sum_{l}\Big(\|(\nabla_{K}M_{l}\otimes\nabla_{K}M_{l})(K_{\star})-(\nabla_{K}M_{l}\otimes\nabla_{K}M_{l})(K)\|_{F}+\|(M_{l}(K)-y_{l})H_{K}M_{l}(K)\|_{F}\Big)\\ &\leq\frac{1}{L}\sum_{l}\Big(\|\nabla_{K}M_{l}(K_{\star})-\nabla_{K}M_{l}(K)\|_{F}\left(\|\nabla_{K}M_{l}(K_{\star})\|_{F}+\|\nabla_{K}M_{l}(K)\|_{F}\right)\\ &\hskip 28.452756pt+|M_{l}(K)-y_{l}|\,\|H_{K}M_{l}(K)\|_{F}\Big)\end{split} \tag{15}\]

where we used the Hessian form (13), the triangle inequality and the sub-multiplicativity of Frobenius norms. Obtaining the bound (14) now amounts to quantifying each term on the right-hand side of (15) and bounding it by \(\|K_{\star}-K\|_{\infty}\). This is achieved in Lemmas 3.3, 3.5 and 3.6, which control \(M_{l}(K)-y_{l}\), \(\|\nabla_{K}M_{l}(K)\|_{F}\) and \(\|\nabla_{K}M_{l}(K_{\star})-\nabla_{K}M_{l}(K)\|_{F}\), respectively. Putting these results together, we have:

\[\begin{split}&|\lambda_{\min}(H_{K}\mathcal{C}(K_{\star}))-\lambda_{\min}(H_{K}\mathcal{C}(K))|\leq\|H_{K}\mathcal{C}(K_{\star})-H_{K}\mathcal{C}(K)\|_{F}\\ &\leq 2\|K_{\star}-K\|_{\infty}C_{\mu}C_{\phi}e^{2C_{K}|V|T}\Bigg[8RC_{\phi}C_{\mu}e^{2|V|C_{K}T}T\left(|V|T^{2}+\frac{1}{C_{K}}\left(\frac{e^{2C_{K}|V|T}-1}{2C_{K}|V|}-T\right)\right)\\ &\hskip 28.452756pt+|V|^{2}TC_{H_{K}M}\Bigg]\\ &=:\|K_{\star}-K\|_{\infty}C^{\prime}.\end{split}\]

The positive definiteness in a small neighborhood of \(K_{\star}\) now follows. Finally, given \(\|K_{\star}-K\|_{\infty}<\lambda_{\min}(H_{K}\mathcal{C}(K_{\star}))/C^{\prime}\), the triangle inequality shows

\[\lambda_{\min}(H_{K}\mathcal{C}(K))\geq\lambda_{\min}(H_{K}\mathcal{C}(K_{\star}))-|\lambda_{\min}(H_{K}\mathcal{C}(K_{\star}))-\lambda_{\min}(H_{K}\mathcal{C}(K))|>0.\]

We note that the form of \(C^{\prime}\) is complicated, but the dependence is spelled out in the following lemmas and summarized in the proposition statement. As can be seen from the proof, Proposition 3.1 strongly relies on the boundedness of the terms in (15). We present the estimates below.
**Lemma 3.3**.: _Let Assumption 1 hold. Then the measurement difference is bounded from above by:_

\[|M_{l}(K)-y_{l}|\leq|V|C_{\mu}\big\|\big(f_{K_{\star}}-f_{K}\big)(T)\big\|_{L^{\infty}(\mathbb{R}\times V)}\leq\|K_{\star}-K\|_{\infty}2|V|^{2}C_{\mu}C_{\phi}Te^{2C_{K}|V|T}.\]

Proof.: Apply Lemma B.1 to the difference equation for \(\bar{f}:=f_{K_{\star}}-f_{K}\),

\[\partial_{t}\bar{f}+v\cdot\nabla_{x}\bar{f}=\mathcal{K}_{K}(\bar{f})+\mathcal{K}_{(K_{\star}-K)}(f_{K_{\star}}), \tag{16}\]

with initial condition \(0\) and source \(h=\mathcal{K}_{(K_{\star}-K)}(f_{K_{\star}})\in L^{1}((0,T);L^{\infty}(\mathbb{R}\times V))\) by the regularity (39) of \(f_{K_{\star}}\). This leads to

\[\operatorname*{ess\,sup}_{v,x}|\bar{f}|(x,t,v)\leq\int_{0}^{t}e^{2|V|C_{K}(t-s)}\operatorname*{ess\,sup}_{v,x}|\mathcal{K}_{(K_{\star}-K)}(f_{K_{\star}})(s)|\,\mathrm{d}s\leq 2|V|\|K_{\star}-K\|_{\infty}e^{2|V|C_{K}t}C_{\phi}t, \tag{17}\]

where we used the estimate \(\|f_{K_{\star}}(s)\|_{L^{\infty}(\mathbb{R}\times V)}\leq e^{2|V|C_{K}s}\|\phi\|_{L^{\infty}(\mathbb{R}\times V)}\) from Lemma B.1 in the last step.

To estimate the gradient \(\nabla_{K}M_{l}(K)\) and its difference, we first recall the form of (9) with \(\mathcal{C}\) replaced by \(M_{l}\). Analogously, we can use the adjoint equation to represent the gradient explicitly:

**Lemma 3.4**.: _Let Assumption 1 hold. Denote by \(f_{K}\) the mild solution of (1) and by \(g_{l}\in C^{0}\left([0,T];L^{\infty}(V;L^{1}(\mathbb{R}))\right)\) the mild solution of_

\[-\partial_{t}g_{l}-v\cdot\nabla g_{l}=\tilde{\mathcal{K}}(g_{l}):=\int_{V}K(x,v^{\prime},v)\big(g_{l}(x,t,v^{\prime})-g_{l}(x,t,v)\big)\,dv^{\prime}, \tag{18}\]
\[g_{l}(t=T,x,v)=-\mu_{l}(x)\,.\]

_Then_

\[\frac{\partial M_{l}(K)}{\partial K_{r,i}}=\int_{0}^{T}\int_{I_{r}}f^{\prime}(g_{l}^{\prime}-g_{l})\,dx\,dt\,, \tag{19}\]

_where we used the abbreviated notation \(h:=h(t,x,v_{i})\) and \(h^{\prime}:=h(t,x,v_{i}^{\prime})\) for \(h=f,g_{l}\), with \((v_{i},v_{i}^{\prime})\) defined as in (9)._

We omit writing down the \(x,t\) dependence explicitly when there is no risk of confusion. The proof of this lemma is an application of the calculus of variations and is omitted here. We are now in a position to derive the estimates of the gradient norms.

**Lemma 3.5**.: _Under Assumption 1, the gradient is uniformly bounded:_

\[\|\nabla_{K}M_{l}(K)\|_{F}\leq\sqrt{2R}\,2C_{\phi}C_{\mu}e^{2C_{K}|V|T}T,\qquad\text{for all }K\in\mathcal{U}_{K_{\star}}.\]

Proof.: The Frobenius norm is bounded by the entries, \(\|\nabla M_{l}(K)\|_{F}\leq\sqrt{2R}\max_{r,i}\left|\frac{\partial M_{l}(K)}{\partial K_{r,i}}\right|\).
Representation (19) together with (40) then gives the bound

\[\left|\frac{\partial M_{l}}{\partial K_{r,i}}\right|\leq 2C_{\phi}\int_{0}^{T}e^{2|V|C_{K}t}\max_{v}\left(\int_{\mathbb{R}}|g_{l}|\;\mathrm{d}x\right)\,\mathrm{d}t. \tag{20}\]

Application of Lemma B.2 to \(g=g_{l}\), \(h=0\) and \(\psi=-\mu_{l}\) yields

\[\max_{v}\int_{\mathbb{R}}|g_{l}|\;\mathrm{d}x\;(t)\leq\int_{\mathbb{R}}|-\mu_{l}(x)|\,\mathrm{d}x\;e^{2C_{K}|V|(T-t)}\leq C_{\mu}e^{2C_{K}|V|(T-t)}, \tag{21}\]

which, when plugged into (20), gives

\[\left|\frac{\partial M_{l}}{\partial K_{r,i}}\right|\leq 2C_{\phi}C_{\mu}e^{2C_{K}|V|T}T\,.\]

**Lemma 3.6**.: _In the setting of Theorem 3.1 and under Assumption 1, the gradient difference is uniformly bounded for \(K\in\mathcal{U}_{K_{\star}}\) by_

\[\|\nabla M_{l}(K_{\star})-\nabla M_{l}(K)\|_{F}\leq\sqrt{2R}\|K_{\star}-K\|_{\infty}2C_{\phi}C_{\mu}e^{2C_{K}|V|T}\left(|V|T^{2}+\frac{1}{C_{K}}\left(\frac{e^{2C_{K}|V|T}-1}{2C_{K}|V|}-T\right)\right)\,.\]

Proof.: Now consider the entries of \(\nabla M_{l}(K_{\star})-\nabla M_{l}(K)\) to show the smallness of \(\|\nabla M_{l}(K_{\star})-\nabla M_{l}(K)\|_{F}\). Rewrite, using Lemma 3.4 and (40),

\[\begin{split}\left|\frac{\partial M_{l}(K_{\star})}{\partial K_{r,i}}-\frac{\partial M_{l}(K)}{\partial K_{r,i}}\right|&=\left|\int_{0}^{T}\int_{I_{r}}f_{K_{\star}}(g^{\prime}_{l,K_{\star}}-g_{l,K_{\star}})-f_{K}(g^{\prime}_{l,K}-g_{l,K})\,\mathrm{d}x\,\mathrm{d}t\right|\\ &\leq\int_{0}^{T}\|(f_{K_{\star}}-f_{K})(t)\|_{L^{\infty}(\mathbb{R}\times V)}2\max_{v}\int_{\mathbb{R}}|g_{l,K_{\star}}(t)|\,\mathrm{d}x\,\mathrm{d}t\\ &\quad+2C_{\phi}\int_{0}^{T}e^{2|V|C_{K}t}\max_{v}\int_{\mathbb{R}}|(g_{l,K_{\star}}-g_{l,K})(t)|\,\mathrm{d}x\,\mathrm{d}t.\end{split}\]

The first summand can be bounded by (17) and (21). To estimate the second summand, apply Lemma B.2 to \(\bar{g}:=g_{l,K_{\star}}-g_{l,K}\) with evolution equation

\[-\partial_{t}\bar{g}-v\cdot\nabla_{x}\bar{g}=\tilde{\mathcal{K}}_{K_{\star}}(\bar{g})+\tilde{\mathcal{K}}_{(K_{\star}-K)}(g_{l,K}),\qquad\bar{g}(t=T)=0,\]

and \(h=\tilde{\mathcal{K}}_{(K_{\star}-K)}(g_{l,K})\in L^{1}((0,T);L^{\infty}(V;L^{1}(\mathbb{R})))\) by the regularity (44) of \(g_{l,K}\in C^{0}\left((0,T);L^{\infty}(V;L^{1}(\mathbb{R}))\right)\). This leads to

\[\begin{split}\max_{v}\int_{\mathbb{R}}|\bar{g}|\,\mathrm{d}x&\leq e^{2|V|C_{K}(T-t)}\int_{0}^{T-t}\max_{v}\|\tilde{\mathcal{K}}_{(K_{\star}-K)}(g_{l,K})(T-s,v)\|_{L^{1}(\mathbb{R})}\,\mathrm{d}s\\ &\leq 2|V|\|K_{\star}-K\|_{\infty}e^{2|V|C_{K}(T-t)}\int_{0}^{T-t}\max_{v}\|g_{l,K}(T-s,v)\|_{L^{1}(\mathbb{R})}\,\mathrm{d}s\\ &\leq\|K_{\star}-K\|_{\infty}\frac{C_{\mu}}{C_{K}}e^{2|V|C_{K}(T-t)}(e^{2C_{K}|V|(T-t)}-1),\end{split}\]

where we used (21) in the last line. In summary, one obtains

\[\begin{split}\left|\frac{\partial M_{l}(K_{\star})}{\partial K_{r,i}}-\frac{\partial M_{l}(K)}{\partial K_{r,i}}\right|&\leq\|K_{\star}-K\|_{\infty}\Bigg[4|V|C_{\phi}C_{\mu}e^{2C_{K}|V|T}\int_{0}^{T}t\,\mathrm{d}t\\ &\hskip 28.452756pt+2C_{\phi}\int_{0}^{T}e^{2|V|C_{K}t}\frac{C_{\mu}}{C_{K}}e^{2C_{K}|V|(T-t)}(e^{2C_{K}|V|(T-t)}-1)\,\mathrm{d}t\Bigg]\\ &\leq\|K_{\star}-K\|_{\infty}2C_{\phi}C_{\mu}e^{2C_{K}|V|T}\left(|V|T^{2}+\frac{1}{C_{K}}\left(\frac{e^{2C_{K}|V|T}-1}{2C_{K}|V|}-T\right)\right).\end{split}\]

Together with the boundedness of the gradient (20), this shows that the first summands in (15) are Lipschitz continuous in \(K\) around \(K_{\star}\), which concludes the proof of Proposition 3.1.
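The matrix-perturbation fact used throughout, \(|\lambda_{\min}(A)-\lambda_{\min}(A+E)|\leq\|E\|_{F}\) for symmetric matrices, is easy to verify numerically; the following small check (an illustration only, with arbitrary random data) confirms it on a random symmetric matrix and a small symmetric perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = A + A.T                          # symmetric stand-in for a Hessian
E = 1e-3 * rng.standard_normal((6, 6))
E = E + E.T                          # small symmetric perturbation

lam_A = np.linalg.eigvalsh(A).min()
lam_AE = np.linalg.eigvalsh(A + E).min()
# Weyl's inequality: |lam_min(A) - lam_min(A+E)| <= ||E||_2 <= ||E||_F.
assert abs(lam_A - lam_AE) <= np.linalg.norm(E, "fro") + 1e-12
```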
### 3.2 Ill-conditioning for close measurements

While a positive definite Hessian at \(K_{\star}\) guarantees local convergence, this positive definiteness disappears when the data are not prepared well. In particular, when a minimal number of measurements is considered and two measurements, say \(M_{1}(K)\) and \(M_{2}(K)\), become close, we will show that the Hessian degenerates, strong convexity is lost, and hence convergence to \(K_{\star}\) is no longer guaranteed. The closeness of two measurements can be quantified in different ways. For example, we can argue that the two measurements are close when the two test functions \(\mu_{1},\mu_{2}\) are close in the \(L^{1}\) sense. Or they can be close if the measurement readings are taken at two nearby locations. In this case, \(\mu_{1}\) and \(\mu_{2}\) can be taken as mollifications of direct Dirac-\(\delta\) readings of the density at \(x_{1}\) and \(x_{2}\), and the closeness is quantified by \(|x_{1}-x_{2}|\). We will study how the Hessian degenerates in these two scenarios. In both cases, we examine the two parts of (13) and evaluate their change as the two measurements get close. In particular, the application of Lemma 3.3 already suggests that the second part of (13) is negligible when \(K\) is close to \(K_{\star}\), so the rank structure of the Hessian is predominantly controlled by the first part, which is a sum of rank-1 matrices \(\nabla_{K}M_{l}(K)\otimes\nabla_{K}M_{l}(K)\). When two measurements (\(\mu_{1}\) and \(\mu_{2}\)) get close, we will argue that \(\nabla_{K}M_{1}(K)\) is almost parallel to \(\nabla_{K}M_{2}(K)\), so the Hessian loses at least one rank and strong convexity is lost. Mathematically, this means we need to show \(\|\nabla_{K}M_{1}(K)-\nabla_{K}M_{2}(K)\|_{2}\approx 0\) when \(\mu_{1}\approx\mu_{2}\) in the two senses spelled out above. Recalling (19), we have for every \(r\in\{1,\cdots,R\}\) and \(i\in\{1,2\}\)

\[\frac{\partial M_{1}(K)}{\partial K_{r,i}}-\frac{\partial M_{2}(K)}{\partial K_{r,i}}=\int_{0}^{T}\int_{I_{r}}f^{\prime}\big((g_{1}-g_{2})^{\prime}-(g_{1}-g_{2})\big)\,\mathrm{d}x\,\mathrm{d}t=\int_{0}^{T}\int_{I_{r}}f^{\prime}(\bar{g}^{\prime}-\bar{g})\,\mathrm{d}x\,\mathrm{d}t\,, \tag{22}\]

where \(\bar{g}:=g_{1}-g_{2}\) solves (10) with final condition \(\bar{g}(t=T,x,v)=\mu_{2}(x)-\mu_{1}(x)\). The bulk of the analysis in the two subsections below is therefore to quantify the smallness of (22) in terms of the smallness of \(\mu_{1}(x)-\mu_{2}(x)\).
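The mechanism behind this loss of rank can be seen in a few lines of linear algebra: when two of the \(2R\) gradient vectors coincide, the sum of rank-1 outer products can no longer have full rank. The following toy check (illustrative random values only) makes this visible:

```python
import numpy as np

rng = np.random.default_rng(1)
R = 3                                   # 2R = 6 unknowns, L = 2R measurements
grads = [rng.standard_normal(2 * R) for _ in range(2 * R)]

def min_eig(gs):
    # Sum of rank-1 outer products, mimicking the first part of (13).
    H = sum(np.outer(g, g) for g in gs)
    return np.linalg.eigvalsh(H).min()

print(min_eig(grads))                   # generically > 0: full rank
grads[0] = grads[1].copy()              # two "measurements" coincide
print(min_eig(grads))                   # ~0: the Hessian loses a rank
```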
#### 3.2.1 \(L^{1}\) measurement closeness

The following proposition states the loss of strong convexity as \(\mu_{2}-\mu_{1}\to 0\) in \(L^{1}(\mathbb{R})\). In particular, the requirement of Proposition 3.1 that \(H_{K}\mathcal{C}(K_{\star})\) be positive definite is no longer satisfied, so the local well-posedness of the optimization problem, and thus the convergence of the algorithm, can no longer be guaranteed.

**Proposition 3.2**.: _Let Assumption 1 hold. Then, as \(\mu_{1}^{(m)}\xrightarrow{m\to\infty}\mu_{2}\) in \(L^{1}(\mathbb{R})\), one eigenvalue of the Hessian \(H_{K}\mathcal{C}(K_{\star})\) vanishes._

This proposition immediately allows us to prove scenario 1 in Theorem 3.2:

Proof of Theorem 3.2.: Proposition 3.2 establishes that one eigenvalue of \(H_{K}\mathcal{C}(K_{\star})\) vanishes as \(m\to\infty\). This lack of positive definiteness, and thus of strong convexity of \(\mathcal{C}\) around \(K_{\star}\), means that it cannot be guaranteed that minimizing sequences of \(\mathcal{C}\) converge to \(K_{\star}\).

We now give the proof of the proposition.

Proof.: As argued above, we show that \(\|\nabla_{K}M_{1}^{(m)}(K)-\nabla_{K}M_{2}(K)\|_{2}\to 0\) as \(m\to\infty\). Recalling (22), we need to show:

\[\frac{\partial M_{1}^{(m)}(K)}{\partial K_{r,i}}-\frac{\partial M_{2}(K)}{\partial K_{r,i}}\xrightarrow{m\to\infty}0\quad\forall(r,i)\in\{1,\cdots,R\}\times\{1,2\}\,, \tag{23}\]

where \(\bar{g}:=g_{1}-g_{2}\) solves (10) with final condition \(\bar{g}(t=T,x,v)=\mu_{2}(x)-\mu_{1}^{(m)}(x)\). Application of Lemma B.2 gives

\[\|\bar{g}(t)\|_{L^{\infty}(V;L^{1}(\mathbb{R}))}\leq e^{2C_{K}|V|(T-t)}\|\mu_{2}-\mu_{1}^{(m)}\|_{L^{\infty}(V;L^{1}(\mathbb{R}))}=e^{2C_{K}|V|(T-t)}\|\mu_{2}-\mu_{1}^{(m)}\|_{L^{1}(\mathbb{R})}.\]

Plug this into (22) and estimate \(f\) by (40) to obtain

\[\left|\frac{\partial(M_{1}^{(m)}-M_{2})(K)}{\partial K_{r,i}}\right|\leq 2C_{\phi}\int_{0}^{T}e^{2C_{K}|V|t}\|\bar{g}(t)\|_{L^{\infty}(V;L^{1}(\mathbb{R}))}\,\mathrm{d}t\leq 2C_{\phi}e^{2C_{K}|V|T}T\|\mu_{2}-\mu_{1}^{(m)}\|_{L^{1}(\mathbb{R})}.\]

Since every entry \((r,i)\) converges, the gradient difference vanishes: \(\|\nabla_{K}M_{1}^{(m)}(K)-\nabla_{K}M_{2}(K)\|_{2}\to 0\) as \(m\to\infty\). We use this fact to show the degeneracy of the Hessian. Note that

\[H_{K}\mathcal{C}(K_{\star})=\underbrace{\left[\sum_{l=3}^{2R}\nabla M_{l}\otimes\nabla M_{l}+2\nabla M_{2}\otimes\nabla M_{2}\right]}_{A}+\underbrace{\left[\nabla M_{1}^{(m)}\otimes\nabla M_{1}^{(m)}-\nabla M_{2}\otimes\nabla M_{2}\right]}_{B^{(m)}}\,.\]

It is straightforward to see that the rank of \(A\) is at most \(2R-1\), so the \(j\)-th largest eigenvalue vanishes, \(\lambda_{j}(A)=0\), for some \(j\). Moreover, since \(\|\nabla_{K}M_{1}^{(m)}(K)-\nabla_{K}M_{2}(K)\|_{2}\to 0\), we have \(\|B^{(m)}\|_{F}\to 0\). Using the continuity of the eigenvalues with respect to a perturbation of the matrix, the \(j\)-th largest eigenvalue of \(H_{K}\mathcal{C}(K_{\star})\) vanishes:

\[|\lambda_{j}(H_{K}\mathcal{C}(K_{\star}))|=|\lambda_{j}(H_{K}\mathcal{C}(K_{\star}))-\lambda_{j}(A)|\leq\|B^{(m)}\|_{F}\to 0,\quad\text{as }m\to\infty\,.\]

#### 3.2.2 Pointwise measurement closeness

We now study the second scenario of Theorem 3.2 and consider \(\mu_{1}\), \(\mu_{2}\) as mollifications of singular pointwise measurements. For this purpose, let \(\xi\in C_{c}^{\infty}\left(\mathbb{R}\right)\) be a smooth function, compactly supported in the unit ball \(B_{1}(0)\), with \(0\leq\xi\leq 1\) and \(\xi(0)=1\). In the following, we consider the measurement test functions

\[\mu_{i}^{\eta}(x)=\frac{1}{\eta}\xi\left(\frac{x-x_{i}}{\eta}\right),\quad i=1,2. \tag{24}\]

Our aim is to show that the assertion of Theorem 3.2 holds independently of the mollification parameter \(\eta>0\). This shows that in the limit \(\eta\to 0\), i.e. in the pointwise measurement case, we still lose strong convexity around \(K_{\star}\).

**Proposition 3.3**.: _Let \(\mu_{1}^{\eta},\mu_{2}^{\eta}\) be of the form (24) with measurement locations \(x_{2}\notin\{a_{r}\}_{r=1,\ldots,R}\) for the partition of \(\mathbb{R}\) from (3). Consider a small neighbourhood of \(K_{\star}\), and let Assumption 1 hold.
Additionally, let the measurement time \(T\) and the locations be chosen such that_

\[\left(e^{T|V|C_{K}}-1\right)<1,\qquad\min_{r}|x_{2}-a_{r}|-T>\eta_{0}>0.\]

_If the initial condition \(\phi\) is uniformly continuous in \(x\), uniformly in \(v\), then \(\nabla_{K}M_{1}(K)\to\nabla_{K}M_{2}(K)\) as \(x_{1}\to x_{2}\) in the standard Euclidean norm, and the convergence is independent of \(\eta\leq\eta_{0}\)._

This proposition explains the breakdown of well-posedness presented in Theorem 3.2 in the second scenario. Since the proof of the theorem is rather similar to that of the first scenario, we omit it here. Similarly to the previous scenario, we need to show the smallness of the gradient difference (22). This time, we have to distinguish two sources of smallness: for the singular parts of the adjoint \(\bar{g}\), the smallness of the corresponding gradient difference is generated by testing them against a sufficiently regular \(f\) at close measurement locations, so it is small in a weak sense. The regular part \(\bar{g}_{>N}\) of \(\bar{g}\) represents the difference between \(\bar{g}\) and its singular parts and evolves from the integral operator on the right-hand side of (10), which exhibits a diffusive effect. Smallness is obtained by adjusting the cut-off regularity \(N\). Let us mention, however, that the time constraint is mostly imposed for technical reasons. In order to bound the size of the regular part of the adjoint solution, we use the plain Gronwall inequality, which leads to an exponential growth that we counterbalance by a small measurement time \(T\). The spatial requirement \(\min_{r}|x_{2}-a_{r}|-T>\eta_{0}>0\) reflects the fact that we need the measurement blob (the support of \(\mu\)) to be well centered inside the constant pieces of the piecewise-constant function \(K\). This forces the measurement to pick up only the information from that particular piece. This specific design will be discussed further in Section 4. To put the above considerations into a mathematical framework, we deploy the singular decomposition approach and decompose

\[\bar{g}=\sum_{n=0}^{N}\bar{g}_{n}+\bar{g}_{>N}, \tag{25}\]

where the regularity of \(\bar{g}_{n}\) increases with \(n\). Here, we define \(\bar{g}_{0}\) as the solution to

\[-\partial_{t}\bar{g}_{0}-v\cdot\nabla_{x}\bar{g}_{0}=-\sigma\bar{g}_{0}\,,\qquad\bar{g}_{0}(t=T,x,v)=\mu_{2}^{\eta}(x)-\mu_{1}^{\eta}(x)\,,\]

for \(\sigma(x,v)\coloneqq\int_{V}K(x,v^{\prime},v)\,\mathrm{d}v^{\prime}\), and the \(\bar{g}_{n}\) are inductively defined by

\[-\partial_{t}\bar{g}_{n}-v\cdot\nabla_{x}\bar{g}_{n}=-\sigma\bar{g}_{n}+\tilde{\mathcal{L}}(\bar{g}_{n-1})\,,\qquad\bar{g}_{n}(t=T,x,v)=0\,, \tag{26}\]

where we used the notation \(\tilde{\mathcal{L}}(\bar{g})\coloneqq\int K(x,v^{\prime},v)\bar{g}(x,t,v^{\prime})\,\mathrm{d}v^{\prime}\). The remainder \(\bar{g}_{>N}\) satisfies

\[-\partial_{t}\bar{g}_{>N}-v\cdot\nabla_{x}\bar{g}_{>N}=-\sigma\bar{g}_{>N}+\tilde{\mathcal{L}}(\bar{g}_{N}+\bar{g}_{>N})\,,\qquad\bar{g}_{>N}(t=T,x,v)=0\,. \tag{27}\]

It is a straightforward calculation that

\[(22)=\sum_{n=0}^{N}\int_{0}^{T}\int_{I_{r}}f^{\prime}(\bar{g}_{n}^{\prime}-\bar{g}_{n})\,\mathrm{d}x\,\mathrm{d}t+\int_{0}^{T}\int_{I_{r}}f^{\prime}(\bar{g}_{>N}^{\prime}-\bar{g}_{>N})\,\mathrm{d}x\,\mathrm{d}t\,. \tag{28}\]

We will show, in the two lemmas below, that both terms are small when \(x_{1}\to x_{2}\). To be more specific:

**Lemma 3.7**.: _Let the assumptions of Proposition 3.3 be satisfied.
For any \(\varepsilon>0\) and any \(n\in\mathbb{N}_{0}\), there exists a \(\delta_{n}(\varepsilon)>0\) such that_

\[\left|\int_{0}^{T}\int_{I_{r}}f^{\prime}\bar{g}_{n}\,\mathrm{d}x\,\mathrm{d}t\right|\leq\varepsilon\,,\quad\text{if}\quad|x_{1}-x_{2}|<\delta_{n}(\varepsilon)\,. \tag{29}\]

The remainder can be bounded similarly.

**Lemma 3.8**.: _Under the assumptions of Proposition 3.3, one has_

\[\left|\int_{0}^{T}\int_{I_{r}}f^{\prime}\bar{g}_{>N}\,\mathrm{d}x\,\mathrm{d}t\right|\leq T^{2}|V|C_{K}C_{\phi}e^{2|V|C_{K}T}(e^{C_{K}|V|T}-1)^{N}C_{\mu},\]

_which becomes arbitrarily small for large \(N\)._

The proofs of both lemmas exploit the continuity of \(f\), guaranteed by the choice of \(\phi\), and the smallness of the higher-regularity components of \(g\). Since they are not central to the paper, we leave the details to Appendix C. The application of the two lemmas gives Proposition 3.3:

Proof of Proposition 3.3.: Let \(\varepsilon>0\). Because \(e^{C_{K}|V|T}-1<1\) by assumption, we can choose \(N\in\mathbb{N}\) large enough such that \(2T^{2}|V|C_{K}C_{\phi}e^{2|V|C_{K}T}\big(e^{C_{K}|V|T}-1\big)^{N}C_{\mu}<\frac{\varepsilon}{2}\). Furthermore, let \(|x_{1}-x_{2}|<\min_{n\leq N}\delta_{n}(\frac{\varepsilon}{4(N+1)})\). Then, with the triangle inequality and Lemmas 3.7 and 3.8, we obtain from (28)

\[\begin{split}\left|\frac{\partial(M_{1}-M_{2})(K)}{\partial K_{r,i}}\right|&\leq\sum_{n=0}^{N}\left|\int_{0}^{T}\int_{I_{r}}f^{\prime}(\bar{g}_{n}^{\prime}-\bar{g}_{n})\,\mathrm{d}x\,\mathrm{d}t\right|+\left|\int_{0}^{T}\int_{I_{r}}f^{\prime}(\bar{g}_{>N}^{\prime}-\bar{g}_{>N})\,\mathrm{d}x\,\mathrm{d}t\right|\\ &\leq 2(N+1)\frac{\varepsilon}{4(N+1)}+2T^{2}|V|C_{K}C_{\phi}e^{2|V|C_{K}T}\big(e^{C_{K}|V|T}-1\big)^{N}C_{\mu}\leq\varepsilon\,.\end{split}\]

## 4 Experimental Design

As discussed in the previous sections, it is clear that different setups bring different conditioning to the inverse problem. We now study a particular design for which well-posedness can be ensured. To be more specific, Proposition 3.1 requires positive definiteness of the Hessian at \(K_{\star}\). This is a strong assumption and is typically not true unless certain initial conditions and measurement setups are in place. We propose to use the following:

**Design (D)**.: _We divide the domain \(I=[a_{0},a_{R})\) into \(R\) intervals \(I=\bigcup_{r=1}^{R}I_{r}\) with \(I_{r}=[a_{r-1},a_{r})\), and denote the center of each interval by \(a_{r-1/2}:=\frac{a_{r-1}+a_{r}}{2}\). The spatial supports of the values \(K_{r}(v,v^{\prime})\) take the form of (3). The design is:_

* _initial condition_ \(\phi(x,v)=\sum_{r=1}^{R}\phi_{r}(x)\) _is a sum of_ \(R\) _positive functions_ \(\phi_{r}\) _that are compactly supported in_ \(a_{r-1/2}+[-d,d]\) _with_ \(d<\min_{r}\left(\frac{a_{r}-a_{r-1}}{4}\right)\)_, symmetric and monotonically decreasing in_ \(|x-a_{r-1/2}|\) _(for instance, a centered Gaussian with a cut-off tail);_
* _measurement test functions_ \(\mu_{l_{i}^{r}}=\bar{C}_{\mu}\mathds{1}_{[(-1)^{i}T-d_{\mu},(-1)^{i}T+d_{\mu}]+a_{r-1/2}}\)_,_ \(i=1,2\)_, for some_ \(\bar{C}_{\mu}>0\)_, centered around_ \(a_{r-1/2}\pm T\) _with_ \(d_{\mu}\leq d\)_;_
* _measurement time_ \(T\) _such that_ \[T<\min\left((1-\delta)\frac{0.09}{C_{K}|V|},\min_{r}\left(\frac{a_{r}-a_{r-1}}{4}-\frac{d}{2}\right)\right)\] (30) _for_ \[\delta=(d+d_{\mu})/T<e^{-TC_{K}|V|}.\] (31)
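For concreteness, the initial bumps and indicator test functions of Design (D) can be assembled on a grid as follows (a sketch under our own conventions; the Gaussian width \(d/3\) is an arbitrary illustrative choice, and the cut-off tail is imposed by the indicator factor):

```python
import numpy as np

def design_D(a, T, d, d_mu, x, C_mu=1.0):
    """Initial condition and test functions of Design (D) on a grid x,
    for interval endpoints a[0] < ... < a[R]."""
    centers = 0.5 * (a[:-1] + a[1:])              # the midpoints a_{r-1/2}
    # Gaussian bumps with cut-off tails, centred in each interval I_r,
    # symmetric and decreasing in |x - a_{r-1/2}|.
    phi = sum(np.exp(-((x - c) / (d / 3)) ** 2) * (np.abs(x - c) <= d)
              for c in centers)
    # Indicator test functions of half-width d_mu, centred at a_{r-1/2} +/- T.
    mus = [C_mu * (np.abs(x - (c + s * T)) <= d_mu).astype(float)
           for c in centers for s in (+1.0, -1.0)]
    return phi, mus
```

The test functions sit where the ballistic part of each bump arrives at time \(T\), so each pair of measurements "listens" to exactly one interval \(I_{r}\).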
Requirement (30) prescribes that \(T\) must not be too large. On the other hand, (31) requires that it must not be too small compared to \(d\) and \(d_{\mu}\). An exemplary choice \(d=d_{\mu}=cT^{2}\) for some \(c>0\) automatically verifies requirement (31) for small enough \(T\). This particular design of initial data and measurements responds to the fact that the equation has characteristics and particles move along these trajectories. The measurement is set up to single out the information we would like to reconstruct along the propagation. A visualization of this design is given in Figure 1. Under this design, we have the following proposition:

**Proposition 4.1**.: _The design (D) decouples the reconstruction of \(K_{r}\). To be more specific, recall (4)_ \[K=\left[K_{r}\right],\quad\text{with}\quad K_{r}=\left[K_{r,1},K_{r,2}\right].\] _The Hessian \(H_{K}\mathcal{C}\) has a block-diagonal structure, where each block is a \(2\times 2\) matrix given by the Hessian \(H_{K_{r}}\mathcal{C}\)._

Proof.: By the linearity of (1) and (10), their solutions \(f=\sum_{r=1}^{R}f_{r}\) and \(g=\sum_{r=1}^{R}\sum_{i=1}^{2}g_{l_{i}^{r}}\) decompose into solutions \(f_{r}\) of (1) with initial conditions \(\phi_{r}\), and \(g_{l_{i}^{r}}\) with final conditions \(-(M_{l_{i}^{r}}-y_{l_{i}^{r}})\mu_{l_{i}^{r}}/(2R)\), the summands of the final condition (11), correspondingly. By construction of \(T\) and the constant speed of propagation \(|v|=1\), the spatial supports of \(f_{r}\), \(g_{l_{1}^{r}}\) and \(g_{l_{2}^{r}}\) are fully contained in \(I_{r}\) for all \(t\in[0,T],v\in V\). As such, only \(f_{r}\) and \(g_{l_{i}^{r}}\) carry information about \(K_{r}\), and none about the other \(K_{s}\) with \(s\neq r\).

This not only makes boundary conditions superfluous, but also translates the problem of finding the \(2R\)-dimensional vector \(K\) into \(R\) individual smaller problems of finding the two-constant pairs \((K_{r,1},K_{r,2})\) within \(I_{r}\). This comes at the cost of prescribing very detailed measurements depending on the experimental scales \(I_{r}\) and \(d\), but opens the door for parallelized computation. Furthermore, under mild conditions, this design ensures the local reconstructability of the inverse problem.

**Theorem 4.2**.: _Let Assumption 1 hold. Given that the Hessian \(H_{K}M_{l}(K)\) is bounded in Frobenius norm in a neighbourhood of \(K_{*}\), Design (D) generates a locally well-posed optimization problem (7)._

The proof is laid out in Subsection 4.1.

_Remark 4.3_.: Let us mention that the bounds for \(T\) in Design (D) are not optimal. In the proof of Theorem 4.2 we use crude estimates, and we believe these can potentially be relaxed to allow for longer measurement times \(T\). Furthermore, the setup can easily be modified to use, for instance, different measurement times for different intervals \(I_{r}\). In this case, again, the bounds on \(T\) can be relaxed.

_Remark 4.4_.: Design (D) shares similarities with the theoretical reconstruction setting in Hellmuth et al. (2022), where a pointwise reconstruction of a continuous kernel \(\tilde{K}\) was obtained by a sequence of experiments in which the measurement time \(T\) becomes small and the measurement location approaches the initial location. The same situation is seen here: as we refine the discretization of the underlying \(K\)-function using a higher-dimensional vector, the measurement time has to be shortened to honor the refined discretization.
However, we should also note the difference: in Hellmuth et al. (2022), we studied the problem in higher dimensions and thus explicitly excluded the ballistic part of the data from the measurement.

### Proof of Theorem 4.2

Given Theorem 3.1, it remains to prove \(H_{K}\mathcal{C}(K_{\star})>0\). As the Hessian attains a block-diagonal structure (Proposition 4.1), we are to study the \(2\times 2\) blocks \[H_{K_{r}}\mathcal{C}(K_{\star})=\nabla_{K_{r}}M_{l_{1}^{r}}(K_{\star})\otimes\nabla_{K_{r}}M_{l_{1}^{r}}(K_{\star})+\nabla_{K_{r}}M_{l_{2}^{r}}(K_{\star})\otimes\nabla_{K_{r}}M_{l_{2}^{r}}(K_{\star}). \tag{32}\] Here the two measurements \(M_{l_{1}^{r}},\ M_{l_{2}^{r}}\) are inside \(I_{r}\), and \(\nabla_{K_{r}}=[\partial_{K_{r,1}},\partial_{K_{r,2}}]\). The positive definiteness of the full \(H_{K}\mathcal{C}(K_{\star})\) is equivalent to the positive definiteness of each individual block \(H_{K_{r}}\mathcal{C}(K_{\star})\). This is established in the subsequent proposition.

**Proposition 4.2**.: _Let Assumption 1 hold. If the Hessian \(H_{K}M_{l}(K)\) is bounded in Frobenius norm in a neighbourhood of \(K_{\star}\), then Design (D) produces a positive-definite Hessian \(H_{K}\mathcal{C}(K_{\star})\)._

According to (32), \(H_{K_{1}}\mathcal{C}(K_{\star})\) is positive definite if \[\left|\frac{\partial M_{1}(K_{\star})}{\partial K_{1,1}}\right|>\left|\frac{\partial M_{1}(K_{\star})}{\partial K_{1,2}}\right|\quad\text{and}\quad\left|\frac{\partial M_{2}(K_{\star})}{\partial K_{1,1}}\right|<\left|\frac{\partial M_{2}(K_{\star})}{\partial K_{1,2}}\right| \tag{33}\] holds true for the measurements \(M_{1},M_{2}\) corresponding to \(K_{1}\). Due to the symmetry of the design, it is sufficient to study the first inequality. Consider the difference \(\frac{\partial M_{1}(K_{\star})}{\partial K_{1,1}}-\frac{\partial M_{1}(K_{\star})}{\partial K_{1,2}}\). Similar to (25) and (28), we decompose the equations for \(f\) and \(g\) ((1) and (18) respectively, with \(K=K_{\star}\)) into the ballistic parts \(g_{1}^{(0)}\) and \(f^{(0)}\) and the remainder terms. Namely, let \(g_{1}^{(0)}\) and \(f^{(0)}\) satisfy \[\begin{cases}-\partial_{t}g_{1}^{(0)}-v\cdot\nabla_{x}g_{1}^{(0)}&=-\sigma g_{1}^{(0)}\\ g_{1}^{(0)}(t=T,x,v)&=\mu_{1}(x)\end{cases}\quad\text{and}\quad\begin{cases}\partial_{t}f^{(0)}+v\cdot\nabla_{x}f^{(0)}&=-\sigma f^{(0)}\\ f^{(0)}(t=0,x,v)&=\phi(x,v).\end{cases} \tag{34}\] Then the following two lemmas hold with \(\mu_{1}(x)\) and \(\phi(x,v)\) as in Design (D).

**Lemma 4.5**.: _In the setting of Proposition 4.2, for \((v,v^{\prime})=(+1,-1)\), the ballistic part_ \[B:=\left|\int_{0}^{T}\int_{I_{1}}f^{(0)}(v^{\prime})(g_{1}^{(0)}(v^{\prime})-g_{1}^{(0)}(v))\,dx\,dt\right|-\left|\int_{0}^{T}\int_{I_{1}}f^{(0)}(v)(g_{1}^{(0)}(v)-g_{1}^{(0)}(v^{\prime}))\,dx\,dt\right| \tag{35}\] _satisfies_ \[B\geq C_{\phi\mu}\left(e^{-TC_{K}|V|}T-(d_{\mu}+d)\right)>0, \tag{36}\] _where \(C_{\phi\mu}=\int_{I_{1}}\phi_{1}(x)\mu_{1}(-T+x)\,dx=\max_{a,b}\int_{I_{1}}\phi_{1}(x+a)\mu_{1}(-T+x+b)\,dx\) by construction of \(\phi_{1},\mu_{1}\)._

At the same time, the remainder term is small.
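Before bounding the remainder, let us record why the strict inequalities (33) are the right target. Writing \(u=\nabla_{K_{1}}M_{1}(K_{\star})\) and \(w=\nabla_{K_{1}}M_{2}(K_{\star})\), the block (32) is the sum of two rank-one matrices \(u\otimes u+w\otimes w\), which is always positive semidefinite, and a direct computation gives
\[\det\left(u\otimes u+w\otimes w\right)=(u_{1}w_{2}-u_{2}w_{1})^{2},\]
so positive definiteness is equivalent to linear independence of the two gradients. The inequalities (33) state \(|u_{1}|>|u_{2}|\) and \(|w_{1}|<|w_{2}|\), which rules out proportionality.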
**Lemma 4.6**.: _In the setting of Proposition 4.2, the remaining scattering term_ \[S:=\int_{0}^{T}\int_{I_{1}}f(v^{\prime})\big{(}g_{1}(v^{\prime})-g_{1}(v)\big{)}\,dx\,dt-\int_{0}^{T}\int_{I_{1}}f^{(0)}(v^{\prime})\big{(}g_{1}^{(0)}(v^{\prime})-g_{1}^{(0)}(v)\big{)}\,dx\,dt\] _is bounded uniformly in \((v,v^{\prime})\) by_ \[|S|\leq 4C_{\phi\mu}T\frac{C_{K}|V|T}{(1-C_{K}|V|T)^{2}}. \tag{37}\]

Proposition 4.2 is a corollary of Lemmas 4.5 and 4.6.

Proof of Proposition 4.2.: By the bounds obtained in Lemmas 4.5 and 4.6, one has \[\left|\frac{\partial M_{1}(K_{\star})}{\partial K_{1,1}}\right|-\left|\frac{\partial M_{1}(K_{\star})}{\partial K_{1,2}}\right|\geq B-2|S|\] \[\geq C_{\phi\mu}\left(e^{-TC_{K}|V|}T-(d_{\mu}+d)\right)-8C_{\phi\mu}T\frac{C_{K}|V|T}{(1-C_{K}|V|T)^{2}}\] \[\geq C_{\phi\mu}T\left(1-TC_{K}|V|-\delta-8\frac{0.09(1-\delta)}{(1-0.09)^{2}}\right).\] By assumption \(0<T<(1-\delta)\frac{0.09}{C_{K}|V|}\) with \(\delta=\frac{d+d_{\mu}}{T}<1\), the last line is positive. In total, this shows the first part of inequality (33). As the second part can be treated in analogy, it follows that \(H_{K_{1}}\mathcal{C}(K_{\star})\) is positive definite.

Finally, Theorem 4.2 is a direct consequence of Proposition 4.2.

Proof of Theorem 4.2.: Repeated application of the arguments to all \(H_{K_{r}}\mathcal{C}(K_{\star})\), \(r=1,\ldots,R\), shows that \(H_{K}\mathcal{C}(K_{\star})>0\). Assuming boundedness of the Hessian \(H_{K}M_{l}(K)\) in a neighbourhood of \(K_{\star}\), Theorem 3.1 proves local well-posedness of the inverse problem.

The proofs of Lemmas 4.5 and 4.6 are rather technical, and we leave them to Appendix D. Here we only briefly present the intuition. According to Figure 1, \(f^{(0)}(v^{\prime}=-1)\) and \(g_{1}^{(0)}(v^{\prime}=-1)\) have a fairly large overlapping support, whereas \(g_{1}^{(0)}(v=+1)\) overlaps with \(f^{(0)}(v^{\prime}=-1)\), and \(g_{1}^{(0)}(v^{\prime}=-1)\) with \(f^{(0)}(v=+1)\), only for short time spans \(t\approx T\) and \(t\approx 0\), respectively. Recalling (35), we see that the negative components of the term \(B\) are small, making \(B\) positive overall. The smallness of \(S\) is a result of the small measurement time \(T\).

## 5 Numerical experiments

As a proof of concept for the predictions given by the theoretical results in Section 3, we present some numerical evidence. An explicit finite difference scheme is used for the discretization of (1) and (10). In particular, the transport operator is discretized by the Lax-Wendroff method and the operator \(\mathcal{K}\) is treated explicitly in time. The scheme is consistent and stable when \(\Delta t\leq\min(\Delta x,C_{K}^{-1})\), and thus it converges according to the Lax equivalence theorem. More sophisticated solvers for the forward model (Filbet and Yang, 2014) can be deployed when necessary. Also, when a compatible solver (Apel and Flaig, 2012) for the adjoint equation exists, such solver pairs can readily be incorporated in the inversion setting. All subsequent experiments were conducted with noise-free synthetic data \(y_{l}=M_{l}(K_{\star})\) that was generated by a forward computation with the true underlying parameter \(K_{\star}\).

### Illustration of well-posedness

In Section 4, it was suggested that a specific design of initial data and measurement mechanism can provide a successful reconstruction of the kernel \(K\), and that the loss function is expected to be strongly convex. We observe this numerically as well.
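Before turning to the results, we record the forward solver in code form. The following is a minimal sketch of one time step under our own simplifying assumptions: a periodic grid, the two-speed set \(V=\{-1,+1\}\) with unit quadrature weight, a single collision matrix per step, and the convention \(\mathcal{K}(f)=\mathcal{L}(f)-\sigma f\) from Appendix D. All names are ours; this is an illustration, not the authors' implementation.

```python
import numpy as np

def step(f, K, v_set, dx, dt):
    """One explicit step for d_t f + v d_x f = L(f) - sigma f:
    Lax-Wendroff for transport, collision kernel treated explicitly in time.

    f: array (n_v, n_x); K: array (n_v, n_v) with K[i, j] ~ K(v_i, v_j).
    """
    f_new = np.empty_like(f)
    for i, v in enumerate(v_set):
        c = v * dt / dx                                 # CFL number, |c| <= 1 required
        fp, fm = np.roll(f[i], -1), np.roll(f[i], 1)    # periodic neighbours
        # Lax-Wendroff update for the pure transport part
        f_new[i] = f[i] - 0.5 * c * (fp - fm) + 0.5 * c**2 * (fp - 2.0 * f[i] + fm)
    dv = 1.0                                            # quadrature weight on V (our choice)
    gain = dv * K @ f                                   # L(f)(v)  = sum_{v'} K(v, v') f(v')
    sigma = dv * K.sum(axis=0)[:, None]                 # sigma(v) = sum_{v'} K(v', v)
    return f_new + dt * (gain - sigma * f)

# stability as stated above: dt <= min(dx, 1/C_K), e.g. dt = 0.9 * min(dx, 1.0 / C_K)
```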
In particular, we set \(R=20\), use Gaussian initial data, and plot the (marginal) loss function in Figure 2. Figure 3 depicts the convergence of some parameter values \(K_{r}(v,v^{\prime})\) in this scenario against the corresponding loss function value. An exponential decay of the loss function, as expected from theory (Polyak and Shcherbakov, 2017, Thm. 3), can be observed.

Figure 2: (Marginal) loss functions \(\mathcal{C}(K)\) for \(R=20\): for a fixed \(r\in\{2,9,13,15\}\), we plot \(\mathcal{C}\) as a function of \(K_{r}\) with all \(K_{s\neq r}\) set to the groundtruth \((K_{\star})_{s}\).

Figure 3: Convergence of the parameter values \(K_{r}(v,v^{\prime})\) from (3) for \(r=2,9,13,15\) to the ground truth as the cost function converges.

The strict positive-definiteness persists in a small neighborhood of the optimal solution \(K_{\star}\). This means that, after adding a small perturbation to \(K_{\star}\), the minimal eigenvalue of the Hessian matrix \(H_{K}\mathcal{C}(K)\) stays above zero. In Figure 4 we present, for two distinct experimental setups, the minimum eigenvalue as a function of the perturbation to \(K_{r}(v,v^{\prime})\). In both cases, the green spot (the groundtruth) is positive, and it enjoys a small neighborhood where the minimum eigenvalue is also positive, as predicted by Theorem 3.1. In the right panel, we do observe that, as one moves away from the groundtruth, the minimal eigenvalue takes on a negative value, suggesting the loss of convexity. This numerically verifies that the well-posedness result in Theorem 3.1 is local in nature. The panel on the left deploys the experimental design provided by Section 4. The simulation is run over the entire domain \([0,1]^{2}\) and the positive-definiteness persists throughout the domain, hinting that the proposed experimental design (D) can potentially be globally well-posed. To generate the plots, a simplified setup with \(R=2\) and constant initial data was considered.

Figure 4: Minimal eigenvalues of the Hessian \(H_{K}\mathcal{C}(K)\) around the true parameter \(K_{\star}\) for two experimental designs. We perturb \(K\) by changing values in \(K_{1}(1,-1)\) and \(K_{2}(-1,1)\). The groundtruth is marked green in both plots.

### Ill-conditioning for close measurement locations

We now provide numerical evidence to reflect the assertion of Theorem 3.2; in particular, the strong convexity of the loss function is lost if the measurement location \(x_{1}\) becomes close to \(x_{2}\). We summarize the numerical evidence in Figure 5. Here we still use \(R=20\), now with constant initial data, and vary the detector positions. To be specific, we assign values to \(x_{1}\) from \(\{x_{1}^{(0)}=c_{1}-T\,,x_{1}^{(1)}=c_{1}+\frac{T}{2}\,,x_{1}^{(2)}=c_{1}+\frac{4}{5}T\,,x_{1}^{(3)}=x_{2}=c_{1}+T\}\). As the superindex grows, \(x_{1}\to x_{2}\), with \(x_{1}^{(3)}=x_{2}\) when the two measurements exactly coincide. For \(x_{1}=x_{2}\), the cost function is no longer strongly convex around the ground truth \(K_{\star}\), as its Hessian is singular. The resulting vanishing learning rate \(\eta=\frac{2\lambda_{\min}}{\lambda_{\max}^{2}}\) was therefore exchanged for the learning rate corresponding to \(x_{1}=x_{1}^{(2)}\) in this case, to observe the effect of the gradient.
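The descent loop used for these reconstructions can be summarized as follows; `grad_C` stands for an (assumed) evaluation of the adjoint-based gradient (9), and the fallback learning rate as well as all names are our own illustrative choices.

```python
import numpy as np

def reconstruct(K0, grad_C, H_opt=None, n_iter=500):
    """Plain gradient descent for the loss C(K). H_opt, if given, is the Hessian
    H_K C(K_*) used to set the learning rate eta = 2 lambda_min / lambda_max^2."""
    K = K0.copy()
    if H_opt is not None:
        lam = np.linalg.eigvalsh(H_opt)
        eta = 2.0 * lam.min() / lam.max() ** 2   # vanishes when H_opt is singular
    else:
        eta = 1e-2                               # fallback learning rate (our choice)
    trajectory = [K.copy()]
    for _ in range(n_iter):
        K = K - eta * grad_C(K)
        trajectory.append(K.copy())
    return K, trajectory
```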
In the first, third and fourth panels of Figure 5, we observe that the cost function as well as the parameter reconstructions for \(K_{9}\) and \(K_{15}\) nevertheless converge, but with convergence rates that slow down significantly when comparing purple (for \(x_{1}^{(0)}\)), blue (for \(x_{1}^{(1)}\)), green (for \(x_{1}^{(2)}\)) and orange (for \(x_{1}^{(3)}\)), due to the smaller learning rates. The overlap of the parameter reconstructions for \(x_{1}\in\{x_{1}^{(2)},x_{1}^{(3)}\}\) is due to the coinciding choice of the learning rate and a very similar gradient for the parameters \(K_{9},K_{15}\), whose information is not reflected in the measurement at \(x_{1}\).

As the parameter \(K_{1}\) directly affects the measurement at \(x_{1}\), the second panel showcases the degenerating effect of the different choices of \(x_{1}\) on the reconstruction. Whereas convergence is still obtained for the blue curve (for \(x_{1}^{(1)}\)), the reconstructions of \(K_{1}\) from measurements at \(x_{1}^{(2)}\) (green) and \(x_{1}^{(3)}\) (orange) clearly fail to converge to the true parameter value in black. This offset seems to grow with stronger degeneracy in the measurements.

## 6 Discussion

In this paper we present an optimization framework for the reconstruction of the velocity-jump parameter \(K\) in the chemotaxis equation (1) using velocity-averaged measurements (5) from the interior of the domain. In the numerical setting where PDE-constrained optimization is deployed, depending on the experimental setup, the problem can be either locally well-posed or ill-conditioned. We further propose a specific experimental design that is adaptive to the discretization of \(K\). This design decouples the reconstruction of local values of the parameter \(K\) using the corresponding measurements, and thus opens up opportunities for parallelization. As a proof of concept, numerical evidence was presented, and it is in good agreement with the theoretical predictions.

A natural extension of the results presented in the current paper is the algorithmic performance in higher dimensions. The theoretical findings seem to apply in a straightforward manner, but the details need to be evaluated. Numerically, one can certainly also refine the solver implementation. For example, it is possible that higher-order, structure-preserving numerical PDE solvers bring extra benefit. More sophisticated optimization methods such as (Quasi-)Newton methods or Sequential Quadratic Programming are possible alternatives for conducting the inversion (Burger and Muhlhuber, 2002; Haber et al., 2000; Ren, 2010; Smyl et al., 2021). Furthermore, we adopted a first-optimize-then-discretize approach in this article. As suggested in Apel and Flaig (2012), Gunzburger (2002) and Liu and Wang (2019), a first-discretize-then-optimize framework could bring automatic compatibility of the forward and adjoint solvers, but extra difficulties (Hinze et al., 2008) need to be resolved.

Our ultimate goal is to form a collaboration with practitioners to solve the real-world problem of bacterial motion reconstruction (Le, 2002). To that end, experimental design is unavoidable. A class of criteria proposed under the Bayesian framework sheds light on this question; see Alexanderian (2021) and references therein. In our context, the question translates to whether the design proposed in Section 4 satisfies these established optimality criteria.

## Appendix A Derivation of the gradient (9)

This section justifies formula (9) for the gradient of the cost function \(\mathcal{C}\) with respect to \(K\).
Let us first introduce some notation: Denote by \[\mathcal{J}(f)\coloneqq\frac{1}{2L}\sum_{l=1}^{L}\left(\int_{\mathbb{R}}\int_{V}f(T,x,v)\,\mathrm{d}v\,\,\mu_{l}(x)\,\mathrm{d}x-y_{l}\right)^{2}\] the loss for \(f\in\mathcal{Y}=\{h\mid h,\partial_{t}h+v\cdot\nabla h\in C^{0}([0,T];L^{\infty}(\mathbb{R}\times V))\}\). Note that mild solutions of (1) are contained in \(\mathcal{Y}\), since \(\mathcal{K}(f)\in C^{0}([0,T];L^{\infty}(\mathbb{R}\times V))\) by the regularity of \(f\) from Lemma B.1. Then \(\mathcal{C}(K):=\mathcal{J}(f_{K})\) in the notation of (5).

The Lagrangian function for the PDE-constrained optimization problem (7) reads \[\mathcal{L}(K,f,g,\lambda)=\mathcal{J}(f)+(g,\partial_{t}f+v\cdot\nabla f-\mathcal{K}(f))_{x,v,t}+\langle\lambda,f(t=0)-\phi\rangle_{x,v},\] for \(f\in\mathcal{Y}\) and \(g\in\mathcal{Z}=\{h\mid h,\partial_{t}h+v\cdot\nabla h\in C^{0}([0,T];L^{\infty}(V;L^{1}(\mathbb{R})))\}\). For \(f=f_{K}\), our cost function satisfies \(\mathcal{C}(K)=\mathcal{J}(f_{K})=\mathcal{L}(K,f_{K},g,\lambda)\) for any choice of \(g\) and \(\lambda\). To avoid the calculation of \(\frac{\partial f_{K}}{\partial K}\), choose the Lagrange multipliers \(g,\lambda\) such that \(\frac{\partial\mathcal{L}}{\partial f}\big{|}_{\begin{subarray}{c}K=\hat{K},\\ f=f_{\hat{K}}\end{subarray}}=0\). Then \[\frac{\mathrm{d}\mathcal{C}(\hat{K})}{\mathrm{d}K_{r}} =\frac{\partial\mathcal{L}}{\partial K_{r}}\bigg{|}_{\begin{subarray}{c}K=\hat{K},\\ f=f_{\hat{K}}\end{subarray}}=-\frac{\partial(g,\mathcal{K}_{K}(f))_{x,t,v}}{\partial K_{r}}\bigg{|}_{\begin{subarray}{c}K=\hat{K},\\ f=f_{\hat{K}}\end{subarray}}\] \[=\int_{0}^{T}\int_{I_{r}}f_{\hat{K}}(x,t,v^{\prime})\big{(}g(x,t,v^{\prime})-g(x,t,v)\big{)}\,\mathrm{d}x\,\mathrm{d}t.\]

To compute the gradient, \(g\) has to be specified. Recall the requirement \[\begin{split}0&=\frac{\partial\mathcal{L}}{\partial f}\bigg{|}_{\begin{subarray}{c}K=\hat{K},\\ f=f_{\hat{K}}\end{subarray}}\\ &=\frac{1}{L}\sum_{l=1}^{L}\left(\int_{\mathbb{R}}\int_{V}f(T)\,\mathrm{d}v\,\,\mu_{l}\,\mathrm{d}x-y_{l}\right)\frac{\partial}{\partial f}\left(\mu_{l},f(T)\right)_{x,v}\bigg{|}_{\begin{subarray}{c}K=\hat{K},\\ f=f_{\hat{K}}\end{subarray}}\\ &\qquad\qquad+\frac{\partial}{\partial f}\Bigg{[}(g,\partial_{t}f+v\cdot\nabla f-\mathcal{K}_{K}(f))_{x,t,v}+\langle\lambda,f(t=0)\rangle_{x,v}\Bigg{]}\bigg{|}_{\begin{subarray}{c}K=\hat{K},\\ f=f_{\hat{K}}\end{subarray}}\end{split} \tag{38}\]

We will choose \(g\) such that the derivative terms cancel each other. Because we are dealing with mild solutions, where integration by parts in time and space cannot be used right away, we approximate \(f\) and \(g\) by sequences of functions
* \((f_{n})_{n}\subset C^{1}([0,T];L^{\infty}(\mathbb{R}\times V))\cap C^{0}([0,T];W^{1,\infty}(\mathbb{R};L^{\infty}(V)))\) with \(f_{n}\to f\) and \(\partial_{t}f_{n}+v\cdot\nabla f_{n}\to\partial_{t}f+v\cdot\nabla f\) in \(C^{0}([0,T];L^{\infty}(\mathbb{R}\times V))\), and
* \((g_{n})_{n}\subset C^{1}([0,T];L^{\infty}(V;L^{1}(\mathbb{R})))\cap C^{0}([0,T];L^{\infty}(V;W^{1,1}(\mathbb{R})))\) with \(g_{n}\to g\) and \(-\partial_{t}g_{n}-v\cdot\nabla g_{n}\to-\partial_{t}g-v\cdot\nabla g\) in \(C^{0}([0,T];L^{\infty}(V;L^{1}(\mathbb{R})))\).

This is possible because the respective spaces for \(f_{n}\) and \(g_{n}\) are dense in \(\mathcal{Y}\) and \(\mathcal{Z}\).
Replacing \(f\) by \(f_{n}\) and \(g\) by \(g_{n}\) in \((g,\partial_{t}f+v\cdot\nabla f-\mathcal{K}(f))_{x,t,v}\), we obtain \[(g,\partial_{t}f+v\cdot\nabla f-\mathcal{K}(f))_{x,t,v}=\lim_{n}(g_{n},\partial_{t}f_{n}+v\cdot\nabla f_{n}-\mathcal{K}(f_{n}))_{x,t,v}\] \[=\lim_{n}\left((-\partial_{t}g_{n}-v\cdot\nabla g_{n}-\tilde{\mathcal{K}}(g_{n}),f_{n})_{x,t,v}+(f_{n}(t=T),g_{n}(t=T))_{x,v}-(f_{n}(t=0),g_{n}(t=0))_{x,v}\right)\] \[=(-\partial_{t}g-v\cdot\nabla g-\tilde{\mathcal{K}}(g),f)_{x,t,v}+\langle f(t=T),g(t=T)\rangle_{x,v}-\langle f(t=0),g(t=0)\rangle_{x,v},\] where \[\tilde{\mathcal{K}}_{K}(g)\coloneqq\int_{V}K(x,v^{\prime},v)(g(x,t,v^{\prime})-g(x,t,v))\,\mathrm{d}v^{\prime}.\]

Now, collect all terms in (38) with the same integration domain and equate them to \(0\). This leads to \[-\partial_{t}g-v\cdot\nabla g-\tilde{\mathcal{K}}_{K}(g)=0\qquad\text{in }x\in\mathbb{R},v\in V,t\in(0,T)\] \[g(x,t=T,v)=-\frac{1}{L}\sum_{l=1}^{L}\left(\int_{\mathbb{R}}\int_{V}f(T,x,v)\,\mathrm{d}v\ \mu_{l}(x)\,\mathrm{d}x-y_{l}\right)\mu_{l}(x)\qquad\text{in }x\in\mathbb{R},(v\in V)\] \[\lambda=g(t=0)\qquad\text{in }x\in\mathbb{R},v\in V.\] Note that since \(g\) reflects the measurement procedure, it makes sense that \(g(t=T)\) is isotropic in \(v\). For the computation of \(\frac{\mathrm{d}\mathcal{C}(\hat{K})}{\mathrm{d}K_{r}}\), use the solution \(g\) of the first two equations with kernel \(K=\hat{K}\) and \(f=f_{\hat{K}}\).

## Appendix B Some a-priori estimates

By Assumption 1, semigroup theory yields the existence of a mild solution to (1)-(2).

**Lemma B.1**.: _Let Assumption 1 hold and assume \(h\in L^{1}((0,T);L^{\infty}(\mathbb{R}\times V))\). Then there exists a mild solution_ \[f\in C^{0}\left([0,T];L^{\infty}(\mathbb{R}\times V)\right) \tag{39}\] _to_ \[\partial_{t}f+v\cdot\nabla_{x}f =\mathcal{K}(f)+h,\] \[f(t=0,x,v) =\phi(x,v)\in L^{\infty}_{+}(\mathbb{R}\times V)\] _that is bounded by_ \[\max_{v}\|f(t)\|_{L^{\infty}(\mathbb{R})}\leq e^{2|V|C_{K}t}C_{\phi}+\int_{0}^{t}e^{2|V|C_{K}(t-s)}\|h(s)\|_{L^{\infty}(\mathbb{R}\times V)}\,ds.\]

We carry out the proof once to make clear how the constant in the bound is derived.

Proof.: Rewrite (1) as \[\partial_{t}f=\mathcal{A}f+\mathcal{B}f+h\] with operators \(\mathcal{A}:\mathcal{D}(\mathcal{A})\to\mathcal{X},f\mapsto-v\cdot\nabla_{x}f\) and \(\mathcal{B}:\mathcal{X}\to\mathcal{X},f\mapsto\mathcal{K}(f)\), where the function spaces \(\mathcal{D}(\mathcal{A})\coloneqq W^{1,\infty}(\mathbb{R};L^{\infty}(V))\) and \(\mathcal{X}\coloneqq L^{\infty}(\mathbb{R}\times V)\) are used. The transport operator \(\mathcal{A}\) generates a strongly continuous semigroup \(T(t)u(x)=u(x-vt)\) with operator norm \(|T(t)|\leq 1\). Clearly, \(\mathcal{B}\) is bounded in operator norm by \(2|V|C_{K}\). The bounded perturbation theorem, see e.g. Engel and Nagel (2001), shows that \(\mathcal{A}+\mathcal{B}\) generates a strongly continuous semigroup \(S\) with \(\|S(t)\|\leq e^{2|V|C_{K}t}\). As \(\phi\in\mathcal{X}\), (1) admits a mild solution \[f(t)=S(t)\phi+\int_{0}^{t}S(t-s)h(s)\,\mathrm{d}s.\]

The regularity of the solution of (1)-(2) is improved by more regular initial data. This is exploited in the proof of ill-conditioning for close pointwise measurements in Theorem 3.2.

**Corollary B.1**.: _Let Assumption 1 hold._

1. _Equation (_1_) has a mild solution_ \(f\) _that is bounded by_ \[\max_{v}\|f(t)\|_{L^{\infty}(\mathbb{R})}\leq e^{2|V|C_{K}t}C_{\phi}\leq e^{2|V|C_{K}T}C_{\phi}=:C_{f}.\] (40) 2.
_If, additionally, the initial data_ \(\phi\) _is uniformly continuous in_ \(x\)_, uniformly in_ \(v\)_, then_ \(f\) _is uniformly continuous in_ \(x\)_, uniformly in_ \(v,t\)_, i.e._ \(\max_{v}|f(t,x,v)-f(t,y,v)|<\varepsilon\) _for all_ \(t\in[0,T]\) _if_ \(|x-y|<\delta(\varepsilon)\)_._

Proof.: Assertion a) is a direct consequence of Lemma B.1. We focus on proving assertion b). Let \(\varepsilon>0\). By uniform continuity of \(\phi\) in \(x\), one can choose \(\delta^{\prime}\) such that \[\operatorname*{ess\,sup}_{|x-y|<\delta^{\prime},v}|\phi(x,v)-\phi(y,v)|<e^{-2C_{K}|V|T}\varepsilon/2. \tag{41}\] Now consider \(\delta:=\min\left(\delta^{\prime},\frac{\varepsilon e^{-2C_{K}|V|T}}{8C_{f}|V|C_{K}(R-1)}\right)\). Integration along characteristics yields \[\operatorname*{ess\,sup}_{|x-y|<\delta,v}|f(t,x,v)-f(t,y,v)|\] \[\leq\operatorname*{ess\,sup}_{|x-y|<\delta,v}|\phi(x-vt,v)-\phi(y-vt,v)|\] \[\qquad+\operatorname*{ess\,sup}_{|x-y|<\delta,v}\bigg{|}\int_{0}^{t}\mathcal{K}(f)(t-s,x-vs,v)-\mathcal{K}(f)(t-s,y-vs,v)\,\mathrm{d}s\bigg{|}\] \[\leq\operatorname*{ess\,sup}_{|x-y|<\delta,v}|\phi(x,v)-\phi(y,v)|\] \[\qquad+2C_{K}|V|\int_{0}^{t}\operatorname*{ess\,sup}_{|x-y|<\delta,v^{\prime}}|f(s,x,v^{\prime})-f(s,y,v^{\prime})|\,\mathrm{d}s\] \[\qquad+2C_{f}|V|\operatorname*{ess\,sup}_{|x-y|<\delta,v}\int_{0}^{t}\max_{v^{\prime},v^{\prime\prime}}|K(x-vs,v^{\prime},v^{\prime\prime})-K(y-vs,v^{\prime},v^{\prime\prime})|\,\mathrm{d}s\] \[=:(i)+(ii)+(iii),\] where \((i)\leq\frac{\varepsilon}{2}e^{-2C_{K}|V|T}\) by (41). By symmetry, we may assume \(x\geq y\) in \((iii)\) at the cost of a factor of two, and write \((iii)=2\cdot(iv)\). As \(K\) is a step function in space (3), its difference can only be non-zero if a jump occurs between \(y-vs\) and \(x-vs\). The boundedness of \(K\) in (12) then leads to the estimate \[(iii)=2\cdot(iv)\leq 2\cdot 2C_{f}|V|\operatorname*{ess\,sup}_{|x-y|<\delta,v}\int_{0}^{t}C_{K}\sum_{r=1}^{R-1}\mathds{1}_{(y-vs,x-vs]}(a_{r})\,\mathrm{d}s \tag{42}\] \[\leq 4C_{f}|V|C_{K}(R-1)\delta\leq\frac{\varepsilon}{2}e^{-2C_{K}|V|T}.\] In summary, Gronwall's lemma yields \[\operatorname*{ess\,sup}_{|x-y|<\delta,v}|f(t,x,v)-f(t,y,v)|\leq\varepsilon e^{-2C_{K}|V|(T-t)}\leq\varepsilon.\]

Again, semigroup theory shows the existence of a mild solution to the adjoint equation (10), with corresponding bounds.

**Lemma B.2**.: _Let \(h\in L^{1}((0,T);L^{\infty}(V;L^{1}(\mathbb{R})))\), \(\psi\in L^{1}(\mathbb{R})\) and let (12) hold. Then the equation_ \[-\partial_{t}g-v\cdot\nabla_{x}g=\alpha\tilde{\mathcal{L}}(g)-\sigma g+h, \tag{43}\] \[g(t=T)=\psi(x)\] _with \(\alpha\in\{0,1\}\), \(\tilde{\mathcal{L}}(g):=\int K(x,v^{\prime},v)g(x,t,v^{\prime})\,dv^{\prime}\) and \(\sigma(x,v):=\int K(x,v^{\prime},v)\,dv^{\prime}\) has a mild solution_ \[g\in C^{0}\left([0,T];L^{\infty}(V;L^{1}(\mathbb{R}))\right) \tag{44}\] _that satisfies_ \[\|g(t)\|_{L^{\infty}(V;L^{1}(\mathbb{R}))}\leq e^{(1+\alpha)|V|C_{K}(T-t)}\left(\|\psi\|_{L^{1}(\mathbb{R})}+\int_{0}^{T-t}\max_{v}\|h(T-s,v)\|_{L^{1}(\mathbb{R})}\,ds\right). \tag{45}\]
_If, additionally, \(h\in L^{\infty}([0,T]\times V;L^{1}(\mathbb{R}))\), then_ \[\|g(t)\|_{L^{\infty}(V;L^{1}(\mathbb{R}))}\leq e^{(1+\alpha)|V|C_{K}(T-t)}\|\psi\|_{L^{1}(\mathbb{R})}+\frac{e^{(1+\alpha)|V|C_{K}(T-t)}-1}{(1+\alpha)|V|C_{K}}\operatorname*{ess\,sup}_{t,v}\|h(t,v)\|_{L^{1}(\mathbb{R})}. \tag{46}\]

Proof.: Repeating the arguments in the proof of Lemma B.1, semigroup theory yields the existence of a mild solution \[g(t)=S(T-t)\psi+\int_{0}^{T-t}S(T-t-s)h(T-s)\,\mathrm{d}s\] for the semigroup \(S(t)\) generated by the operator \(v\cdot\nabla_{x}+\alpha\tilde{\mathcal{L}}-\sigma\) with \(\|S(t)\|\leq e^{(1+\alpha)|V|C_{K}t}\). This yields (45) and (46).

## Appendix C Proof of Lemmas 3.7 and 3.8

In this section, we provide the proofs of the two lemmas from Section 3.2. In particular, Lemma 3.7 addresses the smallness of the first term in (28).

Proof of Lemma 3.7.: By the assumption on the initial data and Corollary B.1 b), \(f\) is uniformly continuous in \(x\), uniformly in \(v,t\). For \(n=0\), the boundedness (29) is a consequence of the explicit representation \[\bar{g}_{0}(t,x,v_{0})=e^{-\int_{0}^{T-t}\sigma(x+v_{0}\tau,v_{0})\,d\tau}(\mu_{2}^{\eta}-\mu_{1}^{\eta})(x+v_{0}(T-t)) \tag{47}\] together with the step-function shape (3) of \(K\), the continuity of \(f\) and our assumptions: Write \(p_{0}(t,x,v_{0},v^{\prime}):=f(x,t,v^{\prime})e^{-\int_{0}^{T-t}\sigma(x+v_{0}\tau,v_{0})\,d\tau}\) and assume without loss of generality \(x_{1}\geq x_{2}\); then \[\int_{I_{r}}f^{\prime}\bar{g}_{0}\,\mathrm{d}x=\int_{I_{r}}p_{0}(t,x,v_{0},v^{\prime})(\mu_{2}^{\eta}-\mu_{1}^{\eta})(x+v_{0}(T-t))\,\mathrm{d}x\] \[=-\int_{a_{r-1}-(x_{1}-x_{2})}^{a_{r-1}}p_{0}(t,x+(x_{1}-x_{2}),v_{0},v^{\prime})\mu_{2}^{\eta}(x+v_{0}(T-t))\,\mathrm{d}x\] \[\quad+\int_{a_{r}-(x_{1}-x_{2})}^{a_{r}}p_{0}(t,x,v_{0},v^{\prime})\mu_{2}^{\eta}(x+v_{0}(T-t))\,\mathrm{d}x\] \[\quad+\int_{a_{r-1}}^{a_{r}-(x_{1}-x_{2})}(p_{0}(t,x,v_{0},v^{\prime})-p_{0}(t,x+(x_{1}-x_{2}),v_{0},v^{\prime}))\mu_{2}^{\eta}(x+v_{0}(T-t))\,\mathrm{d}x,\] where we used the substitution \(x\to x-(x_{1}-x_{2})\) on the integration domain of the test function \(\mu_{1}^{\eta}(x)=\mu_{2}^{\eta}(x-(x_{1}-x_{2}))\). By uniform continuity and boundedness of \(f\), an argumentation similar to (42) shows that \(p_{0}(t,x,v_{0},v^{\prime})\) is uniformly continuous in \(x\), uniformly in \(t,v_{0},v^{\prime}\), as well. The corresponding threshold from the epsilon-delta criterion is denoted by \(\delta_{p_{0}}(\varepsilon)\). Then, for \(0\leq|x_{1}-x_{2}|<\delta_{0}(\varepsilon):=\min(\min_{r}|a_{r}-x_{2}|-T-\eta_{0},\delta_{p_{0}}(\varepsilon))\), the first two integrals vanish, because \(\mu_{2}^{\eta}(x+v_{0}(T-t))=0\) for all \(x\) in the integration domain.
We are left with \[\Big{|}\int_{I_{r}}f^{\prime}\bar{g}_{0}\,\mathrm{d}x\Big{|} \leq\int_{a_{r-1}}^{a_{r}-(x_{1}-x_{2})}|p_{0}(t,x,v_{0},v^{\prime})-p_{0}(t,x+(x_{1}-x_{2}),v_{0},v^{\prime})|\,\mu_{2}^{\eta}(x+v_{0}(T-t))\,\mathrm{d}x\] \[\leq\varepsilon\int_{\mathbb{R}}\mu_{2}^{\eta}(x+v_{0}(T-t))\,\mathrm{d}x=\varepsilon.\]

For \(n\geq 1\), source iteration shows that the solution to (26) has the form \[\bar{g}_{n}(t,x,v_{0})=\int_{0}^{T-t}\int_{V}\cdots\int_{0}^{T-t-\sum_{j=0}^{n-2}s_{j}}\int_{V}p_{n}\big{(}t,x,(v_{i})_{i=0,\ldots,n},(s_{j})_{j=0,\ldots,n-1}\big{)}\] \[\qquad\qquad\cdot(\mu_{2}-\mu_{1})\left(x+\sum_{l=0}^{n-1}v_{l}s_{l}+v_{n}\left(T-t-\sum_{l=0}^{n-1}s_{l}\right)\right)\mathrm{d}v_{n}\,\mathrm{d}s_{n-1}\ldots\mathrm{d}v_{1}\,\mathrm{d}s_{0}\,.\] The function \(p_{n}\) is bounded, \(0\leq p_{n}\leq C_{K}^{n}\), and satisfies \[\int_{0}^{T}|p_{n}(t,x+v_{n}t,(v_{i})_{i},(s_{j})_{j})-p_{n}(t,y+v_{n}t,(v_{i})_{i},(s_{j})_{j})|\,\mathrm{d}t<\varepsilon\] for \(|x-y|<\delta_{p_{n}}(\varepsilon)\), uniformly in \((v_{i})_{i},(s_{j})_{j}\). The assertion then follows in analogy to the case \(n=0\).

Lemma 3.8 addresses the smallness of the second term in (28). We provide the proof below; it is a consequence of the smallness of \(\bar{g}_{>N}\) by Lemma B.2 and the boundedness of \(f\).

Proof of Lemma 3.8.: Application of Lemma B.2 to \(g=\bar{g}_{>N}\), \(h=\tilde{\mathcal{L}}\bar{g}_{N}\), \(\alpha=1\) and \(\psi=0\) yields \[\max_{v}\int_{\mathbb{R}}|\bar{g}_{>N}(t)|\,\mathrm{d}x \leq e^{2C_{K}|V|(T-t)}\int_{0}^{T-t}\sup_{v}\|\tilde{\mathcal{L}}(\bar{g}_{N})(T-s,v)\|_{L^{1}(\mathbb{R})}\,\mathrm{d}s\] \[\leq |V|C_{K}(T-t)e^{2C_{K}|V|(T-t)}\operatorname*{ess\,sup}_{s,v}\|\bar{g}_{N}(s,x,v)\|_{L^{1}(\mathbb{R})}.\] Now, application of the same lemma to the evolution equation (26) for \(\bar{g}_{n}\), \(n=1,\ldots,N\), shows \[\operatorname*{ess\,sup}_{t,v}\int_{\mathbb{R}}|\bar{g}_{n}|\,\mathrm{d}x\leq(e^{C_{K}|V|T}-1)\operatorname*{ess\,sup}_{s,v}\int_{\mathbb{R}}|\bar{g}_{n-1}(s,x,v)|\,\mathrm{d}x.\] The boundedness of \(f\) in (40) and repeated application of the above estimate lead to \[\left|\int_{0}^{T}\max_{v}\int_{\mathbb{R}}f^{\prime}\bar{g}_{>N}\,\mathrm{d}x\,\mathrm{d}t\right|\] \[\leq\frac{T^{2}}{2}|V|C_{K}C_{\phi}e^{2|V|C_{K}T}(e^{C_{K}|V|T}-1)^{N}\operatorname*{ess\,sup}_{s,v}\int_{\mathbb{R}}|\bar{g}_{0}(s,x,v)|\,\mathrm{d}x\] \[\leq\frac{T^{2}}{2}|V|C_{K}C_{\phi}e^{2|V|C_{K}T}\left(e^{C_{K}|V|T}-1\right)^{N}\operatorname*{ess\,sup}_{s,v}\int_{\mathbb{R}}|(\mu_{2}^{\eta}-\mu_{1}^{\eta})(x+vs)|\,\mathrm{d}x\] \[\leq T^{2}|V|C_{K}C_{\phi}e^{2|V|C_{K}T}(e^{C_{K}|V|T}-1)^{N}C_{\mu},\] where \(|\bar{g}_{0}(s,x,v)|\leq|(\mu_{2}^{\eta}-\mu_{1}^{\eta})(x+vs)|\) can be read off from the explicit formula for \(\bar{g}_{0}\) in (47).

## Appendix D Proof of Lemmas in Section 4

We provide the proofs of Lemmas 4.5 and 4.6 in this section.

Proof of Lemma 4.5.: Use the explicit representations \[g_{1}^{(0)}(t,x,v) =e^{-(T-t)\sigma_{1}(v)}\mu_{1}(x+v(T-t)), \tag{48}\] \[f^{(0)}(t,x,v) =e^{-t\sigma_{1}(v)}\phi(x-vt) \tag{49}\] with \(\sigma_{1}(v)=\int_{V}K_{1}(v^{\prime},v)\,\mathrm{d}v^{\prime}\) and set without loss of generality \(c_{1}=0\).
Since \(f^{(0)}|_{I_{1}}=f_{1}^{(0)}\) in the notation of the proof of Proposition 4.1, one obtains for \((v,v^{\prime})=(+1,-1)\) \[\int_{0}^{T}\int_{I_{1}}f^{(0)}(v^{\prime})(g_{1}^{(0)}(v^{\prime})-g_{1}^{(0)}(v))\,\mathrm{d}x\,\mathrm{d}t\] \[=\int_{0}^{T}\int_{I_{1}}e^{-t\sigma_{1}(v^{\prime})}\phi_{1}(x-v^{\prime}t)\big{(}e^{-(T-t)\sigma_{1}(v^{\prime})}\mu_{1}(x+v^{\prime}(T-t))-e^{-(T-t)\sigma_{1}(v)}\mu_{1}(x+v(T-t))\big{)}\,\mathrm{d}x\,\mathrm{d}t\] \[\geq e^{-T\sigma_{1}(-1)}T\int_{a_{0}+T}^{a_{1}}\phi_{1}(x)\mu_{1}(-T+x)\,\mathrm{d}x-\int_{T-\frac{d_{\mu}+d}{2}}^{T}\int_{I_{1}}\phi_{1}(x)\mu_{1}(-T+x)\,\mathrm{d}x\,\mathrm{d}t\] \[\geq e^{-TC_{K}|V|}TC_{\phi\mu}-\frac{d_{\mu}+d}{2}C_{\phi\mu},\] where the first inequality is due to the fact that \(\phi_{1}(x-v^{\prime}t)\mu_{1}(x+v(T-t))=\phi_{1}(x+t)\mu_{1}(x+(T-t))\neq 0\) only for \(x\in[-t-d,-t+d]\cap[-2T+t-d_{\mu},-2T+t+d_{\mu}]\subset I_{1}\), which is empty for \(t\leq T-\frac{d_{\mu}+d}{2}\). For \((v^{\prime},v)=(-1,+1)\), instead, we obtain \[\left|\int_{0}^{T}\int_{I_{1}}f^{(0)}(v)\big{(}g_{1}^{(0)}(v)-g_{1}^{(0)}(v^{\prime})\big{)}\,\mathrm{d}x\,\mathrm{d}t\right|\] \[=\bigg{|}\int_{0}^{T}\int_{I_{1}}e^{-t\sigma_{1}(v)}\phi_{1}(x-vt)\big{(}e^{-(T-t)\sigma_{1}(v)}\mu_{1}(x+v(T-t))-e^{-(T-t)\sigma_{1}(v^{\prime})}\mu_{1}(x+v^{\prime}(T-t))\big{)}\,\mathrm{d}x\,\mathrm{d}t\bigg{|}\] \[\leq C_{\phi\mu}\frac{d+d_{\mu}}{2}\] since

* \(\phi_{1}(x-vt)\mu_{1}(x+v(T-t))=\phi_{1}(x-t)\mu_{1}(x+T-t)\) vanishes, as \([t-d,t+d]\cap[-2T+t-d_{\mu},-2T+t+d_{\mu}]=\varnothing\) by construction of \(T>d\geq d_{\mu}\), and
* the support \([t-d,t+d]\cap[-t-d_{\mu},-t+d_{\mu}]\) of \(\phi_{1}(x-vt)\mu_{1}(x+v^{\prime}(T-t))=\phi_{1}(x-t)\mu_{1}(x-(T-t))\) is non-empty only for \(t\leq\frac{d+d_{\mu}}{2}\).

Since \(e^{-TC_{K}|V|}-\frac{d_{\mu}+d}{T}>0\) by assumption, this proves the assertion.

To show inequality (37) in Lemma 4.6, decompose, for some \(N\in\mathbb{N}\) to be determined later, \[S =\sum_{\begin{subarray}{c}n,k=0\\ n+k\geq 1\end{subarray}}^{N}\int_{0}^{T}\int_{I_{1}}f^{(k)}(v^{\prime})(g_{1}^{(n)}(v^{\prime})-g_{1}^{(n)}(v))\,\mathrm{d}x\,\mathrm{d}t \tag{50}\] \[+\int_{0}^{T}\int_{I_{1}}f(v^{\prime})(g_{1}^{(>N)}(v^{\prime})-g_{1}^{(>N)}(v))\,\mathrm{d}x\,\mathrm{d}t\] \[+\sum_{n=0}^{N}\int_{0}^{T}\int_{I_{1}}f^{(>N)}(v^{\prime})(g_{1}^{(n)}(v^{\prime})-g_{1}^{(n)}(v))\,\mathrm{d}x\,\mathrm{d}t\,,\] where \(g_{1}^{(n)}\) and \(g_{1}^{(>N)}\) solve (26) and (27) respectively, and \(f^{(k)}\) are solutions to \[\partial_{t}f^{(k)}+v\cdot\nabla_{x}f^{(k)} =\mathcal{L}(f^{(k-1)})-\sigma f^{(k)},\] \[f^{(k)}(t=0,x,v)=0,\] with \(\mathcal{L}(h):=\int_{V}K(v,v^{\prime})h(t,x,v^{\prime})\,\mathrm{d}v^{\prime}\), and \(f^{(>N)}\) satisfies \[\partial_{t}f^{(>N)}+v\cdot\nabla_{x}f^{(>N)} =\mathcal{L}(f^{(N)}+f^{(>N)})-\sigma f^{(>N)},\] \[f^{(>N)}(t=0,x,v)=0.\] Each part of \(S\) in the representation (50) is estimated separately in the subsequent three lemmas.
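Before doing so, it may help to record the splitting in semigroup form (cf. Appendix B): with \(S_{0}\) denoting the transport-absorption semigroup generated by \(-v\cdot\nabla_{x}-\sigma\), each \(f^{(k)}\) collects exactly \(k\) scattering events,
\[f^{(0)}(t)=S_{0}(t)\phi,\qquad f^{(k)}(t)=\int_{0}^{t}S_{0}(t-s)\,\mathcal{L}\big{(}f^{(k-1)}(s)\big{)}\,\mathrm{d}s\quad(k\geq 1),\]
and analogously for \(g_{1}^{(n)}\) backward in time. This is the Duhamel form that the source iteration in the proof of Lemma D.1 below unravels.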
**Lemma D.1**.: _In the setting of Proposition 4.2,_ \[\left|\int_{0}^{T}\int_{I_{1}}f^{(k)}(v^{\prime})(g_{1}^{(n)}(v^{\prime})-g_{1}^{(n)}(v))\,dx\,dt\right|\leq 2\max_{v,v^{\prime}}\int_{0}^{T}\int_{I_{1}}f^{(k)}(v^{\prime})g_{1}^{(n)}(v)\,dx\,dt\leq 2\left(C_{K}|V|\right)^{n+k}T^{n+k+1}C_{\phi\mu}.\]

Proof.: Source iteration \[g_{1}^{(n)}(t,x,v_{0}) =\int_{0}^{T-t}\int_{V}e^{-s_{0}\sigma(v_{0})}K_{1}(\hat{v}_{1},v_{0})g_{1}^{(n-1)}(t+s_{0},x+v_{0}s_{0},\hat{v}_{1})\,\mathrm{d}\hat{v}_{1}\,\mathrm{d}s_{0}\] \[\leq|V|\int_{0}^{T-t}e^{-s_{0}\sigma(v_{0})}K_{1}(v_{1},v_{0})g_{1}^{(n-1)}(t+s_{0},x+v_{0}s_{0},v_{1})\,\mathrm{d}s_{0},\] \[f^{(k)}(t,x,v_{0}) =\int_{0}^{t}\int_{V}e^{-s_{0}\sigma(v_{0})}K(v_{0},\hat{v}_{1})f^{(k-1)}(t-s_{0},x-v_{0}s_{0},\hat{v}_{1})\,\mathrm{d}\hat{v}_{1}\,\mathrm{d}s_{0}\] \[\leq|V|\int_{0}^{t}e^{-s_{0}\sigma(v_{0})}K(v_{0},v_{1})f^{(k-1)}(t-s_{0},x-v_{0}s_{0},v_{1})\,\mathrm{d}s_{0},\] where \(v_{1}=-v_{0}\), together with the explicit formulas (48)-(49), leads to the estimates \[0\leq g_{1}^{(n)}(x,t,v_{0}) \leq\left(C_{K}|V|\right)^{n}\int_{0}^{T-t}\!\!\cdots\!\int_{0}^{T-t-\sum_{i=0}^{n-2}s_{i}}\mu_{1}\left(x+\sum_{i=0}^{n-1}v_{i}s_{i}+v_{n}\left(T-t-\sum_{i=0}^{n-1}s_{i}\right)\right)\mathrm{d}s_{n-1}\cdots\mathrm{d}s_{0}, \tag{51}\] \[0\leq f^{(k)}(x,t,v_{0}) \leq\left(C_{K}|V|\right)^{k}\int_{0}^{t}\!\!\cdots\!\int_{0}^{t-\sum_{i=0}^{k-2}s_{i}}\phi\left(x-\sum_{i=0}^{k-1}v_{i}s_{i}-v_{k}\left(t-\sum_{i=0}^{k-1}s_{i}\right)\right)\mathrm{d}s_{k-1}\cdots\mathrm{d}s_{0}.\] Using again \(f^{(k)}|_{I_{1}}=f_{1}^{(k)}\) with initial condition \(\phi_{1}\) in the notation of the proof of Proposition 4.1, this proves \[\left|\int_{0}^{T}\int_{I_{1}}f^{(k)}(v^{\prime})(g_{1}^{(n)}(v^{\prime})-g_{1}^{(n)}(v))\,\mathrm{d}x\,\mathrm{d}t\right|\leq 2\max_{v,v^{\prime}}\int_{0}^{T}\int_{I_{1}}f_{1}^{(k)}(v^{\prime})g_{1}^{(n)}(v)\,\mathrm{d}x\,\mathrm{d}t\leq 2\left(C_{K}|V|\right)^{n+k}T^{n+k+1}C_{\phi\mu}.\]

The following bound for the second summand in (50) is obtained in analogy to Lemma 3.8.

**Lemma D.2**.: _In the setting of Proposition 4.2,_ \[\max_{v}\left|\iint f\left(v^{\prime}\right)(g_{1}^{(>N)}(v^{\prime})-g_{1}^{(>N)}(v))\,dx\,dt\right|\leq 4T^{2}|V|C_{K}C_{\phi}e^{2|V|C_{K}T}(e^{C_{K}|V|T}-1)^{N}\bar{C}_{\mu}d_{\mu}=:C^{\prime}(T)(e^{C_{K}|V|T}-1)^{N}.\]

For the third term in (50), one establishes the following bound.

**Lemma D.3**.: _In the setting of Proposition 4.2,_ \[\max_{v}\left|\iint f^{(>N)}(v^{\prime})(g^{(n)}(v^{\prime})-g^{(n)}(v))\,dx\,dt\right|\leq 4|V|C_{K}T^{2}e^{2|V|C_{K}T}(e^{C_{K}|V|T}-1)^{N}C_{\phi}\left(C_{K}|V|T\right)^{n}\bar{C}_{\mu}d_{\mu}=:C^{\prime\prime}(T)\big{(}e^{C_{K}|V|T}-1\big{)}^{N}\left(C_{K}|V|T\right)^{n}.\]

Proof.: An estimate for \(f^{(>N)}\) can be derived from Lemma B.1, analogously to the estimate for \(\bar{g}_{>N}\) in Lemma 3.8: \[\|f^{(>N)}\|_{L^{\infty}([0,T]\times\mathbb{R}\times V)}\leq|V|C_{K}Te^{2|V|C_{K}T}(e^{C_{K}|V|T}-1)^{N}C_{\phi}.\] Together with (51), this proves the lemma.

Lemma 4.6 can now be assembled from the previous lemmas.
Proof of Lemma 4.6.: Lemmas D.1, D.2 and D.3 yield the \((v,v^{\prime})\)-independent bound \[|S| \leq 2C_{\phi\mu}T\sum_{\begin{subarray}{c}n,k=0\\ n+k\geq 1\end{subarray}}^{N}\left(C_{K}|V|T\right)^{n+k}+\left(e^{C_{K}|V|T}-1\right)^{N}\left(C^{\prime}(T)+C^{\prime\prime}(T)\sum_{n=0}^{N}\left(C_{K}|V|T\right)^{n}\right)\] \[\leq 4C_{\phi\mu}T\frac{C_{K}|V|T}{(1-C_{K}|V|T)^{2}}+\left(e^{C_{K}|V|T}-1\right)^{N}\left(C^{\prime}(T)+C^{\prime\prime}(T)\frac{1}{1-C_{K}|V|T}\right)\] \[=:4C_{\phi\mu}T\frac{C_{K}|V|T}{(1-C_{K}|V|T)^{2}}+\left(e^{C_{K}|V|T}-1\right)^{N}C(T).\] Because \(e^{C_{K}|V|T}-1<1\) due to the assumption \(T<(1-\delta)\frac{0.09}{C_{K}|V|}\), the second term in the last line becomes arbitrarily small for large \(N\in\mathbb{N}\), which shows that \(|S|\) is in fact bounded by the first term.
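The first inequality above compresses a geometric-series evaluation that may be worth spelling out: bounding the truncated double sum by the full series, with \(x=C_{K}|V|T<1\),
\[\sum_{\begin{subarray}{c}n,k\geq 0\\ n+k\geq 1\end{subarray}}x^{n+k}=\Big{(}\sum_{n\geq 0}x^{n}\Big{)}^{2}-1=\frac{1}{(1-x)^{2}}-1=\frac{x(2-x)}{(1-x)^{2}}\leq\frac{2x}{(1-x)^{2}},\]
which, multiplied by the prefactor \(2C_{\phi\mu}T\), yields exactly the first term \(4C_{\phi\mu}T\,C_{K}|V|T/(1-C_{K}|V|T)^{2}\).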
2309.12578
SPION: Layer-Wise Sparse Training of Transformer via Convolutional Flood Filling
Sparsifying the Transformer has garnered considerable interest, as training the Transformer is very computationally demanding. Prior efforts to sparsify the Transformer have either used a fixed pattern or data-driven approach to reduce the number of operations involving the computation of multi-head attention, which is the main bottleneck of the Transformer. However, existing methods suffer from inevitable problems, such as the potential loss of essential sequence features due to the uniform fixed pattern applied across all layers, and an increase in the model size resulting from the use of additional parameters to learn sparsity patterns in attention operations. In this paper, we propose a novel sparsification scheme for the Transformer that integrates convolution filters and the flood filling method to efficiently capture the layer-wise sparse pattern in attention operations. Our sparsification approach reduces the computational complexity and memory footprint of the Transformer during training. Efficient implementations of the layer-wise sparsified attention algorithm on GPUs are developed, demonstrating a new SPION that achieves up to 3.08X speedup over existing state-of-the-art sparse Transformer models, with better evaluation quality.
Bokyeong Yoon, Yoonsang Han, Gordon Euhyun Moon
2023-09-22T02:14:46Z
http://arxiv.org/abs/2309.12578v1
# SPION: Layer-Wise Sparse Training of Transformer via Convolutional Flood Filling ###### Abstract. Sparsifying the Transformer has garnered considerable interest, as training the Transformer is very computationally demanding. Prior efforts to sparsify the Transformer have either used a fixed pattern or data-driven approach to reduce the number of operations involving the computation of multi-head attention, which is the main bottleneck of the Transformer. However, existing methods suffer from inevitable problems, such as the potential loss of essential sequence features due to the uniform fixed pattern applied across all layers, and an increase in the model size resulting from the use of additional parameters to learn sparsity patterns in attention operations. In this paper, we propose a novel sparsification scheme for the Transformer that integrates convolution filters and the flood filling method to efficiently capture the layer-wise sparse pattern in attention operations. Our sparsification approach reduces the computational complexity and memory footprint of the Transformer during training. Efficient implementations of the layer-wise sparsified attention algorithm on GPUs are developed, demonstrating a new SPION that achieves up to 3.08\(\times\) speedup over existing state-of-the-art sparse Transformer models, with better evaluation quality. ## 1. Introduction The Transformer is a state-of-the-art deep neural network developed for addressing sequence tasks, originally proposed by Vaswani et al. (2017). One of the main advantages of the Transformer is that, given a sequence of input data points (e.g., a sentence of word tokens), it is able to compute the multi-head attention (MHA) operation in parallel, thereby quickly and accurately capturing long-term dependencies of data points. However, as the sequence length increases, the overall computational cost and memory space required for training the Transformer increase quadratically (Bakshmin et al., 2017). In particular, an MHA sub-layer in the Transformer occupies a substantial portion of the total execution time and becomes the main bottleneck as the sequence length increases. The MHA operation requires a large number of dot-product operations to compute the similarity between all data points in the sequence. However, the dot-product operation is inherently limited by memory bandwidth, since it performs only two floating-point operations for each pair of data elements read from memory. Therefore, reducing the number of operations involving dot-products is an important consideration for accelerating the training of the Transformer. In addition, the attention matrices used in the MHA computation pose two main challenges. First, as the pattern of non-zero entries in the attention matrices varies across different layers, the irregular access of these non-zero entries within the matrices leads to an inefficient utilization of global memory bandwidth and low cache hit rates while performing dot-product operations. Second, the irregular distribution of non-zero entries in attention matrices across different layers results in load imbalance and a reduced level of exploitable parallelism. Hence, in order to mitigate computational complexity and improve data locality of the Transformer, several approaches have addressed the sparsification of computations associated with the MHA operation (Bakshmin et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2020; Wang et al., 2021).
However, previous approaches suffer from two primary limitations. Firstly, when the Transformer adopts identical fixed patterns of non-zero entries in the attention matrices across all layers during training, it becomes difficult to effectively capture key features within the sequence. Secondly, when the Transformer employs additional parameters to learn the sparsity pattern in the attention matrices, both the model size and the computational overhead increase. To address the limitations of previous approaches, we focus on a specialized sparse attention algorithm that not only significantly reduces memory consumption but also efficiently captures sparse patterns across various types of Transformers and datasets. In this paper, we present a new sparsity-aware layer-wise Transformer (called **SPION**) that dynamically captures the sparse pattern in the MHA operation for each layer. SPION judiciously explores the sparsity pattern in the MHA operation based on a novel convolutional flood filling method. To precisely detect the characteristics of the sparse pattern in the attention matrices, SPION captures the shape of the sparse pattern by utilizing a convolution filter, and the degree of sparsity through a flood filling-based scheme. During the generation of the sparse pattern for each layer, our SPION constructs a sparsity pattern matrix with a blocked structure, enhancing data locality for the MHA operation that involves sparse matrix multiplication. Furthermore, by capturing a layer-specific sparse pattern for each layer, SPION performs layer-wise MHA computations iteratively until convergence is achieved. Our sparsification technique reduces both the time and the number of operations required to train long sequences, without using additional parameters. Moreover, since the sparse MHA operations contribute significantly to the overall training workload, we develop an efficient GPU implementation for sparse MHA to achieve high performance while preserving the quality of results. We conduct an extensive comparative evaluation of SPION across various classification tasks, including image and text classification, as well as document retrieval involving lengthy sequences. Experimental results demonstrate that our SPION performs up to 10\(\times\) fewer operations and achieves up to 3.08\(\times\) training speedup compared to existing state-of-the-art sparse Transformers. The paper is organized as follows. Section 2 presents the background on the Transformer, sparse attention, the flood fill algorithm, and related prior work. Section 3 describes the motivation behind our SPION. Section 4 presents an overview and details of our SPION implementation. Section 5 extensively compares the performance of SPION with various state-of-the-art sparse Transformer models. ## 2. Background and Related Work ### Transformer The encoder-only Transformer, which is one of the variants of the Transformer model, is widely used for various classification tasks using text and image datasets (Dong et al., 2018; Li et al., 2019; Li et al., 2019). Algorithm 1 shows pseudo-code for the forward propagation of the original encoder-only Transformer, where each encoder layer consists of an MHA sub-layer (lines 2-10) and a feed-forward sub-layer (lines 11-12).
```
Input: E ∈ R^{L×D}: a set of embedding vectors for an input sequence, L: length of
input sequence, D: embedding size for each data point, H: the number of heads,
W^Q, W^K, W^V, W^O, W^F, W^E: weight parameters, N: the number of encoder layers

1:  for encoder_layer = 0 to N-1 do
        ▷ MHA sub-layer
2:      X ← LayerNorm(E)
3:      Q ← X × W^Q, K ← X × W^K, V ← X × W^V
4:      Q_{0,...,H-1}, K_{0,...,H-1}, V_{0,...,H-1} ← split(Q, K, V)
5:      parallel for i = 0 to H-1
6:          A^r_i ← Q_i × K^T_i
7:          A^s_i ← softmax(A^r_i / sqrt(D/H))
8:          A^c_i ← A^s_i × V_i
9:      A ← concatenate(A^c_0, ..., A^c_{H-1})
10:     O ← dropout(A × W^O) + E
        ▷ Feed-forward sub-layer
11:     F ← ReLU(LayerNorm(O) × W^F)
12:     E ← dropout(F × W^E) + O
```

**Algorithm 1** Forward propagation of encoder layer in encoder-only Transformer

The computation within the \(i\) loop involves matrix-matrix multiplication (GEMM) to obtain the raw attention score matrix \(A^{r}_{i}\), which captures the similarity between query \(Q_{i}\in\mathbb{R}^{L\times(D/H)}\) and key \(K_{i}\in\mathbb{R}^{L\times(D/H)}\) for head \(i\) (line 6). The softmax function is then applied to the matrix \(A^{r}_{i}\), which is scaled by \(1/\sqrt{D/H}\) (line 7). This scaling is performed to prevent the gradients of the attention score matrix \(A^{s}_{i}\) from approaching zero. Thereafter, another GEMM operation is performed between matrices \(A^{s}_{i}\) and \(V_{i}\in\mathbb{R}^{L\times(D/H)}\) to obtain the complete attention matrix \(A^{c}_{i}\) for head \(i\) (line 8). After computing the attention for each head, all matrices \(A^{c}_{0}\) through \(A^{c}_{H-1}\) are concatenated to form the final attention matrix \(A\), and the input embedding \(E\) used at the current encoder layer is added to the attention matrix (lines 9 and 10). Then, the attention matrix is passed through the feed-forward sub-layer to produce new embedding vectors \(E\), which are then fed into the next encoder layer (lines 11 and 12). In terms of computational complexity, it is clear that the main bottleneck of the encoder layer is associated with processing the MHA sub-layer (lines 2-10). More specifically, the number of operations required for computing the attention matrix \(A\) for head \(i\) (loop in line 5) is \(2L^{2}(2D+1)-L(D+1)\), indicating a quadratic increase in operations as the input sequence length (\(L\)) increases. **Sparsification of MHA.** Given long input sequences, several sparse attention techniques have been proposed to reduce the computational complexity involved in computing the raw attention matrix \(A^{r}\). The basic intuition behind sparse attention techniques is that a subset of data points in the long sequence can effectively represent the entire input sequence. In other words, to reduce the computational workload of computing the raw attention matrix \(A^{r}\), only the highly correlated, necessary elements (i.e., data points) in \(Q\) and \(K\) can be utilized. Therefore, when employing sparse attention, the number of operations required to compute \(A^{r}_{i}\) for each head \(i\) is \(C\times(2D-1)\), where \(C\) is the number of critical data points in the long sequence, whereas the number of operations required for the original dense \(A^{r}_{i}\) is \(L^{2}\times(2D-1)\).
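To make the data flow of the MHA sub-layer concrete, the following NumPy sketch mirrors lines 3-9 of Algorithm 1 in simplified form; layer normalization, dropout, and the output projection \(W^{O}\) are omitted, and all function and variable names are our own illustrative choices.

```python
import numpy as np

def multi_head_attention(X, WQ, WK, WV, H):
    """Simplified MHA sub-layer of Algorithm 1 (lines 3-9); X: (L, D) embeddings."""
    L, D = X.shape
    d_h = D // H                                     # per-head embedding size D/H
    Q, K, V = X @ WQ, X @ WK, X @ WV                 # line 3: linear projections
    heads = []
    for i in range(H):                               # line 5: loop over heads
        Qi = Q[:, i * d_h:(i + 1) * d_h]             # line 4: split into heads
        Ki = K[:, i * d_h:(i + 1) * d_h]
        Vi = V[:, i * d_h:(i + 1) * d_h]
        Ar = Qi @ Ki.T / np.sqrt(d_h)                # lines 6-7: scaled raw scores
        As = np.exp(Ar - Ar.max(axis=1, keepdims=True))
        As /= As.sum(axis=1, keepdims=True)          # line 7: row-wise softmax
        heads.append(As @ Vi)                        # line 8: per-head attention output
    return np.concatenate(heads, axis=1)             # line 9: concatenation

# toy usage: L = 8 tokens, D = 16, H = 4 heads
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))
W = [rng.standard_normal((16, 16)) * 0.1 for _ in range(3)]
A = multi_head_attention(X, *W, H=4)                 # A has shape (8, 16)
```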
### Flood Fill Algorithm The flood fill scheme was originally developed to determine the area connected to a given cell/pixel in a multi-dimensional array or a bitmap image (Li et al., 2019). Starting from a given seed element, the algorithm decides whether to continue or stop the fill operation based on conditional statements that evaluate the properties of the current element and its surrounding elements. If the current element is not at the boundary of the data structure and the neighboring elements have not been explored yet, the algorithm continues; otherwise, the fill is not propagated. Furthermore, the flood fill algorithm dynamically controls the fill operation with a constraint on small element values: if the value of the current element is very small, the fill operation is stopped. ### Related Work on Sparse Attention Many previous efforts to achieve efficient Transformers have sought to sparsify the MHA operation either before or during model training/fine-tuning. **Fixed Sparse Pattern.** One strategy for performing sparse MHA is to use predetermined sparsity patterns, where only specific data points in the input sequence are selected before training the model. In this approach, only these selected data points are exclusively used to perform the MHA operation. Several variants of the encoder-only Transformer model adopt the sliding-window approach, in which the attention operations are performed using only the neighboring data points (rows) in the matrices \(Q\) and \(K\). The Sparse Transformer (Chen et al., 2017) originally employs sliding-window attention to sparsify the MHA operation. The Longformer (Long et al., 2015) is an extension of the Sparse Transformer and introduces dilated sliding windows, which extend the receptive field for computing similarity by skipping one data point at a time while performing sliding-window attention. Furthermore, ETC (Chen et al., 2017) and BigBird (Long et al., 2016) incorporate global attention, which performs similarity calculations between a given data point and all other data points in the input sequence. ETC introduces three types of fixed sparsity patterns for attention, called global-global, global-local, and local-global. BigBird further proposes random attention, which randomly selects data points for similarity computation. The main advantage of sparsifying the MHA operation before training is the reduction in computational overhead. As the fixed sparsity patterns are applied to all MHA computations throughout the entire training procedure, this approach reduces the overall memory footprint compared to the original dense MHA operation. However, the primary problem with a fixed pattern is that it is not sufficient to fully capture the dependencies present in the input sequence. For example, depending on the type of dataset and task, the critical data points in the long input sequence can vary. Therefore, predefined static sparse patterns may lose fine-grained important features and dependencies in the sequence during training. **Data-driven Sparse Pattern.** Sparse patterns in the MHA operation can also be generated by leveraging a data-driven approach, which clusters and sorts the data points of the input sequence during training. For example, Reformer (Reimers et al., 2016) utilizes locality-sensitive hashing to compute a hash-based similarity and cluster multiple data points into chunks. Similarly, the Routing Transformer (Rendle et al., 2017) performs k-means clustering on the given data points.
In the process of training the model, these clustering-based attention approaches also require learning additional functions to effectively identify the relevant dependencies of data points in the input sequence. Wang et al. (Wang et al., 2018) reduce the number of operations required for the MHA computation by parameterizing the weight matrices involved in MHA using low-rank matrix factorization and eliminating rank-1 components during model training. Peer et al. (Peer et al., 2019) and Lagunas et al. (Lagunas et al., 2019) reduce the size of the fine-tuned model by removing insignificant layers and sub-blocks of the attention matrix based on the parameters of the pretrained model. However, even though utilizing data-driven sparsification techniques during training produces a better-quality model, this approach requires additional parameters and operations to learn the sparse patterns, resulting in larger memory space and higher computational cost. ## 3. Motivation: Analysis of Sparse Patterns in MHA In order to recognize the common sparsity patterns in the MHA operation, we conducted experiments on the pretrained encoder-only Transformer (Chen et al., 2017). Note that the pretrained model is trained on the large ImageNet-21K dataset. Figure 1 illustrates the attention score matrices \(A^{s}\) from different encoder layers during the first epoch of fine-tuning the model with the CIFAR-10 dataset. Since the sparsity patterns of multiple attention score matrices within the same encoder layer typically show similar patterns, we averaged the attention score matrices across multiple heads in each encoder layer. The results clearly show that most of the elements in \(A^{s}\) are close to zero, indicating that only a few data points in the input sequence are correlated with each other and considered critical. In practice, as the sparse pattern of the matrix \(A^{s}\) is maintained for each layer after training the model for a few epochs, a number of studies have shown that considering only the critical data points does not adversely affect the convergence of the model (Long et al., 2015; Long et al., 2016; Long et al., 2016). The characteristics of the attention score matrices \(A^{s}\) observed from our experiments are as follows. **Shape of Sparse Pattern.** As shown in Figure 1, the attention score matrices \(A^{s}\) produced by the different encoder layers exhibit distinct sparsity patterns. For example, in the first to eighth encoder layers, the diagonal elements have relatively large values, similar to a band matrix that stores nonzeros within the diagonal band of the matrix. It is natural that, since the MHA operation relies on the self-attention scheme, the resulting values of the dot-product between linearly transformed vectors for the same data points tend to be larger compared to the resulting outputs produced with different data points. In addition to the diagonal sparsity pattern, some encoder layers, specifically layers 9, 10, 11 and 12, show a vertical sparsity pattern, with nonzeros mostly stored in specific columns. This vertical sparsity pattern emerges when the attention operation focuses on the similarity between all data points in \(Q\) and particular data points in \(V\). In light of these observations, applying the same fixed sparse pattern to all layers may lead to the exclusion of unique essential features that need to be captured individually at different layers. Hence, during the training of the Transformer model, to efficiently reduce the number of operations, it is crucial to consider layer-wise sparsification of the MHA based on the sparse pattern observed across different layers.

Figure 1. Sparsity patterns in the attention score matrices \(A^{s}\) across different encoder layers during training of the encoder-only Transformer for image classification.
Furthermore, the flexibility to change the sparse pattern needs to be considered during the training of the model. Moreover, generating domain-specific sparse patterns is required for various datasets and tasks.

**Degree of Sparsity.** Across different encoder layers, there is variation not only in the shape of the sparse pattern but also in the number of nonzero elements in \(A^{s}\). For example, layer 12 has a higher number of nonzero elements than layer 2, indicating that layer 12 computes the MHA operation using a larger number of data points in the sequence. Hence, it is crucial to consider varying degrees of sparsity for every encoder layer. This is essential for effectively reducing computational operations while preserving key features across distinct encoder layers.

## 4. SPION: Layer-wise Sparse Attention in Transformer

### Overview of SPION

In this section, we provide a high-level overview and details of our new SPION that dynamically sparsifies the MHA operation, incorporating the major considerations described in Section 3. Note that our SPION is capable of sparsifying the MHA operation either when training the model from scratch or during the process of fine-tuning a pretrained network.

```
Input: E: input embedding vectors, L: length of input sequence, D: embedding size,
       N: the number of encoder layers, α: threshold for transition
transition ← False
for i = 0 to num_epochs − 1 do
    ▷ Dense MHA Phase
    if transition == False then
        for n = 0 to N − 1 do
            O, A^s_i ← MHA(E)
            E ← Feed_Forward(O)
        if i > 1 then
            distance_{i−1} ← frobenius(A^s_{i−2}, A^s_{i−1})
            distance_i ← frobenius(A^s_{i−1}, A^s_i)
            if sqrt((distance_{i−1} − distance_i)^2) < α then
                transition ← True
                P ← generate_pattern(A^s_i, t)
    ▷ Sparse MHA Phase
    if transition == True then
        for n = 0 to N − 1 do
            O ← sparseMHA(E, P_n)
            E ← Feed_Forward(O)
```
**Algorithm 2** Overall Training Procedure of SPION

Figure 2. SPION splits the overall training process into three phases: dense-attention training with dense MHA operation, sparsity pattern generation, and sparse-attention training with sparse MHA operation.

As shown in Figure 2a, we decouple the overall training process of SPION into three phases: dense-attention training, sparsity pattern generation, and sparse-attention training. The dense-attention training follows the same training procedure as the original Transformer, without sparsifying the MHA operation. During the dense-attention training phase, the original MHA, which includes two GEMM kernels and a dense softmax kernel (Figure 2b), is performed until the attention score matrix \(A^{s}\) of each encoder layer exhibits a specific sparsity pattern. In order to determine the end of the dense-attention phase (i.e., the start of the sparse-attention phase), we measure the Frobenius distance between the attention score matrices \(A^{s}\) produced in the previous step \(i-1\) and the current step \(i\), as defined in Equation 2.
\[distance_{i}=\left|\sqrt{\sum\left(A^{s}_{i-1}\right)^{2}}-\sqrt{\sum\left(A^{s}_{i}\right)^{2}}\right| \tag{2}\]

If there is no significant difference in distance, our SPION ceases the dense-attention training phase and dynamically generates the sparsity pattern for each encoder layer based on our novel convolutional flood-filling scheme, which is described in Section 4.2. Intuitively, the attention score matrix can be assumed to have generalized when the values in the current attention score matrix are similar to those in the previous one, which indicates that it is ready for sparsification. Thereafter, SPION proceeds with the sparse-attention training phase until convergence by adapting the sparsity pattern, as depicted in Figure 2a. Given the dense matrices \(Q\) and \(K\), along with the newly generated sparsity pattern matrix \(P\), the SDDMM (Sampled Dense-Dense Matrix Multiplication) kernel is utilized to accelerate producing the sparsified raw attention score matrix \(S^{r}\). Next, the sparse matrix \(S^{r}\) is used to perform the optimized sparse softmax kernel, which leverages warp-level reduction for higher performance. After applying the sparse softmax operation, since the attention score matrix \(S^{s}\) remains sparse, we utilize the SpMM (Sparse-Dense Matrix Multiplication) kernel to multiply the sparse matrix \(S^{s}\) with the dense matrix \(V\) (Figure 2c).

### Sparsity Pattern Generation with Convolutional Flood Fill Algorithm

To precisely detect the sparsity patterns in the attention score matrix \(A^{s}\) generated during dense-attention training, we develop a new convolutional flood fill algorithm that extensively explores the shape, degree and locality of sparsity patterns for each encoder layer. Algorithm 3 shows the pseudo-code for generating the sparsity pattern in the attention score matrix \(A^{s}\). The initial step is to apply a diagonal convolution filter to \(A^{s}\) (line 1 in Algorithm 3) in order to detect the shape of the sparsity pattern in \(A^{s}\), as shown in Figure 3. If the diagonal elements in \(A^{s}\) have larger values compared to the others, applying a diagonal convolution filter increases the values of the diagonal elements in the convolution output (\(conv\_out\)), leading to the emergence of a diagonal sparsity pattern. Otherwise, if the off-diagonal elements, especially the vertical ones, in \(A^{s}\) have larger values compared to the others, applying a diagonal convolution filter results in a vertical sparsity pattern in the \(conv\_out\) matrix. Note that in order to ensure that the attention score matrix \(A^{s}\) and the convolution output (\(conv\_out\)) have the same size (\(L\times L\)), we apply zero-padding to \(A^{s}\) while computing the convolution operation defined in Equation 3.

\[conv\_out(i,j)=\sum_{f=1}^{F}A^{s}(i+f,j+f)\times filter(f,f) \tag{3}\]

After generating \(conv\_out\) through the diagonal convolution operation, our algorithm performs average pooling on \(conv\_out\) using a kernel/block of size (\(B\times B\)), as defined in Equation 4 (line 2 in Algorithm 3).

\[pool\_out\left(\frac{i}{B},\frac{j}{B}\right)=\frac{1}{B^{2}}\sum_{p=1}^{B}\sum_{q=1}^{B}conv\_out\left(i+p,j+q\right) \tag{4}\]

Figure 3. Overview of our convolutional flood-filling method in the sparsity pattern generation/detection phase.
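To make these two preprocessing steps concrete, the following minimal NumPy sketch implements Equation 3 and Equation 4 directly. The matrix size and the all-ones diagonal filter are illustrative assumptions, not the tuned \(31\times 31\) filter used in our experiments (see Section 5).

```
import numpy as np

def diagonal_conv(A, F=5):
    """Diagonal convolution of Eq. (3): slide an F x F filter whose nonzeros
    lie on the main diagonal over A (0-indexed here; Eq. (3) sums f = 1..F)."""
    L = A.shape[0]
    Ap = np.pad(A, ((0, F), (0, F)))  # zero-padding keeps conv_out at L x L
    filt = np.eye(F)                  # illustrative all-ones diagonal filter
    out = np.zeros_like(A)
    for i in range(L):
        for j in range(L):
            out[i, j] = sum(Ap[i + f, j + f] * filt[f, f] for f in range(F))
    return out

def block_avg_pool(M, B=4):
    """Average pooling of Eq. (4): one value per non-overlapping B x B block."""
    L = M.shape[0]
    return M.reshape(L // B, B, L // B, B).mean(axis=(1, 3))

A = np.abs(np.random.randn(16, 16))               # stand-in attention scores
pool_out = block_avg_pool(diagonal_conv(A), B=4)  # abstract block-level pattern
```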
Instead of analyzing the sparsity pattern of \(A^{s}\) element by element, applying average pooling enables capturing a block sparsity pattern, which takes into account both the critical data points and their surrounding data points. Hence, since the output of average pooling (\(pool\_out\)) has a smaller size (\(L/B\times L/B\)) than the attention score matrix \(A^{s}\), \(pool\_out\) can be considered an abstract block-level sparsity representation of \(A^{s}\).

```
Input: pool_out ∈ R^{L/B×L/B}, r: current row index, c: current column index,
       fl_out ∈ R^{L/B×L/B}, t: threshold value
1   if (r + 1 == L/B) or (c + 1 == L/B) then
2       return pool_out, r, c, fl_out, t
3   m ← max(pool_out[r+1][c], pool_out[r][c+1], pool_out[r+1][c+1])
4   if pool_out[r+1][c] == m and fl_out[r+1][c] == 0 then
5       if pool_out[r+1][c] > t then
6           fl_out[r+1][c] ← 1
7       pool_out, r, c, fl_out, t ← flood_fill(pool_out, r+1, c, fl_out, t)
8   if pool_out[r][c+1] == m and fl_out[r][c+1] == 0 then
9       if pool_out[r][c+1] > t then
10          fl_out[r][c+1] ← 1
11      pool_out, r, c, fl_out, t ← flood_fill(pool_out, r, c+1, fl_out, t)
12  if pool_out[r+1][c+1] == m and fl_out[r+1][c+1] == 0 then
13      if pool_out[r+1][c+1] > t then
14          fl_out[r+1][c+1] ← 1
15      pool_out, r, c, fl_out, t ← flood_fill(pool_out, r+1, c+1, fl_out, t)
16  return pool_out, r, c, fl_out, t
```
**Algorithm 4** Implementation of flood_fill() function

In order to dynamically explore the crucial elements in \(pool\_out\) and generate \(fl\_out\), we develop a novel algorithm inspired by the flood-fill algorithm. By adapting the flood filling scheme, our SPION is able to precisely analyze the connectivity between significant elements in \(pool\_out\), while also considering the number of critical nonzero elements. Algorithm 4 shows the pseudo-code for our flood fill-based algorithm, which is recursively executed from Algorithm 3. Unlike the traditional flood fill algorithm, which compares all neighbors of the current element to find the element with the largest value, our new algorithm only compares the neighboring elements to the right, below, and diagonally below, as shown in Figure 4. In the process of capturing the pattern, it is necessary to sequentially follow the important features starting from the top-left corner to the bottom of the matrix. Hence, comparing the current element with the elements to its right or below checks whether the neighbors of the current element are relevant. If the neighboring elements are not relevant to the current element, our algorithm moves to the element diagonally below to continue comparing neighbors. The element at the first row and first column of \(pool\_out\) is used as the starting point (lines 4 and 6 in Algorithm 3). The algorithm then compares the values of the elements to its right, below, and diagonally below, extracting the element with the largest value, \(m\) (line 3 in Algorithm 4). When the value of \(m\) is greater than a predefined threshold \(t\) (lines 5, 9, and 13 in Algorithm 4), we mark the corresponding element as critical and assign a value of 1 to the corresponding element in \(fl\_out\) (lines 6, 10 and 14 in Algorithm 4).
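For readers who prefer an executable form, the sketch below is a plain-Python re-expression of Algorithm 4. The seeding loop over the first row and the in-place mutation of fl_out are simplifying assumptions where the pseudo-code leaves room for interpretation, and the diagonal pre-marking used by SPION is omitted here so the traversal stays visible.

```
def flood_fill(pool_out, r, c, fl_out, t):
    """Directional flood fill (cf. Algorithm 4): from (r, c), inspect only
    the neighbors below, to the right, and diagonally below; mark the
    largest one as critical if it exceeds t, then continue from there."""
    n = len(pool_out)
    if r + 1 == n or c + 1 == n:        # reached the last row or column
        return fl_out
    m = max(pool_out[r + 1][c], pool_out[r][c + 1], pool_out[r + 1][c + 1])
    for nr, nc in ((r + 1, c), (r, c + 1), (r + 1, c + 1)):
        if pool_out[nr][nc] == m and fl_out[nr][nc] == 0:
            if pool_out[nr][nc] > t:
                fl_out[nr][nc] = 1                       # critical block
            fl_out = flood_fill(pool_out, nr, nc, fl_out, t)  # move on
    return fl_out

pool_out = [[0.9, 0.1, 0.0, 0.2],
            [0.1, 0.8, 0.7, 0.0],
            [0.0, 0.1, 0.9, 0.1],
            [0.2, 0.0, 0.1, 0.8]]
fl_out = [[0] * 4 for _ in range(4)]
for seed_c in range(4):                 # simplified seeding over the first row
    fl_out = flood_fill(pool_out, 0, seed_c, fl_out, t=0.5)
# fl_out now marks the diagonal chain (1,1)-(2,2)-(3,3) plus (1,2)
```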
Here, the threshold \(t\) is determined by calculating the \(\alpha\%\) quantile of \(pool\_out\). Hence, even if one of the neighbors is selected as a potential critical point, it may be classified as non-critical based on the threshold \(t\). By utilizing the threshold \(t\), we ensure that the selected critical elements have sufficiently large values. After determining the critical element, the algorithm recursively compares the values of the elements to the right, below, and diagonally below the critical element, while avoiding duplicate comparisons with elements that have already been selected (second condition of lines 4, 8, and 12). This process is repeated until the selected critical element reaches the end of a row or column in the matrix \(pool\_out\) (line 1 in Algorithm 4). Afterward, as shown in the middle of Figure 4, the element at the first row and second column serves as the next starting point for recursively analyzing the connectivity of critical elements. Therefore, considering each element in \(pool\_out\) as a seed point allows the connectivity of critical elements to be explored across all elements in \(pool\_out\). Since the attention score between the same data points in the sequence tends to be large for most of the encoder layers, we initially assign a value of 1 to the diagonal elements in \(fl\_out\).

Figure 4. Walk-through example of our flood fill algorithm.

The final output of our flood fill algorithm (\(fl\_out\)) can be seen as a more explicit sparsity pattern captured from \(pool\_out\) and can also be considered the compressed block-level sparsity pattern of \(A^{s}\). To utilize the sparse pattern of \(fl\_out\) in the sparse MHA operation, the size of the matrix \(fl\_out\) needs to match the size of the attention score matrix \(A^{s}\). Therefore, we upsample \(fl\_out\) using nearest-neighbor interpolation (line 11 in Algorithm 3; a compact sketch follows at the end of this subsection), so that each nonzero element in \(fl\_out\) forms a block of nonzero elements in the final sparsity pattern matrix (\(P\)), as shown on the right of Figure 3. Utilizing a block sparse matrix can improve the model's quality, since it incorporates not only the critical elements but also their surrounding elements in the MHA operation. Moreover, an optimized blocked matrix multiplication further enhances performance through improved data locality. In the blocked sparse matrix \(P\), elements that require calculation in the attention score matrix are set to 1, while those that do not require calculation are set to 0. Finally, during the sparse MHA, only the elements of \(A^{s}\) that have the same indices as the elements with a value of 1 in the block sparsity pattern matrix \(P\) will be computed.
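The nearest-neighbor upsampling step mentioned above has a particularly compact expression: for integer upscaling factors it is exactly a Kronecker product with an all-ones block. The snippet below is a minimal NumPy illustration (assuming \(B\) divides \(L\)), not the production implementation.

```
import numpy as np

def upsample_pattern(fl_out, B):
    """Nearest-neighbor upsampling (line 11 in Algorithm 3): each 1 in the
    (L/B x L/B) block pattern becomes a B x B block of 1s in the L x L P."""
    return np.kron(np.asarray(fl_out), np.ones((B, B), dtype=np.int8))

P = upsample_pattern([[1, 0], [0, 1]], B=2)
# P = [[1 1 0 0],
#      [1 1 0 0],
#      [0 0 1 1],
#      [0 0 1 1]]
```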
### Acceleration of Sparse MHA Implementation on GPUs

In this subsection, we provide details of our parallel sparse MHA implementation on GPUs. Prior to the sparse-attention phase and sparsity pattern generation, we train the model with the dense MHA for several steps. During the dense-attention phase, our SPION implementation uses the NVIDIA cuBLAS library with the tensor cores on GPUs, e.g., cublasGemmStridedBatchedEx(), to accelerate the dense MHA operation, which involves the multiplication of the dense matrices \(Q\) and \(K\), as well as the multiplication of the dense matrices \(A^{s}\) and \(V\).

```
Input: E: a set of embedding vectors for an input sequence, L: length of input sequence,
       D: embedding size for each data point, H: the number of heads,
       W^Q, W^K, W^V, W^O: weight parameters, P: sparsity pattern matrix
1  X ← LayerNorm(E)
2  Q ← X × W^Q, K ← X × W^K, V ← X × W^V
3  Q_{0,…,H−1}, K_{0,…,H−1}, V_{0,…,H−1} ← split(Q, K, V)
4  parallel for i = 0 to H − 1
5      S^r_i ← cusparseSDDMM(Q_i, K^T_i, P)
6      S^s_i ← SparseSoftmax(S^r_i, P)
7      S^o_i ← cusparseSpMM(S^s_i, V_i, P)
8  S ← concatenate(S^o_0, …, S^o_{H−1})
9  O ← dropout(S × W^O) + E
```
**Algorithm 5** GPU Implementation of sparseMHA() on host

**Sparse MHA.** Algorithm 5 shows the pseudo-code for the forward propagation of sparse MHA. To accelerate the sparse MHA operation, we utilize NVIDIA cuSPARSE routines such as cusparseSDDMM and cusparseSpMM. Additionally, we optimize the softmax function to account for sparsity in the sparse raw attention score matrix.

\[S=softmax\left(\frac{(P>0)\odot(Q\times K^{T})}{\sqrt{(D/H)}}\right)\times V \tag{5}\]

As shown in Equation 5, the sparse MHA produces a sparse attention matrix \(S\) by replacing the GEMM operation in line 7 of Algorithm 1 with the SDDMM operation using cusparseSDDMM (line 5 in Algorithm 5). The SDDMM kernel computes the product of the two dense input matrices \(Q\) and \(K\), and the resulting matrix is then subjected to a Hadamard product (element-wise multiplication) with the sampled sparse matrix \(P\). In Equation 5, \((P>0)\) indicates that only the indices of nonzero elements in \(P\) are computed in the result matrix while multiplying \(Q\) and \(K\), and \(\odot\) denotes element-wise multiplication. Therefore, the SDDMM operation produces the sparsified raw attention matrix \(S^{r}=(P>0)\odot(Q\times K^{T})\), which stores only a small number of nonzero elements. To efficiently compute \(S^{s}\) by performing the softmax operation on the sparse matrix \(S^{r}\), we implement the sparse softmax function (line 6 in Algorithm 5). This function computes the probability distribution over only the nonzero values in \(S^{r}\), instead of applying the standard softmax function used in line 8 of Algorithm 1. Thereafter, as \(S^{s}\) remains a sparse matrix, we perform the SpMM operation using cusparseSpMM to compute the product of the sparse matrix \(S^{s}\) and the dense matrix \(V\) (line 7 in Algorithm 5). To perform the cusparseSDDMM, SparseSoftmax, and SpMM kernels with \(P\), we convert the sparse matrix \(P\) into the most commonly used Compressed Sparse Row (CSR) format, consisting of three data structures: \(row\_ptr\), \(col\_idx\) and \(values\).
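For reference, the three CSR arrays for a small pattern matrix look as follows (using SciPy here purely for illustration; the GPU kernels consume the raw arrays directly):

```
import numpy as np
from scipy.sparse import csr_matrix

P = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])
csr = csr_matrix(P)
print(csr.indptr)   # row_ptr: cumulative nonzeros per row -> [0 2 4 6 7]
print(csr.indices)  # col_idx: column index of each nonzero -> [0 1 1 2 2 3 3]
print(csr.data)     # values : the nonzero entries          -> [1 1 1 1 1 1 1]
```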
```
Input: S^r, P, scale
1   warp_id ← ⌊threadIdx.x / warp_size⌋
2   lane ← threadIdx.x % warp_size
3   b_cnt ← P.row_ptr[warp_id % L + 1] − P.row_ptr[warp_id % L]
4   b_idx ← ⌊warp_id / L⌋ × P.nnz + P.row_ptr[warp_id % L]
5   max ← −∞
6   sum ← 0
7   for k = lane to b_cnt − 1 by warp_size do
8       S^r[b_idx + k] ← S^r[b_idx + k] × scale
9       if max < S^r[b_idx + k] then
10          max ← S^r[b_idx + k]
11  max ← warp_reduce_max(max)
12  for k = lane to b_cnt − 1 by warp_size do
13      sum ← sum + exp(S^r[b_idx + k] − max)
14  sum ← warp_reduce_sum(sum)
15  sum ← sum + exp(−max) × (L − b_cnt)
16  for k = lane to b_cnt − 1 by warp_size do
17      S^r[b_idx + k] ← exp(S^r[b_idx + k] − max) / sum
```
**Algorithm 6** SparseSoftmax() kernel for forward propagation on GPUs

**Sparse Softmax Kernel.** Algorithm 6 shows the pseudo-code for our SparseSoftmax() kernel. Since the original softmax function operates in a row-wise manner, each warp in the SparseSoftmax() kernel is responsible for computing a single row. The variable \(warp\_id\) indicates which row should be calculated by the current warp, and the variable \(lane\) indicates the lane number within a single warp. For each row, \(b\_cnt\) indicates the number of nonzero elements to compute, and \(b\_idx\) indicates where that row starts in \(S^{r}\). We initialize the variables \(sum\) and \(max\) to 0 and \(-\infty\), respectively. Since there is no need to share these variables among different threads, the variables \(sum\) and \(max\) are initialized inside the kernel. To ensure numerical stability, we normalize the elements in each row by subtracting the maximum value found in that row. To find the maximum value in each row, a lane in a warp fetches and compares a subset of elements within that row (lines 9-11 in Algorithm 6). The function \(warp\_reduce\_max()\) in line 11 enables the lanes to exchange values using warp-level primitives and compare them with each other in a reduction manner. After executing the \(warp\_reduce\_max()\) function, each lane within a single warp holds the same maximum value. We also use a warp-level primitive to aggregate the \(sum\) value across all lanes within a single warp (line 14) and obtain the final value of \(sum\). Finally, the computation of the sparse softmax function is completed by dividing the elements in the row by the value of the \(sum\) variable. Therefore, by only considering the nonzero elements in \(S^{r}\), our SparseSoftmax() kernel greatly reduces the computational complexity of the softmax operation.

**Integration of GPU kernels with PyTorch.** To execute our optimized CUDA kernels and CUDA library calls for both forward and backward propagation within the PyTorch code, we integrate these kernels and libraries with PyTorch by employing the ctypes library (CDLL) in the Python environment. Furthermore, a custom autograd function is used to process the backward propagation. Before training begins, we first compile our CUDA code into a shared object (.so) file. Once compiled, the CUDA shared object file is loaded and then converted into a Python handle. Subsequently, the Python handle is employed to invoke the CUDA kernels for both forward and backward propagation.
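A minimal sketch of this wiring is shown below. The shared-object file name and the two entry-point names are hypothetical placeholders: the real symbols depend on how the CUDA code is compiled, so this should be read as the shape of the integration rather than a concrete API.

```
import ctypes
import torch

# Load the compiled CUDA shared object and expose it as a Python handle.
# NOTE: "sparse_mha.so", sparse_mha_forward and sparse_mha_backward are
# hypothetical names standing in for the actual compiled kernels.
lib = ctypes.CDLL("./sparse_mha.so")

class SparseMHAFunction(torch.autograd.Function):
    """Custom autograd function dispatching to the CUDA kernels."""

    @staticmethod
    def forward(ctx, q, k, v, row_ptr, col_idx):
        ctx.save_for_backward(q, k, v, row_ptr, col_idx)
        out = torch.empty_like(q)
        lib.sparse_mha_forward(  # assumed entry point in the .so
            ctypes.c_void_p(q.data_ptr()), ctypes.c_void_p(k.data_ptr()),
            ctypes.c_void_p(v.data_ptr()), ctypes.c_void_p(row_ptr.data_ptr()),
            ctypes.c_void_p(col_idx.data_ptr()), ctypes.c_void_p(out.data_ptr()))
        return out

    @staticmethod
    def backward(ctx, grad_out):
        q, k, v, row_ptr, col_idx = ctx.saved_tensors
        gq, gk, gv = torch.empty_like(q), torch.empty_like(k), torch.empty_like(v)
        lib.sparse_mha_backward(  # assumed entry point in the .so
            ctypes.c_void_p(grad_out.contiguous().data_ptr()),
            ctypes.c_void_p(gq.data_ptr()), ctypes.c_void_p(gk.data_ptr()),
            ctypes.c_void_p(gv.data_ptr()))
        return gq, gk, gv, None, None

# usage in the model: out = SparseMHAFunction.apply(Q, K, V, row_ptr, col_idx)
```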
### Computational Complexity Analysis

By applying our sparsification scheme, the sparse raw attention matrix \(S^{r}\) maintains a total of \(C\) nonzero elements, whereas the original dense raw attention matrix \(A^{r}\) maintains a total of \(L\times L\) nonzero elements. By using the sparse matrix \(S^{r}\), the total number of operations required to compute \(S\) is reduced to \(2C(2D+1)-L(D+1)\), resulting in an approximately \(\frac{L^{2}}{C}\times\) reduction in operations compared to the original attention matrix \(A\), which requires \(2L^{2}(2D+1)-L(D+1)\) operations. In practice, for the real-world AAN dataset (Datta et al., 2016) (\(L\) = 4,096 and \(D\) = 64) with \(C\) = 1,677,721 nonzero elements (10% of \(L^{2}\)), the total numbers of operations required for producing the original \(A\) and the sparsified \(S\) are 4,328,255,488 and 432,585,778, respectively. Therefore, the sparse MHA requires approximately 10 times fewer operations than the original dense MHA during training.

## 5. Experimental Evaluation

This section provides both performance and quality assessments of our SPION implementation. Our SPION is compared with various state-of-the-art sparse Transformer models.

**Benchmarking Machines.** The detailed configuration of the benchmarking machines used for the evaluations is shown in Table 1. All the experiments were run on four NVIDIA RTX A5000 GPUs.

**Datasets and Tasks.** We evaluated our model on the Long Range Arena (LRA) (Datta et al., 2016), which is widely used for evaluating the effectiveness of efficient Transformers on long sequences. We conducted three sequence classification tasks as part of the LRA evaluations: image classification, ListOps and document retrieval. For all experiments, we measured classification accuracy to compare the quality of the compared models. The three tasks, along with the three datasets used in our evaluations, are as follows:

* **Image classification**: This task classifies images into 10 different classes using the CIFAR-10 dataset (Cheng et al., 2016). Each image in the dataset has a size of 32x32 pixels, resulting in a sequence length of 1,024 when each pixel is treated as a data point.
* **ListOps**: This task involves classifying the answer, within the range of 0 to 9, given input equations expressed as a sequence of numbers and mathematical operators. The dataset provided by Nangia et al. (Nangia et al., 2016) is used for evaluating this task. The maximum sequence length is 2,048.
* **Document retrieval**: This task classifies whether two given documents are related or not. The AAN dataset (Datta et al., 2016) is used for this task, and the maximum sequence length is 4,096.

**Models Compared.** We compared our SPION with the original encoder-only Transformer and two state-of-the-art efficient sparse Transformers.

* **Original encoder-only Transformer** (Datta et al., 2016): This implementation is based on the original Transformer architecture and performs the original dense MHA operation during the entire training.
* **BigBird** (Krizhevsky et al., 2012): This model incorporates sparse attention mechanisms, including sliding window attention, global attention, and random attention. It is evaluated using a block size of 64 and 3 random blocks.
* **Reformer** (Deng et al., 2016): This model utilizes a sparse attention mechanism based on the locality-sensitive hashing algorithm. It is evaluated using a bucket size of 32 and 2 hashes.
* **SPION-C**: This model is a variation of our SPION without the flood filling scheme. Instead, it selects the top \(\alpha\%\) of block elements from the results of the convolution filter and average pooling (\(pool\_out\)), which enables adjusting the sparsity ratio when generating \(P\).
* **SPION-F**: This model is a variant of our SPION without the convolution filter: it applies the flood-fill algorithm directly to the output of the average pooling operation.
* **SPION-CF**: This implementation incorporates both the convolution filter and the flood fill-based scheme while generating the sparsity pattern.

In our SPION implementation, we used an embedding dimension (\(D\)) of size 64, and the batch size was chosen based on the available memory size, resulting in batch sizes of 256 for image classification, 128 for ListOps and 32 for document retrieval. The size of the convolution filter in SPION is set to (\(31\times 31\)) for all experiments. For the threshold used in the flood filling method, we set \(\alpha\) to 96 for image classification, 98 for ListOps, and 99 for document retrieval. For average pooling and upsampling, the block size for our model was determined based on the maximum sequence length of the respective dataset: we used a block size of 32 for the image classification task and a block size of 64 for the ListOps and document retrieval tasks. Note that all the experimental results presented in this section are averaged over 3 different executions.

\begin{table} \begin{tabular}{|c|c|} \hline Machine & Details \\ \hline \multirow{2}{*}{CPU} & AMD Ryzen Threadripper PRO 5955WX \\ & (16 cores and 32 threads, 128GB RAM) \\ \hline \multirow{2}{*}{GPU} & NVIDIA RTX A5000 \\ & (24 GB Global Memory, 64 SMs, 6 MB L2 cache) \\ \hline \end{tabular} \end{table} Table 1. Machine configuration

### Performance Evaluation

**Convergence.** Table 2 shows the accuracy of the six models on three different tasks. Our SPION-CF consistently achieved the highest accuracy in all tasks, surpassing the highest accuracy obtained by the other compared models by +0.825%, +0.236%, and +0.801% on the three evaluation tasks. It is interesting to see that, among the SPION variants, incorporating both the convolution filter and the flood-filling scheme led to higher accuracy in all tasks. This indicates that the convolution filter and the flood-filling method synergize with each other. More specifically, in the sparsity pattern generation phase, the convolution filter amplifies the values of the distinct shapes appearing in \(A^{s}\), thereby enhancing the ability of the flood filling method to identify critical elements more effectively. Additionally, we observed that the flood filling method has a more significant effect on accuracy than the convolution filter. This result demonstrates that, in sparsity pattern generation, it is more important to consider the connectivity between elements than to simply select elements based on their values. BigBird showed relatively lower accuracy for image classification and relatively higher accuracy for document retrieval; this is because the BigBird model was specifically developed to address long sequences in text datasets.

**Speedup.** Figure 5 shows the time and memory space required for training, as well as the time required for inference, on the three tasks.
Compared to the original Transformer model, our SPION-CF achieved \(1.66\times\), \(2.21\times\) and \(3.08\times\) speedup per training step on the respective tasks. SPION achieved significant speedup particularly for tasks involving longer sequences, such as document retrieval. As the input sequence length increases, the number of operations in MHA increases quadratically. Consequently, in models that handle longer sequences, the proportion of operations accounted for by MHA is larger than in models dealing with shorter sequences. Therefore, for longer sequences, reducing the number of operations involved in the MHA leads to higher performance. The right column of Figure 5 shows a breakdown of the inference time of the compared models. Compared to the original Transformer, for the document retrieval task, our SPION-CF achieved a speedup of \(5.54\times\) for executing the MHA operation, while achieving a speedup of \(2.78\times\) in total elapsed time for inference.

To evaluate the reduction in elapsed time for each operation in the dense MHA and sparse MHA, we compared the elapsed time of each operation in our sparse MHA with that of the original dense MHA. Figure 6 shows a breakdown of the elapsed time for running MHA operations in the original Transformer and SPION on the three tasks. For the image classification task, replacing the GEMM operation of \(Q\) and \(K\) with the SDDMM operation resulted in a speedup of \(2.55\times\). The softmax operation achieved a significant speedup of \(42.40\times\). Additionally, the SpMM operation, which replaced the GEMM operation for \(A^{s}\) and \(V\), outperformed it with a speedup of \(2.54\times\). This result demonstrates that the softmax function implemented in the original Transformer model is a primary bottleneck. However, since our SPION model performs an optimized softmax that exploits sparsity, the execution time for the softmax operation is significantly reduced. Not only for the image classification task, but for all other tasks as well, our SPION achieved speedup in every operation associated with the MHA.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Model} & Image & \multirow{2}{*}{ListOps} & Document \\ & Classification & & Retrieval \\ \hline Original & 39.022 & 38.615 & 77.796 \\ \hline BigBird & 38.927 & 38.722 & 78.437 \\ \hline Reformer & 40.488 & 36.535 & 75.657 \\ \hline SPION-C & 39.173 & 38.889 & 76.015 \\ \hline SPION-F & 40.926 & 38.889 & 79.120 \\ \hline SPION-CF & **41.313** & **38.958** & **79.238** \\ \hline \end{tabular} \end{table} Table 2. Comparison of accuracy (%) on three LRA tasks

Figure 5. Comparison of elapsed time and memory footprint per step for training (left column), and inference time per step (right column) for three tasks.

Figure 6. Breakdown of elapsed time (ms) for running MHA operations on three tasks.

**Sparsity Ratio Comparison.** Figure 7 shows the training time and classification accuracy at each sparsity ratio on the ListOps task. Since it is not possible to adjust the sparsity in the flood-filling scheme, we adopted the SPION-C model to evaluate the impact of the sparsity ratio. As expected, a higher sparsity ratio leads to shorter training times, but it results in lower accuracy. Thus, it is crucial to find the optimal sparsity ratio that balances achieving high accuracy against reducing training time.
In the ListOps task, a sparsity ratio of 96% provides significantly reduced training time while maintaining comparable accuracy. Compared to the results at a sparsity ratio of 70%, a 96% sparsity ratio resulted in a speedup of 3.26\(\times\), while the accuracy dropped by only 0.903%, which is less than 1%.

**Memory Footprint.** As shown in Figure 5, our SPION-CF achieved a memory footprint reduction of 4.62\(\times\), 7.23\(\times\), and 9.64\(\times\) compared to the original Transformer model, and our SPION variants had the smallest memory footprint in all tasks. Compared to Reformer, our SPION demonstrates comparable training time but outperforms it in terms of memory footprint. This result shows that our SPION is able to efficiently perform sparse MHA using sparsity patterns without requiring additional memory. In addition, SPION-CF achieved the highest accuracy across all tasks, demonstrating that the achieved memory savings do not come at the expense of accuracy.

## 6. Conclusion

Due to the compute-intensive nature of the Transformer, which must perform a large number of MHA operations, we present a novel sparsification scheme that leverages convolution filters and the flood-filling algorithm to dynamically reduce the number of operations in the MHA. Our sparsification technique is capable of identifying various aspects of the sparsity pattern in the MHA operation and is applicable to many other Transformer models, not limited to the encoder-only Transformer. Furthermore, we developed a parallel sparse MHA implementation to achieve high performance. Experimental results on long sequence datasets demonstrate that our parallel implementation on GPUs achieves a significant performance improvement over state-of-the-art sparsity-aware Transformer models, while maintaining comparable or better accuracy.
2309.08520
Scaling Laws for Sparsely-Connected Foundation Models
We explore the impact of parameter sparsity on the scaling behavior of Transformers trained on massive datasets (i.e., "foundation models"), in both vision and language domains. In this setting, we identify the first scaling law describing the relationship between weight sparsity, number of non-zero parameters, and amount of training data, which we validate empirically across model and data scales; on ViT/JFT-4B and T5/C4. These results allow us to characterize the "optimal sparsity", the sparsity level which yields the best performance for a given effective model size and training budget. For a fixed number of non-zero parameters, we identify that the optimal sparsity increases with the amount of data used for training. We also extend our study to different sparsity structures (such as the hardware-friendly n:m pattern) and strategies (such as starting from a pretrained dense model). Our findings shed light on the power and limitations of weight sparsity across various parameter and computational settings, offering both theoretical understanding and practical implications for leveraging sparsity towards computational efficiency improvements.
Elias Frantar, Carlos Riquelme, Neil Houlsby, Dan Alistarh, Utku Evci
2023-09-15T16:29:27Z
http://arxiv.org/abs/2309.08520v1
# Scaling Laws for Sparsely-Connected Foundation Models

###### Abstract

We explore the impact of parameter sparsity on the scaling behavior of Transformers trained on massive datasets (i.e., "foundation models"), in both vision and language domains. In this setting, we identify the first scaling law describing the relationship between weight sparsity, number of non-zero parameters, and amount of training data, which we validate empirically across model and data scales; on ViT/JFT-4B and T5/C4. These results allow us to characterize the "optimal sparsity", the sparsity level which yields the best performance for a given effective model size and training budget. For a fixed number of non-zero parameters, we identify that the optimal sparsity increases with the amount of data used for training. We also extend our study to different sparsity structures (such as the hardware-friendly n:m pattern) and strategies (such as starting from a pretrained dense model). Our findings shed light on the power and limitations of weight sparsity across various parameter and computational settings, offering both theoretical understanding and practical implications for leveraging sparsity towards computational efficiency improvements.

## 1 Introduction

Foundation models (Bommasani et al., 2021), loosely defined as large (often Transformer-based (Vaswani et al., 2017)) networks that are trained on massive quantities of highly general data, have driven significant progress in deep learning, for both natural language (Brown et al., 2020) and vision tasks (Dosovitskiy et al., 2021). One key property of such models is the predictability of their performance when scaling various model attributes, such as the number of parameters, training characteristics and the amount of data or computation used (Kaplan et al., 2020). This is encapsulated by _scaling laws_, which make it possible to accurately predict the performance of a model specified just through its high-level parameters like size, data and computation.

A parallel trend, motivated by computational costs, has been the focus on increased efficiency for large models. This is usually achieved by employing compressed parameterizations via quantization (Gholami et al., 2021) or sparsification (Hoefler et al., 2021), during inference and/or training, which can lead to real-world speedups via both software and hardware support (Elsen et al., 2020; Yao et al., 2022). Despite major community interest in efficiency, the impact of these compressed representations, in particular of parameter/weight sparsity, on the scaling behavior of foundation models is not well understood, especially when applying powerful but expensive training-based compression methods (Jacob et al., 2018; Zhu & Gupta, 2017). In this paper, we aim to address this gap by studying the relationship of sparsity with the scaling laws of foundation models.

Specifically, we focus on _weight sparsity_, that is, on networks whose individual connections are pruned, and on Transformer-family (Vaswani et al., 2017) models for both vision (Dosovitskiy et al., 2021) and language (Raffel et al., 2020) domains. We use the massive JFT-4B (Google, 2023) and C4 (Raffel et al., 2020) datasets, which are several orders of magnitude larger than what has been employed so far by the vast majority of work on sparsity. In this massive dataset regime, dense models continue to improve with prolonged training; thus it is currently unclear whether sparse models can win at all in a fair comparison using equal amounts of training compute.
This is in contrast to popular pruning benchmarks (e.g., ImageNet (Deng et al., 2009) pruning) where dense models tend to saturate quickly (Kuznedelev et al., 2023), allowing sparse models to achieve major gains relative to dense models with a comparable number of parameters. In order to quantify the benefits of sparsity, or the lack thereof, in this large-dataset regime, we develop joint scaling laws that relate the sparsity of a network, its effective size and the amount of data used for training. We show that, for sparsity \(S\), number of non-zero parameters \(N\) and amount of training data/steps \(D\), the validation loss \(L\) approximately satisfies the following law, for both vision and language tasks:

\[L(S,N,D)=\left(a_{S}(1-S)^{b_{S}}+c_{S}\right)\cdot\left(\frac{1}{N}\right)^{b_{N}}+\left(\frac{a_{D}}{D}\right)^{b_{D}}+c. \tag{1}\]

Intuitively, the first two summands capture the power law scaling in terms of capacity, i.e., sparsity and non-zero parameters, and data, respectively, while \(c\) is a lower bound on the achievable task loss. In more detail, the first multiplicative term captures the impact of sparsity, here expressed as remaining density \((1-S)\), which itself follows a saturating power law with coefficient \(a_{S}\), exponent \(b_{S}\) and limit constant \(c_{S}\). The exponents \(b_{N}\) and \(b_{D}\) scale the (non-zero) parameter count \(N\) and the data \(D\) term, respectively, as is common in classical scaling laws (Kaplan et al., 2020). We validate this formula empirically using large vision and language datasets, several model sizes, amounts of training data and sparsity levels. Please see Figure 1 (Left) for an illustration of the scaling law fit and extrapolation quality. In turn, this law allows us to obtain several new insights regarding both the power and limitations of weight sparsity in the foundation model setting:

* First, it suggests that sparsity affects each model size in a similar way, i.e., as a multiplicative constant to the size scaling. At the same time, sparsification does not appear to interact significantly with the data scaling; the original dense term in \(D\) is preserved.
* Second, we can use our scaling law in Equation (1) to analytically derive the _optimal sparsity_ \(S_{\text{opt}}\) for a given inference size and training budget, allowing us to predict the regime where sparsity could actually provide benefits over simple dense model rescaling and extended training.
* Our analysis of optimal sparsity \(S_{\text{opt}}\), demonstrated in Figure 1 (Right), shows that its iso-contours run parallel to the dense compute-optimal Chinchilla line (Hoffmann et al., 2022) of the respective model and task. Importantly, the optimal sparsity increases with longer training. Further, while optimal dense models define a line on the parameter-FLOPs surface, optimal sparse models form a half-plane (with different sparsities unlocking multiple optimal sizes for a fixed training cost).
* In addition, we find that the main conclusions of our law hold also for the hardware-friendly n:m sparsity patterns (Mishra et al., 2021) and that pruning well-trained dense models is more efficient than training from scratch (while sparsifying) if dense checkpoints already exist, but is significantly slower otherwise.

In sum, our results provide the first scaling law for characterizing the impact of sparsity on the performance of Transformers trained on massive datasets.
From the conceptual perspective, this provides a simple tool to understand the power, but also the limitations, of sparsity for a given task/model combination. From the practical side, this can be used to determine whether sparsity can be a reasonable option for inference or training speedups, in settings where specific software/hardware support for such compressed representations is available.

Figure 1: (Left) Fit and extrapolation quality of the \(L(S,N,D)\) scaling law on T5/C4. (Right) Optimal sparsity \(S_{\text{opt}}\) contours fitted on ViT/JFT, for sparse and dense costs (details in Section 3.3).

## 2 Fair Evaluation in the Presence of Strong Scaling

In the context of modern Transformers trained on massive datasets, popular evaluation approaches (Gale et al., 2019; Singh and Alistarh, 2020; Sanh et al., 2020; Schwarz et al., 2021; Benbaki et al., 2023) that have been reasonable for standard pruning benchmarks like ResNet50/ImageNet (Singh and Alistarh, 2020; Schwarz et al., 2021) or BERT/GLUE (Sanh et al., 2020; Kurtic et al., 2022) require careful reconsideration to ensure meaningful comparisons. The primary reason for this, which we detail below, is that Transformers trained on massive quantities of data exhibit very different scaling behavior (Kaplan et al., 2020; Hoffmann et al., 2022):

* **Training data.** In a standard setting such as ResNet50/ImageNet, significantly increasing the training time of the dense model will quickly run into overfitting (Kuznedelev et al., 2023). In contrast, the performance improvements of ViT/JFT only start to saturate after extremely long training time (Zhai et al., 2022); overfitting is virtually non-existent. Consequently, the result of sparsifying a ViT pretrained on 100M images over another 100M images (a standard setup for RN50/ImageNet pruning) should not be compared against the initial model, as the sparse version has had twice as much overall training. Instead, the proper reference point is a dense model trained on 200M images. However, this dense model will likely be significantly more accurate.
* **Model size.** Developing small but accurate dense models used to require arranging many custom modules into a carefully engineered architecture (Howard et al., 2017; Tan and Le, 2019). Naively scaling down a 25M parameter ResNet50 by a factor of 10 will not yield a competitive 2.5M parameter ImageNet model, which is why most pruning papers omit a comparison against such a variant. However, when considering Transformer models and massive datasets, basic width and depth scaling typically results in a very strong family of differently-sized models. Hence, it is critical to always compare sparse models with a dense version of equivalent parameter count.
* **Computational costs.** Jointly considering _training data_ and _model size_ leads to the concept of _compute efficiency_ (Hoffmann et al., 2022), which is generally disregarded in classic sparsity benchmarks, since training is cheap enough to reach full convergence on all models. However, a smaller Transformer trained for longer can outperform a larger one trained with the same budget (i.e., for fewer steps). This effect renders proper comparisons even more challenging. For example, it means that a 50% sparse model obtained from pruning a model that was pretrained for 100K steps should be compared to a \(2\times\) smaller dense model trained for the same compute, i.e., 200K steps plus the computational cost of pruning.
In summary, in a fair foundation model pruning setup, sparsity should not be able to leverage increased training time, a significantly better optimized dense base architecture or more training compute. Otherwise, comparisons would unfairly favor sparse models, since equivalently sized dense versions could not fully exploit their strong scaling properties across all these axes. We would like to note that it is currently unclear whether weight-sparse foundation models _can win at all_ in this highly challenging setting, where all these factors are properly accounted for. Conclusively answering this question will require a full understanding of the _joint scaling_ between sparsity, model size and training data/compute, towards which we take the first step in this paper.

## 3 Scaling Laws for Parameter-Sparse Transformers

### Experimental Setup

This section briefly summarizes the setup of our main experiments: extensive sweeps across sparsity, size and data, which we subsequently use to develop scaling laws. A detailed discussion of all our choices, including hyper-parameters, can be found in Appendix A.

**Overview.** In terms of models and datasets, we focus on Vision Transformers (Dosovitskiy et al., 2021) trained for multi-label image classification on the JFT-4B dataset (Dehghani et al., 2023), consisting of 4 billion images, as well as encoder-decoder T5 models (Raffel et al., 2020) (improved 1.1 version (Google, 2023b)) trained for masked-language-modelling on C4 (Raffel et al., 2020), consisting of 150+ billion tokens. We follow the models' respective original training recipes (Zhai et al., 2022; Raffel et al., 2020) and carry out sparsification _during_ training via gradual magnitude pruning (Zhu and Gupta, 2017), using a cubic schedule starting at 25% of training and ending at 75%. In general, we note that our setup is optimized for robustness and consistency across scales rather than to fully maximize pruning performance on one particular setting (see also Appendix A and B).

**Sweep grids.** Table 1 lists the grid parameters that we sweep over. For ViTs, we consider 7 target model sizes in \(2\times\) increments each, while we use 4 target sizes in increments of \(4\times\) for T5. Vision Transformers are trained for 4 different lengths, with the longest corresponding to \(\approx 1.8\) billion images; language models are trained for 3 different lengths up to \(\approx 65\) billion tokens. The set of sparsity targets is the same in both cases, corresponding to \(2\), \(4\) and \(8\times\) compression rate. Overall, the ViT grid was designed to be more extensive, whereas the T5 setup was chosen to be more efficient. We execute all runs in the above grids and record the resulting validation losses. This data is then used to fit parametric scaling curves.

### Deriving the Core Law

**Dense scaling.** It is well established (Kaplan et al., 2020; Hoffmann et al., 2022) that the pre-training validation loss of _dense_ Transformers can be approximately modeled, in terms of parameter count \(N\) and amount of training data \(D\), by functions of the following form:

\[L(N,D)=\left(\frac{a_{N}}{N}\right)^{b_{N}}+\left(\frac{a_{D}}{D}\right)^{b_{D}}+c. \tag{2}\]

The first two summands capture the power law scaling in terms of size and data, respectively. Meanwhile, \(c\) represents the inherent stochasticity of the modelling problem as a lower bound on the loss.
The scaling exponents \(b_{N}\) and \(b_{D}\) are usually quite stable for a particular task, whereas the constant coefficients \(a_{N}\) and \(a_{D}\) vary with minor process changes like a different architecture or optimizer. Scaling laws usually assume an ideal training setup with no data repetition and focus on modelling the non-bottlenecked regime (e.g., with sufficient steps/data/batchsize/etc.) rather than on edge cases (Kaplan et al., 2020; Hoffmann et al., 2022); we follow suit. Further, we deliberately consider the pretraining loss and infinite data setting to assess the effectiveness of sparsity in its most challenging (one essentially needs to fit the data as well as possible) yet also most useful application (all further post-processing would directly benefit from a compressed base model).

**Preliminary observations.** The key question we hope to address is how parameter sparsity \(S\) enters this core scaling relationship; understanding this will enable studying other interesting aspects like optimal sparsity or limit performance. A priori, it is not obvious how \(S\) should enter into Equation (2) to form \(L(S,N,D)\), where \(N\) denotes the number of _non-zero parameters_. Are larger models easier to sparsify, does longer training help highly sparse models more, or is sparsity mostly independent of other parameters? Therefore, to get a first idea about what kind of shape we should expect for \(L(S,N,D)\), we execute the T5 sweep defined in Table 1 and visualize the results. Figure 2 shows validation loss (with a lower bound \(c=1\) subtracted to account for power law saturation against the inherent uncertainty limit) versus model size for all sparsity levels, grouped by the number of training steps. Please observe that the scaling of this plot, as well as most other visualizations in this paper, is log-log. We make three major observations from these graphs:

1. The loss vs. #non-zero curves for all sparsity levels seem to form almost parallel lines, differing primarily in the intercept.
2. The higher the sparsity the lower the loss, but gains are quickly diminishing.
3. The overall shape of all curves is very similar for each training duration; the y-axis just tends to shift a bit downwards with more training steps.

Figure 2: Visualization of T5/C4 sweep results for all sizes and sparsities, grouped by training steps.

\begin{table} \begin{tabular}{|l|c|c|} \hline \hline Model family & ViT & T5 \\ \hline \#Non-zero params & 0.66M, 1.33M, \(\dots\), 42.4M & 1.3M, 5.3M, \(\dots\), 85M \\ Training steps & 55K, 110K, 220K, 440K & 250K, 500K, 1M \\ Sparsities & 0.0, 0.5, 0.75, 0.875 & 0.0, 0.5, 0.75, 0.875 \\ \hline Total \#runs & 112 & 48 \\ \hline \hline \end{tabular} \end{table} Table 1: Grid definition for our main scaling sweeps.

**Sparse scaling law.** We now use the previous insights to construct our \(L(S,N,D)\) formula. Observation 1 suggests that the model size power law scaling for all sparsity levels differs primarily in a constant factor (intercept in a log-log plot); \(b_{N}\) stays consistent. Based on observation 2, we model this sparsity factor as a (quickly) saturating power law. Finally, observation 3 indicates that sparsity and data scaling are mostly independent, hence we simply keep the original \(D\)-term. In summary, these observations lead us to the following formula for the joint scaling law:

\[L(S,N,D)=\left(a_{S}(1-S)^{b_{S}}+c_{S}\right)\cdot\left(\frac{1}{N}\right)^{b_{N}}+\left(\frac{a_{D}}{D}\right)^{b_{D}}+c. \tag{3}\]
To properly model that \(0.75\) is twice as sparse as \(0.5\), we define the sparsity power-law part via the corresponding compression rate \(1/(1-S)\). Further, \(a_{N}\) is subsumed by \(a_{S}\) and \(c_{S}\), leaving 7 free parameters. On a high level, our scaling law combines a _capacity limit_ term, comprised of size and sparsity (which can encode extra information via its zero pattern), with the standard data limit term. We note that this formulation suggests that higher sparsity is always better (but with potentially quite quickly saturating improvements), which may not be true in practice. For very high sparsity (e.g., \(64\times\) compression) we sometimes see slightly worse performance, presumably due to imperfections in the pruning and optimization process. This phenomenon could potentially be modelled by a quadratic, but for the present study we treat this as a bottleneck case that we do not necessarily capture. Lastly, \(S=0\) recovers the established \(L(N,D)\) form.

**T5/C4 results.** Next, we fit the coefficients of \(L(S,N,D)\) to our entire T5 sweep data. This is accomplished, following (Hoffmann et al., 2022), by minimizing the Huber-loss of \(\log L\) with \(\delta=0.001\) (for robustness against outliers) using BFGS, for multiple random starting points. We plot actual vs. predictions in Figure 1 (Left) to judge the quality of our final fit (see Appendix C for coefficient values). All in all, the predictions match the observed data quite closely (despite having \(\approx 7\) datapoints per free parameter), demonstrating the compatibility of the law in (3) with the observations. Furthermore, we evaluate extrapolation performance by pruning a 2.3 billion parameter model to \(75\%\) sparsity. This constitutes an \(\approx 6.75\times\) _larger_ target number of non-zero parameters than the maximum in our fitting data, which is a similar level of extrapolation as was done for Chinchilla (Hoffmann et al., 2022). To avoid any architecture bottlenecks and achieve better training utilization, we use the T5-XL architecture (rather than a simply rescaled T5-base) and train with batchsize 256 for 250k steps (rather than 500k with batchsize 128). Despite these changes to our setup, the prediction of our fitted scaling law is quite close to the actual validation loss; see Figure 1 (Left).

**ViT/JFT-4B results.** Lastly, we execute the ViT sweep listed in Table 1 and also fit a scaling law of the same (3) form as for the T5 data. Here we use \(\delta=0.01\) and do not take the log of \(L\), as we find the NLP-optimized settings from before to exclude outliers too aggressively for ViT data (which gives a poor fit for smaller models). We note that this sweep contains \(>2\times\) more datapoints, leading to more robust coefficient estimates. We qualitatively compare predicted and actual loss-vs-data curves in Figure 3, organized by sparsity level. We strongly emphasize that the predictions in all subplots here are produced by _a single joint law_ with the same parameters (_not_ one fit per subplot). As can be seen, for the most part, our law appears to match the collected datapoints very well. Only at the lowest amount of training are some points a bit off the prediction curve; we suspect that this may be related to the fact that these runs only involve comparatively few training steps, which may be a slight bottleneck for the optimization process.
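To make the fitting recipe concrete, the sketch below reproduces it on synthetic data. The coefficient values and the data grid are placeholders, while the Huber loss on \(\log L\) (\(\delta=0.001\)) minimized with BFGS from multiple random starting points mirrors the description above; positivity constraints on the coefficients are omitted for brevity.

```
import numpy as np
from scipy.optimize import minimize

def law(p, S, N, D):
    """Joint scaling law of Equation (3)."""
    aS, bS, cS, bN, aD, bD, c = p
    return (aS * (1 - S) ** bS + cS) * N ** (-bN) + (aD / D) ** bD + c

def objective(p, S, N, D, L_obs, delta=1e-3):
    """Huber loss on log-residuals, as in the text."""
    r = np.log(np.maximum(law(p, S, N, D), 1e-12)) - np.log(L_obs)
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta)).sum()

# synthetic stand-in for the (sparsity, #non-zeros, #steps, loss) sweep data
rng = np.random.default_rng(0)
S = rng.choice([0.0, 0.5, 0.75, 0.875], 48)
N = rng.choice([1.3e6, 5.3e6, 21e6, 85e6], 48)
D = rng.choice([250e3, 500e3, 1e6], 48)
true_p = np.array([2.0, 1.0, 1.0, 0.3, 5e5, 0.3, 1.0])  # hypothetical values
L_obs = law(true_p, S, N, D) * np.exp(0.01 * rng.standard_normal(48))

# multiple random starting points, keep the best fit
fits = [minimize(objective, true_p * rng.uniform(0.5, 2.0, 7),
                 args=(S, N, D, L_obs), method="BFGS") for _ in range(10)]
best = min(fits, key=lambda f: f.fun)
print(best.x)  # recovered (a_S, b_S, c_S, b_N, a_D, b_D, c)
```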
### Optimal Sparsity

One particularly interesting feature of the joint scaling law just derived is that it allows easily comparing models with different sparsities but the same number of non-zero parameters and training cost. Thus, we can determine in which situations sparse models are better than dense ones, according to all criteria discussed in Section 2. Specifically, we can define the following quantity:

**Optimal sparsity.** _The sparsity value \(S_{\text{opt}}(N,C)\) which yields the lowest validation loss for a fixed number of non-zero parameters \(N\) and fixed training cost \(C\)._¹

Footnote 1: We note that it is common in the literature (Hoffmann et al., 2022) to define scaling laws in terms of parameters \(N\) and data \(D\), but switch to expressing scaling in terms of computational cost \(C\) whenever relevant.

There are two ways of defining training costs in this context: (a) _densely_, as the cost of training a dense base model of size \(N/(1-S)\) for the same amount of training steps, or (b) _sparsely_, as the actual FLOPs spent to produce the sparse model, assuming that sparsity can be perfectly exploited during training as soon as it appears. For our particular sparsification schedule, (b) can be calculated by multiplying the training costs of a dense model, approximated as \(6ND\) (Kaplan et al., 2020) (or half of that for encoder-decoder architecture models), by the following factor (see Appendix D for a derivation):

\[c_{\text{mul}}(S)=(0.25+0.50\cdot(1-0.75\cdot S))/(1-S)+0.25. \tag{4}\]

As we have assumed that the amount of training equals the amount of new data, we can determine the performance of a sparsity-\(S\) model trained for compute \(C=6ND\cdot c_{\text{mul}}(S)\) by querying \(L\) with \(D_{S}=(C/6N)/c_{\text{mul}}(S)\), i.e., scaling down the \(D\) corresponding to \(C\) by the increase in training costs of the sparse model. Inserting \(D_{S}\) and then differentiating with respect to \(S\) gives the contour line for which sparsity \(S\) is optimal, i.e., achieves the lowest loss among all possible sparsity choices, when training for the same compute:

\[a_{D}b_{D}\cdot\frac{c^{\prime}_{\text{mul}}(S)}{c_{\text{mul}}(S)}\cdot(D_{S}/c_{\text{mul}}(S))^{-b_{D}}=a_{S}b_{S}\cdot(1-S)^{b_{S}-1}\cdot N^{-b_{N}}. \tag{5}\]

An interesting property of this contour is that it implies \(D_{S}=O(N^{b_{N}/b_{D}})\), meaning that if data scaling is stronger than size scaling, then the same sparsity is optimal at a smaller data-to-size ratio on larger models. This is sensible, as a process bottlenecked more by capacity than by data will benefit more from increasing the former, e.g., by adding sparsity.

Figure 3: Visual comparison of the ViT scaling sweep data and the corresponding fitted scaling law.

Finally, we want to point out that \(S_{\text{opt}}\) can often also be determined explicitly by solving (5) for \(S\), e.g., here for dense training costs with \(c_{\text{mul}}(S)=1/(1-S)\):

\[S_{\text{opt}}(N,C)=\max\Big{\{}1-\exp\Big{(}\Big{[}\log\frac{b_{N}a_{D}b_{D}}{a_{S}b_{S}}+b_{N}\log N\Big{]}/(b_{D}+b_{S})\Big{)}\cdot\Big{(}\frac{C}{6N}\Big{)}^{-b_{D}/(b_{D}+b_{S})},0\Big{\}}. \tag{6}\]

**Empirical results.** We now compute optimal sparsity curves for our experimental T5 and ViT data, for which we fitted scaling laws in the previous subsection. Figure 1 (Right) and Figure 4 show the optimal sparsity contours, both for dense and sparse costs.
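For dense training costs, Equation (6) is straightforward to evaluate. The sketch below uses purely hypothetical coefficients, chosen only to illustrate the qualitative behavior that optimal sparsity grows with the training budget.

```
import numpy as np

def s_opt_dense(N, C, aS, bS, bN, aD, bD):
    """Optimal sparsity under dense training costs, Equation (6)."""
    k = np.exp((np.log(bN * aD * bD / (aS * bS)) + bN * np.log(N)) / (bD + bS))
    return max(1.0 - k * (C / (6.0 * N)) ** (-bD / (bD + bS)), 0.0)

# hypothetical coefficients, for illustration only
coef = dict(aS=2.0, bS=1.0, bN=0.3, aD=44.0, bD=0.3)
N, D = 1e8, 1e9
for mult in (1, 10, 100):  # training for 1x, 10x, 100x a 6*N*D budget
    print(mult, round(s_opt_dense(N, mult * 6 * N * D, **coef), 3))
# prints roughly 0.01, 0.69, 0.90: longer training pushes S_opt up
```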
An interesting feature of Equation (5) is that all sparsity contours are, by construction, parallel to the Chinchilla compute-optimal line (Hoffmann et al., 2022), which denotes ideal utilization of training FLOPs for fully dense models; this can be clearly observed in the plots as well. However, we note that the Chinchilla line does not necessarily correspond to the \(S=0\) case, since non-zero sparsity may already be optimal in this regime (this is the case for sparse FLOPs). The key take-away from these results is that as one trains significantly longer than Chinchilla (dense compute optimal), more and more sparse models start to become optimal in terms of loss for the same number of non-zero parameters. This is because the gains of further training dense models start to slow down significantly at some point, allowing sparse models to overtake them. We further illustrate this effect on a subset of our actual ViT data in Figure 5.

The practical question now is: how much longer training is necessary? In terms of sparse FLOPs, 50% sparsity is already optimal for \(<2\times\) (ViT) and \(<3\times\) (T5) longer training than Chinchilla; for dense FLOPs it is \(\approx 5\times\) and \(\approx 70\times\), respectively. While the latter number seems quite high at first glance, we note that language models of the sizes we consider here are already typically trained for \(>100\times\) longer than Chinchilla (Brown et al., 2020). Additionally, larger models are being trained with more and more data as well, e.g., Llama2-7B with \(\approx 14\times\) Chinchilla (Touvron et al., 2023b). In general, the optimal sparsity at a given point \((N,C)\) is lower for dense than for sparse FLOPs, since the former assumes that sparsity provides no benefits _during_ training.

#### 3.3.1 Limit Performance

In the previous section, we focused only on _when_ sparse models become optimal, but not on _how much better_ they can be compared to dense models. In this section, we study the following question: How much larger, and thus computationally more expensive, does a dense model need to be in order to match the loss of a smaller sparse model with very long training? Since we found in Section 3.2 that the scaling term in \(D\) does not interact with sparsity, it suffices to compute the increase in \(N\) required to lower the loss by the same factor as the increase in \(S\), via:

\[\text{gain}(S)=\Big(\frac{a_{S}(1-S)^{b_{S}}+c_{S}}{a_{S}+c_{S}}\Big)^{-1/b_{N}}. \tag{7}\]

The gains for our particular scaling coefficients are shown in Table 2. They are to be interpreted in the following way: for example, a 75% sparse ViT with \(N\) non-zeros will perform similarly to a dense one with \(\approx 2.17N\) parameters, when both are trained with _the same amount of data_. Crucially, this holds for _any_ amount of data, and thus also in the infinite limit when training is purely capacity bound. Hence, this expresses an equivalence between dense capacity and sparse capacity. Remarkably, sparsity gains are very similar across the vision and text domains, with the sweet spot being around 75% sparsity at \(\approx 2.15\times\) gain. We believe that this is due to the relative nature of these quantities with respect to the scaling parameters. (At the same time, the fact that the numbers are within 0.01 of each other is likely a coincidence.)

Figure 4: Optimal T5 sparsity contours.

Figure 5: Loss vs. sparse pretraining FLOPs for ViT models of varying sparsity.
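Evaluating (7) is essentially a one-liner; in the sketch below, the coefficient values are again assumed to come from the fits of Section 3.2.

```python
def dense_size_gain(S, aS, bS, cS, bN):
    # Eq. (7): factor by which a dense model's parameter count must grow to
    # match a sparse model with the same number of non-zeros, at equal data.
    return ((aS * (1.0 - S) ** bS + cS) / (aS + cS)) ** (-1.0 / bN)

# With the fitted ViT/JFT coefficients, dense_size_gain(0.75, ...) should land
# near the 2.17x figure reported in Table 2.
```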
## 4 Extensions

### N:M Sparsity

In addition to our previous _unstructured_ sparsity exploration, we now also consider _structured_ n:m sparsity, which can be well accelerated on actual hardware, e.g., as 2:4 sparsity on modern NVIDIA GPUs (Pool and Yu, 2021; Hubara et al., 2021). Similar to how minor changes in the process (optimizer, model shape) generally only affect the multiplicative constants in dense scaling laws (Kaplan et al., 2020), we also expect minor changes in the sparsification process (pattern, algorithm, etc.) to only affect the sparsity term in (3). This can be exploited to fit laws based on significantly fewer runs: if the dense base scaling is known, one only has to fit \(a_{S}\), \(b_{S}\) and \(c_{S}\) (just 3 rather than 7 parameters) to find the corresponding \(L(S,N,D)\). We now utilize this in the context of n:m sparsity by fitting new laws for 2:4 and 1:4 as well as 4:8 and 2:8 patterns, respectively, based only on a subset of our full grid in Table 1. Concretely, we execute all runs involving either the least amount of steps or the smallest model. Figure 6 visualizes a subset of the collected data, displaying a very similar form to Figure 2, which indicates that the general scaling law shape also holds for n:m sparsity. We also fit scaling laws (with Huber \(\delta=0.01\), as the 0.75 patterns would otherwise be treated as outliers) and calculate sparsity gains as in Section 3.3.1 (see Table 3). In general, it seems that 2:4 and 4:8 both perform very similarly to 50% unstructured sparsity (see Table 2 and also Figure 6), although the n:m estimates are likely slightly more noisy due to the smaller amount of data used in fitting the curves. Meanwhile, 1:4 brings almost no advantage and 2:8 only a slight improvement, which is contrary to our unstructured results. We suspect that the 75% patterns may simply be too stringent to significantly increase capacity beyond their 50% variants.

\begin{table} \begin{tabular}{|c|c c c|} \hline Family & 0.500 & 0.750 & 0.875 \\ \hline ViT/JFT & \(1.60\times\) & \(2.17\times\) & \(2.63\times\) \\ T5/C4 & \(1.59\times\) & \(2.16\times\) & \(2.63\times\) \\ \hline \end{tabular} \end{table} Table 2: Equivalent dense size multiplier needed to match the performance of a sparse model.

Figure 6: Loss vs. size plot for a subset of T5/C4 n:m sparsity data.

### Pruning Pretrained Models

Lastly, we consider a practical scenario where a set of existing _very well trained_ dense models should be made more efficient via pruning, using _a small fraction_ of the compute spent for the initial pretraining. Our main interest here is to compare the efficiency of sparsifying from scratch and sparsifying from a pretrained checkpoint. For that purpose, we train ViT S/16, M/16 and B/16 models for 4 full epochs on JFT (i.e., 16 billion images) and then start the same gradual sparsification procedure we used before from these checkpoints, for 5.6% of the pretraining budget (as the model is already pretrained, we start to sparsify immediately rather than after 25% of training). Finally, we use our scaling laws from Section 3.2 to determine the amount of training necessary to produce equivalent models of the same quality when starting from scratch. Table 4 shows how much more/less data is required to achieve equivalent performance when sparsifying from scratch, with the pretraining cost excluded and included, respectively.
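For concreteness, a minimal sketch of gradual magnitude sparsification with an n:m option follows. The cubic ramp between 25% and 75% of training is an assumption in the style of Zhu & Gupta (2017) rather than our exact recipe, and `nm_mask` assumes the weight count is divisible by m.

```python
import numpy as np

def target_sparsity(step, total_steps, s_final, start=0.25, end=0.75):
    # Cubic ramp from 0 to s_final between `start` and `end` of training.
    t = np.clip((step / total_steps - start) / (end - start), 0.0, 1.0)
    return s_final * (1.0 - (1.0 - t) ** 3)

def magnitude_mask(w, sparsity):
    # Unstructured: drop the smallest-magnitude fraction of weights.
    k = int(round(sparsity * w.size))
    if k == 0:
        return np.ones(w.shape, dtype=bool)
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.abs(w) > thresh

def nm_mask(w, n=2, m=4):
    # Structured n:m: keep the n largest-magnitude weights per group of m.
    groups = np.abs(w).reshape(-1, m)
    order = np.argsort(groups, axis=1)  # ascending magnitudes per group
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, order[:, : m - n], False, axis=1)
    return mask.reshape(w.shape)
```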
If the model already exists and there is thus no pretraining cost, then starting from such a checkpoint is \(>4\times\) more efficient than sparsifying from scratch for 0.5/0.75 sparsity, and \(>2\times\) for 0.875 sparsity, respectively. The reason why the efficiency gains decrease at higher sparsity is most likely the increased divergence from the initial starting point. At the same time, when the pretraining cost is counted as well, pruning throughout the whole training process appears to be \(\geq 4\times\) more efficient than pruning a pretrained model with \(\approx 5\%\) of its pretraining budget. Overall, these results clearly demonstrate that, while the sparsification process benefits significantly from a better trained initial model, it does so only up to a certain extent. Finally, we note that the 50% models are \(\approx 0.2-0.3\) points away from their dense baseline loss, which matches our result in Section 3.3.1 that the size gain of 50% sparsity is noticeably less than \(2\times\) for well trained models.

## 5 Related Work

Sparsity & pruning. Sparsity and pruning, i.e., having a large number of exactly zero weights which can be ignored during inference, has a long history (LeCun et al., 1989; Hassibi et al., 1993), and a large number of works have been published on this topic (Hoefler et al., 2021). Current state-of-the-art methods range from simple gradual removal of the smallest weights (Zhu and Gupta, 2017), to partial or full sparse training (Mocanu et al., 2018; Jayakumar et al., 2021; Peste et al., 2021), approximate Hessian-based metrics (Singh and Alistarh, 2020; Frantar et al., 2021) and "soft" sparse optimization (Kusupati et al., 2020; Sanh et al., 2020). Many of these methods can impose very high levels of sparsity at minimal accuracy loss, which can lead to substantial practical speedups with specialized inference algorithms (Kurtz et al., 2020; Elsen et al., 2020). At the same time, most of those works focus on, by modern standards, relatively simple tasks like ResNet50/ImageNet or BERT/GLUE, with rather overparametrized models. In contrast, there has been very little work when it comes to sparsifying modern Transformers (Vaswani et al., 2017) trained on massive datasets: The Appendix of Gopher (Rae et al., 2021) conducts pruning experiments for a generative language modelling task and finds that, when trained for the same number of steps, sparse models can outperform dense ones, but leaves open whether this is also possible when accounting for the significantly increased compute spent on producing those sparse models, relative to dense ones trained with the same amount of data/steps. Similarly, (Cerebras, 2022) prunes a GPT-like model, also using significantly more data than its dense baseline. Recently, SparseGPT (Frantar and Alistarh, 2023) showed that it is possible to impose non-trivial amounts of weight sparsity on extremely large language models, even without retraining; yet, it remains unclear if this can also be done on more recent, smaller and much less undertrained networks.

Scaling laws. The key behind the tremendous success of Transformer models is their exceptional scaling properties: increasing model size and/or data brings consistent performance improvements, even at already huge scale. Further, this scaling behavior is very predictable, following relatively simple power-law curves (Kaplan et al., 2020). This can, for example, be utilized to construct a family of training compute optimal models (Hoffmann et al., 2022).
More recently, these basic scaling laws are being extended to various more specialized applications, e.g.: optimizing model shapes (Alabdulmohsin et al., 2023), routing mechanisms (Clark et al., 2022), repeating training data multiple times (Muennighoff et al., 2023) and several downstream tasks (Caballero et al., 2023). However, not much is known about the scaling of weight sparsity for such models.

\begin{table} \begin{tabular}{|c|c c|c c|c c|} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{0.500} & \multicolumn{2}{c|}{0.750} & \multicolumn{2}{c|}{0.875} \\ & exc. & inc. & exc. & inc. & exc. & inc. \\ \hline S/16 & \(4.90\times\) & \(0.25\times\) & \(4.27\times\) & \(0.23\times\) & \(2.45\times\) & \(0.13\times\) \\ M/16 & \(4.76\times\) & \(0.25\times\) & \(4.18\times\) & \(0.22\times\) & \(2.57\times\) & \(0.14\times\) \\ B/16 & \(4.35\times\) & \(0.23\times\) & \(4.00\times\) & \(0.21\times\) & \(2.72\times\) & \(0.14\times\) \\ \hline \end{tabular} \end{table} Table 4: Relative amount of data required for sparsifying from scratch to match the validation loss of pruning from a pretrained model, when pretraining cost is excluded (exc.) and included (inc.).

Rosenfeld et al. (2021) study the relationship between width, depth and weight-density for pruning pretrained ResNets trained primarily on the nowadays very small CIFAR10 dataset. In contrast, we consider modern Transformers trained on datasets many orders of magnitude larger and focus particularly on the data/compute dimension that is crucial in this context, but not very relevant in the setting of Rosenfeld et al. (2021).

Transformer efficiency. Overall, making (large) Transformers more efficient is currently a highly active area of research. Probably the currently most popular and practical approach is quantization, that is, reducing the numerical precision of weights (and sometimes also activations) (Frantar et al., 2022; Dettmers and Zettlemoyer, 2022; Xiao et al., 2022). Further, there are also many works on Mixture-of-Experts (MoE) models, large ensembles of models/individual layers where each input is only processed by a small part, thus keeping the overall computation cost constant (Du et al., 2022; Fedus et al., 2022; Artetxe et al., 2022; Riquelme et al., 2021). MoEs are a form of _dynamic activation_ sparsity, which is very different from the _static weight_ sparsity that we study in this work; the former trades off increased memory for faster inference, whereas the latter reduces both inference _and_ memory costs. In general, we note that quantization, MoEs and weight sparsity are all complementary techniques that may be stacked for compound gains (Han et al., 2016; Kurtic et al., 2022).

## 6 Discussion

Limitations. While we have conducted extensive experiments, for both vision and language domains, our results still have limitations, which we hope will be addressed in future work.

* First, our sparsification recipe was optimized for robustness and scalability across a wide range of setups, rather than to fully maximize performance in a particular one. While we believe that the overall shape of our scaling results will remain consistent, we speculate that specific coefficient values can be improved significantly with more extensive per-run tuning and/or better sparsification techniques.
* In this work, we performed pruning directly for massive data pretraining tasks.
While this is ideal in terms of usability, as all downstream applications would directly benefit from a more efficient base model, it also appears to make compression quite challenging. We think that sparsity rates can probably be improved significantly when pruning is performed directly for more specialized applications that require only a subset of the base model's full capabilities. Similarly, we considered the optimal infinite data setting, which essentially eliminates overfitting from dense baselines. We think that sparsity could be particularly practical when data is limited and has to be repeated.
* Finally, as the main goal of this study was understanding core scaling relationships, we focused primarily on the cleanest available performance metric, non-zero parameter count. However, in practice, sparsity acceleration can be quite complex: current software/hardware may not provide ideal speedups, and models generally also contain operations (e.g., layer-norms, attention) which do not benefit from weight sparsity. We think extending our results to different target metrics is a very interesting topic for future work.

Compatibility with other works. We will now briefly discuss how our scaling insights line up with existing sparsification results on similar models/datasets.

* First, the results in the Appendix of Rae et al. (2021), for a decoder-only text-generation model, are consistent with our scaling laws; the improvement through sparsity appears to be similar for each model size, and their maximum size advantage of \(2.5\times\) observed at \(0.9\) sparsity is quite close to our limit gains in Section 3.3.1, which are applicable here.
* In contrast, Cerebras (2022) reports a significantly better gain of \(\approx 5\times\), but in a quite different setting where the baseline is training (not inference) compute optimal and sparsification uses \(>5\times\) more data than the dense comparison point. This is not inconsistent with our results: if we query our fitted T5 scaling law (see Section 3.2) with this setup, we predict 1.54 loss (dense 1B params, 20B tokens) vs. 1.48 loss (80% sparse & 200M non-zeros, 100B tokens), in favor of the longer trained sparse model.
* Finally, SparseGPT (Frantar and Alistarh, 2023) notes that post-training pruning becomes significantly easier as the model size increases. However, they do not perform any retraining, and observe this effect primarily relative to the respective unpruned base model, not in terms of improvements over the Pareto size-vs-loss frontier that we study in this work. Hence, we believe that this is likely more related to the pretrained models' initial robustness to perturbations rather than the architecture's inherent sparsifiability.

Practical consequences. Our scaling insights lead to a number of practical consequences: Sparsity seems to affect each model size in approximately the same way, while remaining mostly independent of the amount of training data used. This provides evidence that good pruning performance in less expensive settings should generalize to performance at scale, which will hopefully accelerate research on new sparsification recipes and algorithms. Additionally, we have shown that optimal sparsity levels continuously increase with longer training. Sparsity thus provides a means to further improve model performance for a fixed final parameter cost. In particular, when training beyond Chinchilla optimality, where simple dense training starts to run into diminishing returns, sparsity can provide a clear alternative.
Thus, our findings can be interpreted as providing practical motivation for further developing sparsity support.

## 7 Acknowledgements

The authors would like to thank Svinay Subramanian for his useful feedback and suggestions, especially regarding Section 3.3.1 and Section 4.1. We would also like to thank Amir Yazdanbakhsh, Shivani Agrawal, Jeffrey Pennington and Yann Dauphin for their valuable feedback during our discussions.
2304.00180
FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems
Response ranking in dialogues plays a crucial role in retrieval-based conversational systems. In a multi-turn dialogue, to capture the gist of a conversation, contextual information serves as essential knowledge to achieve this goal. In this paper, we present a flexible neural framework that can integrate contextual information from multiple channels. Specifically for the current task, our approach is to provide two information channels in parallel, Fusing Conversation history and domain knowledge extracted from Candidate provenance (FCC), where candidate responses are curated, as contextual information to improve the performance of multi-turn dialogue response ranking. The proposed approach can be generalized as a module to incorporate miscellaneous contextual features for other context-oriented tasks. We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks. Our experimental results show that our framework significantly outperforms the previous state-of-the-art models, improving Recall@1 by 7% and MAP by 4%. Furthermore, we conduct ablation studies to evaluate the contributions of each information channel, and of the framework components, to the overall ranking performance, providing additional insights and directions for further improvements.
Zihao Wang, Eugene Agichtein, Jinho Choi
2023-03-31T23:58:28Z
http://arxiv.org/abs/2304.00180v1
FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems ###### Abstract Response ranking in dialogues plays a crucial role in retrieval-based conversational systems. In a multi-turn dialogue, to capture the gist of a conversation, contextual information serves as essential knowledge to achieve this goal. In this paper, we present a flexible neural framework that can integrate contextual information from multiple channels. Specifically for the current task, our approach is to provide two information channels in parallel, **F**using **C**onversation history and domain knowledge extracted from **C**andidate provenance (**FCC**), where candidate responses are curated, as contextual information to improve the performance of multi-turn dialogue response ranking. The proposed approach can be generalized as a module to incorporate miscellaneous contextual features for other context-oriented tasks. We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks. Our experimental results show that our framework significantly outperforms the previous state-of-the-art models, improving Recall@1 by 7% and MAP by 4%. Furthermore, we conduct ablation studies to evaluate the contributions of each information channel, and of the framework components, to the overall ranking performance, providing additional insights and directions for further improvements. ## 1 Introduction Response ranking is an essential part of dialogue systems [21; 1], and plays a critical part in information- or search-oriented dialogues where responses may come from diverse yet usually designated sources. As shown in Fig. 1, candidate (1) is the true
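To make the two-channel idea concrete, a hypothetical sketch of a fusion ranker follows. The encoders producing the history, provenance, and candidate vectors (e.g., a pretrained text encoder such as BERT) are omitted, and all names and dimensions here are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TwoChannelRanker(nn.Module):
    """Fuse history, provenance, and candidate encodings, then score."""
    def __init__(self, dim=256):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU())
        self.score = nn.Linear(dim, 1)

    def forward(self, history_vec, provenance_vec, candidate_vec):
        z = torch.cat([history_vec, provenance_vec, candidate_vec], dim=-1)
        return self.score(self.fuse(z)).squeeze(-1)  # higher = better match

# Rank 5 candidate responses for one dialogue context.
model = TwoChannelRanker()
h = torch.randn(5, 256)  # conversation-history encoding, tiled per candidate
p = torch.randn(5, 256)  # candidate-provenance encoding
c = torch.randn(5, 256)  # candidate-response encodings
ranking = model(h, p, c).argsort(descending=True)
```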
2308.16437
AntM$^{2}$C: A Large Scale Dataset For Multi-Scenario Multi-Modal CTR Prediction
Click-through rate (CTR) prediction is a crucial issue in recommendation systems. There has been an emergence of various public CTR datasets. However, existing datasets primarily suffer from the following limitations. Firstly, users generally click different types of items from multiple scenarios, and modeling from multiple scenarios can provide a more comprehensive understanding of users. Existing datasets only include data for the same type of items from a single scenario. Secondly, multi-modal features are essential in multi-scenario prediction as they address the issue of inconsistent ID encoding between different scenarios. The existing datasets are based on ID features and lack multi-modal features. Third, a large-scale dataset can provide a more reliable evaluation of models, fully reflecting the performance differences between models. The scale of existing datasets is around 100 million, which is relatively small compared to the real-world CTR prediction. To address these limitations, we propose AntM$^{2}$C, a Multi-Scenario Multi-Modal CTR dataset based on industrial data from Alipay. Specifically, AntM$^{2}$C provides the following advantages: 1) It covers CTR data of 5 different types of items, providing insights into the preferences of users for different items, including advertisements, vouchers, mini-programs, contents, and videos. 2) Apart from ID-based features, AntM$^{2}$C also provides 2 multi-modal features, raw text and image features, which can effectively establish connections between items with different IDs. 3) AntM$^{2}$C provides 1 billion CTR data with 200 features, including 200 million users and 6 million items. It is currently the largest-scale CTR dataset available. Based on AntM$^{2}$C, we construct several typical CTR tasks and provide comparisons with baseline methods. The dataset homepage is available at https://www.atecup.cn/home.
Zhaoxin Huan, Ke Ding, Ang Li, Xiaolu Zhang, Xu Min, Yong He, Liang Zhang, Jun Zhou, Linjian Mo, Jinjie Gu, Zhongyi Liu, Wenliang Zhong, Guannan Zhang
2023-08-31T03:52:57Z
http://arxiv.org/abs/2308.16437v1
# AntM\({}^{2}\)C: A Large Scale Dataset For Multi-Scenario Multi-Modal CTR Prediction

###### Abstract.

Click-through rate (CTR) prediction is a crucial issue in recommendation systems, directly impacting user experience and platform revenue. In recent years, CTR has garnered attention from both industry and academia, leading to the emergence of various public CTR datasets. However, existing CTR datasets primarily suffer from the following limitations. Firstly, users generally click different types of items from multiple scenarios, and modeling the CTR from multiple scenarios can provide a more comprehensive understanding of users and share knowledge between different scenarios. Existing datasets only include CTR data for the same type of items from a single scenario. Secondly, multi-modal features are essential in multi-scenario CTR prediction as they effectively address the issue of inconsistent ID encoding between different scenarios. The existing datasets are based on ID features and lack multi-modal features. Third, a large-scale CTR dataset can provide a more reliable and comprehensive evaluation of complex models, fully reflecting the performance differences between models. The scale of existing datasets is around 100 million, which is relatively small compared to real-world industrial CTR prediction. To address these limitations, we propose AntM\({}^{2}\)C, a **M**ulti-Scenario **M**ulti-Modal CTR dataset based on real industrial data from the Alipay platform. Specifically, AntM\({}^{2}\)C possesses the following characteristics: 1) It covers CTR data of 5 different types of items from Alipay, providing insights into the preferences of users for different items, including advertisements, vouchers, mini-programs, contents, and videos. 2) Apart from ID-based features, AntM\({}^{2}\)C also provides 2 multi-modal features, raw text and image features, which can effectively establish connections between items with different IDs. 3) AntM\({}^{2}\)C provides 1 billion CTR data with 200 features, including 200 million users and 6 million items. It is currently the largest-scale CTR dataset available, providing a reliable and comprehensive evaluation for CTR models. Based on AntM\({}^{2}\)C, we construct several typical CTR tasks, including multi-scenario modeling, item and user cold-start modeling, and multi-modal modeling. For each task, we provide comparisons with baseline methods. The dataset homepage is available at [https://www.atecup.cn/home](https://www.atecup.cn/home).

Click-through rate prediction; Multi-Scenario; Multi-Modal

For example, a user may watch a short video on the Tab3 page, then click on a coffee coupon during a marketing campaign, and finally use the Alipay search to click a coffee ordering mini-program to place an order. Jointly modeling this multi-scenario CTR data can provide a more comprehensive understanding of user preferences, and the knowledge across scenarios can be shared to improve the CTR performance in each scenario. However, existing CTR datasets have a limited range of item types and generally originate from the same business scenario, which fails to capture the multi-scenario preferences of users. For example, Criteo1 and Avazu2 only involve CTR data for advertisements. As e-commerce platforms, both Amazon3 and AliExpress4 provide CTR data for their e-commerce items. Tenrec (Yuan et al., 2022) focuses more on video and article recommendations.
Secondly, multi-modal features can address the issue of inconsistent IDs for similar items in different business scenarios and effectively establish a bridge between different scenarios. For example, a video about coffee and a coffee coupon have different IDs in different business scenarios. Directly using ID features cannot capture the relationship between these two items. Multi-modal features inherently carry semantic meaning and can better compensate for the inconsistency of ID features across different domains. Additionally, with the rise of large language models (LLMs), combining LLMs with CTR prediction has become an emerging research field. Existing CTR datasets are based on ID features and lack abundant multi-modal features, so CTR models cannot be evaluated in multi-scenario and multi-modal settings. Furthermore, large-scale datasets can reliably and comprehensively reflect the performance of CTR models, while also highlighting the differences between CTR models. The existing datasets are typically at the scale of 100 million, which is insufficient to further validate capabilities in larger-scale industrial scenarios.

Footnote 1: [https://www.kaggle.com/c/criteo-display-ad-challenge](https://www.kaggle.com/c/criteo-display-ad-challenge)

Footnote 2: [https://www.kaggle.com/c/avazu-cr-prediction](https://www.kaggle.com/c/avazu-cr-prediction)

To address the aforementioned challenges, we propose the AntM\({}^{2}\)C dataset, a large-scale multi-scenario multi-modal dataset for CTR prediction. Compared with existing CTR datasets, AntM\({}^{2}\)C has the following advantages:

* **Diverse business scenarios and item types**: AntM\({}^{2}\)C contains different types of items from five typical business scenarios on the Alipay platform, including advertisements, vouchers, mini-programs, contents, and videos. Each business scenario has a unique data distribution. The abundant intersecting users and similar items between scenarios enable a more comprehensive evaluation for multi-scenario CTR modeling. A single evaluation can thus assess the effectiveness of a CTR model across multiple business scenarios.
* **Multi-modal feature system**: AntM\({}^{2}\)C not only includes ID features but also provides rich multi-modal features such as text and image, which can establish connections between similar items across scenarios and provide better evaluation for multi-modal CTR models. Furthermore, the feature system in AntM\({}^{2}\)C includes up to 200 features5, making it more closely aligned with real-world CTR prediction in industrial scenarios.

Footnote 5: In the first release, AntM2C open-sourced 10 million samples, including 29 ID features and 2 text features. More data and image features will be gradually released in subsequent phases.

* **Largest data scale**: AntM\({}^{2}\)C comprises 200 million users and 6 million items, with a total of 1 billion samples. The average number of interactions per user is above 50. To the best of our knowledge, AntM\({}^{2}\)C is the largest public CTR dataset in terms of scale, which can provide comprehensive and reliable CTR evaluation results.
* **Comprehensive benchmark**: Based on AntM\({}^{2}\)C, three typical CTR tasks have been built, including multi-scenario modeling, cold-start modeling, and multi-modal modeling.
Benchmark evaluation results based on state-of-the-art models are also provided. The rest of the paper is organized as follows. In Section 2, we briefly review related work on public CTR datasets. In Section 3, we give a detailed introduction to the dataset collection and data analysis. In Section 4, we conduct empirical studies with baseline CTR methods on different CTR tasks.

## 2. Existing CTR datasets

The existing public CTR datasets can be roughly divided into two categories: single-scenario and multi-scenario. Both have been widely adopted for the evaluation of CTR methods.

### Single-Scenario CTR Datasets

The Criteo dataset is one of the most widely used publicly available datasets for CTR prediction. It contains over 45 million records of user interactions with advertisements, including features such as click-through rates, impression rates, and user demographics. Similar to the Criteo dataset, the Avazu dataset contains over 40 million records of user interactions with mobile advertisements. It includes features such as device information, app category, and user demographics. One of the main limitations of the Criteo and Avazu datasets is that they only include CTR data for advertisements and cannot be used to evaluate CTR for other business scenarios or types of items. Additionally, these datasets do not provide text information about the advertisement or user, which limits the scope of multi-modal modeling.

### Multi-Scenario CTR Datasets

The AliExpress dataset is gathered from real-world traffic logs of the search system in AliExpress. It is collected from 5 countries: Russia, Spain, France, the Netherlands, and America, which can be seen as 5 scenarios. It can be used to develop and evaluate CTR prediction models for e-commerce platforms. The Tenrec dataset is a multipurpose dataset for CTR prediction where click data was collected from two scenarios: articles and videos. Although the above datasets cover different scenarios, the items within these scenarios are similar. The AliExpress dataset only consists of e-commerce items, and Tenrec involves videos and articles that reflect users' personal interests only in entertainment and culture. Additionally, similar to the single-scenario datasets, both of these datasets lack textual modal information and only provide features such as IDs. This limitation restricts the application of multi-modal modeling.

## 3. Data Description

### Data Collection

AntM2C's data is collected from Alipay, a leading platform for payments and digital services. In order to meet the growing demands of users, Alipay recommends various types of items from different business scenarios to users.

#### 3.1.1. Scenarios

AntM2C collects CTR data in five scenarios on Alipay, and there are differences in the types of items in each scenario. As shown in Figure 1, CTR prediction occurs in multiple scenarios, including services and content on search, vouchers on marketing, videos on the Tab3 page, and advertisements on the membership page. In the search scenario, when a user enters search words, several relevant mini-apps of services or content are displayed for the user to click on. Marketing scenarios recommend consumer vouchers, and users click the coupons they are willing to use. On the Tab3 page, the recommended items are primarily short videos, and users will click to watch the videos they are interested in. On the membership page, users may click on some online advertisements.
In conclusion, AntM2C includes various types of items from different business scenarios. In Section 3.2.2, we will show that there are differences in the data distribution of these different scenarios. The rich and diverse items provide a more comprehensive evaluation for CTR prediction.

#### 3.1.2. Data Sampling

AntM2C collects 9 days (from 20230709 to 20230717) of CTR samples from the above-mentioned five scenarios and then selects 1 billion samples from relatively high-activity users who have a total click count \(\geq\) 30 across all scenarios. In the first stage of open sourcing, we randomly sampled 10 million samples from these 1 billion samples, and their statistical properties are shown in Table 1. We will release all 1 billion samples in a subsequent stage. For the purpose of protecting user privacy, we do not explicitly indicate the names of the scenarios in the dataset, but instead use the letters 'A-E' as substitutes.

#### 3.1.3. Data Desensitization

AntM2C does not contain any Personally Identifiable Information (PII) and has been desensitized and encrypted. Each user in the dataset was de-linked from the production system when securely encoded into an anonymized ID. Adequate data protection measures were undertaken during the experiment to mitigate the risk of data copy leakage. It is important to note that the dataset is solely utilized for academic research purposes and does not represent any actual commercial use.

### 3.2. Data Distribution

#### 3.2.1. Data Overlapping

AntM2C contains a portion of overlapping users across the five scenarios. Table 2 shows the number of intersecting users among different scenarios, indicating that AntM2C can reflect the preferences of the same user for items in different scenarios, enabling effective multi-scenario CTR evaluation. As for items, due to the significant diversity in item types among different scenarios, there is no intersection of items between different scenarios.

\begin{table} \begin{tabular}{c|c c c c c} \hline Scenario & Exposure & Users & Items & Click & Click Rate \\ \hline \hline A & 3,996,614 & 93,465 & 112,098 & 147,656 & 3.69\% \\ B & 8,983,124 & 104,016 & 29,835 & 4,301,270 & 47.88\% \\ C & 1,211,813 & 96,689 & 6,408 & 68,566 & 5.66\% \\ D & 1,981,484 & 37,095 & 19,092 & 722,009 & 36.44\% \\ E & 955,162 & 17,904 & 18,265 & 102,671 & 10.75\% \\ \hline ALL & 17,128,197 & 120,721 & 184,306 & 5,342,172 & 31.19\% \\ \hline \end{tabular} \end{table} Table 1. Data statistics of AntM2C. To protect user privacy, AntM2C anonymizes the scenario names as A-E. The click rate is calculated by dividing the number of clicks by the number of exposures. Since negative sampling is applied to the samples, the click rate may be higher than the actual value.

Figure 1. An illustration of typical CTR prediction scenarios on the Alipay platform, including service/content search, marketing voucher, Tab3 video recommendation, and advertisement. Each scenario has different types of items, and users have different mindsets when browsing different scenarios.

#### 3.2.2. Item & User Frequency

Figure 2 illustrates the exposure frequency of users and items in the AntM2C dataset, including all samples and samples from different scenarios (A-E). The horizontal axis represents the exposure frequency of users/items, while the vertical axis represents the number of users/items at that frequency. It can be observed that, in terms of item distribution, all scenarios exhibit a long-tail distribution, with 80% of items appearing fewer than 5 times.
This long-tail distribution is consistent with real-world situations. As for user distribution, there are differences between scenarios. In scenario B, the distribution of user frequency has two peaks, one at less than 5 times and the other around 50 times. After the frequency is greater than 50, the number of users decreases as the frequency increases. In other scenarios, the exposure frequency of users follows a long-tail distribution similar to that of items, where more exposure frequency leads to fewer users. Due to the overlapping users between scenarios, the long-tail distribution of users in multiple scenarios becomes a normal distribution in the global samples. Most users have an exposure frequency of around 50. Overall, the distribution of items and users in AntM\({}^{2}\)C reflects CTR prediction in practice. \begin{table} \begin{tabular}{c|c c c c c} \hline Scenario & A & B & C & D & E \\ \hline \hline A & - & 90537 & 75227 & 19561 & 14937 \\ B & - & - & 83141 & 22721 & 15978 \\ C & - & - & - & 31704 & 17019 \\ D & - & - & - & - & 4788 \\ E & - & - & - & - & - \\ \hline \end{tabular} \end{table} Table 2. Overlapped users across the five scenarios in AntM\({}^{2}\)C. AntM\({}^{2}\)C includes the preferences of the same user for items in different scenarios. \begin{table} \begin{tabular}{c|l|l|l|l} \hline Category & Feature\_name & description & Type & Coverage \\ \hline \hline User & user\_id & user number & ID & 100\% \\ Features & features\_0-26 & user sequences & ID & 85.50\% \\ & query\_entity\_seq & search sequence & Text & 90.32\% \\ \hline Item & item\_id & item number & ID & 100\% \\ Features & item\_entity\_names & entity name of item & Text & 100\% \\ & item\_title & title of item & Text & 95.50\% \\ \hline Other & log\_time & time in log & Text & 100\% \\ Features & scene & scenario number & ID & 100\% \\ \hline Label & label & click label & Int & 100\% \\ \hline \end{tabular} \end{table} Table 3. Features of AntM\({}^{2}\)C. In addition to ID features, AntM\({}^{2}\)C also includes the raw text features of users and items. Figure 2. An illustration of the data distribution of item (left) and user (right) in different scenarios and the overall samples. The horizontal axis represents exposure frequency, and the vertical axis represents the number of samples at that exposure frequency. ### Features The feature system of AntM\({}^{2}\)C, as shown in Table 3, includes ID features of users and items, as well as raw text features. #### 3.3.1. User Features The user features consist of static profile features6 and user sequence features. The static profile features include basic user attributes such as gender, age, occupation, etc. The sequence features provide the user's recent activities on Alipay, including clicked mini-apps, searched services, purchased items, etc. _As mentioned in Section 3.1.3, these user features have been desensitized and encrypted for the purpose of user privacy protection and appear in the dataset in an **encrypted ID** format, making it impossible to reconstruct the original user features. In addition to the ID-based features, AntM\({}^{2}\)C also includes the raw text of user search entities to provide multi-modal evaluation. Footnote 6: User static attributes and item title will be open-sourced in the subsequent phases. #### 3.3.2. Item Features The item features consist of item ID and item textual features. The item ID is a globally unique identifier for each item, and the encoding of item IDs varies across different scenarios. 
To address the inconsistency of item IDs across scenarios, AntM\({}^{2}\)C also includes the original title text of the items\({}^{6}\) and entities extracted based on the title text. #### 3.3.3. Other Features In addition to user and item features, AntM\({}^{2}\)C also provides additional features such as log time and scene identification. Users can utilize these extra features to flexibly split the training, validation, and testing sets based on time and evaluate the performance in different scenarios. #### 3.3.4. Label The label in AntM\({}^{2}\)C indicates whether the user clicked on the corresponding item. If the user clicked, the label is set to 1, otherwise it is set to 0. The ratio of positive to negative samples in AntM\({}^{2}\)C can be obtained from the click rate in Table 1. It should be noted that there are a large number of negative samples in the actual online logs (samples that were exposed but not clicked on). To address this issue, negative sampling was performed which resulted in a higher click-through rate in the AntM\({}^{2}\)C dataset compared to that in the actual online logs. ## 4. Experimental Evaluation In this section, we describe the applications of AntM\({}^{2}\)C in several CTR prediction tasks. We briefly introduce each task and report the results of some baseline methods. We select the commonly used AUC (Area Under the Curve) as the metrics for all experiments. The baseline methods and evaluation results in the experiment provide a demo of using AntM\({}^{2}\)C. More baselines and evaluations will continue to be updated in future work. ### Multi-Scenario CTR prediction Multi-scenario CTR prediction is a common issue in industrial recommendation systems. It builds a unified model by leveraging CTR data from multiple scenarios. The knowledge sharing between scenarios enables the multi-scenario model to achieve better performance compared to single-scene modeling. We conduct an evaluation on multi-scenario CTR prediction using different baseline methods based on the 5 scenarios in the AntM\({}^{2}\)C dataset. #### 4.1.1. Data preprocess In the multi-scenario CTR evaluation, we divide the AntM\({}^{2}\)C dataset based on time, using the data before 20230717 as the training set and the data on 20230717 as the test set. The training and test sets include samples from all five scenarios, and their data distribution is shown in Table 4. It can be observed that there are differences in the number of training and test samples among different scenarios. Among them, Scenario B has the highest number of samples, which is ten times that of Scenario E. In terms of features, we use the user and item features from the ID category as shown in Table 3. The text features will be used for multi-modal evaluation (see in Section 4.3). #### 4.1.2. Baselines and hyper-parameters We mainly choose the multi-task methods as the baseline methods for multi-scenario CTR prediction. We treat the CTR estimation for each scenario as a task and share the knowledge among the scenarios at the bottom layer, with each scenario's CTR score output at the tower layer. The baseline methods and hyperparameter settings are as follows: * DNN: The DNN is trained on a mixture of samples from all scenarios without tasks, serving as the baseline for multi-scenario CTR prediction. The DNN consists of three layers with 128, 32, and 2 units, respectively. The following multi-task model has the same number of layers and unit settings as the DNN. 
* Shared Bottom (Kumar et al., 2017): Shared bottom is the most fundamental model in multi-task learning, where the knowledge is shared among the tasks at the bottom layer. Each task has its own independent tower layer and outputs the corresponding CTR score7. Footnote 7: [https://github.com/shenweiechen/DeepCTR](https://github.com/shenweiechen/DeepCTR) * MMoE (Moe et al., 2019): Based on the shared bottom, MMOE introduces multiple expert networks, each specialized in predicting a specific task, sharing a common input layer. Additionally, MMOE adds a gating network that assigns different weights to each expert based on the input data to determine their influence on predicting the output for a specific task. In the experiment, we set the number of experts in MMOE to \(6\)8. Footnote 8: [https://github.com/drawbridge/keras-mmo](https://github.com/drawbridge/keras-mmo) * PLE (PLE, 2018): Based on MMOE, PLE further designs task-specific experts for each task, while retaining the shared expert. This structure allows the model to better learn the differences and correlations among tasks. We set the number of experts in \begin{table} \begin{tabular}{c|c|c} \hline Scenario & Train Set & Test Set \\ \hline \hline A & 3,499,645 & 496,969 \\ B & 7,890,222 & 1,092,901 \\ C & 1,059,578 & 151,670 \\ D & 1,802,707 & 178,777 \\ E & 846,791 & 104,359 \\ \hline Total & 15,098,943 & 2,024,676 \\ \hline \end{tabular} \end{table} Table 4. The distribution of training and testing data in multi-scenario CTR evaluation. The data is divided by time, and there are differences in the data volume between scenarios. PLE to be the same as MMOE, with each of the five scenarios having its own specific expert and one globally shared expert7. Footnote 7: The selection of this threshold \(N\) can vary based on experiments, and we use 100 as an example for all experiments. All baseline methods utilized the Adam (Kingma et al., 2015) optimizer with a learning rate of 1e-3 for parameter optimization. The models were trained for 5 epochs with a batch size of 512. #### 4.1.3. Results Table 5 shows the evaluation results of different baseline methods on multi-scenario CTR prediction, from which we can draw the following conclusions. Firstly, compared to the DNN model that trains all data together without considering scenario characteristics, all multi-task models achieve better performance. This demonstrates that in AntM\({}^{2}\)C, there are differences and commonalities between scenarios, and simply mixing training data will not achieve the best results. Secondly, the CTR performance varies across each scenario, indicating different levels of difficulty between scenarios. For example, in scenario B, where there is a large amount of data, the AUC is generally above 0.93, while in scenario D, the AUC is only around 0.68. The diverse business scenarios and items in AntM\({}^{2}\)C enable a more comprehensive and diverse evaluation of CTR. Finally, the expert-structured MMOE and PLE outperform the shared bottom model, demonstrating that refined model design can enhance the performance on AntM\({}^{2}\)C. AntM\({}^{2}\)C is capable of reflecting the differences between different models. ### Cold-start CTR prediction The cold-start problem is a challenging issue in recommendation systems. Training high-quality CTR models using sparse user-item interaction data is a challenging task. Cold-start primarily involves two aspects: users and items. 
As shown in Figure 2, the AntM\({}^{2}\)C dataset exhibits a natural long-tail distribution in both users and items. Therefore, we conduct a comprehensive evaluation of cold-start baseline methods based on AntM\({}^{2}\)C dataset. #### 4.2.1. Data preprocess In cold-start CTR prediction, we split the dataset based on time, using data before 20230717 as the training set and data on 20230717 as the validation and test sets. Based on this data division, we simulated two common cold-start problems in practice: few-shot and zero-shot. * Few-shot: users and items that appear in the training set with a count greater than 0 and less than \(N\)9, meaning there is only a small amount of training data for these users and items. Footnote 9: [https://github.com/layertai-labs/DropoutNet](https://github.com/layertai-labs/DropoutNet) * Zero-shot: users and items that have never appeared in the training set, indicating that either the user is visiting the scenario for the first time or the item has been launched and added to the scenario on the first day. Table 6 shows the data distribution of the test set under cold-start CTR evaluation. By using this dataset division, we can comprehensively evaluate and compare the performance of CTR models on few-shot and zero-shot samples. For few-shot samples, we can observe the model's performance with only a small amount of training data and evaluate the model's generalization ability. For zero-shot samples, we can evaluate the model's recommendation ability on samples that it has never seen before. #### 4.2.2. Baselines and hyper-parameters The key issue in cold-start modeling is how to learn user preferences and embeddings of users and items with limited data. In recent years, meta-learning-based cold-start methods have become state-of-the-art methods. We selected several representative methods with publicly available code as our baseline models. * DropoutNet (Krizhevsky et al., 2014): The DropoutNet is a popular cold-start method which applies dropout to control input, and exploits the average representations of interacted items/users to enhance the embeddings of users/items. We implemented the DropoutNet algorithm based on open-source code10. Footnote 10: [https://github.com/layertai-labs/DropoutNet](https://github.com/layertai-labs/DropoutNet) * MAML (Beng et al., 2015): The MAML algorithm is a popular meta-learning approach that aims to enable fast adaptation to new tasks with limited data. MAML learns a good initialization of model parameters that can be effectively adapted to new tasks quickly. We treat each user and item as a task in MAML, and conduct meta-training on warm items. Then we perform meta-testing on cold-start items. The subsequent meta-learning-based algorithms will also follow this task setting. * MeLU (Fang et al., 2015): The MeLU algorithm is the first to apply the MAML to address the cold-start problem in recommender systems. Building upon MAML, MeLU ensures the stability of the learning process by not updating the embeddings in the inner loop (support set). The hyperparameter settings in MeLU were determined based on the public code11 implementation. \begin{table} \begin{tabular}{c|c c|c c} \hline \multirow{2}{*}{Category} & \multicolumn{2}{c|}{Cold-start user} & \multicolumn{2}{c}{Cold-start item} \\ \cline{2-5} & Count & Samples & Count & Samples \\ \hline \hline Few-Shot & 67,110 & 685,774 & 30,315 & 306,964 \\ Zero-Shot & 65 & 2,752 & 14,230 & 121,447 \\ \hline \end{tabular} \end{table} Table 6. 
Data statistics of cold-start CTR evaluation. The meaning of “zero-shot” is that the users and items have never appeared in the training set, while “few-shot” means that there are only a small number of samples of users and items in the training set. \begin{table} \begin{tabular}{c|c c c c c} \hline \multirow{2}{*}{Methods} & \multicolumn{5}{c}{Scenario} \\ \cline{2-6} & A & B & C & D & E \\ \hline \hline DNN & 0.7846 & 0.9328 & 0.8733 & 0.6880 & 0.8338 \\ Sharedbottom & 0.8039 & 0.9414 & 0.8798 & 0.6915 & 0.8525 \\ MMoE & 0.7986 & 0.9438 & 0.8751 & 0.6854 & 0.8519 \\ PLE & 0.8039 & 0.9429 & 0.8785 & 0.6903 & 0.8506 \\ \hline \end{tabular} \end{table} Table 5. Multi-scenario CTR evaluation on AntM\({}^{2}\)C. The table shows the AUC metric of the baseline methods in different scenarios. * MetaEmb [(8)]: The MetaEmb algorithm also applies the MAML to address the cold-start problem in recommender systems. Unlike MeLU, MetaEmb focuses on optimizing the embeddings of items. It learns an initial representation using all training samples and then quickly adapts the embeddings of cold-start items. We implemented the MetaEmb algorithm based on open-source code12. Although MetaEmb only optimizes the embeddings of items, we have also applied the same approach to optimize the embeddings of users. Footnote 12: [https://github.com/Feiyang/MetaEmbedding](https://github.com/Feiyang/MetaEmbedding) These base models share the common embedding and DNN structure. The dimensionality of embedding vectors of each input field is fixed to 32 for all our experiments. The Adam optimizer with a learning rate of 1e-3 is used to optimize the model parameters, and the training is performed for 3 epochs with a batch size of 512. In addition to the aforementioned cold-start algorithms, the DNN (without any cold-start optimization) is also considered as the baseline method for cold-start CTR. #### 4.2.3. Results Table 7 shows the CTR performance for cold-start users and items. Because there is limited data for cold start users and items, we do not calculate AUC by scenarios, and evaluate the overall performance of cold start users and items. From the table, we can observe several phenomena. Firstly, compared to the results shown in Table 5, the AUC for cold-start users and items are generally lower than the overall level, which demonstrates that AntM\({}^{2}\)C's data can effectively reflect the differences between cold and warm items and users. Secondly, different cold-start methods show distinguishable results in AntM\({}^{2}\)C, and all of them are significantly better than the DNN model without cold-start optimization. This indicates that AntM\({}^{2}\)C can effectively compare the effects of different cold-start methods and demonstrate the distinctiveness between methods. Finally, the lower performance of zero-shot compared to few-shot indicates that zero-shot CTR prediction is more challenging than few-shot. The two cold start modes provided by AntM\({}^{2}\)C can comprehensively evaluate cold-start CTR prediction. ### Multi-Modal CTR prediction With the rise of large language models (LLMs), it has become a hot research topic to effectively transfer the knowledge of LLM to CTR prediction. There have been many works[(3; 4; 9; 11)] based on multi-modal CTR modeling using features such as item and user text. AntM\({}^{2}\)C contains raw text features for both users and items, which can provide a more comprehensive evaluation of multi-modal modeling compared to existing CTR datasets. 
Therefore, we conduct the evaluation of different multi-modal methods based on the AntM\({}^{2}\)C dataset.

#### 4.3.1. Data preprocess

In the multi-modal evaluation, we adopt the same data processing approach as in the multi-scenario evaluation described in Section 4.1.1, and additionally include the text features from Table 3: user query entities and item entities. The text features are used as inputs to the model together with the other ID features.

#### 4.3.2. Baselines and hyper-parameters

For the baseline model, we use a language model to process the text features, and then concatenate the text embedding with the other ID features and input them into the multi-scenario model described in Section 4.1.2. For ease of evaluation, we choose MMoE as the backbone and pre-trained BERT-base13 (Devlin et al., 2019) as the text embedding extractor. The output dimension of BERT's embeddings is 768. Then, a two-layer DNN with 768 and 32 units is used to reduce the dimension of BERT's embedding to 32. This reduced embedding is concatenated with the other features and input into the MMoE model. More powerful language models and further applications of the text features will continue to be supplemented in future work.

Footnote 13: [https://huggingface.co/docs/transformers/main/model_doc/bert](https://huggingface.co/docs/transformers/main/model_doc/bert)

#### 4.3.3. Results

Table 8 shows the evaluation results of the multi-modal CTR prediction. It can be observed that, after adding the text modality, the CTR performance is better in the data-sparse scenarios C, D, and E compared to using only the ID modality in MMoE. Since the current baseline for using the text modality is relatively simple, the improvement in performance is not large. However, this shows the potential of the text modality provided in AntM\({}^{2}\)C to improve CTR performance.

## 5. Conclusion and Future Work

This paper introduces a large-scale Multi-Scenario Multi-Modal CTR prediction dataset, AntM\({}^{2}\)C. It includes 1 billion CTR samples from five business scenarios on the Alipay platform, and each sample contains multi-modal features in addition to ID features, providing a comprehensive evaluation for CTR models. In the first release, we have made 10 million samples publicly available, and we will continue to release more data and features. At the same time, we will gradually evaluate more state-of-the-art baseline methods on AntM\({}^{2}\)C and provide comprehensive and solid evaluation results.

\begin{table} \begin{tabular}{c|c c|c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Item} & \multicolumn{2}{c}{User} \\ \cline{2-5} & Zero-Shot & Few-Shot & Zero-Shot & Few-Shot \\ \hline \hline DNN & 0.8021 & 0.8339 & 0.7931 & 0.9365 \\ DropNet & 0.8097 & 0.8498 & 0.7957 & 0.9387 \\ MAML & 0.8131 & 0.8511 & 0.8133 & 0.9393 \\ MeLU & 0.8197 & 0.8519 & 0.8103 & 0.9404 \\ MetaEmb & 0.8203 & 0.8583 & 0.8091 & 0.9399 \\ \hline \hline \end{tabular} \end{table} Table 7. Cold-start evaluation on AntM\({}^{2}\)C. The table shows the AUC metrics of cold start users and items in zero-shot and few-shot situations.

\begin{table} \begin{tabular}{c|c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{5}{c}{Scenarios} \\ \cline{2-6} & A & B & C & D & E \\ \hline \hline MMoE & 0.7986 & 0.9438 & 0.8751 & 0.6854 & 0.8519 \\ MMoE+Bert & 0.7951 & 0.9437 & 0.8851 & 0.6974 & 0.8642 \\ \hline \hline \end{tabular} \end{table} Table 8. Multi-modal evaluation on AntM\({}^{2}\)C.
This table shows the AUC metrics for each scenario after incorporating Bert-base text embeddings into the MMoE-based multi-scenario CTR model.
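As a complement to the description above, here is a hedged sketch of the multi-modal baseline: Bert-base encodes the text fields, a two-layer DNN with [768, 32] units reduces the 768-dimensional output to 32 dimensions, and the result is concatenated with the ID-feature embeddings before the MMoE backbone. Freezing Bert and the exact pooling choice are our assumptions, not details stated in the paper.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
reducer = nn.Sequential(  # two-layer DNN with [768, 32] units
    nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 32)
)

def text_embedding(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():  # assumption: Bert is kept frozen
        pooled = bert(**batch).pooler_output  # (batch, 768)
    return reducer(pooled)  # (batch, 32)

text_vec = text_embedding(["user query entities", "item entities"])
# id_vec = ...  # 32-dim embeddings of the ID features, as in the base model
# mmoe_input = torch.cat([id_vec, text_vec], dim=1)  # fed to the MMoE backbone
```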
2309.09158
Observational constraints on the Emergent Universe with interacting non-linear fluids and its stability analysis
We investigate a flat Emergent Universe (EU) with a nonlinear equation of state which is equivalent to three different compositions of fluids. In the EU, the evolution of the universe initially began with no interaction, but as time evolves, an interaction sets in among the three fluids, leading to the observed universe. The characteristic of an EU is that it is a singularity-free universe that evolves with all the basic features of the early evolution. A given nonlinear equation of state parameter permits a universe with three different fluids. We obtain a universe with dark energy, cosmic string, and radiation domination to begin with, which at a later epoch transits into a universe with three different fluids, with matter domination, dark matter, and dark energy, for a given interaction strength among the cosmic fluids. The model parameters are then constrained using the observed Hubble data and Type Ia Supernova (SnIa) data from the Pantheon data set. The classical stability analysis of the model is performed using the square speed of sound. It is found that a theoretically stable cosmological model can be obtained in this case; however, the model becomes classically unstable at the present epoch when the observational bounds on the model parameters are taken into account.
Anirban Chanda, Bikash Chandra Roy, Kazuharu Bamba, Bikash Chandra Paul
2023-09-17T04:57:31Z
http://arxiv.org/abs/2309.09158v1
Observational constraints on the Emergent Universe with interacting non-linear fluids and its stability analysis ###### Abstract We investigate a flat Emergent Universe (EU) with a nonlinear equation of state which is equivalent to three different compositions of fluids. In the EU, the evolution of the universe initially began with no interaction, but as time evolves, an interaction sets in among the three fluids, leading to the observed universe. The characteristic of an EU is that it is a singularity-free universe that evolves with all the basic features of the early evolution. A given nonlinear equation of state parameter permits a universe with three different fluids. We obtain a universe with dark energy, cosmic string, and radiation domination to begin with, which at a later epoch transits into a universe with three different fluids, with matter domination, dark matter, and dark energy, for a given interaction strength among the cosmic fluids. The model parameters are then constrained using the observed Hubble data and Type Ia Supernova (SnIa) data from the Pantheon data set. The classical stability analysis of the model is performed using the square speed of sound. It is found that a theoretically stable cosmological model can be obtained in this case; however, the model becomes classically unstable at the present epoch when the observational bounds on the model parameters are taken into account. **Key Words :** _Emergent Universe, Observational Constraints, Cosmological Parameters, Classical Stability_ ## I Introduction Modern cosmology in the present decade is based on astronomical observations. We have witnessed a transition from speculative science to experimental science because of precision measurements from different cosmological missions. The observations predict that the universe is not only expanding but is accelerating [1], [2], [3], [4], [5], [6]. After the discovery of the cosmic microwave background radiation (CMBR), big-bang cosmology became the standard model for studying the evolution of the universe with a beginning at some finite time in the past. However, the standard model of cosmology is plagued with a number of problems, namely, the horizon problem, flatness problem, singularity problem, etc. [7], [8]. To resolve these problems, it has been proposed that in the early stage of evolution of the universe a rapid expansion of space took place, which is known as cosmic inflation. A homogeneous scalar field in the framework of standard cosmology permits such inflation [9], [10], [11]. Furthermore, inflation can address the large-scale structure formation of the universe. The present observational data predict that the universe is passing through a phase of cosmic acceleration. This late-time accelerating phase of the universe may be explained in the standard model by adding a positive cosmological constant (\(\Lambda\)) to Einstein's field equations (EFE). The \(\Lambda\) cold dark matter (\(\Lambda CDM\)) model is currently the most favored model in cosmology and matches well with astronomical observations. The \(\Lambda CDM\) model is, however, found to have some conceptual issues, namely, the exact nature of its main constituents is not yet known. There are issues like fine-tuning and cosmic coincidence which are to be resolved [12], [13], [14], [15]. Recent observations show that the expansion rate of the universe inferred from local data differs from the expansion rate inferred from CMB measurements of the early universe [16], [17], [18], [19], [20].
This issue is known as the Hubble tension. Since general relativity (GR) and normal matter cannot support the present acceleration of the universe, an alternative is to modify the gravitational or matter sector of the EFE. Modifications in the matter sector lead to different dynamical DE models, namely, Chaplygin gas [21] and its variations [22], [23], and models consisting of one or more scalar fields, namely, quintessence [24], [25], [26], etc. A detailed review of different DE models including quintessence, K-essence, Tachyon, Phantom, etc. can be found in Refs. [27; 28; 29; 30]. On the other hand, modifications in the gravitational sector led to the proposal of different modified theories, namely, \(f(R)\) theories of gravity [31], [32], \(f(R,T)\) gravity [33] with \(T\) being the trace of the energy-momentum tensor, modified Gauss-Bonnet gravity [34], \(f(\mathcal{T})\) gravity [35], [36], [37] where \(\mathcal{T}\) is the torsion scalar, \(f(Q)\) gravity [38] where \(Q\) is the non-metricity scalar, Brane world gravity [39], Horava-Lifshitz theory of gravity [40], etc. The modified theories of gravity are tested for the unification of the early inflationary phase with the late-time acceleration phase [41]. In the literature, different modified gravitational theories are considered to explain several astrophysical and cosmological phenomena, and the viability of these models is also tested using astronomical observations [42], [43], [44], [45]. Cosmological models which are free from the initial singularity, have no horizon problem, and no quantum gravity (QG) regime are promising in this context. The "Emergent Universe" (EU) scenario proposed by Ellis and Maartens is one such model which can avoid the singularity problem of Big Bang cosmology [46]. In the EU scenario, the universe emerges as an Einstein static universe in the infinite past (\(t\rightarrow-\infty\)) and avoids the initial singularity by staying large at all times. The universe then expands slowly to attain a Big Bang phase of expansion. In the EU model, an inflationary universe emerges from a static phase and eventually leads to the macroscopic universe observed at present. Once inflation starts, the universe remains in that phase, which can provide an explanation for the present acceleration. Ellis et al. [47] obtained an EU scenario for a closed (\(k=1\)) universe considering a minimally coupled scalar field (\(\phi\)) with a special choice of potential, where the universe exits from its inflationary phase followed by reheating when the scalar field starts oscillating around the minimum of the potential. Later it was shown that such a potential occurs naturally via the conformal transformation of the Einstein-Hilbert action with an \(\alpha R^{2}\) term, where \(\alpha\) is a coupling constant. Present observations predict that the universe is flat, having almost zero spatial curvature. An EU scenario in a flat universe can be obtained in a semi-classical theory of gravity. It is also shown that in the Starobinsky model, an EU can be obtained considering a flat Robertson-Walker (RW) spacetime geometry with all its features [48]. Another interesting class of EU models in the standard GR framework was proposed by Mukherjee et al. [49] considering a non-linear equation of state (nEoS) in a flat universe.
In this framework, the cosmic fluid is equivalent to a mixture of normal and two different fluids, one of them of an exotic kind, described by a nEoS which is: \[p=A\rho-B\sqrt{\rho}, \tag{1}\] where \(A\) and \(B\) are constant parameters. The composition of the cosmic fluid is determined for a given value of the parameter \(A\). The EU models are explored in different theories of gravity, namely, Brans-Dicke theory [50], brane world cosmology [51], Gauss-Bonnet modified gravity [52], Loop quantum cosmology [53], Energy-momentum squared gravity [54], \(f(R,T)\) gravity [55], etc. Beesham et al. [56] studied the EU model using a non-linear sigma model. An EU model with particle creation and irreversible matter creation is studied by Ghosh and Gangopadhyay using a thermodynamical approach [57]. The validity of EU models is studied using recent cosmological observations with the estimation of the observational constraints on the model parameters [58], [59], [60]. Recently, [61] studied the EU scenario considering cosmic fluids permitted by the nEoS in addition to viscosity, and determined the observational bounds on the model parameters. In the present work, we investigate the effect of interaction present among the components of the cosmic fluid to estimate the bounds on the model parameters of an EU. In the original EU model [49], the composition of the cosmic fluid is fixed from the outset, and it cannot satisfactorily explain the different phases of the evolutionary history of the universe. For an EU with radiation domination to begin with, the other two constituents, namely cosmic string and DE, contribute insignificantly to the total energy density in the early universe for \(A=\frac{1}{3}\). As the interaction sets in, the EU transits from a radiation-dominated phase to a matter- and DE-dominated phase. The observational bounds are determined for the late universe using the current Observed Hubble Dataset (OHD) [62] as well as the Pantheon supernova compilation [63], and it is found that the analysis differs significantly from the early studies. In the literature, the class of cosmological models in which the evolution of the cosmic fluids is probed with interaction and energy exchange from one sector of the fluid to another is especially interesting. Recently, different interacting cosmological models with interaction among the dark sectors gained popularity because the present universe is not only expanding but accelerating, and we do not yet have a definitive theory to explain this feature [64], [65], [66], [67], [68]. Such an interacting scenario in the evolution of the universe is found in M theory [69] and inflationary models [70], [71]. The energy conservation equation is violated by the individual fluid components in the case of interacting cosmology; however, the total energy density remains conserved. In the present work, we consider interaction among the cosmic fluids, which plays a crucial role in developing a consistent cosmological model. Interacting cosmology can provide a reasonable explanation of the cosmic coincidence problem. It is well known that there is an explicit tension between the cosmological measurements made using the data from the early and late universe. Specifically, the tensions in \(H_{0}\) and \(S_{8}\) are of particular importance. The interacting cosmic fluid scenario can alleviate these tensions up to a certain degree [72], [73], [74], [65], [75], [76], [77], [78].
The motivation of our work is to explore the EU scenario which may evolve from a radiation epoch to a matter- and DE-dominated epoch in the presence of interaction among the fluids that sets in at a late time. The paper is organized as follows: In sec. (II), the basic field equations for the EU are given. In sec. (III), we introduce interaction among the cosmic fluids that sets in at time \(t>t_{i}\) and the conservation equations for the fluid components are rewritten. The effective EoS parameters in the presence of interaction, determined by the strength of the interaction, are obtained. In sec. (IV) we use the observational data sets, namely, the OHD and the recent Pantheon compilation of 1048 Type Ia Supernovae (SnIa), to constrain the model parameters. The statistical inferences for the EU model are studied by the determination of the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC), which are shown in sec. (V). Using the Markov Chain Monte Carlo (MCMC) methods, cosmological parameters and the classical stability of the model are explored in sec. (VI). Finally, the results obtained in the analysis are summarized in sec. (VII) followed by a brief discussion. ## II Field equations We consider a spatially flat, homogeneous, and isotropic spacetime described by the Robertson-Walker (RW) metric, which is given by, \[ds^{2}=-dt^{2}+a^{2}(t)\Big{[}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta\;d\phi^{2})\Big{]}, \tag{2}\] where \(a(t)\) is the scale factor of the universe and \(r\), \(\theta\), and \(\phi\) are the dimensionless comoving coordinates. The Einstein field equation (EFE) is given by, \[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi G\;T_{\mu\nu}, \tag{3}\] where, \(R_{\mu\nu}\) is the Ricci tensor, \(R\) is the Ricci scalar, \(g_{\mu\nu}\) is the metric tensor and \(T_{\mu\nu}\) is the energy-momentum tensor of the cosmic fluid. Using the RW metric given by eq. (2), the time-time and the space-space components of the EFE become, \[\rho=3\Big{(}\frac{\dot{a}^{2}}{a^{2}}\Big{)}, \tag{4}\] \[p=-\left[2\frac{\ddot{a}}{a}+\left(\frac{\dot{a}}{a}\right)^{2}\right], \tag{5}\] where, \(\rho\) denotes the energy density of the cosmic fluid, \(p\) denotes the pressure and we have assumed natural units \(i.e.\), \(c=1\) and \(8\pi G=1\). The conservation equation is, \[\dot{\rho}+3H(\rho+p)=0, \tag{6}\] where, \(H=\frac{\dot{a}}{a}\) is the Hubble parameter and \(p\) is the pressure of the fluid. ### Physical analysis of Emergent Universe Using eqs. 4 and 5 in eq. 1 we arrive at a second-order differential equation for the scale factor given by, \[2\frac{\ddot{a}}{a}+(3A+1)\left(\frac{\dot{a}}{a}\right)^{2}-\sqrt{3}B\frac{\dot{a}}{a}=0. \tag{7}\] Integrating the above equation twice we obtain the scale factor (\(a(t)\)) which is given by, \[a(t)=\Big{[}\frac{3K(1+A)}{2}\left(K_{1}+\frac{2}{\sqrt{3}B}e^{\frac{\sqrt{3}Bt}{2}}\right)\Big{]}^{\frac{2}{3(1+A)}}, \tag{8}\] where \(K\) and \(K_{1}\) are two integration constants. It is evident that if \(B<0\) it leads to a singular universe, and if \(B>0\) and \(A>-1\) one gets a non-singular solution. The latter solution is interesting and used to obtain an EU. The scale factor \(a(t)\) remains finite even at infinite past (\(t\rightarrow-\infty\)). Thus the universe emerged from an initial Einstein static phase in this scenario. Using eq.(1) and eq.(6) we obtain the energy density which is given by, \[\rho=\frac{B^{2}}{(1+A)^{2}}+\frac{2BK}{(1+A)^{2}}\frac{1}{a^{\frac{3}{2}(1+A)}}+\frac{K^{2}}{(1+A)^{2}}\frac{1}{a^{3(1+A)}}.
\tag{9}\] It is evident that there are three terms in the energy density (\(\rho_{1},\rho_{2},\rho_{3}\)). Now we obtain the expression of pressure from eq.(1) using eq. (9), \[p=-\frac{B^{2}}{(1+A)^{2}}+\frac{BK(A-1)}{(1+A)^{2}}\frac{1}{a^{\frac{3}{2}(1+A)}}+\frac{AK^{2}}{(1+A)^{2}}\frac{1}{a^{3(1+A)}}, \tag{10}\] where we have identified different barotropic fluids as follows: \(p_{1}=-\frac{B^{2}}{(1+A)^{2}}\), \(p_{2}=\frac{BK(A-1)}{(1+A)^{2}}\frac{1}{a^{\frac{3}{2}(1+A)}}\) and \(p_{3}=\frac{AK^{2}}{(1+A)^{2}}\frac{1}{a^{3(1+A)}}\) respectively. The energy density in the case of an EU is a composition of three different fluid components [49]. The first term can be interpreted as a cosmological constant that accommodates the DE sector of the universe. Comparing the above equation with the barotropic EoS \(p_{i}=\omega_{i}\rho_{i}\) (where \(i=1,2,3\)), with \(\omega_{i}\) being the EoS parameter for the \(i^{th}\) fluid, we can obtain the EoS parameters for the individual fluids as \(\omega_{1}=-1\), \(\omega_{2}=\frac{A-1}{2}\) and \(\omega_{3}=A\). The composition of the cosmic fluid depends on the value of the parameter \(A\) as determined by Mukherjee et al. [49]; e.g., \(A=\frac{1}{3}\) gives an EU composed of three types of fluids, dark energy (\(\omega_{1}=-1\)), cosmic string (\(\omega_{2}=-\frac{1}{3}\)) and radiation (\(\omega_{3}=\frac{1}{3}\)), admitting the non-singular model given by eq. 8, while \(A=0\) leads to DE (\(\omega_{1}=-1\)), exotic matter (\(\omega_{2}=-\frac{1}{2}\)), and dust (\(\omega_{3}=0\)). So for a specific value of \(A\), the composition of the cosmic fluid is fixed. It is further shown by Paul and Majumdar [79] that even if one begins with a given \(A\), the fluid composition transforms into different types when an interaction sets in, depending on the strength of interaction at the later epoch. In the next section, we consider an interacting fluid scenario for exploring the evolution of the EU. Now, for analyzing the model with the observations it is important to represent the scale factor in terms of the redshift parameter \(z\), given by \(a=\frac{1}{(1+z)}\), where \(a(t)\) is the scale factor at any time and we assume the present scale factor of the universe, \(a_{0}=1\). The energy density of the universe can be expressed as \(\rho=\sum_{i=1}^{3}\rho_{i}\), with the components in terms of \(z\): \(\rho_{1}=\frac{B^{2}}{(1+A)^{2}}\), \(\rho_{2}=\frac{2BK}{(1+A)^{2}}(1+z)^{\frac{3}{2}(1+A)}\) and \(\rho_{3}=\frac{K^{2}}{(1+A)^{2}}(1+z)^{3(1+A)}\). ## III Cosmological models with interacting fluids In this section, we study the effect of interaction among the cosmic fluid components. For a given \(A=\frac{1}{3}\), the EU is composed of DE, cosmic string, and radiation in the absence of interaction. There are a variety of reasons for the origin of interactions among the cosmic fluids. We assume the interaction among the fluids sets in at \(t>t_{i}\), where \(t_{i}\) is the time when interaction began. We also assume that there is an interaction between the DE and radiation sectors only, while the cosmic string remains non-interacting.
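Before specifying the interaction, the fluid decomposition above can be made concrete with a small numerical sketch of eq. (9) in terms of redshift (with \(a_{0}=1\)); the parameter values are illustrative only.

```python
import numpy as np

def eu_densities(z, A, B, K):
    """Three fluid components of the EU energy density, eq. (9), versus redshift."""
    pref = 1.0 / (1.0 + A) ** 2
    rho1 = pref * B**2 * np.ones_like(z)                       # dark energy, w1 = -1
    rho2 = pref * 2.0 * B * K * (1 + z) ** (1.5 * (1 + A))     # w2 = (A - 1)/2
    rho3 = pref * K**2 * (1 + z) ** (3.0 * (1 + A))            # w3 = A
    return rho1, rho2, rho3

z = np.linspace(0.0, 3.0, 4)
rho1, rho2, rho3 = eu_densities(z, A=1/3, B=1.0, K=1.0)  # illustrative A, B, K
w1, w2, w3 = -1.0, (1/3 - 1) / 2, 1/3  # DE, cosmic string, radiation for A = 1/3
```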
The conservation equations for DE (\(\rho_{1}\)) and radiation (\(\rho_{3}\)) can be written as [64; 65; 66; 67], \[\dot{\rho}_{1}+3H(\rho_{1}+p_{1})=-Q, \tag{11}\] \[\dot{\rho}_{3}+3H(\rho_{3}+p_{3})=Q, \tag{12}\] where, \(\rho_{1}\), \(p_{1}\) and \(\rho_{3}\), \(p_{3}\) are the energy densities and pressures of the dark energy and radiation sectors respectively and \(Q\) represents the strength of interaction, which may assume arbitrary forms. There are no strict constraints on the sign of \(Q\) and, depending on its sign, energy may flow from one sector of fluid to the other. When \(Q>0\) energy flows from the dark energy sector to the radiation sector, and for \(Q<0\) the radiation sector loses energy. It is evident from eq. (11) and eq. (12) that the individual fluids violate the conservation equation; however, the total energy density of the fluid remains conserved. The above conservation equations can be recast in the usual form as [79], \[\dot{\rho}_{1}+3H(1+\omega_{1}^{eff})\rho_{1}=0 \tag{13}\] \[\dot{\rho}_{3}+3H(1+\omega_{3}^{eff})\rho_{3}=0 \tag{14}\] where \(\omega_{1}^{eff}\) and \(\omega_{3}^{eff}\) are the effective EoS parameters defined as, \[\omega_{1}^{eff}=\omega_{1}+\frac{Q}{3H\rho_{1}}, \tag{15}\] \[\omega_{3}^{eff}=\omega_{3}-\frac{Q}{3H\rho_{3}}. \tag{16}\] In the literature, different functional forms of interactions were taken up. There are no strict rules for assuming a particular form of interaction; however, some phenomenological choices are made initially, which are then verified using astronomical observations. Several authors have considered different forms of \(Q\) such as \(Q\propto\rho_{1}\)[80], \(Q\propto\dot{\rho}_{1}\)[81], \(Q\propto\rho_{3}\)[75; 76]. Cosmological models obtained using several of these interactions are found to be consistent with the observational results [82; 83]. Thus any new interaction form must be constrained using observations to construct a stable cosmological model. In this paper, we consider a non-linear exponential form of interaction given by, \[Q=3\;H\;\eta\;e^{(\alpha-1)}\;\rho_{1}, \tag{17}\] where \(\eta\) is a coupling parameter that denotes the interaction strength and \(\alpha=\frac{\rho_{1}}{\rho_{3}}\), with \(\rho_{i}\) being the energy density of the \(i^{th}\) fluid. For \(\alpha\to 1\) the exponential interaction reduces to a linear one. Yang et al. [84] obtained observational bounds on the cosmological parameters using such an exponential interaction in the \(\Lambda CDM\) model. Recently, Chanda et al. [85] employed the exponential form of interaction to obtain cosmological models in modified \(f(R,\mathcal{G})\) gravity, where \(\mathcal{G}\) is the Gauss-Bonnet term. An observational bound on the coupling parameter \(\eta\) was obtained using Union 2.1 supernovae data. In both cases, the present observations preferred a small value of \(\eta\). In this paper, we construct an interacting EU model and probe the observational viability of the model. The total energy density for the cosmic fluid obtained using Eqs. (9), (13) and (14) yields, \[\rho(z)=\rho_{10}(1+z)^{3(1+\omega_{1}^{eff})}+\rho_{20}(1+z)^{2}+\rho_{30}(1+z)^{3(1+\omega_{3}^{eff})}, \tag{18}\] where \(\rho_{10}=\frac{B^{2}}{(1+A)^{2}}\), \(\rho_{20}=\frac{2BK}{(1+A)^{2}}\) and \(\rho_{30}=\frac{K^{2}}{(1+A)^{2}}\), and the effective EoS parameters are, \[\omega_{1}^{eff}=-1+\eta\;e^{(\alpha-1)}, \tag{19}\] \[\omega_{3}^{eff}=A-\eta\;\alpha\;e^{(\alpha-1)}.
\tag{20}\] In the original EU [49], the matter-energy content of the universe is fixed once \(A\) is specified and remains so throughout the universe's evolution. However, throughout its evolution the universe transits through different phases as the matter composition of the universe changes, and different components dominate at different epochs. If one considers an interacting fluid scenario, it is possible to incorporate such transitions at different phases of evolution [79]. We note from eq. (20) that as the strength of the coupling parameter \(\eta\) increases, the value of \(\omega_{3}^{eff}\) decreases, approaching zero. Thus for a specific value of \(\eta\), \(B\), and \(K\), the EU transits from a radiation-dominated phase to a matter-dominated phase with an increase in the DE density when, \[A=\eta\;\alpha\;e^{(\alpha-1)}. \tag{21}\] It is also noted that for any \(A\) value to begin with (leading to different compositions of matter-energy), the universe always transits into the matter-dominated epoch and gradually evolves into the present observed universe. Thus for a radiation-dominated universe, the value of interaction strength for which the universe transits into a dark energy and matter-dominated one depends on the ratio of the energy densities and is given by, \[\eta=\frac{1}{3\;\alpha\;e^{(\alpha-1)}}. \tag{22}\] For a fixed value of \(A\), the Friedmann equation (4) can be expressed in the following form using eq. (18) as, \[H^{2}(z)=H_{0}^{2}\Big{(}\Omega_{1}(1+z)^{3(1+\omega_{1}^{eff})}+\Omega_{2}(1+z)^{2}+\Omega_{3}(1+z)^{3(1+\omega_{3}^{eff})}\Big{)}, \tag{23}\] where \(\Omega_{i}=\frac{\rho_{i}}{\rho_{c}}\) is the density parameter for the \(i^{th}\) fluid, \(\rho_{c}=\frac{3H_{0}^{2}}{8\pi G}\) is the critical density and \(H_{0}=100h\;km\;sec^{-1}\;Mpc^{-1}\) is the present-day value of the Hubble parameter. For a fixed \(\eta\), the values of \(B^{\prime}=\frac{B}{\sqrt{3}H_{0}}\) and \(K^{\prime}=\frac{K}{\sqrt{3}H_{0}}\) for which the EU transits from a radiation-dominated phase to a DE- and matter-dominated phase can be obtained by fitting the model with observational data, which will be done in the next section. ## IV Constraining the model parameters using observational data This section considers a flat EU with \(A=\frac{1}{3}\) and an interaction between the DE and radiation sectors only. The model parameters \(B^{\prime}\) and \(K^{\prime}\) are constrained using the Hubble and Pantheon datasets for a specific value of \(\alpha\). The Hubble parameter from eq. (23) can be represented in the following functional form, \[H^{2}(H_{0},B^{\prime},K^{\prime},z)=H_{0}^{2}E^{2}(B^{\prime},K^{\prime},z), \tag{24}\] where, \[E^{2}(z)=\Omega_{1}(1+z)^{3(1+\omega_{1}^{eff})}+\Omega_{2}(1+z)^{2}+\Omega_{3}(1+z)^{3(1+\omega_{3}^{eff})}. \tag{25}\] In the above equation, \(\Omega_{i}\) denotes the density parameter corresponding to the \(i^{th}\) fluid where \(i=1,2,3\). This expression will be employed to fit the theoretical model with observational data. ### Observed Hubble Datasets The Hubble parameter \(H\) can be measured following two different approaches at certain redshifts. The first approach extracts \(H(z)\) from the line-of-sight BAO data, which includes the correlation functions of the luminous red galaxies, and in the second approach \(H(z)\) is measured from the differential ages (DA) (\(\Delta t\)) of the galaxies.
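Before describing the data, the model side of the fit, eqs. (23)-(25) with the effective EoS of eqs. (19)-(20), can be sketched numerically as follows; the mapping from \(B^{\prime},K^{\prime}\) to the \(\Omega_{i}\) follows from \(\rho_{c}=3H_{0}^{2}\) in the units adopted here, and all numbers are illustrative.

```python
import numpy as np

A, ALPHA, ETA = 1/3, 2.5, 0.03  # values used in the text

def omegas(Bp, Kp, A=A):
    """Density parameters Omega_i from B' = B/(sqrt(3) H0), K' = K/(sqrt(3) H0)."""
    pref = 1.0 / (1.0 + A) ** 2
    return pref * Bp**2, pref * 2.0 * Bp * Kp, pref * Kp**2

def E2(z, Om1, Om2, Om3, A=A, alpha=ALPHA, eta=ETA):
    w1_eff = -1.0 + eta * np.exp(alpha - 1.0)        # eq. (19)
    w3_eff = A - eta * alpha * np.exp(alpha - 1.0)   # eq. (20)
    return (Om1 * (1 + z) ** (3 * (1 + w1_eff))
            + Om2 * (1 + z) ** 2                     # cosmic-string term for A = 1/3
            + Om3 * (1 + z) ** (3 * (1 + w3_eff)))   # eq. (25)

def hubble(z, H0, Om1, Om2, Om3):
    return H0 * np.sqrt(E2(z, Om1, Om2, Om3))        # eq. (24)
```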
In terms of \(\Delta t\), the Hubble parameter can be expressed as, \[H(z)=\frac{\dot{a}}{a}=-\frac{1}{1+z}\frac{dz}{dt}\approx-\frac{1}{1+z}\frac{\Delta z}{\Delta t}. \tag{26}\] Recently, Sharov and Vasiliev compiled a list of 57 data points for \(H(z)\) in the redshift range \(0.07\leq z\leq 2.42\)[62]. The dataset includes 31 points measured by the DA method (also known as the cosmic chronometer technique) and 26 from BAO and other measurements, as shown in Table 1. The \(\chi^{2}\) function can be defined as, \[\chi^{2}_{OHD}(H_{0},B^{\prime},K^{\prime},z)=\sum_{i=1}^{57}\frac{(H_{th}(H_{0},B^{\prime},K^{\prime},z)-H_{obs,i}(z))^{2}}{\sigma_{H,i}^{2}}, \tag{27}\] where \(H_{th}\) is the value of the Hubble parameter estimated from the theoretical model, \(H_{obs}(z)\) is the observed Hubble parameter and \(\sigma_{H}\) is the standard error associated with the measurement. The present value of the Hubble parameter (\(H_{0}\)) is treated as a nuisance parameter in this case and its value is taken to be \(H_{0}=73.24\pm 1.74\)[19] with a fixed prior distribution for the estimation of \(\eta\), \(B^{\prime}\) and \(K^{\prime}\). The parameter \(\alpha\) denotes the ratio of the DE density and the matter density and for the present universe must be greater than one. We have assumed \(\alpha=2.5\) and \(\eta=0.03\) to obtain a reasonable estimation for the present energy budget of the universe. ### Pantheon dataset The other data set used in the study is the latest Pantheon SnIa sample, which consists of 1048 spectroscopically confirmed supernovae compiled by Scolnic et al. [63]. The sample consists of different supernovae surveys in both the high- and low-redshift regimes, namely, the CfA1-CfA4 surveys [86], the PanSTARRS1 (PS1) medium deep survey [63], the Sloan Digital Sky Survey (SDSS) [87], the SuperNovae Legacy Survey (SNLS) [88], ESSENCE [89], the Carnegie Supernova Project (CSP) [90] and various Hubble Space Telescope (HST) results [91], [92], [93]. For a detailed review and summary of these samples please refer to [94]. The Pantheon sample covers the redshift range \(0.01<z<2.26\). The theoretical apparent magnitude of the SnIa can be expressed as, \[m(z)=M+5\;log_{10}\Big{[}\frac{d_{L}(z)}{1Mpc}\Big{]}+25, \tag{28}\] where \(M\) is the corrected absolute magnitude. The luminosity distance is denoted by \(d_{L}(z)\) and for a flat universe can be expressed as, \[d_{L}(z)=c(1+z)\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})}, \tag{29}\] with \(z\) being the SnIa redshift in the CMB rest frame. One can now define a Hubble-free luminosity distance as \(D_{L}(z)\equiv\frac{H_{0}d_{L}(z)}{c}\), and the theoretical apparent magnitude in that case becomes, \[m(z)=M+5log_{10}[D_{L}(z)]+5log_{10}\Big{(}\frac{c/H_{0}}{Mpc}\Big{)}+25. \tag{30}\] From the above equation, it is evident that there exists a degeneracy between \(M\) and \(H_{0}\), which can be combined together to define a new parameter \(\mathcal{M}\) as, \[\mathcal{M}=M+5log_{10}\Big{[}\frac{c/H_{0}}{1Mpc}\Big{]}+25=M-5log_{10}(h)+42.38. \tag{31}\] Several attempts have been made to marginalize over this degenerate combination, and recently [94] minimized the parameter using the Pantheon sample for a tilted universe. It is seen that the value of \(\mathcal{M}\) lies close to 23.8.
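A hedged sketch of eqs. (28)-(29), reusing `hubble` from the previous sketch, is given below; the absolute-magnitude value is purely illustrative.

```python
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458  # speed of light in km/s

def d_lum(z, H0, Om1, Om2, Om3):
    """Luminosity distance of eq. (29) in Mpc for a flat universe."""
    integral, _ = quad(lambda zp: 1.0 / hubble(zp, H0, Om1, Om2, Om3), 0.0, z)
    return C_KMS * (1 + z) * integral

def apparent_mag(z, H0, Om1, Om2, Om3, M=-19.3):  # M is an illustrative value
    return M + 5.0 * np.log10(d_lum(z, H0, Om1, Om2, Om3)) + 25.0  # eq. (28)
```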
One can now define the \(\chi^{2}_{SNS}\) function from the Pantheon sample of 1048 SnIa as, \[\chi^{2}_{SNS}(H_{0},\eta,B^{\prime},K^{\prime},z)=\Delta\mathcal{F}_{i}\,(C^{-1}_{SNS})_{ij}\,\Delta\mathcal{F}_{j}, \tag{32}\] where \(\Delta\mathcal{F}=\mathcal{F}_{obs}-\mathcal{F}_{th}\) represents the difference between the observed and theoretical values of the apparent magnitude for each SnIa at redshift \(z_{i}\), and \(C_{SNS}\) is the total covariance matrix. The total covariance matrix in this case is constructed as a sum of the diagonal matrix containing the statistical uncertainties of the apparent magnitudes (including the photometric error, mass step correction, peculiar velocity and redshift uncertainty, stochastic gravitational redshift, intrinsic scatter, and distance bias correction) with a non-diagonal matrix constructed from the systematic uncertainties obtained using the bias correction method. We have performed the MCMC analysis to explore the parameter space for the EU model using the python package EMCEE [95], and the chains are analyzed using the ChainConsumer [96] package, which plots the posteriors obtained from the chains. The likelihood function used for the MCMC sampling has the usual functional form, \[\mathcal{L}=\exp(-\frac{\chi^{2}}{2}). \tag{33}\] \begin{table} \begin{tabular}{c c c c c c} \hline \multicolumn{6}{c}{Hubble data} \\ \hline \(z\) & \(H(z)\) & \(\sigma_{H}\) & \(z\) & \(H(z)\) & \(\sigma_{H}\) \\ \hline \end{tabular} \end{table} Table 1: \(H(z)-z\) dataset with errors estimated from the DA and BAO methods. The best-fitted curves for the OHD and Pantheon data sets with error bars are shown in Figs. 1a and 1b. To perform the joint analysis with the OHD and Pantheon data we define the joint \(\chi^{2}\) function as \(\chi^{2}_{Joint}=\chi^{2}_{OHD}+\chi^{2}_{SNS}\). The joint \(\chi^{2}\) function is minimized to obtain the best-fit values for the model parameters. The contours of \(1-\sigma\) and \(2-\sigma\) confidence level for the parameters \(B^{\prime}\) and \(K^{\prime}\) are shown in Fig. 3. The results are summarized in Table 2 for the OHD and OHD + Pantheon joint analyses. ## V Statistical inferences with AIC and BIC This section compares the EU model with the standard \(\Lambda CDM\) cosmological model using different information criteria. Although there is no particular guideline for the best choice of information criteria, we have used the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC), which are quite popular. Figure 1: The best-fitted curves are shown in the figure. In the left panel, the best-fitted curve corresponding to the 57 Hubble data points is shown with \(B^{\prime}=0.6425\) and \(K^{\prime}=0.4885\), and in the right panel, the corresponding fit for the Pantheon data set is shown with \(B^{\prime}=0.9796\) and \(K^{\prime}=0.3397\) with \(\eta=0.03\). Figure 2: Contours of \(1-\sigma\) and \(2-\sigma\) confidence levels for the model parameters \(B^{\prime}\) and \(K^{\prime}\) using OHD. The AIC is defined as [97], \[AIC=\chi^{2}_{min}+2n, \tag{34}\] where \(n\) is the number of free parameters in the chosen model. To compare the EU model with the \(\Lambda CDM\) model we have used the AIC difference between the two models defined as \(\Delta AIC=|AIC_{\Lambda CDM}-AIC_{EU}|\).
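Collecting the pieces, the following hedged sketch assembles the joint \(\chi^{2}\) of eqs. (27) and (32) (with a diagonal SnIa covariance for brevity), the likelihood of eq. (33) sampled with EMCEE, and the AIC of eq. (34); `omegas`, `hubble` and `apparent_mag` come from the earlier sketches, and the data arrays are synthetic placeholders, not the real compilations.

```python
import numpy as np
import emcee

def chi2_ohd(theta, z, H_obs, sig_H, H0=73.24):
    Bp, Kp = theta
    Om = omegas(Bp, Kp)  # density parameters from B', K'
    return np.sum(((hubble(z, H0, *Om) - H_obs) / sig_H) ** 2)  # eq. (27)

def chi2_sn(theta, z, m_obs, sig_m, H0=73.24):
    Bp, Kp = theta  # diagonal covariance used here for brevity
    Om = omegas(Bp, Kp)
    m_th = np.array([apparent_mag(zi, H0, *Om) for zi in z])
    return np.sum(((m_th - m_obs) / sig_m) ** 2)

def log_prob(theta, data):
    Bp, Kp = theta
    if not (0.0 < Bp < 2.5 and 0.0 < Kp < 2.5):
        return -np.inf  # flat prior from the text
    chi2 = chi2_ohd(theta, *data["ohd"]) + chi2_sn(theta, *data["sn"])
    return -0.5 * chi2  # eq. (33)

data = {  # synthetic placeholder observations, not the tabulated datasets
    "ohd": (np.array([0.1, 0.5, 1.0]), np.array([72.0, 90.0, 125.0]),
            np.array([5.0, 5.0, 5.0])),
    "sn": (np.array([0.1, 0.5]), np.array([19.0, 22.4]), np.array([0.15, 0.15])),
}
ndim, nwalkers = 2, 32
p0 = np.random.uniform([0.1, 0.1], [2.4, 2.4], size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(data,))
sampler.run_mcmc(p0, 5000, progress=True)
chi2_min = -2.0 * np.max(sampler.get_log_prob())
aic = chi2_min + 2 * ndim  # eq. (34)
```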
For two models under consideration, if \(\Delta AIC<2\) then there is strong evidence that the observed data favor the EU model and the model is consistent with the \(\Lambda CDM\) model, whereas for \(4<\Delta AIC\leq 7\) there is little evidence in favor of the EU model. If \(\Delta AIC>10\) then the model is ruled out [98]. The BIC is defined as [99], [100], \[BIC=\chi^{2}_{min}+n\;lnN, \tag{35}\] where \(N\) is the number of data points used in the MCMC analysis. It is known that the penalty in BIC is higher than in AIC. We denote the BIC difference between the \(\Lambda CDM\) and EU models as \(\Delta BIC=|BIC_{\Lambda CDM}-BIC_{EU}|\). If \(\Delta BIC<2\) then there exists no strong evidence against the EU model, as it shows no considerable deviation from the \(\Lambda CDM\) model. However, for \(2\leq\Delta BIC<6\) there is evidence against the EU model, and for \(\Delta BIC>6\) the model is not favored. The differences in the AIC and BIC values and the \(\chi^{2}_{min}\) values are displayed in Table 3. We note that \(\Delta AIC=1.6\) and \(\Delta BIC=3.7\), so the model closely resembles the \(\Lambda CDM\) cosmology at the present epoch. ## VI Cosmological parameters and classical stability In this section, we check the viability of the observational constraints by investigating the evolutionary pattern of different cosmological parameters, namely, the deceleration parameter (\(q\)), the statefinder pair (\(r-s\)), etc. \begin{table} \begin{tabular}{|c|c c|c c|} \hline & \multicolumn{2}{c|}{OHD} & \multicolumn{2}{c|}{Pantheon + OHD} \\ \hline Parameters & Best fit values & Mean values \(\pm\)\(\sigma\) & Best fit values & Mean values \(\pm\)\(\sigma\) \\ \hline \(B^{\prime}\) & 0.6425 & 0.6425 \(\pm\) 0.04 & 0.9796 & 0.979 \(\pm\) 0.016 \\ \(K^{\prime}\) & 0.4885 & 0.4885 \(\pm\) 0.02 & 0.3337 & 0.334 \(\pm\) 0.012 \\ \hline \end{tabular} \end{table} Table 2: Best fit values of the model parameters Figure 3: Constraints on the model parameters from the joint analysis of OHD and Pantheon datasets. Contours of \(1-\sigma\) and \(2-\sigma\) confidence levels for the model parameters \(B^{\prime}\) and \(K^{\prime}\) are shown. The deceleration parameter is defined as, \[q=-1-\frac{\dot{H}}{H^{2}}. \tag{36}\] The deceleration parameter depends on the derivative of the Hubble parameter (\(H\)). In Fig. 4 we show the variation of \(q\) with redshift (\(z\)) using the best-fit values from the joint MCMC analysis of both the OHD and Pantheon datasets. From the figure it is evident that the universe transits from a decelerated phase in the past to an accelerated phase. The present universe is accelerating and remains in that phase in the near future. The transition redshift, i.e. the redshift at which the universe transits from a decelerated phase of expansion to an accelerating phase of expansion, depends on the parameter \(\eta\). As \(\eta\) increases, the universe transits into the accelerating phase at a later time. In the literature, different DE models were proposed to explain the present accelerating universe, namely, quintessence scalar field, phantom, tachyon, Chaplygin gas, etc. To differentiate quantitatively between different DE models, Sahni et al. [101] proposed a geometrical analysis called the statefinder diagnostics. The statefinder parameters (\(r,s\)) corresponding to different DE models trace out qualitatively different geometrical trajectories. For the \(\Lambda CDM\) model the statefinder pair corresponds to \((r,s)=(1,0)\).
The parameters \(r\) and \(s\) are defined as, \[r=\frac{\dddot{a}}{aH^{3}}, \tag{37}\] \[s=\frac{r-1}{3(q-\frac{1}{2})}. \tag{38}\] We express the statefinder pair in terms of the deceleration parameter (\(q\)) as, \[r=q(z)(1+2q(z))+q^{\prime}(z)(1+z), \tag{39}\] \[s(z)=\frac{r(z)-1}{3(q(z)-\frac{1}{2})}, \tag{40}\] where the "prime" (') denotes the derivative with respect to \(z\). For \(r<1,s>0\) the model represents the quintessence type of DE, whereas for \(r>1,s<0\) the model represents Chaplygin gas (CG). We show the variation of the statefinder pair with \(z\) in Fig. 5. From the figure it is evident that initially the EU was filled with CG-type DE. Gradually it made a transition into the quintessence regime, passing through the \(\Lambda CDM\) phase, and at present the universe is quintessence dominated. The change in the nature of the DE may be attributed to the interaction which sets in between the cosmic fluid components at some time \(t=t_{i}\). This observation is also supported by the fact that the effective EoS for the DE has a value of \(\omega_{1}^{eff}=-0.87\) for \(\eta=0.03\). The interaction strength in this case determines the type of DE at the present epoch. Another important diagnostic tool is the \(Om(z)\) diagnostic. The \(Om(z)\) parameter is defined as, \[Om(z)=\frac{E^{2}(z)-1}{(1+z)^{3}-1}. \tag{41}\] The nature of dark energy can be determined by comparing the \(Om(z)\) values at two different points. For two different \(z\) values, namely \(z_{1}\) and \(z_{2}\) where \(z_{1}<z_{2}\), if \(Om(z_{1},z_{2})\equiv Om(z_{1})-Om(z_{2})=0\) then it represents the \(\Lambda CDM\) model. For \(Om(z_{1},z_{2})>0\) the DE is of quintessence type. From Fig. 6 it is evident that the present universe is dominated by quintessence-type DE, as confirmed by the statefinder analysis also. We study the classical stability of the EU scenario against perturbations using the adiabatic sound speed (\(c_{s}^{2}=\frac{dp}{d\rho}\)). The hydrostatic pressure, in this case, is given by eq. (1), and the corresponding sound speed is, \[c_{s}^{2}=\frac{dp}{d\rho}=A-\frac{B}{2\sqrt{\rho}}, \tag{42}\] where \(\rho\) is the energy density given by eq. (18). For a stable cosmological model \(0<c_{s}^{2}<1\). Thus from eq. (42), it is evident that for a stable cosmological model \(B>0\) and \(A>\frac{B}{2\sqrt{\rho}}\). We plot the variation of the adiabatic sound speed against redshift (\(z\)) in Fig. 7. It is evident that the value of \(c_{s}^{2}\) is positive for a theoretically predicted set of values \(B^{\prime}=0.5\) and \(K^{\prime}=0.5\). However, corresponding to the best-fit values obtained using the OHD (Table 2), \(c_{s}^{2}\) is found to flip its sign from positive to negative in the recent past and stays negative at the present epoch (\(z=0\)). Thus for the observationally predicted values of the model parameters, the EU model exhibits an instability against small perturbations. The small perturbations present in the system will gradually grow in time, making the model unstable at the present epoch and in the near future. In this regard, the stability of various DE models against perturbations is worth examining. It is found that the Chaplygin gas models and Tachyon models of DE remain stable against small perturbations [102; 103]. However, several holographic dark energy models with a future event horizon are found to be classically unstable throughout the evolutionary history of the universe or, in some cases, remain stable in the past or future while showing instability at the present epoch [104; 105].
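For completeness, the diagnostics used in this section, eqs. (36) and (39)-(42), reduce to a few lines of code once \(E^{2}(z)\) is available; the finite-difference step size is an arbitrary choice.

```python
import numpy as np

DZ = 1e-4  # finite-difference step, an arbitrary choice

def deceleration(z, E2_fn):
    """q(z) of eq. (36), using q = -1 + (1+z) E2'(z) / (2 E2(z))."""
    dE2 = (E2_fn(z + DZ) - E2_fn(z - DZ)) / (2.0 * DZ)
    return -1.0 + (1 + z) * dE2 / (2.0 * E2_fn(z))

def statefinder(z, E2_fn):
    q = deceleration(z, E2_fn)
    qp = (deceleration(z + DZ, E2_fn) - deceleration(z - DZ, E2_fn)) / (2.0 * DZ)
    r = q * (1 + 2 * q) + qp * (1 + z)  # eq. (39)
    s = (r - 1.0) / (3.0 * (q - 0.5))   # eq. (40)
    return r, s

def om_diagnostic(z, E2_fn):
    return (E2_fn(z) - 1.0) / ((1 + z) ** 3 - 1.0)  # eq. (41), z != 0

def sound_speed_sq(rho, A, B):
    return A - B / (2.0 * np.sqrt(rho))  # eq. (42)
```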
A similar result is obtained for agegraphic DE models with interacting cosmic fluids in the case of both flat and non-flat geometries [106]. Recently, Ebrahimi and Sheykhi [107] studied the stability of QCD-motivated ghost DE models [108] using the square speed of sound as the determining factor. It is also found that the cosmological model remains unstable throughout for flat or non-flat geometries even in the presence of interaction between DM and DE. In the present work, we study the stability of an interacting EU scenario where the universe transits from an early radiation-dominated phase (determined by the nEoS parameter \(A\)) to a matter- and DE-dominated phase. We note that although theoretically it is possible to construct an EU model that remains stable against small perturbations, the observational bounds on the model parameters lead to an EU scenario where the model becomes unstable at the present epoch. The role of the interaction strength \(\eta\) is nominal in this case and will be investigated in detail elsewhere. ## VII Results and discussions In this paper, the EU model obtained in [49] is explored in the presence of interaction to estimate the observational constraints on the model parameters and to study the classical stability of the model. It is known that the EU scenario promises to solve some of the well-known conceptual issues of Big Bang theory, including technical issues not understood in the standard model. The nEoS of the cosmic fluid is given by eq. (1), where \(A\) and \(B\) are the model parameters. It is interesting to note that the nEoS is equivalent to three different fluids as described by eq. (9). The type of fluids spanning the universe depends on the parameter \(A\). Figure 4: Variation of the deceleration parameter with redshift (\(z\)) using the best-fit values from the joint MCMC analysis. The blue, red, and black lines correspond to \(\eta=0.06\), \(0.03\), and \(0.015\), respectively. Figure 5: Variation of the statefinder pair \((r,s)\) with redshift (\(z\)) using the best-fit values from the joint MCMC analysis. In the original EU scenario, once the value of \(A\) is fixed, the constituents of the cosmic fluid are determined. However, for a consistent cosmological model, the universe must pass from a radiation-dominated phase to a matter-dominated one and subsequently evolve into the DE-dominated phase. This issue can, however, be alleviated if an interaction is assumed between the cosmic fluid components. In our analysis, we begin with a specific value of \(A\) (\(A=\frac{1}{3}\)) which leads to an early universe with DE, cosmic string, and radiation. It is shown that an EU can emerge out from the throat of a dynamical wormhole [109], which at a later epoch gives rise to a phase with radiation domination (although an insignificant contribution of DE and cosmic string is present) as described by the given nEoS. A non-linear interaction is introduced in the theory to explore the further evolution of the universe. It is noted from the analysis that as the energy exchange takes place between the fluid components, the universe effectively transits from a radiation-dominated early universe to a matter- and DE-dominated phase at late times, depending upon the strength of interaction. The matter sector in this case contains both baryonic matter as well as CDM.
For a specific value of the interaction parameter \(\eta\) we constrain the model parameters \(B^{\prime}=\frac{B}{\sqrt{3}H_{0}}\) and \(K^{\prime}=\frac{K}{\sqrt{3}H_{0}}\) using the recent observational data, namely the Hubble data, which contains the cosmic chronometer as well as the BAO data, and the Pantheon SnIa dataset. The model parameters must satisfy the conditions \(B^{\prime}>0\) and \(K^{\prime}>0\) for a physically realistic cosmology. We have considered a flat prior for the parameters \(B^{\prime}\) and \(K^{\prime}\) in the range \(0<B^{\prime}<2.5\) and \(0<K^{\prime}<2.5\) with the value of the Hubble parameter \(H_{0}=73.24\pm 1.74\). Figure 6: Variation of the \(Om(z)\) parameter with redshift (\(z\)) using the best-fit values from the joint MCMC analysis. Figure 7: Evolution of the square sound speed (\(c_{s}^{2}\)) against redshift (\(z\)) for two different sets of values of \(B^{\prime}\) and \(K^{\prime}\) with \(\alpha=2.5\) and \(\eta=0.03\). The red curve corresponds to a theoretical prediction of \(B^{\prime}=0.5\) and \(K^{\prime}=0.5\) and the blue curve corresponds to the \(B^{\prime}\) and \(K^{\prime}\) values as obtained using the OHD. In Figs. 1a and 1b, the best-fit curves for the OHD and Pantheon datasets are shown for the EU model with interaction strength \(\eta=0.03\). The best-fit values are then used in the MCMC analysis. In Figs. 2 and 3, the \(1-\sigma\) and \(2-\sigma\) confidence level contours for \(B^{\prime}\) and \(K^{\prime}\) are shown using the OHD and the joint OHD + Pantheon dataset. We note that in the case of the joint analysis, the value of \(B^{\prime}\) is increased from that of the OHD-only analysis, while the value of \(K^{\prime}\) decreases. The best-fit values of \(B^{\prime}\) and \(K^{\prime}\) and their mean values are displayed in Table 2. The statistical estimations for the EU model are performed following the AIC and BIC, and it is seen that the model is consistent with the observations. From the AIC analysis, it is clear that there is strong evidence in favor of the EU. The cosmological parameters, namely the deceleration parameter (\(q\)), statefinder pair (\(r,s\)) and the \(Om\) parameter, are also obtained using the parameter values from the MCMC analysis. It is evident from Fig. 4 that the universe transits from a decelerated phase of expansion to an accelerated one at some time in the past and remains accelerating in the near future. The transition redshift depends on the interaction strength \(\eta\), and for higher \(\eta\) values the universe transits into the accelerating phase at a later time as compared to small \(\eta\) values. The statefinder and \(Om\) diagnostics applied here indicate that the DE began in the form of CG and gradually evolved, drifting to the quintessence domain after crossing the \(\Lambda CDM\) regime, as shown in Figs. 5 and 6, which is a new result. It is noted that the present DE is of quintessence type for \(\eta=0.03\). The classical stability of the EU model is investigated here using the expression obtained from the square speed of sound.
We note that the EU model remains stable for some theoretically predicted values of the model parameters \(B^{\prime}\) and \(K^{\prime}\) at \(z=0\). However, if one considers the observational bounds on those parameters, then the model becomes unstable against small perturbations near \(z=0\) for the chosen strength of interaction \(\eta\). Varying \(\eta\) does not change the results significantly. This issue is interesting and requires further investigation, which will be taken up in future work. Considering \(H_{0}\) as a free parameter along with \(B^{\prime}\) and \(K^{\prime}\), we have performed the joint MCMC analysis with the OHD + Pantheon dataset. The present-day value of the Hubble parameter is estimated to be \(H_{0}=67.7\pm 0.59\), which is close to the results obtained by Chen and Ratra. The best-fitted curve for the Pantheon dataset is shown in Fig. 8 and the corresponding \(1-\sigma\), \(2-\sigma\) contours are shown in Fig. 9. The interaction strength plays a significant role in determining the value of \(H_{0}\), and we note that as \(\eta\) decreases the value of \(H_{0}\) increases by a small amount. To conclude, we note that even if the EU begins with a specific composition of cosmic fluids, later on in the course of its evolution the universe transits into a new phase of evolution with a composition decided by the strength of interaction among the interacting fluids. It is shown that an interacting EU transforms to a matter-dominated phase with DE, resembling the present observed universe. The observational constraints on the model parameters are estimated, in addition to the other features of the observed universe, with MCMC. The constraints obtained using the OHD and Pantheon lead to a classical instability at the present epoch. ## Acknowledgement BCP would like to thank DST SERB for a project (F. No. CRG/2021/000183). AC, BCR and BCP would like to thank the IUCAA Centre for Astronomy Research and Development (ICARD), NBU, for extending research facilities. BCR also acknowledges the Ministry of Social Justice and Empowerment, Govt. of India and the University Grants Commission (UGC), India for providing fellowship. The work of KB was partially supported by the JSPS KAKENHI Grant Number 21K03547. ## Data Availability There is no new data associated with this article.
2309.10409
Augmenting Tactile Simulators with Real-like and Zero-Shot Capabilities
Simulating tactile perception could potentially leverage the learning capabilities of robotic systems in manipulation tasks. However, the reality gap of simulators for high-resolution tactile sensors remains large. Models trained on simulated data often fail in zero-shot inference and require fine-tuning with real data. In addition, work on high-resolution sensors commonly focuses on ones with flat surfaces, while 3D round sensors are essential for dexterous manipulation. In this paper, we propose a bi-directional Generative Adversarial Network (GAN) termed SightGAN. SightGAN relies on the early CycleGAN while including two additional loss components aimed at accurately reconstructing background and contact patterns, including small contact traces. The proposed SightGAN learns real-to-sim and sim-to-real processes over difference images. It is shown to generate real-like synthetic images while maintaining accurate contact positioning. The generated images can be used to train zero-shot models for newly fabricated sensors. Consequently, the resulting sim-to-real generator could be built on top of the tactile simulator to provide a real-world framework. Potentially, the framework can be used to train, for instance, reinforcement learning policies of manipulation tasks. The proposed model is verified in extensive experiments with test data collected from real sensors and is also shown to maintain embedded force information within the tactile images.
Osher Azulay, Alon Mizrahi, Nimrod Curtis, Avishai Sintov
2023-09-19T08:19:01Z
http://arxiv.org/abs/2309.10409v1
# Augmenting Tactile Simulators with Real-like and Zero-Shot Capabilities ###### Abstract Simulating tactile perception could potentially leverage the learning capabilities of robotic systems in manipulation tasks. However, the reality gap of simulators for high-resolution tactile sensors remains large. Models trained on simulated data often fail in zero-shot inference and require fine-tuning with real data. In addition, work on high-resolution sensors commonly focuses on ones with flat surfaces, while 3D round sensors are essential for dexterous manipulation. In this paper, we propose a bi-directional Generative Adversarial Network (GAN) termed SightGAN. SightGAN relies on the early CycleGAN while including two additional loss components aimed at accurately reconstructing background and contact patterns, including small contact traces. The proposed SightGAN learns real-to-sim and sim-to-real processes over difference images. It is shown to generate real-like synthetic images while maintaining accurate contact positioning. The generated images can be used to train zero-shot models for newly fabricated sensors. Consequently, the resulting sim-to-real generator could be built on top of the tactile simulator to provide a real-world framework. Potentially, the framework can be used to train, for instance, reinforcement learning policies of manipulation tasks. The proposed model is verified in extensive experiments with test data collected from real sensors and is also shown to maintain embedded force information within the tactile images. ## I Introduction Tactile sensing is a fundamental aspect of human perception and, therefore, is a topic of extensive research in robotics [1, 2, 3]. Such sensing plays a crucial role in enabling robots to interact with the physical world with precision and dexterity. The advancement of tactile sensor technologies has led to high-dimensional and complex data representations which, in turn, require sufficient data in order to train accurate models and policies [4]. Hence, tactile simulations and sim-to-real approaches are gaining momentum as researchers explore new ways to bridge the gap between virtual and physical worlds [5, 6]. Tactile sensors come in various technologies including capacitive transducers [7], force-sensitive resistors [8] and piezo-resistors [9]. However, they are usually fitted to specific applications and provide low-resolution data. Optical-based tactile sensors, on the other hand, have become increasingly common due to their ability to provide high-resolution signals [10, 11, 12, 13]. In such sensors, an internal camera observes the deformation of the contact pad, typically made of a soft elastomer, during contact with an object. An image captured by the internal camera has the potential to encode essential information regarding the contact, including its position with respect to the sensor's coordinate frame. Nevertheless, due to the rich data in such images, training models to estimate features in the data requires an extensive number of samples. In order to cope with the data requirements, simulations of optical-based tactile sensors have been addressed and have the potential to rapidly generate large datasets of tactile images [14, 15]. However, transferring a model trained on simulated data to a real sensor, i.e., sim-to-real, may present difficulties. Primarily, real-world tactile images often exhibit substantial disparities when compared to their simulated counterparts [16, 17].
In an attempt to bridge the gap between simulated tactile data and real-world tactile information, various approaches have been explored. Domain randomization was included in the simulation of the TacTip sensor, where some parameters are constantly randomized [18]. Similarly, a simulation was augmented with random texture perturbations in order to train a GelSight model [19]. Fig. 1: The sim-to-real generator from the trained SightGAN model is used to map simulated tactile images to real-like images of a 3D round tactile sensor. Since the generated image is close to reality, various models can be trained using the simulator. In this example, a position estimator can provide accurate labeling to the image, making it a suitable simulator for various tasks. Domain randomization, however, requires careful choices of the parameters to randomize and is limited in complex tasks such as high-resolution tactile images. Some works have harnessed Finite Element Methods (FEM) in order to generate simulated deformation of the contact pad in tactile sim-to-real [20]. However, the computational complexity of such an approach limits real-time sensing [21]. A notable technique involves the use of Generative Adversarial Networks (GANs), a class of deep learning models that can generate new image samples from learned data distributions [22]. In tactile real-to-sim, a GAN was used to match real images of the BioTac sensor in order to target simulated ones [2]. Yet, the approach requires paired images, which are difficult to extract. Hence, a variant of GAN termed CycleGAN [23] has gained traction for its ability to facilitate sim-to-real and real-to-sim transfer of tactile information without a paired dataset. CycleGAN is able to learn the mapping of data distributions from the simulated source domain to the real-world target domain and vice versa, without explicit correspondence between individual samples. In the first work to utilize CycleGAN in tactile sensing, a bidirectional sim-to-real approach was proposed for the GelSight sensor [24]. In a later work, an improved CycleGAN architecture was introduced with task-specific loss functions for enhanced structural fidelity of generated tactile images [25]. However, these approaches and others [26, 27] focused on a specific tactile sensor with a flat contact surface and without exhibiting zero-shot capability. In addition, prior work focused on tactile images where the contact trace seen in the image is rather large [1]. Consequently, CycleGAN may learn to hide information in the adapted image instead of explicitly retaining the semantics when the trace is small [28, 29]. In this paper, we tackle the sim-to-real problem for high-resolution 3D round sensors while enabling zero-shot inference of accurate contact position estimation (Figure 1). Building upon CycleGAN, we propose the _SightGAN_ model which augments CycleGAN with contact-specific consistency losses, as illustrated in Figure 2. The losses reduce background disparities between simulated and real tactile images while minimizing contact position errors. For the latter, distillation with a trained contact position estimator is employed to compare accuracies between generated tactile images, either in real or simulated domains. One of the key advantages of employing SightGAN is its bidirectional capability, allowing knowledge to be transferred seamlessly from the real to the simulated domain and vice versa. This versatility enables training models for various sensors with different illumination and fabrication uncertainties.
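To make the description above concrete, here is a hedged sketch of how such an augmented objective could be assembled; the generators `G_sr`, `G_rs`, the frozen position estimator `f_pos`, the no-contact pixel `mask`, and the weighting factors are all our assumptions for illustration, not the authors' published code.

```python
import torch
import torch.nn.functional as F

def sightgan_aux_losses(I_sim, G_sr, G_rs, f_pos, mask, lam_bg=1.0, lam_pos=1.0):
    """Cycle consistency augmented with background and contact-position terms.

    I_sim: simulated difference image batch; mask: 1 on contact-free pixels.
    """
    I_fake_real = G_sr(I_sim)        # sim -> real translation
    I_cycle = G_rs(I_fake_real)      # real -> sim, closing the cycle
    cyc = F.l1_loss(I_cycle, I_sim)                        # CycleGAN cycle term
    bg = F.l1_loss(I_fake_real * mask, I_sim * mask)       # background consistency
    pos = F.mse_loss(f_pos(I_fake_real), f_pos(I_sim))     # position distillation
    return cyc + lam_bg * bg + lam_pos * pos
```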
SightGAN is evaluated on the novel AllSight sensor [13], though it can potentially be applied to any optical-based tactile sensor. AllSight is a high-resolution, all-round tactile sensor whose models require a sufficient amount of data. Hence, our approach provides a simulated environment with real-like and accurately labeled tactile images, as demonstrated in Figure 1. As such, models trained on these synthetic images exhibit a zero-shot inference capability on new, real and untrained sensors. In addition, unlike prior work, SightGAN is able to withstand small contact traces. Hence, the proposed model can be used in applications with small loads, for instance, in-hand manipulation. The simulator of AllSight, along with the proposed SightGAN sim-to-real framework and datasets, is provided open-source1 for the benefit of the community and to advance research in the field.

Fig. 2: Scheme of the SightGAN model. The model operates on difference images in order to enhance generalizability to new sensors. Top and bottom rows illustrate the sim-to-real-to-sim and real-to-sim-to-real processes, respectively. The cycle consistency loss of CycleGAN is augmented by two additional losses aimed to provide pixel-level domain adaptation of the contacts.

Footnote 1: AllSight simulator, SightGAN sim-to-real framework and datasets: [https://github.com/osheraz/allsight_sim](https://github.com/osheraz/allsight_sim)

## II Optical tactile sensors

### _Design_

An optical-based tactile sensor typically uses an internal camera to track the deformation of a soft elastomer upon contact with an object. In this work, we consider the AllSight sensor proposed in [13], which has a full 360\({}^{\circ}\) round contact region clearly visible without blind spots or obscurance. Hence, the camera is covered by a tube in the shape of a cylinder with a hemispherical end, as seen in Figure 3. The tube is three-layered, where the inner layer is a rigid crystal-clear shell. A transparent elastomer covers the shell and is coated on its exterior by a reflective silicone paint. In this design, the camera observes the deformation of the elastomer from within upon contact with the exterior layer. The inner surface of the shell is evenly illuminated by an annular printed circuit board (PCB) with embedded LEDs. The lighting provides better visibility and informative images. In this work, we consider white lighting, while any other LED setting is possible.

### _Data collection of real images_

A dataset \(\mathcal{P}_{real}\) of real images with spatial position labels is collected using an automated setup. A robotic arm with a round indenter mounted on its tip repeatedly touched the surface of AllSight in various contact locations. First, a reference image \(\mathbf{I}_{ref}\in\mathcal{I}_{r}\), where \(\mathcal{I}_{r}\) denotes the space of real images, is recorded for a sensor without any contact. During contact, an image \(\hat{\mathbf{I}}_{i}\) is taken along with its position \(\mathbf{p}_{i}\) on the contact surface. The position is represented in the sensor's coordinate frame, as seen in Figure 1, and calculated through the forward kinematics of the arm. Furthermore, we consider difference images in the dataset such that an image used for training is \(\mathbf{I}_{i}=\hat{\mathbf{I}}_{i}-\mathbf{I}_{ref}\). Such subtraction has been shown to enhance the learning process, making the model agnostic to the background so that it focuses only on the color gradients that occur around the deformations [5].
Consequently, the model is expected to generalize to new sensors with different backgrounds in zero-shot. The acquisition and labeling process yields a dataset \(\mathcal{P}_{real}=\{(\mathbf{I}_{i},\mathbf{p}_{i})\}_{i=1}^{N}\) of \(N\) labeled images.

### _Data collection in simulation_

TACTO is a physics-engine simulator for optical-based tactile sensors [14]. An AllSight simulation was set up in TACTO, as seen in Figure 1, and calibrated by including reference images from real AllSight sensors. To enhance sim-to-real pre-training of the state estimation model, we collected different reference images from different AllSight sensors and used them for augmentation. The simulated dataset \(\mathcal{P}_{sim}\) was generated by labeling \(M\) images captured in TACTO during random contacts. Here also, a reference image from a real sensor is subtracted from the contact images. An image \(\mathbf{I}_{i}\in\mathcal{I}_{s}\), where \(\mathcal{I}_{s}\) denotes the space of simulated images, is taken along with the contact position \(\mathbf{p}_{i}\) such that \(\mathcal{P}_{sim}=\{(\mathbf{I}_{i},\mathbf{p}_{i})\}_{i=1}^{M}\).

### _Contact position estimation model_

Using either dataset, \(\mathcal{P}_{real}\) or \(\mathcal{P}_{sim}\), we train a contact position estimation model. The model \(f_{\theta}:\mathcal{I}\rightarrow\mathbb{R}^{3}\) maps a tactile image to the spatial position of the contact on the sensor \(\mathbf{p}\in\mathbb{R}^{3}\). Vector \(\theta\) denotes the trainable parameters of the model. The model is based on the ResNet-18 architecture [30]. The top layer is removed and the flattened output features are fed through two fully-connected layers of size 512 and 256. At each iteration, both reference \(\mathbf{I}_{ref}\) and contact \(\mathbf{I}_{i}\) images are down-sampled to resolution \(224\times 224\) and stacked along the channel dimension. The stacked image is then passed through the model to get the estimated position \(\tilde{\mathbf{p}}_{i}\).

## III Method

The proposed SightGAN, illustrated in Figure 2, integrates the CycleGAN architecture with additional auxiliary losses designated for tactile images. The losses aim to reduce disparities in background and contact reconstruction of images in the bidirectional transfer between the simulation and real domains. In this section, we briefly outline the GAN and CycleGAN losses prior to presenting the SightGAN loss.

### _Generative Adversarial Network (GAN)_

In a GAN, a mapping \(G:\mathcal{I}_{s}\rightarrow\mathcal{I}_{r}\) is trained such that a discriminator \(D_{r}\) cannot distinguish between an original image \(\mathbf{I}_{r}\in\mathcal{I}_{r}\) and a synthetic one \(\tilde{\mathbf{I}}_{r}=G(\mathbf{I}_{s})\) where \(\mathbf{I}_{s}\in\mathcal{I}_{s}\). Model \(G\) is trained to minimize the adversarial loss

\[\mathcal{L}_{\text{GAN}}(G,D_{r},\mathcal{I}_{s},\mathcal{I}_{r})=\mathbb{E}_{\mathbf{I}_{r}\sim\mu_{r}}[\log(D_{r}(\mathbf{I}_{r}))]+\mathbb{E}_{\mathbf{I}_{s}\sim\mu_{s}}[\log(1-D_{r}(G(\mathbf{I}_{s})))] \tag{1}\]

where \(\mu_{r}\) and \(\mu_{s}\) are the data distributions of \(\mathcal{I}_{r}\) and \(\mathcal{I}_{s}\), respectively.

### _CycleGAN: Bidirectional Image Translation_

Our approach facilitates bidirectional mapping, as in CycleGAN, between unpaired image datasets from two domains, \(\mathcal{I}_{s}\) and \(\mathcal{I}_{r}\).
This process involves the generators \(G:\mathcal{I}_{s}\rightarrow\mathcal{I}_{r}\) and \(F:\mathcal{I}_{r}\rightarrow\mathcal{I}_{s}\) along with adversarial discriminators \(D_{s}\) and \(D_{r}\). In addition to adversarial losses for both mappings, a cycle consistency loss is included in order to minimize the cyclic reconstruction error of images, given by

\[\mathcal{L}_{\text{cycle}}(G,F,\mathcal{I}_{s},\mathcal{I}_{r})=\mathbb{E}_{\mathbf{I}_{r}\sim\mu_{r}}\|G(F(\mathbf{I}_{r}))-\mathbf{I}_{r}\|_{1}+\mathbb{E}_{\mathbf{I}_{s}\sim\mu_{s}}\|F(G(\mathbf{I}_{s}))-\mathbf{I}_{s}\|_{1}. \tag{2}\]

Consequently, CycleGAN is trained with the following loss:

\[\mathcal{L}_{\text{CycleGAN}}(G,F,D_{s},D_{r})=\mathcal{L}_{\text{GAN}}(G,D_{r},\mathcal{I}_{s},\mathcal{I}_{r})+\mathcal{L}_{\text{GAN}}(F,D_{s},\mathcal{I}_{r},\mathcal{I}_{s})+\lambda_{\text{cycle}}\mathcal{L}_{\text{cycle}}(F,G) \tag{3}\]

where \(\lambda_{\text{cycle}}\) is a pre-defined weight.

Fig. 3: (Left) The AllSight tactile sensor with the internal view of the camera during contact with a round indenter. (Right) Structure illustration of the AllSight sensor.

### _SightGAN Losses_

Drawing inspiration from the perception consistency loss of RetinaGAN [31], we introduce two auxiliary contact losses, one in the image domain and the other in the contact space. In the context of optical tactile images, these images can be partitioned into two distinct regions: the background region representing the no-contact area of the tactile sensor, and the foreground region embodying the tactile sensor's interaction with objects. For the background region, the _Pixel-wise Contact Region Consistency loss_ enforces color similarity in the generated images while forcing no contact traces in that region. Furthermore, the _Spatial Contact Consistency loss_ focuses on the contact estimation accuracy, as minor deviations in the generated image can significantly affect the spatial positioning of the contact. This meticulous attention to foreground texture and structural nuances of the background is pivotal in optimizing sim-to-real tactile images from 3D tactile sensors.

#### III-C1 Spatial Contact Consistency loss

The spatial contact consistency loss penalizes disparities in contact localization after transferring images across domains. We define a function \(\mathcal{L}_{sp}\) to compare the contact position estimates of two images, \(\mathbf{I}\) and \(\mathbf{J}\), as

\[\mathcal{L}_{sp}(\mathbf{I},\mathbf{J})=\|f_{\theta}(\mathbf{I})-f_{\theta}(\mathbf{J})\|^{2}. \tag{4}\]

With this function, and based on the perception consistency loss of RetinaGAN [31], we define the spatial contact consistency loss as

\[\begin{split}\mathcal{L}_{\text{spatial}}(\mathbf{I}_{s},\mathbf{I}_{r},F,G)&=\mathcal{L}_{sp}(\mathbf{I}_{s},G(\mathbf{I}_{s}))+\frac{1}{2}\mathcal{L}_{sp}(\mathbf{I}_{s},F(G(\mathbf{I}_{s})))+\frac{1}{2}\mathcal{L}_{sp}(G(\mathbf{I}_{s}),F(G(\mathbf{I}_{s})))\\&\quad+\mathcal{L}_{sp}(\mathbf{I}_{r},F(\mathbf{I}_{r}))+\frac{1}{2}\mathcal{L}_{sp}(\mathbf{I}_{r},G(F(\mathbf{I}_{r})))+\frac{1}{2}\mathcal{L}_{sp}(F(\mathbf{I}_{r}),G(F(\mathbf{I}_{r}))).\end{split} \tag{5}\]

The halving of the losses involving the cycled images accounts for their dual comparison against the original and transferred images.
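To make the estimator and loss structure concrete, the following is a minimal PyTorch sketch of the contact position estimator \(f_{\theta}\) described in Section II-D and the spatial contact consistency loss of (4)-(5). This is an illustrative reconstruction rather than the authors' code: the class and function names are our own, the 6-channel input stem and 3-unit output layer are assumptions consistent with the stacked-image input and position output described above, and `G` and `F` stand in for the trained generators.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ContactPositionNet(nn.Module):
    """Sketch of f_theta: ResNet-18 trunk with its classification head
    removed, followed by fully-connected layers of size 512 and 256.
    The 6-channel stem (reference + contact image stacked along the
    channel dimension) and the 3-unit output are our assumptions."""
    def __init__(self):
        super().__init__()
        trunk = resnet18(weights=None)  # older torchvision: pretrained=False
        trunk.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2,
                                padding=3, bias=False)
        trunk.fc = nn.Identity()  # remove the top layer
        self.trunk = trunk
        self.head = nn.Sequential(
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 3),  # estimated contact position (x, y, z)
        )

    def forward(self, x):  # x: (B, 6, 224, 224)
        return self.head(self.trunk(x))

def l_sp(f, img_a, img_b):
    # Eq. (4): squared distance between position estimates, batch-averaged.
    return ((f(img_a) - f(img_b)) ** 2).sum(dim=-1).mean()

def spatial_consistency(f, I_s, I_r, G, F):
    # Eq. (5); f is the trained position estimator, kept frozen during
    # GAN training (distillation). Cycled terms are halved for their
    # dual comparison against the original and transferred images.
    G_s, F_r = G(I_s), F(I_r)
    FG_s, GF_r = F(G_s), G(F_r)
    return (l_sp(f, I_s, G_s)
            + 0.5 * l_sp(f, I_s, FG_s) + 0.5 * l_sp(f, G_s, FG_s)
            + l_sp(f, I_r, F_r)
            + 0.5 * l_sp(f, I_r, GF_r) + 0.5 * l_sp(f, F_r, GF_r))
```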
#### III-C2 Pixel-wise Contact Region Consistency loss

To further augment the accuracy of contact localization in domain transfer and enhance structural fidelity, we introduce a loss related to the contact region. Minor changes in contact pixels can lead to significant alterations in the contact domain, potentially causing errors in contact estimation with \(f_{\theta}\). For either simulated or real training images, the contact position \(\mathbf{p}\) is labeled as described in Section II. For each image \(\mathbf{I}\), a binary image \(\mathbf{B}\) is defined where a mask is placed on the contact region of the image. In practice, pixels in and out of the contact region are marked with zeros and ones, respectively, in \(\mathbf{B}\). The contact loss between an image and its transfer is, therefore, defined by

\[\mathcal{L}_{\text{m}}(\mathbf{I},\mathbf{B},H)=\|\mathbf{I}\ast\mathbf{B}-H(\mathbf{I})\ast\mathbf{B}\|_{1} \tag{6}\]

where \(\ast\) denotes pixel-wise multiplication and \(H\) is either \(F\) or \(G\) for real or simulated images, respectively. Hence, the pixel-wise contact region consistency loss is defined as

\[\mathcal{L}_{\text{mask}}(\mathbf{I}_{s},\mathbf{I}_{r},F,G)=\mathcal{L}_{\text{m}}(\mathbf{I}_{s},\mathbf{B}_{s},G)+\mathcal{L}_{\text{m}}(\mathbf{I}_{r},\mathbf{B}_{r},F). \tag{7}\]

Note that additional loss components in (7), analogous to those in \(\mathcal{L}_{\text{spatial}}\), did not enhance performance in a preliminary analysis and are therefore not included. The above two auxiliary losses are applied across batches of simulated and real images with the sim-to-real \(G\) and real-to-sim \(F\) generators. Hence, the overall auxiliary loss is defined by the sum of both:

\[\mathcal{L}_{c}(\mathbf{I}_{s},\mathbf{I}_{r},F,G)=\lambda_{\text{spatial}}\mathcal{L}_{\text{spatial}}(\mathbf{I}_{s},\mathbf{I}_{r},F,G)+\lambda_{\text{mask}}\mathcal{L}_{\text{mask}}(\mathbf{I}_{s},\mathbf{I}_{r},F,G) \tag{8}\]

where \(\lambda_{\text{spatial}}\) and \(\lambda_{\text{mask}}\) are weight parameters. Combining (3) with (8), the complete SightGAN loss is

\[\mathcal{L}_{\text{SightGAN}}(G,F,D_{s},D_{r})=\mathcal{L}_{\text{GAN}}(G,D_{r},\mathcal{I}_{s},\mathcal{I}_{r})+\mathcal{L}_{\text{GAN}}(F,D_{s},\mathcal{I}_{r},\mathcal{I}_{s})+\lambda_{\text{cycle}}\mathcal{L}_{\text{cycle}}(F,G)+\mathcal{L}_{c}(\mathbf{I}_{s},\mathbf{I}_{r},F,G). \tag{9}\]

Once the SightGAN model has been trained, a simulator is available. In order to generate real-like images for a new sensor, one only needs to reintegrate the generated foreground image output by the sim-to-real generator with a real reference image of the sensor. Hence, the generated images are expected to provide zero-shot contact inference.

## IV Experiments

Experiments are presented in this section in order to analyze the performance of SightGAN. While prior work often uses one specific sensor, we collect a training dataset \(\mathcal{P}_{real}\) over six different AllSight sensors. The dataset is collected in an automated process where each AllSight sensor is mounted on a fixed frame. An indenter with a round tip of radius 3 mm is mounted on a robotic arm. The arm is also equipped with a Robotiq FT-300 Force/Torque (F/T) sensor for labeling contact forces. For each sample, the robot selects a point to press on the surface of the sensor and computes its location through its kinematics. An image taken during the press is labeled with its position.
The collection yielded 1,000 labeled images for each sensor and a total of \(N=6,000\) labeled images in \(\mathcal{P}_{real}\). In addition, we aim to analyze the generalization abilities of the trained model for new sensors. Hence, a test set is collected over two new sensors not included in the training. The test set is comprised of 2,000 labeled images. Results are presented with white LED illumination for all sensors, while RGB illumination was tested in preliminary experiments and yielded similar results. Also, for each image, the reference image is subtracted as described in Section II-B. For \(\mathcal{P}_{sim}\), data is collected in TACTO as described in Section II-C. \(\mathcal{P}_{sim}\) consists of simulated tactile images and their associated contact positions, featuring a 3 mm radius spherical indenter at varying penetration depths. To fine-tune the simulation, we utilized reference images obtained from six different sensors. The collection yielded 1,000 labeled images for each sensor and a total of \(M=6,000\) labeled images in \(\mathcal{P}_{sim}\).

### _Contact position evaluation_

Table I summarizes the Root-Mean-Square Errors (RMSE) for position estimation with model \(f_{\theta}\) trained with different origins of training data. All models were evaluated on distinct data from the two test sensors. The results include the lower accuracy bound of directly training with real data (different from the test data) from the two test sensors. Also, we include the accuracy when training \(f_{\theta}\) directly with \(\mathcal{P}_{real}\). Next, model \(f_{\theta}\) is trained with data \(\mathcal{P}_{sim}\) generated in the simulation without any GAN and while using the reference images of the six training sensors. Using only simulated data provides poor accuracy, showing that the simulation, even with real reference images, is far from representing reality. Then, \(f_{\theta}\) trained with data generated by CycleGAN alone (trained with \(\lambda_{\text{cycle}}=10\)) without additional losses is evaluated. The error with only CycleGAN is the highest due to its inability to focus on and reconstruct the contact. This is a known issue when most of the image is not under contact and can even lead to mode collapse [28]. Adding either the spatial contact consistency loss (5) with \(\lambda_{\text{spatial}}=0.1\) or the pixel-wise contact region consistency loss (7) with \(\lambda_{\text{mask}}=30\) is shown to significantly reduce the error. SightGAN, which combines CycleGAN with both losses, provides the lowest error among the generative models. Hence, SightGAN generates real-like images from simulated ones and enables zero-shot position estimation of contacts. We now evaluate the accuracy that the sim-to-real generator of SightGAN provides for images generated from simulation, with respect to diversity in the training data. Model \(f_{\theta}\) is trained over synthetic images generated by the sim-to-real generator of SightGAN. Figure 4 presents the position estimation RMSE of \(f_{\theta}\) over the test sensors with regard to the number of real train sensors used to train SightGAN. The addition of more sensors in the training set increases diversity and decreases the estimation error. Hence, more train sensors improve the zero-shot capability over new real sensors. While zero-shot provides relatively accurate predictions, real data from the target sensors can improve accuracy.
Figure 5 shows the error of position estimation over the test data of the two new sensors with regard to the number of new samples used to fine-tune the model. The addition of a small amount of new samples for fine-tuning further improves accuracy. With 300 additional samples, the position RMSE reaches approximately 1 mm. Next, we evaluate the generalizability of a model, trained on samples with round indenters, to estimate contacts of other geometries. Test data of 2,000 labeled images was collected from one test sensor with square (edge length 6 mm) and elliptical (axis lengths 8 mm and 4 mm) indenters. Table II presents the RMSE results for contact position estimation on the test data. The first row shows a baseline for training and testing model \(f_{\theta}\) with the new test indenters. All other results in the table are based on training data of round indenters and evaluation on the test indenters. While the second and third rows are estimation models trained on real data, the rows below are based on estimation models trained with simulated images taken from the simulator while using similar square and elliptical indenters. The simulated images are passed through the respective sim-to-real model and used to train \(f_{\theta}\). Here also, SightGAN exhibits the lowest error over all sim-to-real generated image origins. Furthermore, the results show that SightGAN generalizes well and exhibits similar accuracy to directly training with real data. Hence, synthetic data from the sim-to-real generator of SightGAN generalizes well and can be used to train zero-shot estimation models.

TABLE I: Estimation accuracy of contact positions over data from two test sensors with regard to the origin of the train data for \(f_{\theta}\)

| Origin of training data for \(f_{\theta}\) | Position RMSE (mm) |
| --- | --- |
| Data from the 2 test sensors | 0.66 |
| Data from 6 train sensors | 2.16 |
| 6 sensors from simulation | 7.48 |
| CycleGAN | 13.30 |
| CycleGAN + spatial contact loss (5) | 3.86 |
| CycleGAN + pixel-wise contact loss (7) | 3.70 |
| SightGAN | 3.49 |

Fig. 4: Position estimation error of \(f_{\theta}\) trained with synthetic data from SightGAN over the test sensors with regard to the number of train sensors used to train SightGAN.

Fig. 5: Position estimation error with regard to the number of real images from the test sensor used to fine-tune model \(f_{\theta}\). Results with zero new tactile images are the zero-shot transfer errors without any fine-tuning.

### _Sim-to-real quality_

In order to evaluate the sim-to-real generator of SightGAN for generating realistic tactile images, we consider the Frechet Inception Distance (FID) [32] and Kernel Inception Distance (KID) [33] metrics. FID and KID are common pixel-level metrics of domain adaptation and quantify the quality and diversity of generated images. Furthermore, they do not require paired images. Table III exhibits comparative FID and KID results for direct mapping between simulated and real images, for CycleGAN and for SightGAN. Direct comparison between simulated and real images is obviously poor and emphasizes the reality gap. CycleGAN provides some improvement. Nevertheless, SightGAN is shown to be superior to CycleGAN with approximately 47% and 16% quality improvements for FID and KID, respectively.
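As a concrete illustration of how such metrics can be computed, below is a minimal sketch using the `torchmetrics` package. This is not the authors' evaluation code; the random tensors merely stand in for batches of real and sim-to-real generated images, and the `feature` and `subset_size` settings are illustrative choices.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance

# Real and generated (sim-to-real) images as uint8 tensors of shape (N, 3, H, W).
real_imgs = torch.randint(0, 255, (100, 3, 224, 224), dtype=torch.uint8)
fake_imgs = torch.randint(0, 255, (100, 3, 224, 224), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)
kid = KernelInceptionDistance(subset_size=50)

# Both metrics accumulate Inception statistics over real and generated batches;
# no paired correspondence between the two sets is required.
fid.update(real_imgs, real=True)
fid.update(fake_imgs, real=False)
kid.update(real_imgs, real=True)
kid.update(fake_imgs, real=False)

print("FID:", fid.compute().item())
kid_mean, kid_std = kid.compute()
print("KID:", kid_mean.item(), "+/-", kid_std.item())
```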
### _Contact force evaluation_

Common tactile simulations provide force readings that are proportional to the deformation of the contact pad [14, 34]. In reality, however, modeling the deflection of the contact pad is nearly impossible and the reaction forces are more complex [20, 35]. Hence, we further assess the potential of using SightGAN to provide accurate force estimations for the real-like simulated tactile images. A contact force estimation model \(g_{\psi}:\mathcal{I}\rightarrow\mathbb{R}^{3}\) is trained to approximate the corresponding contact force \(\mathbf{f}=[f_{x},f_{y},f_{z}]^{T}\in\mathbb{R}^{3}\). Vector \(\psi\) denotes the trainable model parameters. A real dataset was collected with the F/T sensor by labeling tactile images of the test sensors with force magnitudes during contact. The dataset includes a wide range of contact force magnitudes within the ranges \(f_{z}\in[0\,\mathrm{N},12\,\mathrm{N}]\) and \(f_{x},f_{y}\in[-2\,\mathrm{N},2\,\mathrm{N}]\), where \(f_{z}\) is the normal force and \(f_{x},f_{y}\) are the forces tangential to the contact surface at \(\mathbf{p}\). Table IV presents a comparison of force estimation accuracy over several origins of train data for \(g_{\psi}\). For a baseline, model \(g_{\psi}\) is trained on data from the six train sensors and evaluated on different data from the same sensors. Similarly, the model was trained on data from the two test sensors and evaluated on different data from the same sensors. Obviously, having more diversity originating from more sensors provides better generalizability and lower errors. Next, SightGAN and \(g_{\psi}\) are trained on data from the six training sensors. Then, force accuracy is evaluated by passing new test data, either from the six train sensors or the two test sensors, through the real-to-sim and sim-to-real generators. The reconstructed images are then evaluated with model \(g_{\psi}\). The results with SightGAN show relatively similar accuracy to the baseline errors. Evidently, SightGAN maintains the force information embedded within the original images and reconstructs it. Potentially, simulated images from the simulator upon contact could be mapped with the sim-to-real generator to the real domain and evaluated with \(g_{\psi}\). Hence, the simulator would be able to provide a comprehensive contact state in various applications. However, such an approach requires further research and evaluations. We leave this for future work.

## V Conclusions

In this paper, a novel generative model termed SightGAN was proposed for augmenting tactile simulators with real-like capabilities. The model is based on a bidirectional GAN and can be used to simulate 3D round tactile sensors. Using unpaired real and simulated data, SightGAN is trained to provide sim-to-real and real-to-sim mappings while minimizing the cycle consistency loss of CycleGAN and two additional loss components. The two loss components aim to minimize contact localization errors in the reconstruction of the image through distillation with a trained contact position estimator. Such an approach augments a simulator with real-like performance and enables zero-shot capabilities for models trained on the simulated sensors and evaluated on real ones. Indeed, using real data to train a contact position estimator provides similar or better accuracy than doing so with synthetic data generated by SightGAN. However, the proposed model provides a framework for augmenting a simulator with real-like capabilities and can ease tedious policy learning processes.
SightGAN was shown to embed contact force data within the generated images. Hence, future work could address the calibration of simulated linear forces to real-world contact forces. It may also investigate the integration of SightGAN-based policy learning into robotic in-hand manipulation, addressing the complex real-world challenges and constraints of online learning.
2309.07175
MELAGE: A purely python based Neuroimaging software (Neonatal)
MELAGE, a pioneering Python-based neuroimaging software, emerges as a versatile tool for the visualization, processing, and analysis of medical images. Initially conceived to address the unique challenges of processing 3D ultrasound and MRI brain images during the neonatal period, MELAGE exhibits remarkable adaptability, extending its utility to the domain of adult human brain imaging. At its core, MELAGE features a semi-automatic brain extraction tool empowered by a deep learning module, ensuring precise and efficient brain structure extraction from MRI and 3D Ultrasound data. Moreover, MELAGE offers a comprehensive suite of features, encompassing dynamic 3D visualization, accurate measurements, and interactive image segmentation. This transformative software holds immense promise for researchers and clinicians, offering streamlined image analysis, seamless integration with deep learning algorithms, and broad applicability in the realm of medical imaging.
Bahram Jafrasteh, Simón Pedro Lubián López, Isabel Benavente Fernández
2023-09-12T19:54:35Z
http://arxiv.org/abs/2309.07175v1
# MELAGE: A purely python based Neuroimaging software (Neonatal)

###### Abstract

MELAGE, a pioneering Python-based neuroimaging software, emerges as a versatile tool for the visualization, processing, and analysis of medical images. Initially conceived to address the unique challenges of processing 3D ultrasound and MRI brain images during the neonatal period, MELAGE exhibits remarkable adaptability, extending its utility to the domain of adult human brain imaging. At its core, MELAGE features a semi-automatic brain extraction tool empowered by a deep learning module, ensuring precise and efficient brain structure extraction from MRI and 3D Ultrasound data. Moreover, MELAGE offers a comprehensive suite of features, encompassing dynamic 3D visualization, accurate measurements, and interactive image segmentation. This transformative software holds immense promise for researchers and clinicians, offering streamlined image analysis, seamless integration with deep learning algorithms, and broad applicability in the realm of medical imaging.

keywords: Software, Neuroimaging, MELAGE, OpenGL, Python, Neonatal, Medical Imaging

## 1 Introduction

Every year, approximately 15 million infants are born prematurely (before 37 weeks of gestation) worldwide [1]. In Europe alone, this accounts for around 500,000 infants annually, constituting approximately 10% of all newborns [2]. The foremost challenge in caring for very low birth weight premature infants today is to optimize their neurodevelopmental outcomes while reducing the risk of future disabilities. To achieve this, early detection of brain injuries is crucial. This emphasis on early diagnosis is paramount, given that clinical examinations are complicated by the underdeveloped neurological function in neonates [3]. The development of imaging biomarkers during the neonatal period can address this pressing need for early diagnosis. Traditional clinical assessments are hindered by the limited neurological function in newborns. These biomarkers hold the potential to expedite the creation of neuroprotective strategies and the implementation of evidence-based care programs tailored to enhance brain development and promote neuroplasticity. Neuroimaging methods, notably magnetic resonance imaging (MRI) and three-dimensional ultrasonography (US), prove to be powerful diagnostic instruments in neonatal medicine. These techniques not only aid in the identification of brain injuries but also provide the means to explore diverse parameters that hold the potential to function as predictive markers for the outcomes of preterm infants. Furthermore, MRI imaging, including T1-weighted and T2-weighted imaging, finds utility in the analysis of the adult human brain as well. These methods enable the examination of various regions of interest (ROI) within the brain and the extraction of both simple and intricate parameters, including geometric characteristics. Two primary approaches are commonly employed:

* Automated image segmentation methods, often utilizing deep learning or artificial intelligence techniques [4, 5]. These methods offer automation and speed, but they may struggle to generalize effectively when applied to previously unseen images. Consequently, they typically require validation by domain experts.
* Manual image segmentation, which entails a skilled expert segmenting images slice by slice. Several general-purpose software tools are available for manual image segmentation, such as 3D Slicer [6] and ITK-SNAP [7].
Fully manual image segmentation is a time-intensive process that demands the expertise of medical professionals. While it is widely regarded as the gold standard, some studies have reported that automatic segmentation methods can achieve comparable or superior accuracy compared to manual segmentation by an expert [8]. However, it is worth noting that automatic segmentation often necessitates oversight and adjustments by an expert, emphasizing the importance of close collaboration between engineers and medical professionals to advance these methods. Designing a software interface that balances user-friendliness for routine clinical use with advanced features for both manual and automatic segmentation remains a significant challenge. Most clinicians are hesitant to adopt non-intuitive and complex software solutions. To address these challenges, we introduce MELAGE, a novel software platform for semi-manual segmentation, visualization, and correction. MELAGE is built entirely in Python, offering seamless integration with other Python libraries to facilitate automated segmentation while allowing for subsequent manual adjustments. In this paper, we provide a concise overview of the core functionality of the MELAGE image analysis tool. The paper is structured as follows: in Section 2, we highlight the capabilities of MELAGE, providing a brief discussion of the tool's key features as well as the advanced tools offered within MELAGE. Finally, in Section 4, we offer concluding remarks on MELAGE as a user-friendly and innovative neuroimaging tool for human brain analysis using MRI and ultrasound imaging.

## 2 Main features of MELAGE

### Basic

MELAGE has several options for image analysis.

#### 2.1.1 Image enhancement

At times, the presence of low-quality images, coupled with interobserver variability during manual segmentation, poses a significant challenge in achieving a dependable segmentation outcome. To mitigate this issue, we have incorporated image enhancement functionalities within our system. These capabilities encompass adjustments such as brightness and contrast modifications, the application of band-pass filters in the frequency domain, as well as Sobel edge detection [9] and Hamming filters [10]. These enhancements collectively contribute to the robustness and accuracy of the segmentation process using MELAGE. We have incorporated a widget that allows users to tune the brightness and contrast parameters. These adjustments take effect immediately, affording users the flexibility to fine-tune parameters to their specific requirements. Once the parameters are set, they are consistently applied across all slices and planes for the currently displayed image. In Figure 2, we illustrate image enhancement techniques applied to a slice taken from an ultrasound image.

Figure 1: MRI image from a preterm patient at PMA 34.5 weeks. Top image: original image; coronal, sagittal and axial planes are seen from left to right. Bottom image: enhanced image obtained by changing the image brightness and contrast.

Figure 2: Image enhancement in a coronal slice taken from a 3D image of a preterm neonate at HUPM.

#### 2.1.2 Mutual view

In many research and clinical scenarios, researchers or clinicians often need to discern and analyze the disparities between two images captured at different time points or multimodal images acquired from the same patient. For instance, this could involve comparing different regions of interest in an ultrasound image to an MRI scan.
Within our software (as illustrated in Figure 3(a)), we have seamlessly integrated this capability, empowering users to concurrently visualize and process two distinct images. This feature enhances the utility of MELAGE by enabling side-by-side examination and analysis of image pairs.

#### 2.1.3 Linking planes

Coronal, sagittal and axial planes of a 3D image can be shown at the same time in MELAGE. This ability helps the clinician to understand the exact localization of the ROI and is optimal for image segmentation, as it allows visualization of the other two orthogonal planes when segmenting an image in one plane. In this way, an expert can work in one plane while simultaneously checking the other planes to reduce possible mistakes in segmentation. Figure 4(a) shows the corresponding points in the coronal and sagittal planes when a point is touched in the axial plane, for both MRI and ultrasound images.

Figure 3: Top image: ultrasound image from a patient taken at PMA of 49 weeks and MRI image from the same patient.

#### 2.1.4 Color Schemes

Various atlases utilize distinct color schemes for highlighting regions of interest. To offer users greater flexibility, we have incorporated the feature to select a color scheme that aligns with their preferences. Additionally, MELAGE allows users to introduce new custom color schemes, enabling dynamic visualization of the segmentation results. In Figure 5, you can observe a visual comparison between two distinct color schemes applied to the same segmentation, showcasing the versatility of our system.

Figure 4: Linkage between planes; the star marks the location touched by the mouse in the axial plane.

#### 2.1.5 Instantaneous 3D visualization of the segmented ROIs

Enhanced visualization through 3D imaging provides a more comprehensive understanding of brain structures. As demonstrated in Figure 6, we present an illustrative example of segmented white matter in the coronal plane, accompanied by the corresponding 3D visualizations using consistent color coding. The real-time visualization of these segmented regions empowers users with direct control over their spatial representation, thereby enhancing segmentation precision and accuracy. This capability to interactively manipulate segmented structures in 3D space not only aids in refining the segmentation process but also promotes a deeper insight into the intricate anatomy of the brain.

Figure 5: An example of a segmented image with different color schemes.

#### 2.1.6 Image Rotation, Zooming and Panning

Users have the capability to perform three-dimensional image rotations along the axial, sagittal, and coronal planes. Figure 7 provides an illustrative example showcasing dynamic 3D image rotation across these planes. Subsequent to the desired rotation, the modified image can be effortlessly saved as a new file using the MELAGE platform. Additionally, MELAGE facilitates zooming and panning within the image, offering users the flexibility to segment images at their preferred zoom settings, thereby enhancing the precision and accuracy of the segmentation process.

Figure 6: Instantaneous 3D visualization of the segmented ROIs during image segmentation. Left: segmented left and right white matter. Right: 3D visualization of the segmented image.

Figure 7: An example of 3D image rotation in the axial, coronal and sagittal planes using the MELAGE platform.
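To illustrate how such a rotate-and-save operation can be reproduced outside the GUI, the following is a minimal Python sketch using `scipy.ndimage` and `nibabel`. It is not MELAGE's internal implementation: the file names are placeholders, and the mapping of array axes to anatomical planes is an assumption that depends on the volume's orientation.

```python
import numpy as np
import nibabel as nib
from scipy.ndimage import rotate

# Load a 3D volume (NIfTI is among the formats MELAGE supports).
img = nib.load("subject_t1.nii.gz")
vol = img.get_fdata()

# Rotate within one plane at a time; the 'axes' pair selects the plane of
# rotation. Which pair corresponds to axial/coronal/sagittal depends on
# the volume orientation, so the assignments below are assumptions.
rot_axial = rotate(vol, angle=15, axes=(0, 1), reshape=False, order=1)
rot_coronal = rotate(vol, angle=15, axes=(0, 2), reshape=False, order=1)
rot_sagittal = rotate(vol, angle=15, axes=(1, 2), reshape=False, order=1)

# Save the rotated volume as a new file, mirroring MELAGE's save-as-new option.
nib.save(nib.Nifti1Image(rot_axial.astype(np.float32), img.affine),
         "subject_t1_rotated.nii.gz")
```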
#### 2.1.7 Supported image formats

MELAGE showcases exceptional adaptability, providing seamless support for a multitude of image formats, including 3D NIfTI, DICOM, and NRRD. Furthermore, its versatility extends to the efficient management of 4D NIfTI images, making it an indispensable tool for comprehensive image analysis. Remarkably, MELAGE boasts the capacity to process ultrasound images, encompassing data originating from both infant and fetal sources. This expansive coverage ensures robust and reliable image analysis across a diverse spectrum of medical imaging scenarios. Additionally, MELAGE offers users the valuable functionality of accessing the metadata contained within each file directly within the platform.

#### 2.1.8 Measurements

MELAGE offers a comprehensive set of measurement tools for precise image analysis. Users can readily determine the distance between two points on an image and calculate the angle of the line connecting these reference points. During the segmentation process, MELAGE automatically computes both the area and perimeter of the segmented regions of interest (ROIs). These measurements are conveniently cataloged within the software, allowing users to save and export the data to facilitate further analysis and documentation.

Figure 8: (a) Angle and distance measurements of an ROI and (b) area and perimeter measurement of the segmented ROI.

#### 2.1.9 Other features

The entire project, encompassing segmentation, measurements, and associated parameters, can be conveniently saved and later retrieved for continued work. MELAGE offers the unique capability to simultaneously display and segment two distinct types of images, facilitating tasks such as image comparison and difference visualization (as demonstrated in Figure 9). Another noteworthy feature of MELAGE is its proficiency in properly rendering ultrasound (US) planes. Given that US images often include sagittal or coronal planes, and 3D US scans can be acquired from similar orientations, MELAGE excels in accurately identifying and displaying images in their correct planes. Furthermore, MELAGE empowers users with the ability to visualize image histograms and manipulate image coordinate systems to select the most suitable ones. Notably, it integrates the N4 bias field correction from the SimpleITK package [11] to enhance image quality; a minimal usage sketch of this correction is given below. MELAGE also offers advanced masking functionalities, allowing users to mask specific regions of an image in relation to segmentation. This includes operations like summation and subtraction of segmentation colors, enabling comprehensive image manipulation.

### Advanced features

The following advanced features are also provided within MELAGE.

Figure 9: Showing and segmenting two images at the same time.

#### 2.2.1 Interactive image segmentation

One of the critical aspects of manual segmentation or correction of automated segmentation is the provision of an interactive tool for image segmentation. This tool should effectively assist users in selecting the precise locations for segmentation. In response to the specific requirements of medical professionals from the "Perinatal brain damage" research group at the Biomedical Research and Innovation Institute of Cadiz (INiBICA) and the Puerta del Mar University Hospital, Cadiz, Spain, we have designed an easily navigable menu bar within MELAGE. This menu bar incorporates tools tailored to the needs of medical doctors.
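As referenced in Section 2.1.9, the following is a minimal sketch of applying SimpleITK's N4 bias field correction to a loaded volume. It shows standard SimpleITK usage rather than MELAGE's internal code; the file names and the Otsu-based mask are illustrative assumptions.

```python
import SimpleITK as sitk

# Read the volume as float, which the N4 filter requires.
image = sitk.ReadImage("subject_t1.nii.gz", sitk.sitkFloat32)

# A rough head mask (here via Otsu thresholding) restricts the correction
# to tissue voxels; a precomputed brain mask could be used instead.
mask = sitk.OtsuThreshold(image, 0, 1, 200)

corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrected = corrector.Execute(image, mask)

sitk.WriteImage(corrected, "subject_t1_n4.nii.gz")
```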
We have implemented six distinct tools: three for image segmentation, one for removing unwanted segmentation, and two for applying segmentation to subsequent slices (refer to Figure 10). MELAGE incorporates several advanced capabilities, each designed to enhance the process of image segmentation and analysis:

* **Polygon Construction and Filling:** One of the fundamental features in MELAGE is the ability to construct polygons using a set of user-selected points, denoted as \(\mathbf{X}\). These interconnected points collectively form a polygon, referred to as \(P\). Once the polygon is defined, MELAGE offers a straightforward method for filling it with a user-defined color. The process begins by creating a bounding box, represented as \(B\), around the selected points \(\mathbf{X}\). Subsequently, all the points located within this polygon are marked and filled with the chosen color. This intuitive feature streamlines the segmentation process, enabling users to precisely highlight and annotate regions of interest.
* **Interpolation Between Slices:** MELAGE offers a valuable capability for scenarios where two or more slices are segmented from the same image plane. To ensure a seamless transition between these segmented slices, MELAGE calculates the exact Euclidean distance for each pixel within every segmented slice. The Euclidean distance \(y\) is computed as follows:

\[y=\sqrt{\sum_{i=1}^{N}(x_{i}-b_{i})^{2}} \tag{1}\]

Here, \(x_{i}\) denotes the coordinates of each pixel, and \(b_{i}\) signifies a baseline point. With precise Euclidean distances determined, MELAGE employs the advanced trilinear interpolation method [12] to seamlessly bridge the gap between two segmented slices. This interpolation technique harnesses neighboring points, as illustrated in Figure 12, to estimate the values of points situated between the two segmented slices. By doing so, we ensure a continuous and smooth representation of the segmented points. This capability is particularly valuable for maintaining visual consistency and coherence when working with multiple segmented slices from the same image plane.

These innovative features collectively position MELAGE as a versatile and powerful tool for image segmentation and analysis. Whether users are creating and filling polygons or seamlessly interpolating between image slices, MELAGE empowers them with a suite of tools to enhance precision and efficiency in their image processing endeavors. In the realm of medical image segmentation, there are instances where the task at hand necessitates the precise delineation of regions within an image characterized by uniform intensity levels. To address this specific requirement, we introduce a dedicated tool inside MELAGE to streamline this process within regions of interest (ROI). This tool operates by defining a circular ROI centered at a user-specified point, with the radius denoted as \(R\). Importantly, users have the flexibility to adjust this radius (\(R\)) according to their specific needs and preferences. The pixel selection process within this circular ROI is governed by a set of predefined conditions, ensuring that only pixels meeting these criteria are included in the segmentation. Specifically, the following condition guides the selection:

\[M_{c}-Std_{l}<S<M_{c}+Std_{l} \tag{2}\]

where \(S\) is the intensity of a candidate pixel and \(M_{c}\) is the intensity at the user-specified center point. This central reference point serves as the baseline for assessing intensity uniformity within the defined circular ROI.
\(Std_{l}\) represents the standard deviation of intensity values among voxels situated within the circular region. This standard deviation metric plays a pivotal role in evaluating the extent of intensity variation observed within the specified ROI. By adhering to these defined parameters, MELAGE effectively identifies and selects pixels residing within the circular ROI that meet the specified intensity criteria. This segmentation approach facilitates the accurate isolation of regions characterized by consistent intensity levels, affording users a precise and adaptable means of defining areas of interest within medical images.

Figure 10: Segmentation toolbox of MELAGE.

Figure 11: Segmentation of thalamus in a T1 image using 1) Contour segmentation.

#### 2.2.2 Editing segmented ROIs

MELAGE offers users the ability to not only segment regions of interest but also edit and refine segmented ROIs as needed. Figure 14 serves as an illustrative example of the segmentation editing process. Prior to segmentation, users are required to select the appropriate label corresponding to the target region. In this particular instance, the objective is to modify the segmentation of the left cerebral white matter. Subsequently, the corresponding color, white, is designated (located at the top left of Figure 14). The subsequent step involves overwriting any extraneous gray matter with the designated white matter color (right portion of Figure 14). Notably, the edited region is prominently highlighted with a distinctive red circle for clarity.

Figure 12: Trilinear interpolation.

Figure 13: Segmentation of thalamus in a T1 image using 1) Contour segmentation.

#### 2.2.3 Automatic brain extraction

Within the framework of MELAGE, a dedicated tool has been developed to automate the segmentation of neonatal brain structures from MRI and 3D ultrasound images. The efficacy of this tool has undergone rigorous validation through an extensive dataset comprising images acquired from HUPM and several other independent centers. The underlying network architecture employed is U-Net, meticulously trained on a substantial dataset encompassing over 500 MRI images sourced from HUPM and an additional 150 ultrasound (US) images from the HUPM dataset. We have incorporated the code described in [13] to perform MRI brain extraction. Users have the flexibility to adjust the threshold and to review and refine the segmentation outcomes using MELAGE. Figure 15 illustrates a representative result obtained using the MELAGE software, showcasing the robustness and accuracy of the segmentation process. Further comprehensive details regarding the methodology and findings are slated for imminent publication, promising to contribute significantly to the field.

Figure 14: Example of editing segmentation.

## 3 Future perspective

It is easy to integrate MELAGE with deep learning algorithms and other artificial intelligence methods. We will add artificial intelligence methods for the automatic analysis of 3D MRI and ultrasound images. The automatic analysis of Diffusion Weighted Imaging (DWI) will be another capability of MELAGE in the future.

## 4 Conclusion

In summary, MELAGE represents a pioneering advancement in the field of neuroimaging, offering a purely Python-based software solution designed for the visualization, processing, and analysis of medical images.
Originally conceived for the specific purpose of processing 3D ultrasound and MRI brain images in neonates, MELAGE exhibits remarkable versatility, extending its capabilities to encompass adult human brain imaging. Central to MELAGE's functionality is its semi-automatic brain extraction tool, fortified by an embedded deep learning module adept at extracting intricate brain structures from MRI and 3D ultrasound images. Beyond this, MELAGE offers a comprehensive suite of features, including dynamic 3D visualization, precise measurements, and interactive image segmentation. What sets MELAGE apart is its innate adaptability and ease of integration with deep learning algorithms and other artificial intelligence methods. This adaptability not only positions MELAGE at the forefront of neuroimaging innovation but also makes it a potent tool for interdisciplinary research and clinical applications. Furthermore, MELAGE's utility extends beyond its original scope, as it can effectively process various other types of medical image data, underscoring its potential as a versatile resource for the broader medical imaging community. As we look to the future, MELAGE stands as a beacon of innovation, poised to catalyze new discoveries, streamline analysis, and foster collaborative advancements in the realm of medical imaging and beyond.

Figure 15: Example of automatic brain segmentation.

## Acknowledgement

This study was funded by the Cadiz integrated territorial initiative for biomedical research, European Regional Development Fund (ERDF) 2014-2020, Andalusian Ministry of Health and Families, Spain. Registration number: ITI-0019-2019. This study has been funded by Instituto de Salud Carlos III (ISCIII) through the project "DTS22/00142" and co-funded by the European Union. We extend our heartfelt gratitude to our collaborators in this project, including: Lionel Cervera Gontard, Pedro Luis Galindo Riano and Joaquin Pizarro Junquera. We also extend our appreciation to the numerous technicians, master's, and Ph.D. students who helped us in the development of MELAGE: Manuel Lubian Gutierrez, Ph.D. Student; Emiliano Trimarco, Ph.D. Student; Roa'a Khaled, Ph.D. Student; Monica Crotti, Ph.D. Student; Yolanda Marin Almagro, Ph.D. Student; Macarena Roman, Technician. Your invaluable contributions have significantly enriched this project.

## Code Availability

MELAGE is free software and the open-source code of the main body is available through GitHub1. For more information please see [https://melage.uca.es/](https://melage.uca.es/).

Footnote 1: [https://github.com/BahramJafrasteh/MELAGE](https://github.com/BahramJafrasteh/MELAGE)

## Registration

MELAGE has been registered2 under identifier 2211222681375.

Footnote 2: [https://www.safecreative.org/work/2211222681375-melage](https://www.safecreative.org/work/2211222681375-melage)
2309.17366
3D-Mol: A Novel Contrastive Learning Framework for Molecular Property Prediction with 3D Information
Molecular property prediction, crucial for early drug candidate screening and optimization, has seen advancements with deep learning-based methods. While deep learning-based methods have advanced considerably, they often fall short in fully leveraging 3D spatial information. Specifically, current molecular encoding techniques tend to inadequately extract spatial information, leading to ambiguous representations where a single one might represent multiple distinct molecules. Moreover, existing molecular modeling methods focus predominantly on the most stable 3D conformations, neglecting other viable conformations present in reality. To address these issues, we propose 3D-Mol, a novel approach designed for more accurate spatial structure representation. It deconstructs molecules into three hierarchical graphs to better extract geometric information. Additionally, 3D-Mol leverages contrastive learning for pretraining on 20 million unlabeled data, treating their conformations with identical topological structures as weighted positive pairs and contrasting ones as negatives, based on the similarity of their 3D conformation descriptors and fingerprints. We compare 3D-Mol with various state-of-the-art baselines on 7 benchmarks and demonstrate our outstanding performance.
Taojie Kuang, Yiming Ren, Zhixiang Ren
2023-09-28T10:05:37Z
http://arxiv.org/abs/2309.17366v3
3D-Mol: A Novel Contrastive Learning Framework for Molecular Property Prediction with 3D Information

###### Abstract

Molecular property prediction offers an effective and efficient approach for early screening and optimization of drug candidates. Although deep learning based methods have made notable progress, most existing works still do not fully utilize 3D spatial information. This can lead to a single molecular representation representing multiple actual molecules. To address these issues, we propose a novel 3D structure-based molecular modeling method named 3D-Mol. In order to accurately represent the complete spatial structure, we design a novel encoder to extract 3D features by deconstructing molecules into three geometric graphs. In addition, we use 20M unlabeled data to pretrain our model by contrastive learning. We consider conformations with the same topological structure as positive pairs and the opposites as negative pairs, while the weight is determined by the dissimilarity between the conformations. We compare 3D-Mol with various state-of-the-art baselines on 7 benchmarks and demonstrate our outstanding performance in 5 benchmarks.

Keywords: Deep learning, molecular representation, molecular property prediction, graph neural network, self-supervised learning, contrastive learning

## 1 Introduction

Molecular property prediction accelerates drug candidate identification, saving time and resources. It helps researchers prioritize promising compounds, streamlining drug development and increasing success rates. Moreover, it aids in understanding structure-activity relationships, revealing how specific features impact properties, interactions, and biological effects. Although deep learning has achieved success in molecular property prediction, its potential is significantly constrained by the scarcity of labeled data, because labeling data for molecular properties usually requires expensive and time-consuming experiments[1]. Self-supervised learning pretrains models on large amounts of unlabeled data to learn rich feature representations. Many works[2, 3, 4] improve their model performance through self-supervised learning. Early deep learning methods for molecular property prediction[5, 6, 7, 8, 9] effectively utilized NLP-based self-supervised learning methods to handle data represented by the SMILES molecular formula[10]. However, SMILES cannot fully reflect the topological relationships of a molecule. Therefore, many self-supervised methods based on molecular graphs have emerged, such as PretrainGNN[11], N-Gram-graph[12], MolCLR[13] and GROVER[14], which designed unique pretraining methods based on molecular graphs. Besides, many works use graph neural networks to capture the topological information of molecules, such as MPNN[15], AttentiveFP[16] and D-MPNN[17]. However, these methods do not deal with the 3D spatial information of molecules, which is critical for predicting molecular properties. As shown in Figure 1, Thalidomide exists in two forms, R-Thalidomide and S-Thalidomide, due to different 3D structures. The former can be used to treat skin diseases, while the latter has been implicated in teratogenesis.
Recently, integrating geometric information into graph structures has attracted research attention in molecular property estimation tasks[18, 19, 20, 21, 22, 23, 24, 25]. However, these methods either fail to fully extract the 3D information of molecules or only use masking methods for data augmentation in self-supervised learning, and they use only the most stable conformation while ignoring the others. To address these issues, we propose a novel method, 3D-Mol, for molecular property prediction. Firstly, we use an atom-bond graph, a bond-angle graph and a plane-angle graph to represent the spatial structural information of molecules, extracting the 3D spatial structure representation of molecules through the information transfer within these three graphs and their interactions. In the pretraining stage, we design a unique weighted contrastive learning method, which uses different 3D conformations of the same SMILES as weighted positive pairs, where the weight depends on the difference between those conformations. We also employ the geometry pretraining task following GearNet[26]. We learn 3D structural features of molecular representations from a large volume of unlabeled data, and then finetune the well-pretrained model according to downstream tasks and data to predict molecular properties. We compared our approach with several state-of-the-art baselines on 7 molecular property prediction benchmarks[27], where our method achieved the best results on 5 benchmarks. In summary, our main contributions are as follows:

\(\bullet\) We propose a novel molecular representation method and design a corresponding model to fully extract the **3D spatial structural features** of molecules.

\(\bullet\) We design a unique weighted contrastive learning task, using **different 3D conformations from the same SMILES as weighted positive pairs**, where the weight depends on the difference between the conformations, thereby deeply learning the microscopic characteristics of molecular 3D space.

\(\bullet\) We have conducted thorough evaluations of the 3D-Mol model on various molecular property prediction datasets. Experimental results show that **3D-Mol significantly outperforms existing competitive models** in multiple benchmark tests.

## 2 Related Work

In general, there are two strategies to improve molecular property prediction. One is to design a novel molecular encoder based on molecular information for effective latent vector extraction; the other is to design a novel pretraining method to pretrain the molecular encoder using a large amount of unlabeled data. The related works for each are discussed below.

### Molecular Representation and Encoder

Proposing a novel molecular representation and encoder method is usually the first option for researchers to improve the accuracy of molecular property prediction. Some early works learn representations from chemical fingerprints, such as ECFP[21] and MACCS[28], frequently used in early machine learning[21, 29, 30]. Others learn representations from molecular descriptors, such as SMILES[10]. Inspired by mature NLP models, SMILES-BERT[31] applies the BERT[2] strategy to pretrain on SMILES to extract molecular representations. However, these methods depend on feature engineering, failing to capture the complete topological structure of molecules. Recently, many works use the molecular graph as the molecular representation, because the molecular graph is the natural representation of a molecule and captures its topological information.
GG-NN[15], DMPNN[17], and DeepAtomicCharge[32] employed a message passing scheme for molecular property prediction. AttentiveFP[16] uses a graph attention network to aggregate and update node information. The MP-GNN[33] merges specific-scale Graph Neural Networks and element-specific Graph Neural Networks, capturing various atomic interactions of multiphysical representations at different scales. MGCN[34] designed a GCN to capture multilevel quantum interactions from the conformation and spatial information of molecules. However, these works focus on 2D molecular representations, and extracting only 2D topological information from molecules is insufficient. Many works[35] show the necessity of using the 3D spatial information of molecules.

Figure 1: Thalidomide exists in two distinct 3D stereoisomeric forms, known as R-Thalidomide and S-Thalidomide. The former is recognized for its therapeutic properties in the treatment of various skin conditions, but the latter has been implicated in teratogenesis. This shows that despite having identical 2D molecular topology, the properties of the two molecules vary significantly due to their distinct 3D structures.

Recently, some research has also begun modeling 3D molecules to address this issue. SGCN[25] applies different weights according to atomic distances during the GCN-based message passing process. SchNet[18] models complex atomic interactions using Gaussian radial basis functions for potential energy surface prediction or to accelerate the exploration of chemical space. DimeNet[22] proposes directional message passing to fully utilize directional information within molecules. GEM[23] develops a novel geometrically-enhanced molecular representation learning method and employs a specifically designed geometry-based graph neural network structure. However, these methods do not fully exploit the 3D structural information of molecules and lack the ability to learn the representations of 3D conformations with the same molecular topology.

### Self-supervised Learning on Molecules

Self-supervised learning has achieved enormous success with BERT[2] and GPT[36]. Inspired by these, numerous works for molecular property prediction use this approach to effectively utilize large amounts of unlabeled data for pretraining. For one-dimensional data, SMILES is frequently used to extract molecular features in the pretraining stage. ChemBERTa[9] followed RoBERTa[37] by employing masked language modeling (MLM) as a pretraining task, predicting masked tokens to restore the original sentence, which helps pretrained models understand sequence semantics. SMILES Transformer[38] used a SMILES string as input to produce a temporary embedding, which is then restored to the original input SMILES by a decoder. As the topological information of molecular graphs gains more attention, many pretraining methods aimed at graph data have been proposed. Liu et al.[12] used the n-gram method from NLP to extract and represent features of molecules. PretrainGNN[11] proposed a new pretraining strategy, including node-level and graph-level self-supervised pretraining tasks. GraphCL[39], MOCL[40] and MolCLR[13] performed molecular contrastive learning via graph neural networks and proposed new molecular augmentation methods. MPG[41] and GROVER[14] focused on node-level and graph-level representations and designed corresponding pretraining tasks at both levels.
iMolCLR[42], Sugar[43] and ReLMole[44] focused on molecular substructures and designed substructure-based pretraining tasks. However, the aforementioned pretraining strategies only target the topological information of the molecule. As the 3D information of molecules has been shown to aid molecular representation learning, recent works have focused on pretraining tasks for the 3D structural information of molecules. 3DGCN[45] introduced a relative position matrix that includes 3D positions between atoms to ensure translational invariance during convolution. GraphMVP[46] proposed an SSL method involving contrastive learning and generative learning between 3D and 2D molecular views. GEM[23] proposed a self-supervised framework using molecular geometric information; it constructs a new bond-angle graph, where the chemical bonds within a molecule are considered as nodes instead of edges, and the angle formed between two bonds is considered as the edge between them. Uni-Mol[47] employed a transformer to extract molecular representations by predicting interatomic distances. Although these works use the spatial information of molecules, they do not fully utilize it, nor do they enable the model to learn the representations of geometric isomers. To address these issues, we use RDKit to generate multiple conformations from the same topological structure as positive pairs and design a weighted contrastive learning task for self-supervised training.

## 3 Method

### Molecular Representation

We deconstruct each molecular conformation into three graphs, denoted as \(Mol=\{G_{a-b},G_{b-a},G_{p-a}\}\). In most databases, raw molecular data is represented by SMILES, so we use RDKit to transform the SMILES representation into molecular conformations from which the topological and spatial structural information is extracted. The first graph, named the atom-bond graph, is the commonly used 2D molecular graph and is represented as \(G_{a-b}=\{V,E,P_{atom},P_{bond}\}\), where \(V\) is the set of atoms and \(E\) is the set of bonds. \(P_{atom}\in R^{|V|\times d_{atom}}\) is the matrix of atom attributes, where \(d_{atom}\) is the number of atom attributes, and \(P_{bond}\in R^{|E|\times d_{bond}}\) is the matrix of bond attributes, where \(d_{bond}\) is the number of bond attributes. The second graph, named the bond-angle graph, is represented as \(G_{b-a}=\{E,P,Ang_{\theta}\}\), where \(P\) is the set of planes, each comprising 3 connected atoms, and \(Ang_{\theta}\) is the set of corresponding bond angles \(\theta\). The third graph, named the plane-angle graph, is represented as \(G_{p-a}=\{P,C,Ang_{\phi}\}\), where \(C\) is the set of pairs of planes connected by a shared bond and \(Ang_{\phi}\) is the set of corresponding dihedral angles \(\phi\). The first graph allows the model to learn the topological information of molecules, and the second and third graphs allow the model to learn the spatial structural information of molecules.
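As a concrete illustration, this decomposition can be assembled directly from an RDKit conformer. The following is a minimal sketch under our own naming (the function `build_graphs` and its plain-tuple outputs are illustrative, not the actual 3D-Mol implementation):

```python
# Sketch: decompose one RDKit conformer into the three graphs of Section 3.1.
from rdkit import Chem
from rdkit.Chem import AllChem, rdMolTransforms

def build_graphs(smiles: str):
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.EmbedMolecule(mol, AllChem.ETKDGv3())     # one 3D conformation (ETKDG)
    conf = mol.GetConformer()

    # G_{a-b}: atoms as nodes, bonds as edges, with bond lengths as geometry.
    edges, lengths = [], []
    for b in mol.GetBonds():
        i, j = b.GetBeginAtomIdx(), b.GetEndAtomIdx()
        edges.append((i, j))
        lengths.append(rdMolTransforms.GetBondLength(conf, i, j))

    # G_{b-a}: every pair of bonds sharing atom v defines a plane (u, v, w)
    # with a bond angle theta.
    angles = []
    for atom in mol.GetAtoms():
        v = atom.GetIdx()
        nbrs = [n.GetIdx() for n in atom.GetNeighbors()]
        for a in range(len(nbrs)):
            for c in range(a + 1, len(nbrs)):
                u, w = nbrs[a], nbrs[c]
                angles.append(((u, v, w),
                               rdMolTransforms.GetAngleRad(conf, u, v, w)))

    # G_{p-a}: two planes sharing bond (j, k) define a dihedral angle phi.
    dihedrals = []
    for b in mol.GetBonds():
        j, k = b.GetBeginAtomIdx(), b.GetEndAtomIdx()
        for i in (n.GetIdx() for n in mol.GetAtomWithIdx(j).GetNeighbors() if n.GetIdx() != k):
            for l in (n.GetIdx() for n in mol.GetAtomWithIdx(k).GetNeighbors() if n.GetIdx() != j):
                dihedrals.append(((i, j, k, l),
                                  rdMolTransforms.GetDihedralRad(conf, i, j, k, l)))
    return edges, lengths, angles, dihedrals
```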
### Attribute Embedding Layer

The 3D information of the molecule, such as the lengths of bonds and the angles between bonds, carries key chemical information. Firstly, we convert floating-point quantities, such as angles and bond lengths, to latent vectors. Referring to the previous work (Shui and Karypis 2020)[24], we employ several RBF layers to encode the different geometric factors: \[F_{l}^{k}=\exp(-\beta_{l}^{k}(\exp(-l)-\mu_{l}^{k})^{2})\cdot W_{l}^{k} \tag{1}\] where \(F_{l}^{k}\) is the \(k\)-dimensional feature of bond length \(l\), and \(\mu_{l}^{k}\) and \(\beta_{l}^{k}\) are the center and width for \(l\), respectively; \(\mu_{l}^{k}\) is \(0.1k\) and \(\beta_{l}^{k}\) is 10. Similarly, the \(k\)-dimensional features \(F_{\theta}^{k}\) and \(F_{\phi}^{k}\) of the bond angle \(\theta\) and the dihedral angle \(\phi\) are computed as: \[F_{\theta}^{k}=\exp(-\beta_{\theta}^{k}(\theta-\mu_{\theta}^{k})^{2})\cdot W_{\theta}^{k} \tag{2}\] \[F_{\phi}^{k}=\exp(-\beta_{\phi}^{k}(\phi-\mu_{\phi}^{k})^{2})\cdot W_{\phi}^{k} \tag{3}\] where \(\mu_{\theta}^{k}\) and \(\beta_{\theta}^{k}\) are the center and width for \(\theta\), and \(\mu_{\phi}^{k}\) and \(\beta_{\phi}^{k}\) are the center and width for \(\phi\). The centers of the bond angle and the dihedral angle, \(\mu_{\theta}^{k}\) and \(\mu_{\phi}^{k}\), are set to \(\pi/K\), where \(K\) is the number of feature dimensions. The other properties of atoms and bonds are represented by \(P_{atom}\) and \(P_{bond}\), respectively; inspired by NLP, we embed them with a word embedding function. The initial features of atoms and bonds are denoted \(F_{atom}^{0}\) and \(F_{bond}^{0}\), respectively.

Figure 2: **The overview of the 3D-Mol model framework.** a) In the pretraining stage, we employ weighted contrastive learning to effectively pretrain our model. In addition to using the mask strategy for graph data augmentation, we consider conformations stemming from the same topological structure as positive pairs, with their weight determined by the dissimilarity between the conformations. Conversely, distinct topological structures are treated as negative pairs, and we further utilize fingerprint differences to compute the weight of negative pairs. b) In the finetuning stage, we refine the well-pretrained encoder using diverse downstream datasets, followed by supervised learning.

### 3D-Mol Layer

Inspired by GEM[23], we use a message passing strategy to let nodes send and receive messages along edges in \(\{G_{a-b}^{i},G_{b-a}^{i},G_{p-a}^{i}\}\). In the \(i\)-th layer of 3D-Mol, the information of \(\{G_{a-b}^{i},G_{b-a}^{i},G_{p-a}^{i}\}\) is updated by message passing neural networks in order; the message passing of each later graph requires information from the former graph. The overview is shown in Figure 3, and the details are as follows. First, we use \(GNN_{a-b}^{i}\) to aggregate the messages and update the atom and bond latent vectors in \(G_{a-b}^{i}\). Given an atom \(v\), its representation vector \(F_{v}^{i}\) is formalized by: \[a_{v}^{i,a-b}=Agg_{a-b}^{(i)}(\{F_{v}^{i-1},F_{u}^{i-1},F_{uv}^{i-1}\mid u\in N(v)\}) \tag{4}\] \[F_{v}^{i}=Comb_{a-b,n}^{(k)}(F_{v}^{i-1},a_{v}^{i,a-b}) \tag{5}\] \[F_{uv}^{i,temp}=Comb_{a-b,e}^{(k)}(F_{uv}^{i-1},F_{u}^{i-1},F_{v}^{i-1}) \tag{6}\] where \(N(v)\) is the set of neighbors of atom \(v\) in \(G_{a-b}^{i}\), and \(Agg_{a-b}^{(i)}\) is the aggregation function for aggregating messages from an atom's neighborhood in \(G_{a-b}^{i}\). \(Comb_{a-b,n}^{(k)}\) is the update function for the atom latent vectors in \(G_{a-b}^{i}\), and \(Comb_{a-b,e}^{(k)}\) is the update function for the bond latent vectors in \(G_{a-b}^{i}\). \(a_{v}^{i,a-b}\) is the aggregated information from the neighboring atoms and the corresponding bonds in \(G_{a-b}^{i}\).
\(F_{uv}^{i,temp}\) is the temporary bond latent vector of bond \(uv\) in the \(i\)-th layer. Processing \(G_{a-b}^{i}\) with \(GNN_{a-b}^{i}\) lets the atom and bond features be updated with information from their neighbors, so the model learns the topological information of the molecule. \(F_{uv}^{i,temp}\) is part of the bond feature in \(G_{b-a}^{i}\). Then, we use \(GNN_{b-a}^{i}\) to aggregate the messages and update the bond and plane latent vectors in \(G_{b-a}^{i}\). Given a bond \(uv\), its latent vector \(F_{uv}^{i}\) is formalized by: \[a_{uv}^{i,b-a}=Agg_{b-a}^{(i)}(\{F_{uv}^{i-1},F_{vw}^{i-1},F_{uvw}^{i-1}\mid u\in N(v),\,w\in N(v),\,u\neq w\}) \tag{7}\] \[F_{uv}^{i}=Comb_{b-a,n}^{(k)}(F_{uv}^{i-1},F_{uv}^{i,temp},a_{uv}^{i,b-a}) \tag{8}\] \[F_{uvw}^{i,temp}=Comb_{b-a,e}^{(k)}(F_{uvw}^{i-1},F_{uv}^{i-1},F_{vw}^{i-1}) \tag{9}\] where \(Agg^{(i)}_{b-a}\) is the aggregation function for aggregating messages from a bond's neighborhood in \(G^{i}_{b-a}\), \(Comb^{(k)}_{b-a,n}\) is the update function for the bond latent vectors in \(G^{i}_{b-a}\), and \(Comb^{(k)}_{b-a,e}\) is the update function for the plane latent vectors in \(G^{i}_{b-a}\). \(a^{i,b-a}_{uv}\) is the aggregated information from the neighboring bonds and the corresponding bond angles. Processing \(G^{i}_{b-a}\) with \(GNN^{i}_{b-a}\) lets the bond and bond angle features be updated with information from their neighbors, so the model learns the 3D information of the molecule. \(F^{i,temp}_{uvw}\) is part of the plane feature in \(G^{i}_{p-a}\). After processing \(G^{i}_{b-a}\), we use \(GNN^{i}_{p-a}\) to aggregate the messages and update the plane latent vectors in \(G^{i}_{p-a}\). Given a plane constructed by nodes \(u\), \(v\), \(w\) and bonds \(uv\), \(vw\), its latent vector \(F^{i}_{uvw}\) is formalized by: \[a^{i,p-a}_{uvw}=Agg^{(i)}_{p-a}(\{F^{i-1}_{uvw},F^{i-1}_{vwh},F^{i-1}_{uvwh}\mid h\in N(w),\,h\neq v,\,u\neq v\neq w\neq h\}) \tag{10}\] \[F^{i}_{uvw}=Comb^{(k)}_{p-a,n}(F^{i-1}_{uvw},F^{i,temp}_{uvw},a^{i,p-a}_{uvw}) \tag{11}\] \[F^{i}_{uvwh}=Comb^{(k)}_{p-a,e}(F^{i-1}_{uvwh},F^{i-1}_{uvw},F^{i-1}_{vwh}) \tag{12}\] where \(Agg^{(i)}_{p-a}\) is the aggregation function for aggregating messages from a plane's neighborhood in \(G^{i}_{p-a}\), \(Comb^{(k)}_{p-a,n}\) is the update function for the plane latent vectors in \(G^{i}_{p-a}\), and \(Comb^{(k)}_{p-a,e}\) is the update function for the dihedral angle latent vectors in \(G^{i}_{p-a}\). \(a^{i,p-a}_{uvw}\) is the aggregated information from the neighboring planes and the corresponding dihedral angles. Processing \(G^{i}_{p-a}\) with \(GNN^{i}_{p-a}\) lets the plane and dihedral angle features be updated with information from their neighbors, so the model learns the 3D information of the molecule and models the interactions between bonds.

Figure 3: **Overview of the 3D-Mol encoder layer.** The 3D-Mol encoder layer comprises three steps. Firstly, employing a message passing strategy, nodes in each graph exchange messages with their connected edges, leading to the updating of edge and node latent vectors. Secondly, the edge latent vector from the lower-level graph is transmitted to the higher-level graph as part of the node latent vector. Finally, the iteration is performed n times to derive the \(n_{th}\) node latent vector, from which we extract the molecular latent vectors.
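To make the three-level scheme concrete, the following PyTorch sketch implements the first two steps of one encoder layer (Eqs. 4-9) with the simplest possible choices: sum aggregation and linear update functions, with edge lists assumed to contain both directions of each bond. The plane-angle step (Eqs. 10-12) is analogous and omitted. This is one possible instantiation, not the actual 3D-Mol code:

```python
# Minimal sketch of one 3D-Mol encoder layer (atom-bond and bond-angle steps).
import torch
import torch.nn as nn

class ThreeLevelLayer(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.comb_atom   = nn.Linear(2 * d, d)   # Comb_{a-b,n}, Eq. (5)
        self.comb_bond_t = nn.Linear(3 * d, d)   # Comb_{a-b,e}, Eq. (6)
        self.comb_bond   = nn.Linear(3 * d, d)   # Comb_{b-a,n}, Eq. (8)

    def forward(self, f_atom, f_bond, bond_idx, angle_idx):
        # Step 1, atom-bond graph: atom v aggregates (F_u, F_uv) over u in N(v).
        u, v = bond_idx                                       # each of shape [E]
        agg = torch.zeros_like(f_atom).index_add_(0, v, f_atom[u] + f_bond)
        f_bond_tmp = torch.relu(self.comb_bond_t(             # Eq. (6), F^{i,temp}_{uv}
            torch.cat([f_bond, f_atom[u], f_atom[v]], -1)))   # uses F^{i-1} atoms
        f_atom = torch.relu(self.comb_atom(torch.cat([f_atom, agg], -1)))  # Eq. (5)
        # Step 2, bond-angle graph: bond e1 aggregates the neighboring bonds e2
        # that share an atom with it; angle_idx holds the (e1, e2) pairs.
        e1, e2 = angle_idx
        agg_b = torch.zeros_like(f_bond).index_add_(0, e1, f_bond[e2])
        f_bond = torch.relu(self.comb_bond(                   # Eq. (8)
            torch.cat([f_bond, f_bond_tmp, agg_b], -1)))
        return f_atom, f_bond
```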
The representation vectors of the atoms at the final iteration are integrated to obtain the molecular representation vector \(F_{mol}\) by the Readout function, which is formalized as: \[F_{mol}=Readout(\{F^{n}_{u}\mid u\in V\}) \tag{13}\] where \(F^{n}\) is the output of the last 3D-Mol layer. The molecular latent vector \(F_{mol}\), extracted from all atom latent vectors in the last layer, is used to predict molecular properties.

### Pretrain Strategy

To improve the performance of the 3D-Mol encoder, we employ contrastive learning as the pretraining method, using different conformations of the same topological structure as positive pairs. We also combine our pretraining method with the geometry tasks in [26] to pretrain 3D-Mol with a large amount of unlabeled data. The overview of our pretraining method is shown in Figure 2, and the details follow.

#### 3.4.1 Weighted contrastive learning task

Our objective is to facilitate the learning of the consistency and difference between the most stable molecular conformation, denoted \(Mconf_{i}\), and another randomly selected conformation, denoted \(Mconf_{j}\). To accomplish this, we employ weighted contrastive learning over a batch of molecular representations, with the loss function defined as follows: \[L^{conf}_{i,j}=-\log\frac{\exp(w^{conf}_{i,j}\,sim(F_{i},F^{mk}_{j})/\tau)}{\sum^{2N}_{k=1,k\neq i}\exp(w^{fp}_{i,k}\,sim(F_{i},F_{k})/\tau)} \tag{14}\] \[w^{conf}_{i,j}=1-\lambda_{conf}\cdot ConfSim(Mconf_{i},Mconf_{j}) \tag{15}\] \[w^{fp}_{i,k}=1-\lambda_{fp}\cdot FPSim(Mconf_{i},Mconf_{k}) \tag{16}\] where \(F_{i}\) is the latent vector extracted from \(Mconf_{i}\), and \(sim(F_{i},F_{j})\) is the similarity between two latent vectors \(F_{i}\) and \(F_{j}\), penalized by the weight coefficient \(w^{conf}_{i,j}\). \(w^{conf}_{i,j}\) is computed from \(ConfSim(Mconf_{i},Mconf_{j})\), the conformational similarity between \(Mconf_{i}\) and \(Mconf_{j}\), which can be computed with RDKit. \(\lambda_{conf}\in[0,1]\) is the hyperparameter that determines the scale of the penalty for the difference between two conformations. Besides using different conformations as the positive pair, we also use node and subgraph masking as the molecular data augmentation strategy. We mask 15% of the nodes and the corresponding edges of \(Mconf_{j}\), and the latent vector of the masked conformation is denoted as \(F^{mk}_{j}\). Following iMolCLR[42], the similarity between two latent vectors \(F_{i}\), \(F_{k}\) from a negative molecule pair (\(Mconf_{i},Mconf_{k}\)) is penalized by a weight coefficient \(w^{fp}_{i,k}\), which is computed from the fingerprint similarity between \(Mconf_{i}\) and \(Mconf_{k}\). \(FPSim(Mconf_{i},Mconf_{k})\) evaluates the fingerprint similarity of the given two molecules, and \(\lambda_{fp}\in[0,1]\) is the hyperparameter that determines the scale of the penalty for faulty negatives.
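A direct implementation of Eqs. (14)-(16) can look as follows. This is a sketch with assumed inputs: `z1[i]` and `z2[i]` are the latent vectors of the stablest and of the masked random conformation of molecule \(i\), `conf_sim` holds the per-molecule conformational similarities of the positive pairs, and `fp_sim` the pairwise fingerprint similarities:

```python
# Sketch of the weighted NT-Xent-style loss in Eqs. (14)-(16).
import torch
import torch.nn.functional as F

def weighted_ntxent(z1, z2, conf_sim, fp_sim, tau=0.1,
                    lam_conf=0.5, lam_fp=0.5):
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)   # [2N, d]
    sim = z @ z.t()                                        # cosine similarities
    w_conf = 1.0 - lam_conf * conf_sim                     # Eq. (15), shape [N]
    w_fp = 1.0 - lam_fp * fp_sim                           # Eq. (16), shape [2N, 2N]
    logits = w_fp * sim / tau
    logits.fill_diagonal_(float('-inf'))                   # exclude the k = i terms
    denom = torch.logsumexp(logits, dim=1)                 # log of the denominator
    idx = torch.arange(2 * n)
    pos = idx.roll(n)                                      # positive of i is i + N
    num = torch.cat([w_conf, w_conf]) * sim[idx, pos] / tau
    return (denom - num).mean()                            # mean of -log(num/denom)
```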
#### 3.4.2 Geometry task

3D information has been shown to provide important features[22], so we employ geometry tasks as pretraining methods. For bond angle and dihedral angle prediction, we sample adjacent atoms to better capture local structural information. Since angular values are more sensitive to structural errors than distances, we use discretized values for prediction. \[L_{i,j}^{dist}=(f_{dist}(Fn_{n,i}^{mk},Fn_{n,j}^{mk})-dist_{i,j})^{2} \tag{17}\] \[L_{i,j}^{l}=(f_{l}(Fn_{n,i}^{mk},Fn_{n,j}^{mk})-l_{i,j})^{2} \tag{18}\] \[L_{i,j,k}^{\theta}=CE(f_{\theta}(Fn_{n,i}^{mk},Fn_{n,j}^{mk},Fn_{n,k}^{mk}),bin(\theta_{i,j,k})) \tag{19}\] \[L_{i,j,k,p}^{\phi}=CE(f_{\phi}(Fn_{n,i}^{mk},Fn_{n,j}^{mk},Fn_{n,k}^{mk},Fn_{n,p}^{mk}),bin(\phi_{i,j,k,p})) \tag{20}\] where \(f_{dist}(.)\), \(f_{l}(.)\), \(f_{\theta}(.)\) and \(f_{\phi}(.)\) are the MLPs for each task, and \(L_{i,j}^{dist}\), \(L_{i,j}^{l}\), \(L_{i,j,k}^{\theta}\) and \(L_{i,j,k,p}^{\phi}\) are the corresponding loss functions. \(CE(.)\) is the cross-entropy loss, and \(bin(.)\) is used to discretize the bond angle and the dihedral angle. \(Fn_{n,i}^{mk}\) is the latent vector of node \(i\) after masking the corresponding sampled items in each task. In addition to the aforementioned pretraining tasks, we leverage the masked molecular latent vector for fingerprint (FP) prediction to capture global molecular information: \[L_{i}^{FP}=BCE(f_{FP}(Fm^{mk}),FP_{i}) \tag{21}\] where \(f_{FP}\) is the MLP for the global geometric task, and \(L_{i}^{FP}\) is the corresponding loss function. \(BCE(.)\) is the binary cross-entropy loss. \(Fm^{mk}\) is the latent vector of the masked molecule.
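The corresponding heads can be sketched as below. Head sizes and the exact binning rule are our assumptions; only the number of angle domains (8, see Section 4.1.1) is fixed by the paper:

```python
# Sketch of the geometry pretraining heads of Eqs. (17)-(21): squared error for
# distances/bond lengths, cross entropy over discretized angle bins, BCE for FP bits.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

d, n_bins, n_fp_bits = 256, 8, 2048
f_len = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))
f_ang = nn.Sequential(nn.Linear(3 * d, d), nn.ReLU(), nn.Linear(d, n_bins))
f_fp  = nn.Linear(d, n_fp_bits)

def geometry_losses(h, pairs, triples, lengths, angles, mol_vec, fp_bits):
    # Eqs. (17)-(18): regress pairwise distance / bond length from node pairs.
    hp = torch.cat([h[pairs[:, 0]], h[pairs[:, 1]]], -1)
    loss_len = ((f_len(hp).squeeze(-1) - lengths) ** 2).mean()
    # Eqs. (19)-(20): classify the (bond) angle into one of n_bins domains (bin()).
    ha = torch.cat([h[triples[:, 0]], h[triples[:, 1]], h[triples[:, 2]]], -1)
    bins = (angles / math.pi * n_bins).long().clamp(0, n_bins - 1)
    loss_ang = F.cross_entropy(f_ang(ha), bins)
    # Eq. (21): predict fingerprint bits from the masked molecular vector.
    loss_fp = F.binary_cross_entropy_with_logits(f_fp(mol_vec), fp_bits)
    return loss_len + loss_ang + loss_fp
```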
## 4 Experiment

In this section, we conduct experiments on 7 benchmark datasets from MoleculeNet to demonstrate the effectiveness of 3D-Mol for molecular property prediction. First, we pretrain the 3D-Mol model on a large amount of unlabeled data with our pretraining method; then we finetune the well-pretrained model on downstream tasks to predict molecular properties. We compare it with a variety of state-of-the-art methods, and we also conduct several ablation studies to confirm that the 3D-Mol model and our pretraining method are effective.

### Datasets and Setup

#### 4.1.1 Pretraining stage

We use 20 million unlabeled molecules to pretrain 3D-Mol. The unlabeled data is extracted from ZINC20 and PubChem, both of which are publicly accessible databases containing drug-like compounds. To ensure consistency with prior research[23], we randomly selected 90\(\%\) of these molecules for training, while the remaining 10\(\%\) were set aside for evaluation. The raw data obtained from ZINC20 and PubChem is provided in the SMILES format. To convert the SMILES representations into molecular conformations, we employed RDKit and applied the ETKDG method. For our model, we use the Adam optimizer with a learning rate of 1e-3. The batch size is set to 256 for pretraining and 32 for finetuning. The hidden size of all models is unspecified. The geometric embedding dimension \(K\) is 64, and the number of angle domains is 8. The hyperparameters \(\lambda_{conf}\) and \(\lambda_{fp}\) are both set to 0.5.

\begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & \(\#\) Tasks & Task Type & \(\#\) Molecules \\ \hline BACE & 1 & Classification & 1513 \\ SIDER & 27 & Classification & 1427 \\ Tox21 & 12 & Classification & 7831 \\ ToxCast & 617 & Classification & 8597 \\ ESOL & 1 & Regression & 1128 \\ FreeSolv & 1 & Regression & 643 \\ Lipophilicity & 1 & Regression & 4200 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistical information of the datasets

#### 4.1.2 Finetuning stage

In the finetuning stage, we use 7 molecular datasets obtained from MoleculeNet to demonstrate the effectiveness of 3D-Mol. These datasets encompass a range of biophysics datasets such as BACE, physical chemistry datasets like ESOL, and physiology datasets like Tox21. Table 1 provides a summary of the statistical information for these datasets, while the remaining details are outlined below:

\(\bullet\) BACE. The BACE dataset provides both quantitative (IC50) and qualitative (binary label) binding results for a set of inhibitors targeting human \(\beta\)-secretase 1 (BACE-1).

\(\bullet\) Tox21. The Tox21 initiative aims to advance toxicology practices in the 21st century and has created a public database containing qualitative toxicity measurements for 12 biological targets, including nuclear receptors and stress response pathways.

\(\bullet\) ToxCast. ToxCast, an initiative related to Tox21, offers a comprehensive collection of toxicology data obtained through in vitro high-throughput screening. It includes information from over 600 experiments and covers a large library of compounds.

\(\bullet\) SIDER. The SIDER database is a compilation of marketed drugs and their associated adverse drug reactions (ADRs), categorized into 27 system organ classes.

\(\bullet\) ESOL. The ESOL dataset is a smaller collection of water solubility data, specifically providing information on the log solubility in mols per liter for common organic small molecules.

\(\bullet\) FreeSolv. The FreeSolv database offers experimental and calculated hydration-free energy values for small molecules dissolved in water.

\(\bullet\) Lipophilicity. Lipophilicity is a crucial characteristic of drug molecules that affects their membrane permeability and solubility. The Lipo dataset contains experimental data on the octanol/water distribution coefficient (logD at pH 7.4).

Following previous works[23], we partition the datasets into train/validation/test sets in an 80/10/10 ratio for downstream tasks using scaffold splitting, and report the mean and standard deviation over the results of 3 random seeds.

### Metric

Consistent with prior studies, we adopt the average ROC-AUC as the evaluation metric for the classification datasets (BACE, SIDER, Tox21 and ToxCast), which is a widely used metric for assessing the performance of binary classification tasks. For the regression datasets (ESOL, FreeSolv and Lipophilicity), we utilize the RMSE as the evaluation metric.
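For reference, a common implementation of the scaffold split and the two metrics is sketched below; this is a standard recipe, not necessarily identical to the exact split code used here:

```python
# Sketch: Bemis-Murcko scaffold split plus the two evaluation metrics.
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold
from sklearn.metrics import roc_auc_score, mean_squared_error

def scaffold_split(smiles_list, frac_train=0.8, frac_valid=0.1):
    groups = defaultdict(list)
    for idx, smi in enumerate(smiles_list):
        scaffold = MurckoScaffold.MurckoScaffoldSmiles(smiles=smi,
                                                       includeChirality=False)
        groups[scaffold].append(idx)
    # Largest scaffold groups go to train first, so rare scaffolds
    # end up in the validation/test sets (an out-of-distribution split).
    sets = sorted(groups.values(), key=len, reverse=True)
    n = len(smiles_list)
    train, valid, test = [], [], []
    for s in sets:
        if len(train) + len(s) <= frac_train * n:
            train += s
        elif len(valid) + len(s) <= frac_valid * n:
            valid += s
        else:
            test += s
    return train, valid, test

# ROC-AUC for classification and RMSE for regression:
# auc  = roc_auc_score(y_true, y_score)
# rmse = mean_squared_error(y_true, y_pred) ** 0.5
```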
### Result

a) **To validate the efficacy of our proposed method, we compare it with several baseline methods.** The baseline methods are as follows: N-Gram[12] generates a graph representation by constructing node embeddings based on short walks. PretrainGNN[11] implements several types of self-supervised learning tasks. 3D Infomax[48] maximizes the mutual information between learned 3D summary vectors and the representations of a graph neural network. MolCLR[13] is a 2D-2D view contrastive learning model that involves atom masking, bond deletion, and subgraph removal. GraphMVP[46] introduces 2D-3D view contrastive learning approaches. GROVER[14] focused on node-level and graph-level representations and corresponding pretraining tasks. GEM[23] employs predictive geometry self-supervised learning schemes that leverage 3D molecular information. Uni-Mol[47] enlarges the application scope and representation ability of molecular representation learning by using a transformer.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{model} & \multicolumn{4}{c|}{Classification (ROC-AUC \(\%\), higher is better \(\uparrow\) )} & \multicolumn{3}{c|}{Regression (RMSE, lower is better \(\downarrow\))} \\ \cline{2-8} & BACE & SIDER & Tox21 & ToxCast & ESOL & FreeSolv & Lipophilicity \\ \hline N-Gram\({}_{\text{RF}}\) & \(0.779_{0.015}\) & \(0.668_{0.007}\) & \(0.743_{0.004}\) & \(-\) & \(1.074_{0.107}\) & \(2.688_{0.085}\) & \(0.812_{0.028}\) \\ N-Gram\({}_{\text{XGB}}\) & \(0.791_{0.013}\) & \(0.655_{0.007}\) & \(0.758_{0.009}\) & \(-\) & \(1.083_{0.107}\) & \(5.061_{0.744}\) & \(2.072_{0.030}\) \\ PretrainGNN & \(0.845_{0.007}\) & \(0.627_{0.008}\) & \(0.781_{0.006}\) & \(0.657_{0.006}\) & \(1.100_{0.006}\) & \(2.764_{0.002}\) & \(0.739_{0.003}\) \\ GROVER\({}_{\text{base}}\) & \(0.826_{0.007}\) & \(0.648_{0.006}\) & \(0.743_{0.001}\) & \(0.654_{0.004}\) & \(0.983_{0.090}\) & \(2.176_{0.052}\) & \(0.817_{0.008}\) \\ GROVER\({}_{\text{large}}\) & \(0.810_{0.014}\) & \(0.654_{0.001}\) & \(0.735_{0.001}\) & \(0.653_{0.005}\) & \(0.895_{0.017}\) & \(2.272_{0.051}\) & \(0.823_{0.010}\) \\ MolCLR & \(0.824_{0.009}\) & \(0.589_{0.014}\) & \(0.750_{0.002}\) & \(-\) & \(1.271_{0.040}\) & \(2.594_{0.249}\) & \(0.691_{0.004}\) \\ \hline 3DInfomax & \(0.797_{0.015}\) & \(0.606_{0.008}\) & \(0.644_{0.011}\) & \(0.745_{0.007}\) & \(0.894_{0.028}\) & \(2.337_{0.107}\) & \(0.695_{0.012}\) \\ GraphMVP & \(0.812_{0.009}\) & \(0.639_{0.012}\) & \(0.759_{0.005}\) & \(0.631_{0.004}\) & \(1.029_{0.033}\) & \(-\) & \(0.681_{0.010}\) \\ GEM & \(0.856_{0.011}\) & \(\mathbf{0.672}_{0.004}\) & \(0.781_{0.005}\) & \(0.692_{0.004}\) & \(0.798_{0.029}\) & \(1.877_{0.094}\) & \(0.660_{0.008}\) \\ Uni-Mol & \(0.857_{0.005}\) & \(0.659_{0.013}\) & \(\mathbf{0.796}_{0.006}\) & \(0.696_{0.001}\) & \(0.788_{0.029}\) & \(1.620_{0.035}\) & \(0.603_{0.010}\) \\ \hline 3D-Mol\({}_{\text{wcl}}\) & \(\mathbf{0.875}_{0.004}\) & \(0.656_{0.002}\) & \(0.786_{0.003}\) & \(\mathbf{0.697}_{0.003}\) & \(\mathbf{0.783}_{0.009}\) & \(\mathbf{1.617}_{0.050}\) & \(\mathbf{0.598}_{0.018}\) \\ \hline \end{tabular} \end{table} Table 2: Comparison of performance on the 7 molecular property prediction tasks; all methods below use pretraining. We mark the best results in bold and underline the second best.

As shown in Table 2, our method achieves the best results on 5 datasets and the second-best result on 1 dataset. Furthermore, our method outperforms on BACE by a large margin.
This shows that our method is better at extracting molecular information. For the ablation study, we compare the results of 3D-Mol with and without pretraining. The former achieves clearly better performance, showing that our pretraining method improves the 3D-Mol model.

**b) To validate the efficacy of our proposed 3D-Mol encoder, we compare it with several baseline molecular encoders without pretraining.** The baseline molecular encoders are as follows: DMPNN[17] employed a message passing scheme for molecular property prediction. AttentiveFP[16] is an attention-based GNN that incorporates graph-level information. MGCN[34] designed a hierarchical graph neural network to directly extract features from the conformation and spatial information, followed by multilevel interactions. HMGNN[24] leverages global molecular representations through an attention mechanism. SGCN[25] applies different weights according to atomic distances during the GCN-based message passing process. DimeNet[22] proposes directional message passing to fully utilize directional information within molecules. GEM[23] employs a message passing strategy to extract 3D molecular information. We present the experimental results in Table 3 to show the efficiency of our 3D-Mol model. From Table 3, the 3D-Mol encoder significantly outperforms all the baselines on both types of tasks and improves the performance over the best baselines by \(2\%\) and \(11\%\) for classification and regression tasks respectively, since 3D-Mol incorporates geometrical parameters.

c) **To validate the efficacy of our proposed pretraining task, we compare the performance of 3DGNN with no pretraining, with pretraining by geometry tasks only, and with pretraining by geometry tasks plus the weighted contrastive loss.** The results are shown in Table 4. They show that the geometry tasks significantly improve the performance of 3DGNN, and that combining them with weighted contrastive learning improves it further. In general, the combined pretraining method is the most effective for 3DGNN.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{model} & \multicolumn{4}{c|}{Classification (ROC-AUC \(\%\), higher is better \(\uparrow\) )} & \multicolumn{3}{c|}{Regression (RMSE, lower is better \(\downarrow\))} \\ \cline{2-8} & BACE & SIDER & Tox21 & ToxCast & ESOL & FreeSolv & Lipophilicity \\ \hline \(3\mathrm{DGNN}\) & **0.875\({}_{0.004}\)** & 0.656\({}_{0.002}\) & 0.786\({}_{0.003}\) & **0.697\({}_{0.003}\)** & **0.783\({}_{0.009}\)** & 1.617\({}_{0.050}\) & **0.598\({}_{0.018}\)** \\ \(3\mathrm{DGNN}_{\mathrm{wo.pre}}\) & 0.832\({}_{0.005}\) & 0.624\({}_{0.013}\) & 0.780\({}_{0.004}\) & 0.682\({}_{0.007}\) & 0.794\({}_{0.027}\) & 1.769\({}_{0.039}\) & 0.674\({}_{0.007}\) \\ \(3\mathrm{DGNN}_{\mathrm{wo.cl_{weighted}}}\) & 0.874\({}_{0.006}\) & **0.661\({}_{0.005}\)** & **0.790\({}_{0.003}\)** & 0.693\({}_{0.005}\) & 0.795\({}_{0.014}\) & **1.557\({}_{0.003}\)** & 0.607\({}_{0.006}\) \\ \hline \end{tabular} \end{table} Table 4: Ablation study. We compare the performance of 3DGNN with no pretraining, with pretraining by geometry tasks only, and with pretraining by geometry tasks plus the weighted contrastive loss. We mark the best results in bold and underline the second best.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{model} & \multicolumn{4}{c|}{Classification (ROC-AUC \(\%\), higher is better \(\uparrow\) )} & \multicolumn{3}{c|}{Regression (RMSE, lower is better \(\downarrow\))} \\ \cline{2-8} & BACE & SIDER & Tox21 & ToxCast & ESOL & FreeSolv & Lipophilicity \\ \hline DMPNN & 0.809\({}_{0.006}\) & 0.570\({}_{0.007}\) & 0.759\({}_{0.007}\) & 0.655\({}_{0.3}\) & 1.050\({}_{0.008}\) & 2.082\({}_{0.082}\) & 0.683\({}_{0.016}\) \\ AttentiveFP & 0.784\({}_{0.000}\) & 0.606\({}_{0.032}\) & 0.761\({}_{0.005}\) & 0.637\({}_{0.002}\) & 0.877\({}_{0.029}\) & 2.073\({}_{0.183}\) & 0.721\({}_{0.001}\) \\ MGCN & 0.734\({}_{0.008}\) & 0.587\({}_{0.019}\) & 0.741\({}_{0.006}\) & \(-\) & \(-\) & \(-\) & \(-\) \\ \hline SGCN & \(-\) & 0.559\({}_{0.005}\) & 0.766\({}_{0.002}\) & 0.657\({}_{0.003}\) & 1.629\({}_{0.001}\) & 2.363\({}_{0.050}\) & 1.021\({}_{0.013}\) \\ HMGNN & \(-\) & 0.615\({}_{0.005}\) & 0.768\({}_{0.002}\) & 0.672\({}_{0.001}\) & 1.390\({}_{0.073}\) & 2.123\({}_{0.179}\) & 2.116\({}_{0.473}\) \\ DimeNet & \(-\) & 0.612\({}_{0.004}\) & 0.774\({}_{0.006}\) & 0.637\({}_{0.004}\) & 0.878\({}_{0.023}\) & 2.094\({}_{0.118}\) & 0.727\({}_{0.019}\) \\ GEM & 0.828\({}_{0.012}\) & 0.606\({}_{0.010}\) & 0.773\({}_{0.007}\) & 0.675\({}_{0.005}\) & 0.832\({}_{0.010}\) & 1.857\({}_{0.071}\) & 0.666\({}_{0.015}\) \\ \hline 3D-Mol\({}_{\mathrm{wo.pre}}\) & **0.832\({}_{0.005}\)** & **0.624\({}_{0.013}\)** & **0.780\({}_{0.004}\)** & **0.682\({}_{0.007}\)** & **0.794\({}_{0.027}\)** & **1.769\({}_{0.039}\)** & **0.674\({}_{0.007}\)** \\ \hline \end{tabular} \end{table} Table 3: Comparison of performance on the 7 molecular property prediction tasks; none of the methods below use pretraining. We mark the best results in bold and underline the second best.

## 5 Conclusion

In this paper, we propose a novel 3D molecular model framework, 3D-Mol, to extract 3D molecular features. Furthermore, to effectively utilize a large number of unlabeled molecules and molecular conformations for feature extraction, we design a new self-supervised pretraining strategy. Our approach has been validated through extensive experiments and compared with multiple competitive baselines, demonstrating superior performance across various benchmarks.

## Acknowledgment

The research was supported by the PengCheng Laboratory and by PengCheng Laboratory Cloud-Brain.
2309.15571
Evolution of a Water-rich Atmosphere Formed by a Giant Impact on an Earth-sized Planet
The atmosphere of a terrestrial planet that is replenished with secondary gases should have accumulated hydrogen-rich gas from its protoplanetary disk. Although a giant impact blows off a large fraction of the primordial atmosphere of a terrestrial planet in the late formation stage, the remaining atmosphere can become water-rich via chemical reactions between hydrogen and vaporized core material. We find that a water-rich post-impact atmosphere forms when a basaltic or CI chondrite core is assumed. In contrast, little post-impact water is generated for an enstatite chondrite core. We investigate the X-ray- and UV-driven mass loss from an Earth-mass planet with an impact-induced multi-component H$_2$$-$He$-$H$_2$O atmosphere for Gyrs. We show that water is left in the atmosphere of an Earth-mass planet when the low flux of escaping hydrogen cannot drag water upward via collisions. For a water-dominated atmosphere to form, the atmospheric mass fraction of an Earth-mass planet with an oxidizing core after a giant impact must be less than a few times 0.1%. We also find that Earth-mass planets with water-dominated atmospheres can exist at semimajor axes ranging from a few times 0.1 au to a few au around a Sun-like star depending on the mass loss efficiency. Such planets are important targets for atmospheric characterization in the era of JWST. Our results indicate that efficient mixing between hydrogen and rocky components during giant impacts can play a role in the production of water in an Earth-mass planet.
Kenji Kurosaki, Yasunori Hori, Masahiro Ogihara, Masanobu Kunitomo
2023-09-27T10:54:51Z
http://arxiv.org/abs/2309.15571v1
# Evolution of a Water-rich Atmosphere Formed by a Giant Impact on an Earth-sized Planet ###### Abstract The atmosphere of a terrestrial planet that is replenished with secondary gases should have accumulated hydrogen-rich gas from its protoplanetary disk. Although a giant impact blows off a large fraction of the primordial atmosphere of a terrestrial planet in the late formation stage, the remaining atmosphere can become water-rich via chemical reactions between hydrogen and vaporized core material. We find that a water-rich post-impact atmosphere forms when a basaltic or CI chondrite core is assumed. In contrast, little post-impact water is generated for an enstatite chondrite core. We investigate the X-ray- and UV-driven mass loss from an Earth-mass planet with an impact-induced multi-component H\({}_{2}\)-He-H\({}_{2}\)O atmosphere for Gyrs. We show that water is left in the atmosphere of an Earth-mass planet when the low flux of escaping hydrogen cannot drag water upward via collisions. For a water-dominated atmosphere to form, the atmospheric mass fraction of an Earth-mass planet with an oxidizing core after a giant impact must be less than a few times 0.1%. We also find that Earth-mass planets with water-dominated atmospheres can exist at semimajor axes ranging from a few times 0.1 au to a few au around a Sun-like star, depending on the mass loss efficiency. Such planets are important targets for atmospheric characterization in the era of JWST. Our results indicate that efficient mixing between hydrogen and rocky components during giant impacts can play a role in the production of water on an Earth-mass planet. Planetary interior (1248), Exoplanet atmospheric evolution (2308) Kenji Kurosaki, Yasunori Hori, Masahiro Ogihara, Masanobu Kunitomo ## 1 Introduction Exoplanets as small as Earth are now being discovered by ground-based and spaceborne telescopes. In the proximity of a central star, with orbital periods of less than about 100 days, small planets with radii of \(\leq\)3 \(R_{\oplus}\) are more common than Neptune-sized or larger planets (Petigura et al., 2013; Dressing and Charbonneau, 2015). Statistical studies on the population of small planets from _Kepler_ found that the intriguing transition between bare rocky planets and planets with atmospheres of typically \(<\)10 wt% (Weiss and Marcy, 2014; Rogers, 2015), known as the radius gap, lies around 1.5-2 \(R_{\oplus}\) (Fulton and Petigura, 2018; Hirano et al., 2018). Also, the mass-radius relationships of exoplanets with radii smaller than 1.5 \(R_{\oplus}\) suggest that they may have rocky compositions similar to that of Earth (e.g. Zeng et al., 2019). This study focuses on Earth-sized planets. Earth-sized rocky planets possess only thin or nonexistent atmospheres (Rogers, 2015), although they may have accumulated dense atmospheres in the stage of planet formation. The primordial atmosphere of a terrestrial planet originates from H\({}_{2}\)/He-rich nebular gas (Ikoma and Hori, 2012; Lee et al., 2018). A large fraction of the H\({}_{2}\)/He atmosphere can be lost in a giant impact event. A head-on collision causes atmospheric blow-off when the velocity of the ground motion resulting from a strong shock exceeds the escape velocity at the planetary surface (Ahrens, 1993; Genda and Abe, 2003; Schlichting et al., 2015).
In contrast, oblique impacts, which frequently occur in the giant impact stage, remove the primordial atmosphere less efficiently (Kegerreis et al., 2020; Denman et al., 2022). Giant impacts that induce strong turbulence also lead to atmospheric pollution by vaporizing rocky material. In particular, head-on collisions with high impact velocities drive violent mixing between the primordial atmosphere and core material (e.g., Kurosaki and Inutsuka, 2023). The consumption of hydrogen by FeO in the rocky vapor forms a H\({}_{2}\)-H\({}_{2}\)O atmosphere at a high temperature of \(\gtrsim\)3000 K. Moreover, if the melt is not metal-saturated after a giant impact, the disk-originated atmosphere can produce H\({}_{2}\)O by reducing a molten mantle during the magma ocean period (Lange and Ahrens, 1982; Matsui and Abe, 1986; Ikoma and Genda, 2006; Lupu et al., 2014; Itcovitz et al., 2022; Lichtenberg et al., 2022; Maurice et al., 2023). The atmospheric mass and composition of a planet change during the planet's long-term evolution. After a giant impact, an Earth-sized planet with a H\({}_{2}\)-H\({}_{2}\)O atmosphere would be expected to be exposed to strong X-ray and extreme-UV (XUV) radiation from its host star. The incident XUV flux drives atmospheric escape from an Earth-sized planet (Zahnle et al., 1988; Hamano et al., 2013). The hydrodynamic escape in a multi-component atmosphere is driven by the major species. As a result, the processes of XUV-driven mass loss lead to mass-dependent fractionation in the atmosphere (e.g., Hunten et al., 1987; Johnstone, 2020; Yoshida and Kuramoto, 2021; Yoshida et al., 2022). If the hydrodynamic blow-off of light gases does not carry along heavy gases, the hydrodynamic escape of a planet with a H\({}_{2}\)-H\({}_{2}\)O atmosphere results in an increase of H\({}_{2}\)O in the atmosphere. Note that photochemical reactions via UV photons promote disequilibrium chemistry, such as the production of hydrocarbons, if carbon-bearing molecules such as CH\({}_{4}\) are abundant in the impact-generated atmosphere of an Earth-sized planet. Energy-limited hydrodynamic escape has been well examined for sub-/super-Earths with H\({}_{2}\)/He atmospheres (e.g., Owen and Wu, 2013; Lopez and Fortney, 2014; Hori and Ogihara, 2020) and H\({}_{2}\)O-rich atmospheres (e.g., Kurosaki et al., 2014). Recently, Yoshida et al. (2022) calculated the mass loss of the H\({}_{2}\)-H\({}_{2}\)O atmosphere of an Earth-mass planet near the habitable zone around TRAPPIST-1-like stars, coupled with chemical reactions and radiative cooling by H\({}_{2}\)O, OH, H\({}_{3}^{+}\), and OH\({}^{+}\). They found that the atmospheric escape rate of H\({}_{2}\) decreases as the initial H\({}_{2}\)O/H\({}_{2}\) ratio increases. However, it is not obvious that an Earth-mass planet can retain a water-rich atmosphere over Gyrs after giant impacts. In this study, we calculate the long-term evolution of the water mass fraction in an impact-generated atmosphere of an Earth-mass planet around a Sun-like star under various initial conditions, such as different initial atmospheric masses and semimajor axes. We consider the multi-component hydrodynamic escape of a H\({}_{2}\)/He-H\({}_{2}\)O atmosphere by photoevaporation under stellar XUV irradiation.
We also perform the Smoothed Particle Hydrodynamic (SPH) simulation of giant impacts on an Earth-mass planet with a primordial H\({}_{2}\)/He atmosphere and then calculate chemical reactions between the remaining hydrogen and rocky vapor to determine the atmospheric compositions of the planet after giant impacts. This paper is structured as follows. Section 2 describes our numerical models of mass loss from planetary atmospheres. In Section 3, we show the atmospheric compositions of an Earth-mass planet after giant impacts and the long-term atmospheric evolution through photoevaporation. In Section 4, we calculate the chemical reaction between the rock vapor and the primordial atmosphere and show the SPH simulation results. We also derive analytically the conditions necessary for the water-dominated atmosphere of an Earth-mass planet after the hydrodynamic escape. We summarize our results in the last section. ## 2 Model Description We examine a one-dimensional, long-term evolution of an Earth-mass planet with the impact-generated atmosphere after giant impacts using the following numerical models. The planetary atmosphere is in radiative-convective equilibrium. (see Kurosaki and Ikoma, 2017, for details). * The planetary interior is a one-dimensional, fully convective, layered core-envelope structure in hydrostatic equilibrium. * The planetary atmosphere is in a one-dimensional plain-parallel, radiative-convective equilibrium. The following subsections summarize our models. Figure 1 shows a schematic of our interior model. ### Interior structure We consider an Earth-mass planet that initially has a H\({}_{2}\)-He atmosphere atop a rocky core. Each layer in the interior is homogeneously mixed and fully convective. We determine the interior structure by solving the following equations, \[\frac{\partial P}{\partial M_{r}} = -\frac{GM_{r}}{4\pi r^{4}}, \tag{1}\] \[\frac{\partial r}{\partial M_{r}} = \frac{1}{4\pi r^{2}\rho}, \tag{2}\] \[\frac{\partial T}{\partial M_{r}} = -\frac{GM_{r}}{4\pi r^{4}}\frac{T}{P}\nabla, \tag{3}\] \[\frac{\partial L_{r}}{\partial M_{r}} = -T\frac{\mathrm{d}S}{\mathrm{d}t}, \tag{4}\] where \(r\) is the planetocentric distance, \(M_{r}\) is the mass contained in the sphere of radius \(r\), \(P\) is the pressure, \(\rho\) is the density, \(T\) is the temperature, \(G\) is the gravitational constant, \(S\) is the specific entropy, \(L_{r}\) is the total energy flux passing through a sphere of radius \(r\), and \(t\) is time. The symbol \(\nabla\) is the adiabatic temperature gradient with respect to pressure, namely, \[\nabla=\nabla_{\mathrm{ad}}=\left(\frac{\partial\ln T}{\partial\ln P}\right)_ {S}. \tag{5}\] As for the equations of state (EoS), we use Saumon et al. (1995) for hydrogen and helium and the M-ANEOS (Thompson & Lauson, 1972; Melosh, 2007; Thompson et al., 2019) of basalt for a rocky core (Pierazzo et al., 2005). A giant impact onto an Earth-mass planet causes the dynamic mixing of a H\({}_{2}\)-He atmosphere with vaporized core material. Chemical reactions between hydrogen and rocky vapor produce hydrogen-bearing molecules, such as H\({}_{2}\)O. We use the M-ANEOS of H\({}_{2}\)O, which is one of the dominant components in the atmosphere after a giant impact (see Section 3.2 for details). We adopt the solar composition of the mass fraction of hydrogen \(X=0.72\) and helium \(Y=0.28\) as the initial envelope composition when the envelope has no rocky vapor (Lodders & Palme, 2009). 
Integrating equation (4), we obtain the intrinsic luminosity \(L_{\mathrm{int}}\) as \[L_{\mathrm{int}}=-\left[\frac{\mathrm{d}S_{\mathrm{env}}}{\mathrm{d}t}\int_{M_{c}}^{M_{p}}T\,dM_{r}+\frac{\mathrm{d}S_{\mathrm{c}}}{\mathrm{d}t}\int_{0}^{M_{c}}T\,dM_{r}\right], \tag{6}\] where \(S_{\mathrm{env}}\) and \(S_{c}\) are the specific entropies of the envelope and the core, respectively, \(M_{\mathrm{p}}\) is the total mass of the planet, and \(M_{c}\) and \(M_{\mathrm{env}}\) are the masses of the core and the envelope, respectively. The intrinsic luminosity \(L_{\mathrm{int}}\) is also written as \(L_{\mathrm{int}}=4\pi R_{p}^{2}F_{\mathrm{top}}\), where \(R_{p}\) is the planetary radius and \(F_{\mathrm{top}}\) is the outgoing flux from the top of the atmosphere (e.g., Kurosaki & Ikoma, 2017).

### Atmospheric structure

We assume a plane-parallel atmosphere in radiative-convective equilibrium. As shown in Section 3.2, the rock-polluted atmosphere after a giant impact consists mainly of H\({}_{2}\), He, and H\({}_{2}\)O. The bottom of the atmosphere is assumed to be at \(P_{\mathrm{btm}}=1000\,\mathrm{bars}\). Note that the condensation of molecular species occurs below \(P_{\mathrm{btm}}\). The temperature profile in the radiative atmosphere (i.e., the stratosphere) follows the analytical formula derived by Matsui & Abe (1986): \[\sigma T^{4}=F_{\mathrm{top}}\left(\frac{\tau+1}{2}\right)+\frac{\sigma T_{\mathrm{eq}}^{4}}{2}\left[1+\frac{\kappa_{\mathrm{th}}}{\kappa_{\mathrm{v}}}+\left(\frac{\kappa_{\mathrm{v}}}{\kappa_{\mathrm{th}}}-\frac{\kappa_{\mathrm{th}}}{\kappa_{\mathrm{v}}}\right)e^{-\tau_{\mathrm{v}}}\right], \tag{7}\] where \(F_{\mathrm{top}}\) is the net flux, \(\sigma\) is the Stefan-Boltzmann constant, \(T_{\mathrm{eq}}\) is the equilibrium temperature, and \(\kappa_{\mathrm{th}}\) (\(\mathrm{cm^{2}\,g^{-1}}\)) and \(\kappa_{\mathrm{v}}\) (\(\mathrm{cm^{2}\,g^{-1}}\)) are the mean opacities for long- and short-wavelength radiation, respectively. The optical depths for long- and short-wavelength radiation are denoted by \(\tau\) and \(\tau_{\mathrm{v}}\), respectively. The optical depth \(\tau\) is defined as \(d\tau=-\rho\kappa_{\mathrm{th}}dr\). Here we adopt the Rosseland mean opacity for \(\kappa_{\mathrm{th}}\). The H\({}_{2}\) and He opacities are calculated from the data table given in Freedman et al. (2008). The opacity of H\({}_{2}\)O is calculated from line profiles (Rothman et al., 1998) using the HITRAN 2020 database (Gordon et al., 2022). We assume that \(\kappa_{\mathrm{v}}=0.1\kappa_{\mathrm{th}}\) (Guillot, 2010). We consider the absorption of the radiation under the assumptions of zero reflectivity and scattering. Then, \(\kappa_{\mathrm{th}}\) in the H\({}_{2}\)-He-H\({}_{2}\)O atmosphere is given by \[\kappa_{\rm th}=A_{\rm H_{2}}\kappa_{\rm H_{2}}+A_{\rm He}\kappa_{\rm He}+A_{\rm W}\kappa_{\rm H_{2}O}, \tag{8}\] where \(A_{\rm H_{2}}\), \(A_{\rm He}\), and \(A_{\rm W}\) are the mole fractions of H\({}_{2}\), He, and H\({}_{2}\)O, respectively.

Figure 1: Schematic of our planetary model. The planetary interior is composed of two parts: a spherically symmetric convective interior and a plane-parallel radiative-convective (R-C) equilibrium atmosphere. The interior in spherical symmetry is composed of a rocky core and a H\({}_{2}\)-He-H\({}_{2}\)O envelope. The plane-parallel atmosphere is composed of H\({}_{2}\)-He-H\({}_{2}\)O (see also Section 3.2).
The pseudo-moist adiabatic gradient determines the \(T\)-\(P\) profile in the convective region (i.e., the troposphere). For \(N\) kinds of species, including \(j\) kinds of non-condensable ones, the pseudo-moist adiabatic temperature gradient (Ingersoll, 1969; Atreya, 1986; Abe & Matsui, 1988) is given by \[\left(\frac{d\ln T}{d\ln P}\right)=\nabla_{\rm dry}\left[\frac{1+\sum_{i=j+1}^{N}\frac{x_{i}}{1-x_{i}}\frac{d\ln p_{i}^{*}}{d\ln T}}{1+\sum_{i=j+1}^{N}\frac{R_{g}}{C_{p}}\frac{x_{i}}{1-x_{i}}\left(\frac{d\ln p_{i}^{*}}{d\ln T}\right)^{2}}\right], \tag{9}\] where \(\nabla_{\rm dry}\) is the adiabatic temperature gradient without condensation (i.e., the dry adiabat), \(C_{p}=\sum_{i=1}^{N}x_{i}C_{p,i}\) is the mean heat capacity, and \(x_{i}\) and \(p_{i}^{*}\) are the mole fraction and saturation vapor pressure of the \(i\)-th condensable species (\(i=j+1,\cdots,N\)). The heat capacity of H\({}_{2}\)O at constant pressure is \(3.5R_{g}\) under the ideal gas approximation, where \(R_{g}\) is the gas constant. The non-ideal effect of the condensates is negligible near the photosphere. We integrate the radiative transfer equations by using the Eddington approximation (e.g., Abe & Matsui, 1988). The upward and downward radiation flux densities \(F_{\rm IR}^{+}\) and \(F_{\rm IR}^{-}\) can be written as \[F_{\rm IR}^{+}(\tau)=\pi B(\tau)-\int_{\tau}^{\tau_{\rm btm}}\frac{d}{d\tau^{\prime}}(\pi B(\tau^{\prime}))\exp\left[-\frac{3}{2}(\tau^{\prime}-\tau)\right]d\tau^{\prime} \tag{10}\] \[F_{\rm IR}^{-}(\tau)=\pi B(\tau)-\int_{0}^{\tau}\frac{d}{d\tau^{\prime}}(\pi B(\tau^{\prime}))\exp\left[-\frac{3}{2}(\tau-\tau^{\prime})\right]d\tau^{\prime}-\pi B(0)\exp\left(-\frac{3}{2}\tau\right), \tag{11}\] where \(\tau_{\rm btm}\) is the optical depth at the bottom of the atmosphere, and the net flux as \[F_{\rm net}=F_{\rm rad}+F_{c}, \tag{12}\] where \(B(\tau)\) is the blackbody radiation intensity, \(F_{c}\) is the convective flux1, \(F_{\rm irr}\) is the direct stellar flux, and \(F_{\rm rad}\) is the net radiative flux given by \(F_{\rm rad}=F_{\rm IR}^{+}-F_{\rm IR}^{-}-F_{\rm irr}\). Note that \(F_{\rm top}=F_{\rm IR}^{+}(\tau=0)\). We assume that the net flux is constant at all altitudes. Footnote 1: The convective flux is given by \(F_{\rm C}=F_{\rm net}-F_{\rm rad}\).
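As an illustration of equation (9), the following sketch evaluates the pseudo-moist gradient for a single condensable (water) in a dry background, assuming the standard Clausius-Clapeyron slope \(d\ln p^{*}/d\ln T=L/(R_{g}T)\); the latent heat value and the default dry gradient are illustrative numbers only:

```python
# Sketch of the pseudo-moist adiabatic gradient, Eq. (9), for one condensable.
Rg = 8.314      # gas constant, J mol^-1 K^-1
L_w = 4.07e4    # latent heat of water vaporization, J mol^-1 (illustrative)

def moist_gradient(T, x_w, cp, nabla_dry=2.0 / 7.0):
    """d ln T / d ln P along the pseudo-moist adiabat.

    T: temperature [K]; x_w: water mole fraction; cp: mean molar heat
    capacity [J mol^-1 K^-1]; nabla_dry: dry adiabatic gradient.
    """
    s = L_w / (Rg * T)            # Clausius-Clapeyron: d ln p*_w / d ln T
    f = x_w / (1.0 - x_w)
    num = 1.0 + f * s
    den = 1.0 + (Rg / cp) * f * s**2
    return nabla_dry * num / den
```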
### Atmospheric escape

A planet undergoes atmospheric escape driven by stellar X-ray and extreme-UV (XUV) irradiation. The atmospheric mass loss rate is defined as \[\left|\frac{dM_{\rm esc}}{dt}\right|=\sum_{i=1}^{N}4\pi R_{p}^{2}m_{i}f_{i}, \tag{13}\] where \(M_{\rm esc}\) is the lost mass and \(m_{i}\) and \(f_{i}\) are the molecular mass and the escape flux of the \(i\)-th component, respectively. The atmosphere of a planet after a giant impact is composed mainly of H\({}_{2}\), He, and H\({}_{2}\)O. Water molecules are transported upward in the atmosphere through diffusion processes. Hydrogen and helium can escape simultaneously because of the small mass difference between H and He compared to O. In this study, we consider hydrodynamic escape from a H\({}_{2}\)-H\({}_{2}\)O atmosphere, that is, under the assumption of \(A_{\rm H}=A_{\rm H_{2}}+A_{\rm He}\). Thus, we obtain the following approximate total escape flux: \[\left|\frac{dM_{\rm esc}}{dt}\right|=4\pi R_{p}^{2}\left(m_{\rm H_{2}}^{*}f_{\rm H}+m_{\rm W}f_{\rm W}\right), \tag{14}\] where \(m_{\rm H_{2}}^{*}=(m_{\rm H_{2}}A_{\rm H_{2}}+m_{\rm He}A_{\rm He})/(A_{\rm H_{2}}+A_{\rm He})\), the subscript W denotes water, and \(m\) is the molecular mass. The escape of H\({}_{2}\)O is driven by collisions between hydrogen and water molecules. Once water molecules are dredged up to the altitude where the photodissociation of H\({}_{2}\)O occurs, oxygen can escape if the escape flux of hydrogen is high enough to drag oxygen upward via collisions. The upward transport of water molecules in the atmosphere therefore controls the H\({}_{2}\)O escape flux. When the flow of escaping hydrogen drags He and O up to high altitudes via collisions, the H\({}_{2}\)O escape flux (e.g., Chamberlain & Hunten, 1987) is described as \[f_{\rm W}=\begin{cases}\frac{A_{\rm W}}{A_{\rm H}}f_{\rm H}-\frac{bg}{k_{B}T}A_{\rm W}\Delta m&(f_{\rm H}>f_{\rm H}^{\rm crit}),\\ 0&(f_{\rm H}\leq f_{\rm H}^{\rm crit}),\end{cases} \tag{15}\] where \[f_{\rm H}^{\rm crit}=\frac{bg}{k_{\rm B}T}A_{\rm H}\Delta m, \tag{16}\] \(\Delta m=m_{\rm W}-m_{\rm H_{2}}\), \(g\) is the gravitational acceleration, \(T\) is the temperature, and \(k_{B}\) is the Boltzmann constant. The binary diffusion coefficient \(b\) is written as \[b=\frac{3}{64Q}\sqrt{\frac{2\pi k_{\rm B}T(m_{\rm H_{2}}^{*}+m_{\rm W})}{m_{\rm H_{2}}^{*}m_{\rm W}}}, \tag{17}\] where \(Q=\pi(\sigma_{\rm H}+\sigma_{\rm W})^{2}/16\) and \(\sigma\) is the effective collision diameter of each molecule. Combining equations (14) and (15), we find the H\({}_{2}\) escape flux: \[f_{\rm H}=A_{\rm H}\frac{\frac{1}{4\pi R_{p}^{2}}\left|\frac{dM_{\rm esc}}{dt}\right|+\frac{bg}{k_{B}T}m_{\rm W}A_{\rm W}\Delta m}{m_{\rm H_{2}}^{*}A_{\rm H}+m_{\rm W}A_{\rm W}}. \tag{18}\] We also obtain an upper limit on \(A_{\rm H}\) for \(f_{\rm W}>0\) using equation (14): \[A_{\rm H}<\frac{k_{\rm B}T}{4\pi Gbm_{\rm H_{2}}^{*}\Delta m}\left|\frac{\dot{M}_{\rm esc}}{M_{\rm p}}\right|, \tag{19}\] where \(\dot{M}_{\rm esc}/M_{\rm p}\simeq dX_{\rm H}/dt\), with \(X_{\rm H}\) the sum of the mass fractions of H\({}_{2}\) and He in the atmosphere.

When the atmospheric escape occurs in an energy-limited fashion, the total mass loss rate follows \[\frac{dM_{\rm esc}}{dt}=-\frac{\varepsilon F_{\rm XUV}R_{p}\pi R_{\rm XUV}^{2}}{GM_{p}K_{\rm tide}}, \tag{20}\] where \(\varepsilon\) is the XUV-heating efficiency in the upper atmosphere, \(F_{\rm XUV}\) is the incident XUV flux from the host star, \(K_{\rm tide}\) is the reduction factor of the potential energy due to the stellar tide, and \(R_{\rm XUV}\) is the effective radius at which the planet receives the incident XUV flux. We use the formula for \(K_{\rm tide}\) derived by Erkaev et al. (2007), \[K_{\rm tide}=\frac{(\eta-1)^{2}(2\eta+1)}{2\eta^{3}}, \tag{21}\] where \(\eta\) is the ratio of the Roche-lobe (or Hill) radius to the planetary radius \(R_{p}\). In equation (20), we assume \(R_{\rm XUV}=R_{p}\), which is a good approximation for the close-in planets of interest (Lammer et al., 2013). It is noted that Lammer et al. (2013) focused on a H\({}_{2}\)-He atmosphere. If a planet has a H\({}_{2}\)O-rich atmosphere, the scale height of the water vapor atmosphere is smaller than that of a H\({}_{2}\)-He atmosphere at a given temperature. Nevertheless, \(R_{\rm XUV}\simeq R_{p}\) remains a good approximation even for the vapor atmosphere. The XUV-heating efficiency in a non-H\({}_{2}\)-He atmosphere remains poorly understood because of the non-negligible contributions of minor gases such as CO\({}_{2}\) to radiative cooling. For the photoevaporation of hot Jupiters, \(\varepsilon\) was estimated to be on the order of 0.1 (Yelle et al. 2008, and references therein). Thus, we adopt \(\varepsilon=0.1\) as a fiducial value and investigate the sensitivity of our results to \(\varepsilon\).
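Equations (15)-(18) and (20) translate directly into code. The sketch below works in cgs units and takes the binary diffusion coefficient \(b\) and the XUV flux as inputs; it is a schematic of the escape model, not our production code:

```python
# Sketch: diffusion-limited water escape, Eqs. (15)-(18), and the magnitude
# of the energy-limited mass loss rate, Eq. (20), with R_XUV = R_p.
import numpy as np

G, kB = 6.674e-8, 1.381e-16   # cgs

def escape_fluxes(Mdot, Rp, g, T, A_H, A_W, m_H2, m_W, b):
    dm = m_W - m_H2
    f_H = A_H * (Mdot / (4 * np.pi * Rp**2)            # Eq. (18)
                 + b * g / (kB * T) * m_W * A_W * dm) \
          / (m_H2 * A_H + m_W * A_W)
    f_crit = b * g / (kB * T) * A_H * dm               # Eq. (16)
    if f_H > f_crit:                                   # Eq. (15)
        f_W = (A_W / A_H) * f_H - b * g / (kB * T) * A_W * dm
    else:
        f_W = 0.0                                      # water is left behind
    return f_H, f_W

def mass_loss_rate(F_xuv, Rp, Mp, eps=0.1, K_tide=1.0):
    """|dM_esc/dt| of Eq. (20) with R_XUV = R_p."""
    return eps * F_xuv * np.pi * Rp**3 / (G * Mp * K_tide)
```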
We suppose that the host star is a G-type star. We adopt the empirical formula derived by Ribas et al. (2005) for \(F_{\rm XUV}\) in erg s\({}^{-1}\) cm\({}^{-2}\): \[F_{\rm XUV}=\left\{\begin{aligned} 504\left(\frac{a}{1\rm AU}\right)^{-2}&(t<0.1\rm Gyr)\\ 29.7\left(\frac{t}{1\rm Gyr}\right)^{-1.23}&\left(\frac{a}{1\rm AU}\right)^{-2}&(t\geq 0.1\rm Gyr).\end{aligned}\right. \tag{22}\]

## 3 Results

In this section, we present the long-term evolution of an Earth-mass planet with an impact-generated atmosphere after the giant impact. First, we demonstrate SPH simulations of giant impacts that allow us to determine the mass fraction of rocky material in the remaining atmosphere in Section 3.1. Second, we calculate chemical reactions between the H\({}_{2}\)-He gas in the remaining atmosphere and the vaporized rocky material, and show that an Earth-mass planet with an oxidizing core can have a H\({}_{2}\)-He-H\({}_{2}\)O atmosphere after the giant impact, in Section 3.2. Third, we show the mass loss evolution of the H\({}_{2}\)-H\({}_{2}\)O atmosphere in Section 3.3. Finally, we summarize the results of the long-term evolution of an Earth-mass planet with 10\({}^{-3}\)-10 wt% of a H\({}_{2}\)-He-H\({}_{2}\)O atmosphere at 0.1-3 au after the giant impact in Section 3.4.

### Rock mass content in the atmosphere after a giant impact

In the atmosphere of an Earth-mass planet after a giant impact, the mixing ratio of the primordial H\({}_{2}\)-He atmosphere to vaporized rocky material determines the initial H\({}_{2}\)O content (see also Section 3.2). To derive the water abundance in the atmosphere after a giant impact, we carried out SPH simulations of head-on (non-oblique) collisions with an Earth-mass planet having a H\({}_{2}\)/He atmosphere. The conditions for the impact simulations are as follows:

* The atmospheric mass fraction is 1.0% or 10% by mass.
* The impact velocity is 1.0-2.6 times the mutual escape velocity.
* A head-on collision is considered.
* The impactor mass is 0.1 \(M_{\oplus}\) or 1 \(M_{\oplus}\).

In our SPH simulations, we consider the inner part of the largest remnant, whose mass fraction of rocky material is larger than 0.99, to be the core-dominated region. We estimate the atmospheric composition after a giant impact from the mass fractions of hydrogen and rocky material in the outer part above the "core". Figure 2 shows an example of our giant impact simulations for a pair of 1 \(M_{\oplus}\) planets with 10 wt% H\({}_{2}\)/He atmospheres. The impact velocity is equal to the mutual escape velocity. First, the atmosphere flows out from the impact point just after the collision. Once the core of the impactor collides with the core of the target, the atmosphere starts to be ejected from the antipode. The ejection of the atmosphere occurs at the impact point during the oscillation of the merged core. We then check the rock-to-hydrogen mass fraction after the impact. The mixing profile is stable from 10 to 50 \(\tau_{\rm ff}\), where \(\tau_{\rm ff}\) is the free-fall timescale of the merged object. The loss fractions for the atmosphere and core are 84% and 2.1%, respectively. The rock mass fraction in the atmosphere is 30% by mass.
Our result suggests that the mixing state of the dredged-up rock vapor and atmospheric components is stable over the simulation timescale of this study because the rock vapor and atmosphere remain in hydrostatic equilibrium and are dynamically stable. We calculate the water mole fraction in the atmosphere of an Earth-mass planet after the giant impact under various impact conditions, using the mixing ratio of rocky vapor to hydrogen in the remaining atmosphere. Figure 3 shows the mass fraction of core material (\(Z_{\rm atm}\)) in the atmosphere after a giant impact. A more energetic giant impact can blow off more of the primordial H\({}_{2}\)-He atmosphere and cause more effective mixing of the H\({}_{2}\)-He gas with rocky vapor. Consequently, \(Z_{\rm atm}\) increases with decreasing atmospheric mass fraction (\(X_{\rm atm}\)) after the impact. Bearing in mind that the atmospheric loss induced by a giant impact should be controlled by the impact energy and the energy dissipated during the impact, we obtain the following semi-empirical formula from our SPH simulations: \(Z_{\rm atm}=8.2\times 10^{-2}X_{\rm atm}^{-0.41}\).

### Atmospheric compositions after giant impacts

During a giant impact, vaporized rocky material reacts with the primordial atmosphere. In this section, we focus on chemical reactions between a hydrogen-rich atmosphere and the core material vaporized by the shock of a giant impact. We used GGchem (Woitke et al., 2018) to calculate chemical reactions between the primordial H\({}_{2}\)-He gas and rocky vapor, including dust condensation. The primordial atmosphere has the same composition as the protosolar abundances (Asplund et al., 2021). We consider three kinds of core compositions: basalt (Pierazzo et al., 2005), CI chondrite (Lodders and Palme, 2009), and EH chondrite (Javoy et al., 2010). The elemental abundances of basalt, CI chondrite, and EH chondrite are listed in Table 1. Figure 4 shows the mole fractions of major atomic and molecular species in the primordial atmosphere uniformly mixed with rocky vapor, where the mixing ratio is assumed to be 50%. We also show the mass fractions of six condensates such as MgSiO\({}_{3}\) (enstatite), CaMgSi\({}_{2}\)O\({}_{6}\) (diopside), CaAl\({}_{2}\)Si\({}_{2}\)O\({}_{8}\) (anorthite), and NaAlSi\({}_{3}\)O\({}_{8}\) (albite). The atmospheric compositions of an Earth-mass planet in chemical equilibrium fall into two regimes: (i) a mineral atmosphere composed of vaporized metal atoms and metal oxides at a high temperature of \(\gtrsim\)3000 K; and (ii) a volatile-rich atmosphere at lower temperatures. The major components in the low-\(T\) atmosphere are H\({}_{2}\), He, and H\({}_{2}\)O. Water molecules are produced by the reduction of FeO by hydrogen at \(T\lesssim 3000\) K. The abundance of H\({}_{2}\)O is higher than that of CO or CH\({}_{4}\) by two orders of magnitude when the core is composed of basalt or CI chondrite. For the EH-chondrite-like core, the H\({}_{2}\)O abundance is comparable to that of CO or CH\({}_{4}\) because oxygen is distributed to dust particles. An Earth-mass planet is likely to have a mineral atmosphere right after a giant impact. As the planet rapidly cools, its atmosphere becomes H\({}_{2}\)O-rich because all the metals become dust below \(\sim\)3000 K and settle into the interior. The mass fractions of H\({}_{2}\) and H\({}_{2}\)O vary with the mixing ratio of rocky vapor (\(Z_{\rm atm}\)) in the atmosphere of an Earth-mass planet (see Figure 5).
As \(Z_{\rm atm}\) in the atmosphere becomes high, H\({}_{2}\) in the atmosphere is preferentially replaced by H\({}_{2}\)O. We also find that the mass fraction of H\({}_{2}\)O strongly depends on the core composition. The water fraction in the impact-generated atmosphere becomes high unless an Earth-mass planet has a reducing core such as EH chondrite.

\begin{table} \begin{tabular}{c|c|c|c|c} Element & Protosolar & Basalt & CI & EH \\ \hline H & 7.11E-01 & - & 1.96E-02 & - \\ He & 2.71E-01 & - & 9.16E-09 & - \\ Li & 9.89E-09 & - & 1.46E-06 & - \\ Be & 1.76E-10 & - & 2.09E-08 & - \\ B & 4.47E-09 & - & 7.74E-07 & - \\ C & 2.86E-03 & - & 3.47E-02 & - \\ N & 7.74E-04 & - & 2.94E-03 & - \\ O & 6.41E-03 & 5.98E-01 & 4.58E-01 & 3.11E-01 \\ F & 2.05E-07 & - & 5.81E-05 & - \\ Ne & 1.90E-03 & - & 1.79E-10 & - \\ Na & 3.11E-05 & 1.73E-02 & 4.98E-03 & - \\ Mg & 7.09E-04 & 3.76E-02 & 9.67E-02 & 1.21E-01 \\ Al & 5.93E-05 & 6.13E-02 & 8.49E-03 & 9.22E-03 \\ Si & 7.50E-04 & 1.68E-01 & 1.06E-01 & 1.90E-01 \\ P & 6.50E-06 & - & 9.66E-04 & - \\ S & 3.47E-04 & - & 5.34E-02 & - \\ Cl & 5.93E-06 & - & 6.97E-04 & - \\ Ar & 7.18E-05 & - & 1.32E-09 & - \\ K & 3.76E-06 & - & 5.43E-04 & - \\ Ca & 1.73E-03 & 5.77E-02 & 9.21E-03 & 9.72E-03 \\ Sc & 5.07E-08 & - & 5.89E-06 & - \\ Ti & 3.67E-06 & 5.32E-03 & 4.50E-04 & 5.01E-04 \\ V & 3.31E-07 & - & 5.42E-05 & - \\ Cr & 1.79E-05 & - & 2.64E-03 & 3.61E-03 \\ Mn & 1.18E-05 & - & 1.92E-03 & - \\ Fe & 1.33E-03 & 5.27E-02 & 1.84E-01 & 3.31E-01 \\ Co & 4.19E-06 & - & 5.05E-04 & 1.00E-03 \\ Ni & 7.69E-05 & - & 1.07E-02 & 2.00E-02 \\ \hline \end{tabular} \end{table} Table 1: Elemental abundances of basalt, CI chondrite, and EH chondrite. A hyphen denotes species that are not included in our composition models.

### Mass loss of the H\({}_{2}\)-H\({}_{2}\)O atmosphere

In this subsection, we discuss the long-term evolution of a H\({}_{2}\)-H\({}_{2}\)O atmosphere. The upward transport of H\({}_{2}\)O (i.e., \(f_{\rm W}>0\)) in the atmosphere via collisions between hydrogen and water occurs while \(f_{\rm H}\) exceeds the critical flux of equation (16). The planet can retain its initial H\({}_{2}\)O content if the escape of H\({}_{2}\)O is suppressed by inefficient diffusion processes in the atmosphere. Figure 6 demonstrates the atmospheric evolution of an Earth-mass planet initially having a 6.3 wt% H\({}_{2}\)-H\({}_{2}\)O atmosphere with a water mole fraction of \(A_{\rm W}=10\%\) at \(a=1\,\)au around a Sun-like star. After a giant impact, the planet cools rapidly and reaches a radiative-equilibrium state\({}^{2}\). The atmospheric escape occurs in two stages over \(\gtrsim\)10 Myrs. Both hydrogen and water molecules escape as long as the escape flux of hydrogen is high enough to drag oxygen. After \(\sim\)1000 Myrs, the lighter hydrogen preferentially escapes from the atmosphere by the XUV-driven photoevaporation, whereas H\({}_{2}\)O molecules stay in the lower atmosphere. Consequently, as the atmospheric mass decreases, the mole fraction of the heavier molecules (H\({}_{2}\)O) correspondingly increases. Thus, the final state of the planetary atmosphere should be enriched in water molecules. In this simulation, the final atmospheric mass fraction is 6.15%, which comprises \(A_{\rm H}=89.7\) mol% and \(A_{\rm W}=10.3\) mol%.
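The bookkeeping behind this kind of two-reservoir evolution can be sketched as a simple explicit integration, reusing the constants and helper functions from the escape-flux sketch above together with the Ribas et al. (2005) flux of equation (22). The initial masses, the fixed 500 K upper-atmosphere temperature, and the uniform time stepping below are illustrative assumptions, not the actual setup used for Figure 6.

```python
def F_xuv(a_au, t_gyr):
    """Stellar XUV flux of equation (22), converted to W m^-2."""
    if t_gyr < 0.1:
        return 504e-3 * a_au ** -2
    return 29.7e-3 * t_gyr ** -1.23 * a_au ** -2

def evolve(M_H, M_Wat, a_au, t_end_gyr=4.6, n_steps=20000):
    """Explicit Euler integration of the H2/He and H2O reservoir masses [kg]."""
    gyr = 3.156e16                      # seconds per Gyr
    area = 4.0 * np.pi * R_p ** 2
    t = 1e-3 * gyr                      # start shortly after the impact (assumed)
    dt = (t_end_gyr * gyr - t) / n_steps
    for _ in range(n_steps):
        if M_H <= 0.0:                  # atmosphere fully stripped of H2/He
            break
        n_H, n_W = M_H / m_H2, M_Wat / m_W          # molecule numbers
        A_H, A_W = n_H / (n_H + n_W), n_W / (n_H + n_W)  # mole fractions
        mdot = mass_loss_rate(0.1, F_xuv(a_au, t / gyr), R_p, M_p, 50.0)
        f_H, f_W = escape_fluxes(mdot, R_p, g, 500.0, A_H, A_W)
        M_H = max(M_H - f_H * m_H2 * area * dt, 0.0)
        M_Wat = max(M_Wat - f_W * m_W * area * dt, 0.0)
        t += dt
    return M_H, M_Wat

# A ~6 wt% atmosphere with ~10 mol% water on an Earth-mass planet at 1 au
print(evolve(M_H=0.034 * M_p, M_Wat=0.029 * M_p, a_au=1.0))
```

In such a scheme the H\({}_{2}\)O reservoir stops draining as soon as \(f_{\rm H}\) drops below \(f_{\rm H}^{\rm crit}\), which is the mechanism by which the remaining atmosphere becomes progressively water-enriched.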
Footnote 2: The radiative cooling of a planet is characterized by the Kelvin-Helmholtz timescale, \[\tau_{\rm KH}=\frac{GM_{\rm p}^{2}}{2R_{\rm p}L}\sim 10^{3}\left(\frac{M_{\rm p}}{1\,M_{\oplus}}\right)^{2}\left(\frac{R_{\rm p}}{2\,R_{\oplus}}\right)^{-3}\left(\frac{T}{3000\,{\rm K}}\right)^{-4}\,{\rm yrs},\tag{23}\] where \(L\) is the luminosity. A hot, inflated Earth-mass planet gravitationally contracts for \(\sim\)10\({}^{3}\) yrs.

### Long-term evolution of the impact-generated atmosphere formed by a giant impact

In Section 3.1, we obtained a semi-empirical scaling law for the initial rock vapor mass fraction in the impact-generated atmosphere (see Figure 3). The water mass fraction (\(X_{\rm W}\)) is then determined as a function of \(X_{\rm atm}\), as shown in Figure 5. We perform the same long-term simulations of an Earth-mass planet with the H\({}_{2}\)-H\({}_{2}\)O atmosphere as those in Section 3.3, but now using this scaling law. We discuss how the results of Figure 8 change if \(X_{\rm W}\) is used as the initial condition for the impact-generated atmosphere of an Earth-mass planet. Note that \(X_{\rm W}\) follows from the \(Z_{\rm atm}\) shown in Figure 5, which is given by the scaling relation \(Z_{\rm atm}=8.2\times 10^{-2}X_{\rm atm}^{-0.41}\).

Figure 3: Mass fraction of core material in the remaining atmosphere (\(Z_{\rm atm}\)) of an Earth-mass planet after a giant impact. We consider head-on collisions between two Earth-mass planets. The semi-empirical formula for the \(Z_{\rm atm}\)-\(X_{\rm atm}\) relationship obtained from our SPH simulations of giant impacts, \(Z_{\rm atm}=8.2\times 10^{-2}X_{\rm atm}^{-0.41}\), is also drawn (red line).

Figure 2: Snapshots of the SPH simulation of a head-on collision with an impact velocity equal to the mutual escape velocity. From left to right, snapshots show the time at \(t=0,1\tau_{\rm ff},3\tau_{\rm ff}\), and \(50\tau_{\rm ff}\). Both the target and the impactor have a \(1M_{\oplus}\) core with 10 wt% of the H\({}_{2}\) atmosphere. Green and blue particles denote the atmospheric and core components bound on the post-impact planet, respectively. Red and black particles denote the escaping atmospheric and core components, respectively. Both horizontal and vertical axes are normalized by the radius of the target, \(R_{\rm T}\).

The left, middle, and right panels of Figure 7 show the relationship between the initial (\(X_{\rm atm,0}\): top) and final (\(X_{\rm atm}\): bottom) atmospheric mass fractions of an Earth-mass planet and the water mole fraction in the remaining atmosphere after the atmospheric loss through photoevaporation over 4.6 Gyr, for core models with the basalt, CI, and EH compositions, respectively. A water-dominated atmosphere is more likely to form in a less massive atmosphere after a giant impact; the mole fraction of water remaining in the atmosphere exceeds 10% when \(X_{\rm atm}\leq 4\times 10^{-3}\) for the basalt core model and \(X_{\rm atm}\leq 10^{-3}\) for the CI-chondritic core one. In contrast, such a water-rich atmosphere is difficult to form for an Earth-mass planet with a reducing core like an EH chondrite.
An Earth-mass planet thus requires an oxidizing core to produce a water-rich atmosphere in the giant impact scenario, followed by the atmospheric escape.

## 4 Discussion

We examine the dependence of the final atmospheric water fraction on the semimajor axis and the initial water content (Section 4.1). We also present the analytical condition for the water-dominated atmosphere (Section 4.2), and discuss the caveats of our models (Section 4.3).

### Dependence of the final atmospheric water fraction on the semimajor axis and the initial water content

We examine how mass loss affects the water fractions in the atmospheres of Earth-mass planets after giant impacts with various semimajor axes and initial water contents. We consider that the initial atmospheric mass fraction relative to the total mass of a planet ranges from \(10^{-3}\) to \(10^{-1}\). The initial H\({}_{2}\)O mole fractions in the atmosphere, \(A_{\rm W}\), are assumed to be 1%, 3%, and 10%. Figure 8 shows the initial and final atmospheric mass fractions and the final water mole fraction in the atmosphere of an Earth-mass planet with a given equilibrium temperature after \(t=4.6\) Gyrs. There are three key parameters for the survival of the water-rich atmosphere of an Earth-mass planet: the semimajor axis of the planet, \(a\), the initial water mole fraction in the atmosphere, \(A_{\rm W}\), and the initial atmospheric mass, \(X_{\rm atm}\). When the planet is close to its host star, it loses almost all of its H\({}_{2}\)-H\({}_{2}\)O atmosphere and becomes a bare rocky planet. If the planet is distant from its star and exposed to less intense XUV irradiation, neither its atmospheric mass \(X_{\rm atm}\) nor its water mole fraction \(A_{\rm W}\) changes significantly. In this case, the final water mole fraction in the atmosphere reflects the initial water content with which the planet formed.
Water molecules tend to remain in the atmosphere if the atmospheric mass fraction is less than about 0.3%. An initial atmosphere with a higher \(A_{\rm W}\) is likely to survive because a low escape flux of hydrogen in the water-rich atmosphere reduces the efficiency of the upward transport of water molecules. The XUV-driven atmospheric escape of an Earth-mass planet yields a high H\({}_{2}\)O abundance in the atmosphere for planets with \(X_{\rm atm}^{\rm initial}\lesssim 0.3\%\) (\(X_{\rm atm}^{\rm final}<0.1\%\)) when it receives \(F_{\rm a}>2F_{\oplus}\). Such a planet should have a steam atmosphere because its insolation exceeds the critical flux for a runaway greenhouse effect around a Sun-like star (e.g. Kopparapu et al., 2014; Kodama et al., 2019). Less efficient heating of hydrogen by UV photons (e.g., \(\varepsilon=0.01\)) moves the outer edge of the "steamy zone" closer to the central star (see also Figure 9). One of the key findings of the present study is that the enhancement of H\({}_{2}\)O in the atmosphere of an Earth-mass planet occurs if \(f_{\rm W}=0\), i.e., in the case that only hydrogen escapes. Unless a planet loses its atmosphere completely, a low escape flux of hydrogen in a less massive H\({}_{2}\)-H\({}_{2}\)O atmosphere that satisfies \(f_{\rm W}=0\) leads to a water-dominated atmosphere within Gyrs. In Section 4.2, we analytically derive the condition for the formation of a water-dominated atmosphere of an Earth-mass planet. Also, as shown in Section 3.4, this finding does not change even when the results of the giant impact simulations are taken into account.

### Analytical condition for the water-dominated atmosphere

In Section 3.3, we demonstrated that an Earth-mass planet with a small atmospheric mass fraction at \(a=1\) au develops a water-dominated atmosphere within Gyrs. Here we analytically derive the condition under which Earth-mass planets have water-dominated atmospheres. The outflow of escaping hydrogen cannot drive the escape of water molecules if \(f_{\rm W}=0\), that is, \(f_{\rm H}\leq f_{\rm H}^{\rm crit}\) (see equation (16)). The mass loss process increases the H\({}_{2}\)O mole fraction when \(f_{\rm H}\leq f_{\rm H}^{\rm crit}\). Integrating equation (19) from \(t=0\) to \(t=\Delta t\), we obtain the upper-limit mass fraction of H\({}_{2}\) and He in the impact-generated atmosphere for which all water molecules can persist for \(\Delta t\): \[X_{\rm H}^{\rm max}\equiv\frac{M_{\rm esc}}{M_{\rm p}(t=0)}=\frac{4\pi Gbm_{\rm H_{2}}\Delta m}{k_{\rm B}T}A_{\rm H}(t=0)\Delta t,\tag{24}\] where \(M_{\rm esc}\) is the net mass loss of a H\({}_{2}\)-He atmosphere. This value is given by \(M_{\rm esc}=\int\frac{dM_{\rm esc}}{dt}dt\approx 3\varepsilon\bar{F}\Delta t/(4G\bar{\rho}_{\rm p})\), where \(\bar{F}=\frac{1}{\Delta t}\int F_{\rm XUV}\,dt\) is the time-averaged XUV flux and \(\bar{\rho}_{\rm p}\) is the planetary mean density. An Earth-mass planet can have a water-dominated atmosphere if it initially has an impact-generated atmosphere with \(X_{\rm H}\leq X_{\rm H}^{\rm max}\). This condition for a water-dominated atmosphere, \(X_{\rm H}^{\rm max}\), explains our numerical results well, as shown in Figure 8. In fact, Earth-mass planets with water-dominated atmospheres appear just below the criterion of \(X_{\rm H}^{\rm max}\) after 4.6 Gyrs unless such planets lose their atmospheres completely by photoevaporation after a giant impact.
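Continuing the Python sketch above (reusing its constants and the binary_diffusion helper), equation (24) can be evaluated directly. The temperature, the initial mole fraction, and the use of the mean H\({}_{2}\)/He mass for \(m_{\rm H_{2}}\) are assumptions for illustration.

```python
def X_H_max(T, A_H0, dt_gyr):
    """Upper-limit H2+He mass fraction of equation (24) for full H2O retention."""
    dm = m_W - m_H2
    dt = dt_gyr * 3.156e16              # Gyr -> s
    return 4.0 * np.pi * G * binary_diffusion(T) * m_H2 * dm / (k_B * T) * A_H0 * dt

print(X_H_max(T=500.0, A_H0=0.9, dt_gyr=4.6))   # ~4e-3 for these inputs
```

For these assumed inputs the limit comes out at a few times \(10^{-3}\), of the same order as the few-times-0.1% threshold for water-dominated atmospheres discussed above.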
Next, we consider the range of semimajor axes in which Earth-mass planets with water-dominated atmospheres can exist. We can obtain the approximate semimajor axis within which a planet loses all of its H\({}_{2}\)/He, \(a_{\rm esc}\): \[a_{\rm esc}=\left(\frac{3}{4G}\frac{\overline{\epsilon L_{\rm XUV}}\Delta t}{M_{\rm p}\rho_{p}X_{\rm atm}}\right)^{\frac{1}{2}},\tag{25}\] where \(\Delta t\) is the time elapsed after a giant impact, \(\rho_{\rm p}\) is the mean density of the planet, \(X_{\rm atm}M_{\rm p}\) is the atmospheric mass of the planet, and \(\overline{\epsilon L_{\rm XUV}}\) is the time-averaged XUV heating luminosity that the planet receives. Note that \(K_{\rm tide}\sim 1\) for an Earth-mass planet in this study. A large hydrogen flux drives an escaping flow of water molecules, namely when \(f_{\rm H}>f_{\rm H}^{\rm crit}\) (see equation (16)). The farther a planet is from the central star, the lower the escape rate of hydrogen becomes under stellar XUV irradiation. The critical semimajor axis (\(a_{\rm W}\)) beyond which an Earth-mass planet avoids the escape of H\({}_{2}\)O is defined by \(f_{\rm H}=f_{\rm H}^{\rm crit}\): \[a_{\rm W}=\sqrt{\frac{3}{4G}\frac{\overline{\epsilon L_{\rm XUV}}}{\rho_{p}}\frac{k_{\rm B}T_{\rm atm}}{4\pi m_{H}^{2}A_{H}GM_{p}b(T_{\rm atm})}},\tag{26}\] where \(T_{\rm atm}\) is the atmospheric temperature. If \(a_{\rm W}<a<a_{\rm esc}\), the planet undergoes the hydrodynamic escape of hydrogen but preserves the initial amount of water in the atmosphere. Figure 9 shows the habitat of an Earth-mass planet that has a water-dominated atmosphere (i.e., \(a_{\rm W}<a<a_{\rm esc}\); hereafter, the "steamy zone"). We set \(\overline{L_{\rm XUV}}=3.1\times 10^{28}\) erg s\({}^{-1}\) for a Sun-like star and \(T_{\rm atm}=500\) K. Since the atmospheric temperature in the upper stratosphere is higher than the equilibrium temperature, we assume that \(T_{\rm atm}\) is that of the planet's atmosphere at an age of typically \(10^{8}\) years. The XUV heating efficiency, \(\varepsilon\), determines the inner and outer edges of the steamy zone for an Earth-mass planet. The formation of a steam atmosphere requires an initial atmospheric mass fraction \(X_{\rm atm}\lesssim 2\times 10^{-3}\) regardless of \(\varepsilon\). These "steamy zones" are in good agreement with the results shown in Figure 8.

Figure 7: Initial and final atmospheric mass fraction \(X_{\rm atm}\) and final water mole fraction (color bar) after the atmospheric loss through photoevaporation (\(\varepsilon=0.1\)) for 4.6 Gyrs. The rocky core is composed of basalt (left panel), CI (middle panel), and EH (right panel). The initial water mass fraction is determined by the empirical relation derived from the giant impact simulations, \(X_{\rm W}=8.2\times 10^{-2}X_{\rm atm}^{-0.41}\). The color contour shows the final H\({}_{2}\)O mole fraction in the remaining atmosphere.

### Caveats

This study considered a head-on giant impact on an Earth-mass planet. In reality, grazing impacts occur in the final stage of terrestrial planet formation (e.g., Kokubo & Genda, 2010; Ogihara et al., 2020). Less efficient mixing of a H\({}_{2}\)/He atmosphere and vaporized core material in oblique collisions decreases the production of H\({}_{2}\)O. We also assumed simple rock compositions for the core material. If the core of an Earth-mass planet is a mixture of icy and rocky material, the chemical compositions of an impact-generated atmosphere become different. Moreover, dust clouds may form in the polluted atmosphere unless dust particles fall into the deep interior. The atmospheric compositions generated by the impact are still debated.
Lupu et al. (2014) showed that a post-impact planetary atmosphere is dominated by CO\({}_{2}\) as well as H\({}_{2}\)O. We likely underestimate the abundance of carbon-bearing species since the rock material is assumed to be carbon-depleted. The mass loss efficiency in our study may be overestimated because CO\({}_{2}\) helps to cool the upper atmosphere (Yoshida and Kuramoto, 2020). Another caveat involves the mixing rate of rock material in the primordial atmosphere. We considered only head-on collisions, which cause the most efficient mixing between the atmosphere and the rock vapor, as indicated by the Moon-forming impact simulations on the proto-Earth (Nakajima and Stevenson, 2015). The present study focused on the most optimistic case, where a giant impact into an Earth-sized planet forms a water-rich atmosphere. Detailed atmospheric compositions of Earth-sized planets after a non-oblique giant impact will be discussed in our future work.

## 5 Conclusion

We showed that the mixing of hydrogen in the atmosphere with rocky material vaporized by a giant impact into an Earth-sized planet results in the production of H\({}_{2}\)O via chemical reactions. We simulated the long-term evolution of the H\({}_{2}\)-H\({}_{2}\)O atmospheres of Earth-mass planets around a Sun-like star for Gyrs after giant impacts. We examined the mass loss from an Earth-sized planet that initially has a H\({}_{2}\)-H\({}_{2}\)O atmosphere of 0.1 wt%-10 wt% at a semimajor axis from 0.1 to 3 au.

Figure 8: Same as Figure 7 but with different initial water mole fractions (1%, 3%, and 10% in the left, middle, and right panels, respectively). The rocky core is assumed to be basalt in all simulations.

Figure 9: Habitat of an Earth-mass planet with a water-rich atmosphere that orbits a Sun-like star. Dashed and solid lines represent \(a_{\rm esc}\) (Eq. 25) and \(a_{\rm W}\) (Eq. 26) in two models with \(\varepsilon=0.1\) (blue) and \(0.01\) (green), respectively. Blue and green shaded regions show the conditions where an Earth-mass planet can retain water in the atmosphere for Gyrs.

Our key findings are summarized as follows.

* The composition of the impact-generated atmosphere is dominated by H\({}_{2}\), He, and H\({}_{2}\)O when it has cooled below 3000 K.
* Earth-mass planets can avoid water loss through photoevaporation if the escape flux of hydrogen is too low to drag oxygen via collisions.
* The post-impact atmosphere of an Earth-mass planet evolves into a water-dominated atmosphere if its atmospheric mass fraction is less than a few times 0.1%, provided that the core is made of oxidizing material such as basalt or CI chondrite.
* If the rocky core is composed of a reducing material such as EH chondrite, the post-impact atmosphere should not be water-dominated, even if the impact-induced mixing between H\({}_{2}\)-He gas and rocky vapor is efficient.

An Earth-mass planet in a habitable zone can retain the initial amount of H\({}_{2}\)O in its atmosphere if \(X_{\rm atm}^{\rm final}\lesssim 10^{-3}\) after a giant impact, although part of the remaining water exists as "oxygen" in the atmosphere due to photodissociation. Note that, in the giant impact scenario, the water-rich atmosphere of an Earth-mass planet can be found on a core composed of basalt or CI chondrite, while a water-poor atmosphere forms on a core composed of EH chondrite.
The advanced capability of the James Webb Space Telescope (JWST) allows it to observe the atmospheres of Earth-sized planets around nearby stars, e.g., the TRAPPIST-1 planets (Lustig-Yaeger et al., 2019). In the late 2020s, _PLATO_ (Rauer et al., 2014), _Earth 2.0_ (Ge et al., 2022), and the Roman Space Telescope (Penny et al., 2019) are expected to find hundreds of Earth-sized planets in a habitable zone around Sun-like stars and M-dwarfs. Terrestrial planets with water-dominated atmospheres, as considered in this study, should be interesting targets for JWST observations. As shown in Figure 7, such terrestrial planets can exist in orbits with semimajor axes ranging from a few 0.1 au to a few au, depending on the photoevaporation efficiency (\(\varepsilon\)). This constraint on the locations of terrestrial planets with water-dominated atmospheres helps elucidate the mass-loss efficiency. We thank the anonymous referee for improving our manuscript. K.K. is supported by JSPS KAKENHI Grants-in-Aid for Scientific Research No. 20J01258, 21H00039, 23H01231. Y.H. is supported in part by JSPS KAKENHI Grant Number 18H05439. M.O. is supported by JSPS KAKENHI Grant Numbers JP18K13608 and JP19H05087. Numerical computations of SPH simulations were carried out on a Cray XC50 at the Center for Computational Astrophysics, National Astronomical Observatory of Japan. GGchem (Woitke et al., 2018)
2309.16050
Electronic Properties of Ultra-Wide Bandgap B$_x$Al$_{1-x}$N Computed from First-Principles Simulations
Ultra-wide bandgap (UWBG) materials such as AlN and BN hold great promise for future power electronics due to their exceptional properties. They exhibit large bandgaps, high breakdown fields, high thermal conductivity, and high mechanical strengths. AlN and BN have been extensively researched, however, their alloys, B$_x$Al$_{1-x}$N, are much less studied despite their ability to offer tunable properties by adjusting $x$. In this article, we predict the electronic properties of 17 recently predicted ground states of B$_x$Al$_{1-x}$N in the $x=0-1$ range using first-principles density functional theory and many-body perturbation theory within $GW$ approximation. All the B$_x$Al$_{1-x}$N structures are found to be UWBG materials and have bandgaps that vary linearly from that of wurtzite-phase ($w$) AlN (6.19 eV) to that of $w$-BN (7.47 eV). The bandstructures of B$_x$Al$_{1-x}$N show that a direct-to-indirect bandgap crossover occurs near $x = 0.25$. Furthermore, we find that B$_x$Al$_{1-x}$N alloys have much larger dielectric constants than the constituent bulk materials (AlN=$9.3~\varepsilon_0$ or BN=$7.3~\varepsilon_0$), with values reaching as high as $12.1~\varepsilon_0$. These alloys are found to exhibit large dielectric breakdown fields in the range 9--35 MV/cm with a linear dependence on $x$. This work provides the much needed advancement in the understanding of the properties of B$_x$Al$_{1-x}$N to aid their application in next-generation devices.
Cody L. Milne, Tathagata Biswas, Arunima K. Singh
2023-09-27T22:13:07Z
http://arxiv.org/abs/2309.16050v3
# Electronic Properties of B\({}_{x}\)Al\({}_{1-x}\)N Computed from \(GW\) Simulations

###### Abstract

Ultra-wide bandgap (UWBG) materials such as AlN and BN hold great promise for future power electronics due to their exceptional properties. They exhibit large bandgaps, high breakdown fields, high thermal conductivity, and high mechanical strengths. AlN and BN have been extensively researched; however, their alloys, B\({}_{x}\)Al\({}_{1-x}\)N, are much less studied despite their ability to offer tunable properties by adjusting \(x\). In this article, we predict the electronic properties of 17 recently predicted ground states of B\({}_{x}\)Al\({}_{1-x}\)N in the \(x=0-1\) range using first-principles density functional theory and many-body perturbation theory within the \(GW\) approximation. All the B\({}_{x}\)Al\({}_{1-x}\)N structures are found to be UWBG materials and have bandgaps that vary linearly from that of wurtzite-phase (_w_) AlN (6.19 eV) to that of _w_-BN (7.47 eV). The bandstructures of B\({}_{x}\)Al\({}_{1-x}\)N show that a direct-to-indirect bandgap crossover occurs near \(x=0.25\). Furthermore, we find that B\({}_{x}\)Al\({}_{1-x}\)N alloys have much larger dielectric constants than the constituent bulk materials (AlN = 9.3 \(\varepsilon_{0}\) or BN = 7.3 \(\varepsilon_{0}\)), with values reaching as high as 12.1 \(\varepsilon_{0}\). These alloys are found to exhibit large dielectric breakdown fields in the range 9-35 MV/cm with a linear dependence on \(x\). This work provides the much needed advancement in the understanding of the properties of B\({}_{x}\)Al\({}_{1-x}\)N to aid their application in next-generation devices.

keywords: Ultra-wide bandgap, GW, DFT, Boron aluminum nitride, insulator, power electronics

## 1 Introduction

Ultra-wide bandgap (UWBG) materials are traditionally defined as materials that have a bandgap larger than that of GaN (3.4 eV) [1]. Recently, these materials have received burgeoning interest due to their exciting applications in optoelectronics, radio frequency devices, and high-voltage/power electronics. Many performance parameters depend heavily on the bandgap, \(E_{g}\), for example, the drift layer thickness and the specific on-resistance in power electronic devices. In diodes, higher bandgaps lead to a reduction of impact ionization rates and tunneling effects. UWBG materials also display large breakdown fields [2; 3; 4], on the order of MV/cm, that lead to a reduction in leakage currents and are expected to enable significant miniaturization of devices. AlN and BN in their wurtzite phase (_w_-phase) are UWBG materials with bandgaps of 6.2 eV and 5.44-7.70 eV, respectively [5]. B\({}_{x}\)Al\({}_{1-x}\)N alloys are expected to display similarly high bandgaps, much higher than those of the widely used Al\({}_{x}\)Ga\({}_{1-x}\)N alloys [5; 6; 7]. In the recent past, B\({}_{x}\)Al\({}_{1-x}\)N has been grown in thin-film form with boron fractions up to \(x=0.30\) and film thicknesses up to 300 nm [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18]. In spite of the recent interest in _w_-B\({}_{x}\)Al\({}_{1-x}\)N, only a few studies have investigated their structural, electronic, and dielectric properties [7; 19; 20; 21; 22; 23]. In our recent work [24], we have shown that B\({}_{x}\)Al\({}_{1-x}\)N can have thermodynamically stable structures in the entire \(x=0-1\) range.
In that study, we predicted the structure and formation energies of ground states of B\({}_{x}\)Al\({}_{1-x}\)N alloys in the entire \(x=0-1\) range using the density-functional theory (DFT)-based cluster expansion method. This approach allowed the prediction of B\({}_{x}\)Al\({}_{1-x}\)N atomic structures by considering a wider range of structural configurations and space group symmetries [25; 26] in comparison to the previous studies where cation substitution in the _w_-AlN lattice has been the prevalent approach [6; 7; 19]. We obtained 17 ground-state structures of B\({}_{x}\)Al\({}_{1-x}\)N that displayed high formation energies, had predominantly \(sp^{3}\) bonded but distorted tetrahedra, and wurtzite-like lattices. In this study, we present the _ab initio_ computed bandgaps, band structures, effective masses, dielectric constants, and electric breakdown fields of the 17 B\({}_{x}\)Al\({}_{1-x}\)N ground state structures. We compute the bandgaps and band structures using excited-state \(GW\) simulations. Note that the DFT simulations that have been employed to study B\({}_{x}\)Al\({}_{1-x}\)N in the past [7; 19; 20; 21; 22; 23; 27; 28] severely underestimate the bandgap of semiconductors and insulators [29]. B\({}_{x}\)Al\({}_{1-x}\)N has also been studied using hybrid functionals that provide better agreement with experimentally measured bandgaps [6; 30]. However, hybrid functionals rely on empirical parameters (the fraction of exact exchange), and thus parameter-free many-body perturbation theory methods such as \(GW\) simulations are best suited to predict the electronic structure of large-gap materials like B\({}_{x}\)Al\({}_{1-x}\)N [31; 32]. Our \(GW\) simulations show that B\({}_{x}\)Al\({}_{1-x}\)N structures have bandgaps that vary linearly from that of _w_-AlN (6.19 eV) to that of _w_-BN (7.47 eV). Further, unfolded band structures show a direct-to-indirect transition at \(x=0.25\). We find that all the B\({}_{x}\)Al\({}_{1-x}\)N materials display effective masses that are comparable to those of AlN and BN; however, the structure that has \(sp^{2}\) bonded cations displays significantly higher effective masses. We compute static dielectric constants of the alloys using DFT simulations and find that they are highest at intermediate boron fraction, with values reaching as high as 12.1 \(\varepsilon_{0}\). We observe large bowing parameters of \(b\approx-7.2\varepsilon_{0}\) and \(b\approx-9.8\varepsilon_{0}\) for \(\varepsilon_{0}^{\perp}\) and \(\varepsilon_{0}^{\parallel}\), respectively. We compute the electric breakdown fields of B\({}_{x}\)Al\({}_{1-x}\)N using a machine-learned model [33] and find that the breakdown fields of the alloys vary linearly from 9 MV/cm to 35 MV/cm. Thus, we show that B\({}_{x}\)Al\({}_{1-x}\)N alloys exhibit tunable and large bandgaps, high dielectric constants, and high breakdown fields.

## 2 Computational methods

All _ab initio_ simulations reported in this study were performed using the Vienna Ab Initio Simulation Package (VASP) [34; 35; 36; 37]. Quasiparticle energies of all the B\({}_{x}\)Al\({}_{1-x}\)N, AlN, and BN were computed using many-body perturbation theory simulations within the \(GW\) approximation. To obtain the quasiparticle bandstructures we used Wannier interpolation via maximally localized Wannier functions as implemented in the Wannier90 [38; 39] package. To obtain unfolded band structures, the BandUP code [40; 41] was used.
We unfolded the electronic bands for each supercell onto the Brillouin zone of its corresponding primitive cell. The unfolded bandstructures were used to study the element- and orbital-contributions of different bands at high-symmetry _k_-points and to determine the \(x\)-value at which the direct-indirect transition occurs in B\({}_{x}\)Al\({}_{1-x}\)N. The effective masses were calculated by fitting parabolas near \(\Gamma\), K, and other valence band maxima (VBM) and conduction band minima (CBM) locations in the unfolded bandstructures. A high-throughput workflow package, _py_GWBSE [31; 32], was used to perform the \(GW\) calculations. _py_GWBSE enables automated \(GW\) calculations, including convergence of the parameters that are specific to the \(GW\) calculations such as the plane-wave cutoff for the screened Coulomb potential, the number of empty bands included in the self-energy calculation, and the number of imaginary frequency grid points used in the \(GW\) calculation. The \(GW\) simulations were performed using a plane-wave energy cutoff of 500 eV, a screened Coulomb energy cutoff of 200 eV, 70 imaginary frequency grid points, and a \(\Gamma\)-centered _k_-grid of 100 _k_-points per reciprocal lattice vector. This screened Coulomb energy cutoff and number of frequency grid points resulted in bandgaps that converged within 0.01 eV. The number of bands for each structure was converged by doubling from an initial choice based on the number of atoms in the cell until bandgap convergence within 0.1 eV was reached. For the partially self-consistent \(GW_{0}\) calculations, we performed iteration of the Green's function \(G\) until the bandgap converged to within 0.1 eV. For all of the B\({}_{x}\)Al\({}_{1-x}\)N structures, _w_-AlN, and _w_-BN, convergence was reached on the fourth iteration. The static dielectric constants were calculated using density-functional perturbation theory (DFPT) [42; 43; 44] by incorporating both the electronic as well as ionic contributions. An energy cutoff of 520 eV and 100 _k_-points per reciprocal lattice vector were used for the DFPT calculations. The Projector-Augmented-Wave (PAW) formalism [45] and the PBE [45; 46; 47] exchange-correlation functional were employed. The intrinsic breakdown field was computed using a machine-learned (ML) model by Kim et al. [33] of the form \[E_{b}=0.2444e^{0.315\sqrt{E_{g}\omega_{\text{cutoff}}}},\tag{1}\] where \(E_{b}\) is the electric breakdown field in units of MV/cm, \(E_{g}\) is the bandgap energy in units of eV, and \(\omega_{\text{cutoff}}\), the phonon cutoff frequency, is the maximum phonon frequency at \(\Gamma\) in units of THz. This ML model of Kim et al. is based on a least absolute shrinkage and selection operator (LASSO) based least-squares fit [48] and was trained on 82 insulating and semiconducting materials. These materials spanned a wide range of experimental bandgaps (0.2-13.6 eV) and _ab initio_ computed intrinsic breakdown fields (0.106-48.3 MV/cm). The phonon cutoff frequency and bandgap were found to be the most predictive parameters amongst all the considered features, including experimental bandgaps, and _ab initio_ predicted phonon cutoff frequency, nearest-neighbor distance, mean phonon frequency, dielectric constant, density, and bulk modulus. This model from Kim et al. resulted in an \(R^{2}\) value of 0.81 for the training set and 0.72 for the test set. In this work, we have used \(E_{g}\) values as obtained from our \(GW_{0}\) calculations, and the \(\omega_{\text{cutoff}}\) were obtained from our previously computed DFT phonon spectra [24].
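For reference, equation (1) is straightforward to evaluate; a minimal Python sketch is given below. The example inputs are assumed placeholder values and are not the \(GW_{0}\) gaps and phonon cutoffs tabulated in this work.

```python
import numpy as np

def breakdown_field(E_g_eV, omega_cutoff_THz):
    """Intrinsic breakdown field of equation (1) (Kim et al. [33]), in MV/cm."""
    return 0.2444 * np.exp(0.315 * np.sqrt(E_g_eV * omega_cutoff_THz))

# Illustrative call with an AlN-like bandgap and an assumed phonon cutoff.
print(breakdown_field(6.19, 21.0))  # -> roughly 9 MV/cm for these inputs
```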
## 3 Results and Discussion

### Crystal Structure and Tetrahedrality

In our previous work, the structure of B\({}_{x}\)Al\({}_{1-x}\)N was predicted using the cluster expansion formalism [49; 50; 51; 52; 53] for the entire \(x=0-1\) range. We found seventeen ground state structures of B\({}_{x}\)Al\({}_{1-x}\)N with formation energies between 0.11 and 0.25 eV atom\({}^{-1}\). The phonon spectra of all the B\({}_{x}\)Al\({}_{1-x}\)N showed no imaginary phonon frequencies, indicating that the structures are dynamically stable. We found that the structures maintained a tetrahedral bonding environment similar to AlN; however, the large lattice mismatch (\(\sim\)17%) between AlN and _w_-BN led to large distortion and rotation of these tetrahedra. The unit cells of the B\({}_{x}\)Al\({}_{1-x}\)N themselves had space group symmetries that differed from that of the wurtzite lattice, and they were largely the same for each structure at boron fractions \(x\) and \(1-x\). The structural deviations of the B\({}_{x}\)Al\({}_{1-x}\)N from the wurtzite structure were quantified in terms of the average bond lengths, bond angles, and the 'tetrahedrality' of the structure. This 'tetrahedrality' score was computed for all the alloys based on the bond angles, bond lengths, and solid angle distributions at all the sites of each B\({}_{x}\)Al\({}_{1-x}\)N structure [54; 55]. A tetrahedrality score of 1 indicates a perfect tetrahedral bonding environment and 0 indicates a completely non-tetrahedral bonding environment. Our calculations showed that the tetrahedral distortion maximizes near \(x=0.6\), along with the formation energies, and is especially high in the \(x=0.583\) and \(x=0.6\) ground-state structures [24].

### Electronic Properties

#### 3.2.1 Band Gaps

Table 1 compares our \(GW_{0}\) and DFT computed bandgaps of _w_-AlN, _w_-BN, and the B\({}_{x}\)Al\({}_{1-x}\)N alloys with those reported in the literature. Since _w_-BN exists only as a high-pressure metastable phase with no existing experimental measurements of its bandgap, we also make comparisons for the other polymorphs of BN for which experimental measurements exist, i.e., hexagonal (_h_-BN) and cubic (_c_-BN). As expected, the DFT computed bandgaps are much smaller than those predicted from the \(GW_{0}\) simulations. We find that the \(GW_{0}\) calculated values are in excellent agreement with experimental values found in the literature. For _w_-BN the values are in good agreement with previously reported computed bandgaps. We find that the partially self-consistent \(GW_{0}\) calculation (6.19 eV) is necessary to reproduce the experimental bandgap of AlN (6.2 eV) [5; 6], as the one-shot \(G_{0}W_{0}\) calculation underestimates the bandgap of AlN at 5.51 eV. A comparison of the \(G_{0}W_{0}\) and \(GW_{0}\) computed bandgaps for all the B\({}_{x}\)Al\({}_{1-x}\)N alloys can be found in Figure S1 and Table S1. Figure 1 shows the \(GW_{0}\) and DFT computed bandgaps of all the B\({}_{x}\)Al\({}_{1-x}\)N structures. The \(GW_{0}\) and DFT gaps are shown by circle and star symbols, respectively.
The size of the overlaying boxes is proportional to the formation energies of the structures and their color transparency denotes the tetrahedrality of the structure. With the exception of three outlying boron fractions, at \(x=0.417\), 0.583, and 0.6, the bandgap of B\({}_{x}\)Al\({}_{1-x}\)N alloys increases almost linearly with increasing \(x\). The dotted lines in Figure 1 show the Vegard's law fit for the \(GW_{0}\) and DFT bandgaps. When the outlying bandgaps at \(x=0.417\), 0.583, and 0.600 are excluded from the fit, the root-mean-square errors (RMSE) for the \(GW_{0}\) bandgaps and the DFT bandgaps are 0.088 eV and 0.15 eV, respectively. The Vegard's law fit that includes all the data points yields much larger RMSE values of 0.40 eV and 0.38 eV for \(GW_{0}\) and DFT, respectively. At \(x=0.417\) the anomalously low bandgap can be attributed to the presence of \(sp^{2}\) bonding in one of the B-atoms in the lattice. For the outliers at \(x=0.583\) and \(x=0.6\), the reason for the low bandgaps is not immediately obvious, but we find that these structures are among the ones with the lowest tetrahedrality scores, and thus a high distortion in the tetrahedral bonds and a high standard deviation of the N-Al-N bond angles. Thus, it is expected that experimentally grown high-quality _w_-B\({}_{x}\)Al\({}_{1-x}\)N alloys will display an increase in bandgap as the boron content increases. However, the presence of polycrystalline phases with mixed \(sp^{3}\) and \(sp^{2}\) bonded BN and/or defects could dramatically alter the bandgap with respect to the pure wurtzite structure.

\begin{table} \begin{tabular}{c c c c c} \hline \hline & DFT-PBE & \(GW_{0}\) & Calc. & Exp. \\ \hline _w_-AlN & 4.06 & 6.19 & 6.18 [6]\({}^{b}\), 6.2 [19]\({}^{a}\) & 6.1 [27], 6.12–6.19 [6], 6.25 [5] \\ _w_-BN & 5.39 & 7.47 & 6.39 [28]\({}^{c}\), 6.8 [19]\({}^{a}\), 6.84 [6]\({}^{b}\), 5.44–7.70 [5] & – \\ _c_-BN & 4.45 & 6.52 & 5.43 [28]\({}^{c}\) & 6.0, 6.1 [28], 6.1–6.4 [27] \\ _h_-BN & 4.23 & 6.34 & 4.83 [28]\({}^{c}\), 5.98 [6]\({}^{b}\) & 5.4–5.8 [27, 28], 6.08 [56] \\ _w_-B\({}_{0.25}\)Al\({}_{0.75}\)N & 4.47 & 6.52 & 5.4 [19]\({}^{a}\) & – \\ _w_-B\({}_{0.5}\)Al\({}_{0.5}\)N & 4.95 & 6.83 & 5.8 [19]\({}^{a}\) & – \\ _w_-B\({}_{0.75}\)Al\({}_{0.25}\)N & 5.15 & 7.14 & 6.5 [19]\({}^{a}\) & – \\ \hline \hline \end{tabular} \end{table} Table 1: \(GW_{0}\) and DFT computed bandgaps (in eV) of AlN, three polymorphs of BN, and three B\({}_{x}\)Al\({}_{1-x}\)N structures compared with existing experimental and computed values reported in the literature. \({}^{a}\) LDA with scissor operation, \({}^{b}\) HSE, \({}^{c}\) GGA form proposed by Engel and Vosko (GGA-EV).

#### 3.2.2 Element and orbital projected band structures

The bandstructures of B\({}_{x}\)Al\({}_{1-x}\)N alloys cannot be directly compared with the _w_-AlN or _w_-BN bandstructures since the B\({}_{x}\)Al\({}_{1-x}\)N structures are supercells of the primitive wurtzite lattice, albeit with small deviations from the wurtzite lattice. In the B\({}_{x}\)Al\({}_{1-x}\)N bandstructure, the electronic bands of the primitive wurtzite-like lattice get folded into the smaller Brillouin zone of the supercell. Thus we unfolded the B\({}_{x}\)Al\({}_{1-x}\)N bandstructures onto the wurtzite primitive cell Brillouin zone and obtained the spectral weights, which give the probability that a supercell band has the same Bloch character as the primitive cell band [57]. Further, we examined the element and orbital contributions to the unfolded bandstructures.
This analysis was restricted to the DFT bandstructures for computational tractability. Note that the nature of the bands remains practically invariant between DFT and \(GW_{0}\) because the wavefunctions from DFT change negligibly at the \(GW_{0}\) level of correction [32]. The folded \(GW_{0}\) bandstructures of all the B\({}_{x}\)Al\({}_{1-x}\)N can be found in Figures S2-S5.

Figure 1: Black circle and star symbols indicate \(GW_{0}\) and DFT computed bandgaps, respectively, of the B\({}_{x}\)Al\({}_{1-x}\)N structures, _w_-AlN, and _w_-BN. The sizes of the blue and orange squares are proportional to the formation energy of the structures [24]. Their transparency is proportional to the average tetrahedrality of the B\({}_{x}\)Al\({}_{1-x}\)N [24], with the tetrahedrality scale denoted by the color bar. The solid gray horizontal line denotes the experimental bandgap of AlN (6.2 eV) [5] and the purple bar denotes the range of computed bandgaps of _w_-BN [5]. The dotted lines show the Vegard's law fit for the \(GW_{0}\) and DFT bandgaps excluding the bandgaps at \(x=0.417\), \(0.583\), and \(0.600\).

Figure 2: Element-projected and unfolded DFT bandstructures and the orbital-projected DOS for AlN, B\({}_{0.167}\)Al\({}_{0.833}\)N, B\({}_{0.417}\)Al\({}_{0.583}\)N, and _w_-BN. The colorbar represents the elemental contribution to the number of bands in the primitive cell, i.e. the spectral weight, and ranges from 0% to 100%. In the DOS, the blue lines represent the contribution from \(s\) states and the orange lines represent the contribution from all the \(p\) states. \(E_{\rm vbm}\) is the energy of the valence band maximum, which is subtracted from the band structure such that the VBM is at 0 eV.

Figure 2 shows the unfolded and element-projected DFT bandstructures and the orbital-projected densities of states (DOS) of _w_-AlN, B\({}_{0.167}\)Al\({}_{0.833}\)N, B\({}_{0.417}\)Al\({}_{0.583}\)N, and _w_-BN. The unfolded element-projected bandstructures and the orbital-resolved densities of states of all other B\({}_{x}\)Al\({}_{1-x}\)N alloys are available in the Supplementary Materials, Figures S6-S10. Figures 2a-d show that AlN exhibits a direct bandgap at the \(\Gamma\)-point. Figure 2a shows that the Al-atom contributions to the bands are dominant in the L- and K-valleys. At the conduction band in the \(\Gamma\)-valley, the lowest energy bands are found to be dominated by the \(s\) orbitals of N atoms, see Figure 2d. Figures 2q-t show that _w_-BN exhibits a substantially higher bandgap, 7.47 eV, in comparison to AlN and is also indirect in nature, from \(\Gamma\) to K. While _w_-BN has a similar energy and contribution from the cation \(p\) states in the K- and L-valleys as AlN, the conduction band \(\Gamma\)-valley has a much higher energy, which causes the primary difference between the _w_-BN and AlN bandstructures, namely a direct bandgap in AlN and an indirect bandgap in _w_-BN. In Figures 2e-j, we can see that the low B-fraction alloy, B\({}_{0.167}\)Al\({}_{0.833}\)N, maintains a direct bandgap at \(\Gamma\), and the N-dominated conduction band \(\Gamma\)-valley behavior is similar to that observed for AlN. Interestingly, the valence band maximum (VBM) is not substantially altered due to the incorporation of B, but both the K- and the L-valleys lower in energy for the conduction bands. The B-contribution is found to be predominant near the K-, M-, and L-valleys in the lower conduction bands.
Figure 2k-p shows the band structure for the \(x=0.417\) structure, which we noted earlier for its drastically diminished bandgap and presence of \(sp^{2}\) bonded B-sites. We find two flatter low-lying bands near the conduction band minima. They are primarily due to the boron atoms. A substantial contribution is from the \(sp^{2}\) orbitals of B-atoms that are observed in two of the five B-atoms in this structure [24]. The contribution of the \(sp^{2}\) and \(sp^{3}\) bonded B-sites and N-sites to the bandstructures are available in the Supplemental Materials Figure S11. Additionally, two flatter valence bands can be seen, which are a result of \(sp^{2}\) bonded nitrogen. Above the two low-lying conduction bands, one can see a dominant L-valley, and the \(\Gamma\)-valley. Consequently, the low bandgap can be directly linked to the two \(sp^{2}\) bonded boron sites in this structure. This finding is particularly noteworthy as it suggests that the existence of \(sp^{2}\) bonded boron in _w_-B\({}_{x}\)Al\({}_{1-x}\)N has the potential to significantly reduce the bandgap of _w_-B\({}_{x}\)Al\({}_{1-x}\)N and increase the effective masses of holes and electrons. #### 3.2.3 Direct-to-indirect transition Figure 3 shows the energy gap of all the B\({}_{x}\)Al\({}_{1-x}\)N alloys for transitions between the \(\Gamma\), M, and K \(k\)-points. Energy gaps are also shown for the A and L \(k\)-points when the energy gaps at these \(k\)-points are lower than all the other energy gap values. The energy gap transitions are denoted as \(k_{\rm V}-k^{\prime}_{\rm C}\) where \(k_{\rm V}\) and \(k^{\prime}_{\rm C}\) are the location of the VBM and CBM \(k\)-points in the Brillouin zone, respectively. All the unfolded bandstructures that were used to obtain the energy gaps at the various \(k_{\rm V}\) and \(k^{\prime}_{\rm C}\) are available in the Supplemental Materials Figures S6-S10. Figure 3 shows that for all B\({}_{x}\)Al\({}_{1-x}\)N the VBM is located at the \(\Gamma\)-point. The location of the CBM is at the \(\Gamma\)-point for \(x<0.250\), for \(0.250\leq x<0.417\) the location varies, and for \(x>0.417\), the CBM is clearly at the K-point. Thus we can distinctly see that a direct-to-indirect transition occurs at \(x=0.25\). Previous studies that considered B\({}_{x}\)Al\({}_{1-x}\)N structures formed by random cation substitution in the AlN-lattice report the direct-to-indirect gap transition in B\({}_{x}\)Al\({}_{1-x}\)N alloys to be between \(x=0.12\) and \(x=0.28\)[6; 19; 58], but one study reported it to be at \(x=0.66\)[20]. As found in our work, Shen et al. [6] and Zhang et al. [7] also reported CBM to be at the K-point for higher fractions of B. Interestingly, the energy gap at \(\Gamma_{\rm V}-\Gamma_{\rm C}\) rises steeply with \(x\), whereas the energy gap at \(\Gamma_{\rm V}-\rm M_{C}\) and \(\Gamma_{\rm V}-\rm K_{C}\) remain relatively constant. One can see the outlying bandgaps at \(x=0.417\), 0.583, and 0.600, which can be attributed to the high tetrahedral distortion in the bonding environments of their respective structures, as discussed in Section 3.2.1. #### 3.2.4 Effective Masses Table 2 shows the effective masses of _w_-AlN and _w_-BN. The heavy hole effective masses of AlN are -2.99 \(m_{e}\) and they decrease to -1.11 \(m_{e}\) for _w_-BN. These values agree well with a previous report by Zhang et al. [19]. On the other hand, the electron effective masses stay relatively constant in the range 0.40 - 0.60 \(m_{e}\). 
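As a reference for the parabolic-fit procedure described above, the following minimal Python sketch recovers an effective mass from band-edge samples via \(m^{*}=\hbar^{2}\big/\frac{\partial^{2}E}{\partial k^{2}}\). The synthetic band used in the check is an illustration, not data from this work.

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant [J s]
m_e = 9.1093837015e-31   # electron rest mass [kg]
eV = 1.602176634e-19     # electron volt [J]

def effective_mass(k, E):
    """m*/m_e from a parabolic fit E(k) ~ E0 + (hbar^2 / 2m*) k^2.

    k in 1/m, measured from the band extremum; E in eV."""
    c2 = np.polyfit(k, E * eV, 2)[0]   # quadratic coefficient [J m^2]
    return hbar ** 2 / (2.0 * c2) / m_e

# Synthetic check: a band constructed with m* = 0.5 m_e is recovered by the fit.
k = np.linspace(-5e8, 5e8, 21)                   # small window around the extremum
E = hbar ** 2 * k ** 2 / (2.0 * 0.5 * m_e) / eV  # parabolic test band [eV]
print(effective_mass(k, E))                      # -> 0.5
```

Negative curvatures at a valence-band maximum yield the negative hole masses quoted in Table 2.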
Therefore, one may expect that the B\({}_{x}\)Al\({}_{1-x}\)N alloy effective masses will be in this range. Tables S2 and S3 in the SI show that this is largely the case; however, there are outliers that can be attributed to the high tetrahedral distortion in the intermediate alloy structures, which can distort the bandstructure from the expected wurtzite symmetry. Additionally, the very high effective masses seen in the \(x=0.417\) structure can be explained by the presence of \(sp^{2}\) bonding that causes low-lying flat bands.

\begin{table} \begin{tabular}{c c c c} \hline & \(\Gamma\)–M & \(\Gamma\)–K & \(\Gamma\)–A \\ Hole effective masses & & & \\ \hline AlN & -2.96 & -2.99 & -0.27 \\ _w_-BN & -1.45 & -1.11 & -1.22 \\ Electron effective masses & & & \\ AlN & 0.55 & 0.45 & 0.40 \\ \hline & K–M & K–A & \\ _w_-BN & 0.52 & 0.61 & \\ \end{tabular} \end{table} Table 2: Effective masses (in units of \(m_{e}\)) for the AlN and _w_-BN structures, taken from parabolic fits to the band edges.

### Static dielectric constants

Figure 4 shows the total static dielectric tensor components of the B\({}_{x}\)Al\({}_{1-x}\)N alloys, _w_-AlN, and _w_-BN. The out-of-plane dielectric constants, \(\varepsilon_{0}^{\perp}\), are shown as black circles and are defined as \(\varepsilon_{0}^{\perp}=\frac{\varepsilon_{0}^{xx}+\varepsilon_{0}^{yy}}{2}\). The in-plane dielectric constants, \(\varepsilon_{0}^{\parallel}\), are indicated by black star symbols and are equivalent to \(\varepsilon_{0}^{zz}\). We can see that some B\({}_{x}\)Al\({}_{1-x}\)N structures have much larger dielectric constants than the constituent bulk materials, _w_-AlN = 9.3 \(\varepsilon_{0}\) or _w_-BN = 7.3 \(\varepsilon_{0}\), with values reaching as high as 12.1 \(\varepsilon_{0}\). The static dielectric constant is known to scale as the summation over \(i\) of \(\tilde{Z}^{2}/\lambda_{i}^{2}\) [63], where \(\tilde{Z}\) is the mode effective charge and \(\lambda_{i}\) is the frequency of the \(i\)-th infrared (IR) active phonon mode. Thus, the dramatic increases in \(\varepsilon_{0}^{\perp}\) and \(\varepsilon_{0}^{\parallel}\) values could be due to two possible effects: 1) an increase in ionicity due to a change in the local charge distribution, or 2) a decrease in structural stability and a softening of IR-active phonon modes. A more detailed analysis of the vibrational, dielectric, and piezoelectric properties of these alloys is necessary to determine how these effects result in the dramatic increase of the lattice dielectric constant in B\({}_{x}\)Al\({}_{1-x}\)N. We also observe large bowing in \(\varepsilon_{0}^{\perp}\) and \(\varepsilon_{0}^{\parallel}\), with bowing parameters of \(b\approx-7.2\)\(\varepsilon_{0}\) and \(b\approx-9.8\)\(\varepsilon_{0}\), respectively.

Figure 4: Static dielectric constants, \(\varepsilon_{0}^{\perp}\) (black circles) and \(\varepsilon_{0}^{\parallel}\) (black stars) of B\({}_{x}\)Al\({}_{1-x}\)N, _w_-AlN, and _w_-BN. Red hexagons show experimentally measured dielectric constants of AlN [59; 60; 61], B\({}_{0.07}\)Al\({}_{0.93}\)N and B\({}_{0.10}\)Al\({}_{0.90}\)N [16; 14] reported in the literature. DFT computed values obtained from the Materials Project database [62] are shown as blue plus symbols. The size of green and lavender squares is proportional to the formation energies of the B\({}_{x}\)Al\({}_{1-x}\)N and indicates the \(\varepsilon_{0}^{\perp}\) and \(\varepsilon_{0}^{\parallel}\), respectively. The transparency of the squares is proportional to the tetrahedrality of B\({}_{x}\)Al\({}_{1-x}\)N, where the color bar indicates the tetrahedrality value.

In contrast, in the case of other group-III nitrides like Al\({}_{x}\)Ga\({}_{1-x}\)N
and In\({}_{x}\)Al\({}_{1-x}\)N [64; 65], the bowing is insignificant. However, some alloys do exhibit bowing trends similar to that of B\({}_{x}\)Al\({}_{1-x}\)N, for example, the highly mismatched Sc\({}_{x}\)Al\({}_{1-x}\)N [65], wurtzite Zn(O, S), and wurtzite Zn(O, Se) alloys [66]. We also compute the high-frequency dielectric constants, i.e., the electronic part of the dielectric constants. Interestingly, the high-frequency dielectric constants remain nearly invariant across the boron fraction and are between 4 and 5 \(\varepsilon_{0}\). All high-frequency dielectric constants can be found in Supplementary Materials Figure S12. There are very few existing reports of the dielectric constants of B\({}_{x}\)Al\({}_{1-x}\)N. DFT predicted dielectric constants of B\({}_{0.25}\)Al\({}_{0.75}\)N (mp-1019380) and B\({}_{0.75}\)Al\({}_{0.25}\)N (mp-1019379) are available in the open-source Materials Project (MP) database [62]. While the MP values, shown in Figure 4 as blue plus symbols, are in good agreement with those obtained in our study, the lattices and atomic arrangements in the MP structures differ from those studied in this work. A recent study by Zhu et al. [14; 16] measured the permittivity of epitaxially grown low-boron-content (\(x=0.07,0.10\)) B\({}_{x}\)Al\({}_{1-x}\)N and showed that they have high relative permittivity at 50 K (12 \(\varepsilon_{0}\) for \(x=0.10\) and 12.5 \(\varepsilon_{0}\) for \(x=0.07\)) with low dielectric loss as a function of temperature at 10 kHz, 100 kHz, and 1 MHz, substantially higher than AlN (9.6 \(\varepsilon_{0}\)). A similar permittivity, 12.2 \(\varepsilon_{0}\), has been reported in polycrystalline B\({}_{0.10}\)Al\({}_{0.90}\)N films [67]. The dielectric constants computed in our study are not as high as those seen experimentally [14; 16; 67], but we do observe a substantial increase in the dielectric constant at intermediate boron fractions of B\({}_{x}\)Al\({}_{1-x}\)N. We postulate that this disagreement between theory and the experimentally measured values could be due to the high strain in the fabricated thin films caused by the lattice mismatch with the substrate, defects, grain boundaries, and/or mixed phases.

### Electric breakdown fields

Experimental data on the breakdown field, \(E_{b}\), in B\({}_{x}\)Al\({}_{1-x}\)N alloys are scarce in the literature; however, a breakdown field of \(\sim 6.4\) MV/cm with weak temperature dependence has been measured in B\({}_{0.07}\)Al\({}_{0.93}\)N [16]. Traditionally, first-principles simulations with empirical deformation potentials have been applied to theoretically estimate the intrinsic breakdown field in semiconductors and insulators. Recently, fully _ab initio_ methods of calculating the dielectric breakdown field have been developed, which involve computationally expensive DFPT calculations of the electron-phonon scattering rates [68]. Here, we employ the machine-learned model developed by Kim et al. [33] to predict the breakdown field at a vastly reduced computational cost. This model, Equation (1), requires the bandgap energy and \(\Gamma\)-point phonon frequencies to estimate the \(E_{b}\). We utilize the bandgaps obtained from our \(GW_{0}\) simulations and the \(\Gamma\)-point phonon frequencies from the phonon spectra reported in our previous work [24]. Figure 5 shows the \(E_{b}\) of B\({}_{x}\)Al\({}_{1-x}\)N, _w_-AlN, and _w_-BN as a function of \(x\).
The \(E_{b}\) for B\({}_{x}\)Al\({}_{1-x}\)N alloys ranges from 9.0 MV/cm to 35 MV/cm. Our prediction of the AlN \(E_{b}\) is in excellent agreement with the fully _ab initio_ prediction of the \(E_{b}\) for AlN from Kim et al. [33]. Note that the experimentally measured values of the \(E_{b}\) of AlN [60; 69; 70; 71; 72], shown as a green band in Figure 5, are substantially lower (1-5 MV/cm) than the value predicted in this work (8.8 MV/cm). Experimentally measured \(E_{b}\) values vary heavily in the literature for a given material, owing to differing growth processes and types of experimental measurement [68; 69; 73]. Thus, experimentally measured \(E_{b}\) in B\({}_{x}\)Al\({}_{1-x}\)N may be much lower than those shown in Figure 5 due to the presence of defects and impurities. Therefore, our results provide a theoretical maximum for what could be achieved in wurtzite B\({}_{x}\)Al\({}_{1-x}\)N.

Figure 5: Electric breakdown fields, \(E_{b}\), of B\({}_{x}\)Al\({}_{1-x}\)N, _w_-AlN, and _w_-BN, shown as circles and computed using the model by Kim et al. [33]. The size of the circles is proportional to the \(GW_{0}\) bandgap energy. Their color denotes the magnitude of the \(\Gamma\)-point phonon frequency, as indicated by the color bar. The green bar represents the experimentally measured range of \(E_{b}\), including measurements done at AC and DC voltages, for AlN [60; 69; 70; 71; 72] and B\({}_{0.07}\)Al\({}_{0.93}\)N [16]. The star symbol indicates the fully _ab initio_ prediction of the \(E_{b}\) for AlN [33].

It is worth noting that the predicted \(E_{b}\) values of B\({}_{x}\)Al\({}_{1-x}\)N are extremely high, much higher than those seen in diamond (10-21.5 MV/cm) [4; 68], which has one of the highest measured breakdown fields. We can also see a nearly linear dependence of \(E_{b}\) on \(x\). Interestingly, the reduced bandgap in some of the B\({}_{x}\)Al\({}_{1-x}\)N structures also correlates with an increase in the predicted phonon cutoff frequency, which causes the B\({}_{x}\)Al\({}_{1-x}\)N structures that exhibited outlying bandgaps to nevertheless follow the linear trend in \(E_{b}\). Applying a linear regression fit to our electric breakdown values, the linear dependence can be described by:

\[E_{b}(x)=25.71x+8.47\ \text{MV/cm}, \tag{2}\]

where \(x\) is the boron fraction in B\({}_{x}\)Al\({}_{1-x}\)N.
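As a quick illustration of how a fit of this form can be reproduced, the sketch below applies an ordinary least-squares line to a set of \((x,E_{b})\) pairs; the sample points are hypothetical stand-ins for the computed values plotted in Figure 5.

```python
# Sketch: recovering a linear E_b(x) trend like Eq. (2) by least squares.
# The (x, E_b) pairs below are illustrative placeholders, not our data.
import numpy as np

x = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])         # boron fraction
e_b = np.array([8.5, 13.6, 18.8, 23.9, 29.0, 34.2])  # E_b in MV/cm

slope, intercept = np.polyfit(x, e_b, 1)
print(f"E_b(x) = {slope:.2f} x + {intercept:.2f} MV/cm")
```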
## 4 Conclusion

In summary, we predict the electronic properties, dielectric properties, and breakdown fields of the recently predicted 17 ground states of B\({}_{x}\)Al\({}_{1-x}\)N in the \(x=0-1\) range using first-principles density functional theory and many-body perturbation theory within the \(GW\) approximation. We find that the bandgaps of B\({}_{x}\)Al\({}_{1-x}\)N vary linearly from that of _w_-AlN (6.19 eV) to that of _w_-BN (7.47 eV). We observe a few outliers to this trend, where B\({}_{x}\)Al\({}_{1-x}\)N structures with any \(sp^{2}\)-bonded B or low tetrahedrality display reduced bandgaps. Unfolded element-projected bandstructures show that a direct-to-indirect gap transition occurs at \(x\approx 0.25\), which agrees well with previous predictions in the literature. We find that hole effective masses decrease overall as a function of B fraction, while the electron effective masses do not vary substantially. We see that the presence of \(sp^{2}\) bonding can lead to localized states with high effective masses. DFT simulations of the dielectric constants of B\({}_{x}\)Al\({}_{1-x}\)N reveal that these alloys exhibit large dielectric constants that bow heavily as a function of the boron content. Electric breakdown fields predicted using the model of Kim et al. [33] show a linear increase, from 9.0 to 35 MV/cm, as the boron content increases. Thus, B\({}_{x}\)Al\({}_{1-x}\)N alloys present an exciting opportunity for tunable-bandgap, high-dielectric-constant, and high-breakdown-field materials for a variety of applications.

**Supplementary Materials** Supplementary Materials are available from Materials Today or from the author. Our Supplementary Materials provide more information on the \(G_{0}W_{0}\) bandgaps, the \(GW_{0}\) and element-projected bandstructures, the effective masses, and the dielectric constants.

**Acknowledgements** This work was supported by ULTRA, an Energy Frontier Research Center funded by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), under Award No. DESC0021230. The authors acknowledge the San Diego Supercomputer Center under the NSF-XSEDE and NSF-ACCESS Award No. DMR150006, the NSF-FuSE Award No. 2235447, and Research Computing at Arizona State University for providing HPC resources. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The authors have no competing interests to declare.
2309.09170
A Unifying Privacy Analysis Framework for Unknown Domain Algorithms in Differential Privacy
There are many existing differentially private algorithms for releasing histograms, i.e. counts with corresponding labels, in various settings. Our focus in this survey is to revisit some of the existing differentially private algorithms for releasing histograms over unknown domains, i.e. the labels of the counts that are to be released are not known beforehand. The main practical advantage of releasing histograms over an unknown domain is that the algorithm does not need to fill in missing labels because they are not present in the original histogram but in a hypothetical neighboring dataset could appear in the histogram. However, the challenge in designing differentially private algorithms for releasing histograms over an unknown domain is that some outcomes can clearly show which input was used, clearly violating privacy. The goal then is to show that the differentiating outcomes occur with very low probability. We present a unified framework for the privacy analyses of several existing algorithms. Furthermore, our analysis uses approximate concentrated differential privacy from Bun and Steinke'16, which can improve the privacy loss parameters rather than using differential privacy directly, especially when composing many of these algorithms together in an overall system.
Ryan Rogers
2023-09-17T05:47:33Z
http://arxiv.org/abs/2309.09170v2
# A Unifying Privacy Analysis Framework for Unknown Domain Algorithms in Differential Privacy

###### Abstract

There are many existing differentially private algorithms for releasing histograms, i.e. counts with corresponding labels, in various settings. Our focus in this survey is to revisit some of the existing differentially private algorithms for releasing histograms over unknown domains, i.e. where the labels of the counts that are to be released are not known beforehand. The main practical advantage of releasing histograms over an unknown domain is that the algorithm does not need to fill in missing labels that are not present in the original histogram but could appear in the histogram of a hypothetical neighboring dataset. However, the challenge in designing differentially private algorithms for releasing histograms over an unknown domain is that some outcomes can clearly show which input was used, clearly violating privacy. The goal then is to show that the differentiating outcomes occur with very low probability. We present a unified framework for the privacy analyses of several existing algorithms. Furthermore, our analysis uses approximate concentrated differential privacy from Bun and Steinke [1], which can improve the privacy loss parameters rather than using differential privacy directly, especially when composing many of these algorithms together in an overall system.

## 1 Introduction

Releasing histograms, counts over some set of items, is one of the most fundamental tasks in data analytics. Given a dataset, we want to compute the number of rows that satisfy some condition, grouped together by some column(s) of interest, say the number of rows in each country. Despite the commonality of this task, there are many different differentially private algorithms to use, depending on the setting we are in, e.g. do we want to show only the top-\(k\), do we know how many items a user can have in any row, do we know all possible values that a column can take, how often is the data refreshed? We will focus on the setting where we do not have, or even know, the counts of all possible items in a histogram. This is typical in practice, because SQL queries will only return items that have positive counts; otherwise, how would the query engine know about items not present in the dataset? Unfortunately, differential privacy requires considering a hypothetical neighboring dataset, which might contain previously unseen items, making the privacy analysis challenging. Either we need to populate all possible items that _could_ appear in the dataset and fill in the missing counts as zero, or we need to design better, more practical DP algorithms. In the latter case, we would want to be able to ensure DP whether we were computing the top-10 skills in a dataset or the top-10 credit card numbers in the dataset. The latter should return no results, but the algorithm should ensure privacy in both cases. We will refer to the scenario where the DP algorithm does not know the domain of items that could be in a dataset beforehand as the _Unknown Domain_ setting.

In differential privacy there are typically two different parameters \(\varepsilon,\delta\) that are used to quantify the privacy of an algorithm. The parameter \(\varepsilon\) is typically referred to as the amount of _privacy loss_ that an algorithm ensures, while \(\delta\) is commonly referred to as the probability with which the algorithm can have privacy loss larger than \(\varepsilon\).
In fact, when \(\delta>0\), we say an algorithm satisfies _approximate_ differential privacy, while we say it satisfies _pure_ DP when \(\delta=0\). The parameter \(\delta\) can then be interpreted as the chance that the algorithm returns a result that clearly violates privacy, e.g. returning an individual's data record from the dataset. Not every approximate DP algorithm admits this interpretation, which has resulted in variants of DP based on the Renyi divergence. In particular, adding Gaussian noise with fixed standard deviation \(\sigma\) to a statistic of interest cannot be _pure_ DP, but can be \((\varepsilon(\delta),\delta)\)-DP for any \(\delta>0\). Hence the probability of failure need not be fixed in advance in the algorithm. However, there are many different algorithms that are shown to be \((\varepsilon,\delta)\)-differentially private for a particular \(\delta>0\). In particular, designing DP algorithms for releasing histograms in the Unknown Domain setting will require setting a failure probability in advance.

Consider the example where we want to know the most popular words typed in an email client. The set of all potential words is massive, and can include slang, typos, and abbreviations, so we only want to take words that are present in the dataset, rather than the set of all possible words. One approach would be to add noise to all word counts and sort them to return the top ones. However, there could be a word like "RyanRogersSSN123456789". Even if there is noise in the counts, the mere presence of this record is a clear violation of privacy. To prevent this, we can introduce a threshold on the noisy counts, but then there is a chance that the noise is especially large, pushing a small count above the threshold. This is where the \(\delta>0\) probability comes in: we want to make sure that the threshold is set high enough that small counts with noise can only appear above the threshold with probability \(\delta\). However, it does not suffice to just bound the probability of _bad_ outcomes to prove that an algorithm is approximately differentially private.

We present a general framework that can be used in the privacy analysis of several algorithms that aim to release counts from an unspecified domain, returning only results that are present in the original dataset, although the framework can be applied to other scenarios. The main idea is to show that a mechanism \(A\) satisfies three conditions: (1) there are small-chance events that can lead to _differentiating_ outcomes, where it is clear which of two neighboring datasets was used; (2) there is another mechanism \(A^{\prime}\) that can know both neighboring datasets and is equal to \(A\) on all non-differentiating outcomes; and (3) \(A^{\prime}\) is pure DP. Although the algorithms we cover were previously shown to satisfy approximate differential privacy, we revisit their analyses, following our general framework. Furthermore, we show that providing a privacy analysis in terms of approximate _concentrated DP_ (CDP) can lead to improved privacy parameters, especially for algorithms based on Gaussian noise or the Exponential Mechanism. We advocate for presenting privacy guarantees of new or existing algorithms in terms of approximate CDP rather than approximate DP, as composing CDP privacy parameters is straightforward, while composing approximate DP parameters can be complicated and loose. Note that each algorithm is likely to be part of a more general privacy system, where the overall privacy guarantee of the system can be stated in terms of approximate DP. It has become common to compose algorithms using an analysis based on approximate CDP and then convert to an overall approximate DP guarantee at the end. Leaving the privacy guarantees of individual algorithms in terms of approximate DP would lead to loose bounds: converting the DP parameters to CDP parameters, composing the CDP parameters, and then converting back to DP at the end.
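To make the bookkeeping concrete, the following minimal sketch (illustrative code, not part of the survey) composes two approximate-CDP guarantees and converts to approximate DP once at the end, using the CDP-to-DP conversion stated later in Theorem 3; the parameter values are arbitrary.

```python
import math

def cdp_to_dp(rho, delta, delta_prime):
    """delta-approximate rho-CDP  =>  (eps, delta + delta')-DP (Theorem 3)."""
    eps = rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta_prime))
    return eps, delta + delta_prime

eps, delta = 0.5, 1e-8        # each mechanism: delta-approximate (eps^2/2)-CDP
rho_total = 2 * eps**2 / 2    # approximate-CDP rho's add under composition
delta_total = 2 * delta       # ...and so do the delta's (up to a delta1*delta2 term)

# Compose in CDP, convert once at the end:
print(cdp_to_dp(rho_total, delta_total, delta_prime=1e-8))   # eps ~ 4.54

# Versus converting each mechanism to DP first and adding the eps's (looser):
eps_each, _ = cdp_to_dp(eps**2 / 2, delta, delta_prime=1e-8)
print(2 * eps_each)                                          # eps ~ 6.32
```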
## 2 Preliminaries

We now define approximate differential privacy, which depends on neighboring datasets \(x,x^{\prime}\in\mathcal{X}\), denoted as \(x\sim x^{\prime}\), that differ in the presence or absence of one user's records.

**Definition 2.1** (Dwork et al. [10, 9]).: _An algorithm \(A:\mathcal{X}\to\mathcal{Y}\) is \((\varepsilon,\delta)\)-differentially private if, for any measurable set \(S\subseteq\mathcal{Y}\) and any neighboring inputs \(x\sim x^{\prime}\),_

\[\Pr[A(x)\in S]\leq e^{\varepsilon}\Pr[A(x^{\prime})\in S]+\delta. \tag{1}\]

_If \(\delta=0\), we say \(A\) is \(\varepsilon\)-DP or simply pure DP._

One of the classical pure DP algorithms is the Laplace mechanism, which adds Laplace noise to a statistic. However, to determine the scale of the noise needed to ensure DP, we must know the statistic's _sensitivity_. We define the \(\ell_{p}\)-sensitivity of a statistic \(f:\mathcal{X}\to\mathbb{R}^{d}\), which maps a dataset \(x\in\mathcal{X}\) to a real vector in \(\mathbb{R}^{d}\), as the following, where the max is taken over neighboring \(x\sim x^{\prime}\in\mathcal{X}\):

\[\Delta_{p}(f)=\max_{x\sim x^{\prime}}\left\{||f(x)-f(x^{\prime})||_{p}\right\}.\]

We then have the following privacy guarantee for the Laplace mechanism.

**Theorem 1** (Dwork et al. [10]).: _Let \(f:\mathcal{X}\to\mathbb{R}^{d}\) have \(\ell_{1}\)-sensitivity \(\Delta_{1}(f)\); then the mechanism \(M:\mathcal{X}\to\mathbb{R}^{d}\) where \(M(x)=f(x)+(Z_{1},\cdots,Z_{d})\) with \(\{Z_{i}\}\stackrel{{ i.i.d.}}{{\sim}}\mathrm{Lap}(\Delta_{1}(f)/\varepsilon)\) is \(\varepsilon\)-DP for \(\varepsilon>0\)._

Another classical pure DP mechanism is the Exponential Mechanism, which takes a quality score \(q:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\) mapping a dataset and an outcome to a real value; the goal is to return outcomes that have a high quality score. We define the _range_ of a quality score to be the following, where the max over datasets is over neighbors \(x\sim x^{\prime}\in\mathcal{X}\):1

Footnote 1: The original Exponential Mechanism was presented in terms of a quality score’s _sensitivity_, while recent work from [5] showed that the Exponential Mechanism is more naturally defined in terms of the quality score’s range. See also Jinshuo Dong’s blog post [https://dongjs.github.io/2020/02/10/ExpMech.html](https://dongjs.github.io/2020/02/10/ExpMech.html)

\[\Delta(q)=\max_{y,y^{\prime}\in\mathcal{Y}}\max_{x\sim x^{\prime}}\{(q(x,y)-q(x^{\prime},y))-(q(x,y^{\prime})-q(x^{\prime},y^{\prime}))\}.\]

We then have the following privacy guarantee for the Exponential Mechanism.
**Theorem 2** (McSherry and Talwar [15]).: _Let \(q:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\) be a quality score with range \(\Delta(q)\); then the mechanism \(M:\mathcal{X}\to\mathcal{Y}\) is \(\varepsilon\)-DP for any \(\varepsilon>0\), where_

\[\Pr[M(x)=y]\propto\exp\left(\frac{\varepsilon q(x,y)}{\Delta(q)}\right)\]

We now define approximate concentrated differential privacy (CDP),2 which differs slightly from the original definition but was later shown [21] to be equivalent to the original version. Similar to approximate DP, it permits a small probability of unbounded Renyi divergence.

Footnote 2: Although [1] defines zCDP, to differentiate it from the CDP of [8], we will simply write CDP for the version from [1]

**Definition 2.2** (Bun and Steinke [1], Papernot and Steinke [16]).: _Suppose \(A:\mathcal{X}\to\mathcal{Y}\) and \(\rho,\delta\geq 0\). We say the algorithm \(A\) is \(\delta\)-approximate \(\rho\)-CDP if, for any neighboring datasets \(x,x^{\prime}\), there exist distributions \(P^{\prime},P^{\prime\prime},Q^{\prime},Q^{\prime\prime}\) such that the outputs are distributed according to the following mixture distributions:_

\[A(x)\sim(1-\delta)P^{\prime}+\delta P^{\prime\prime}\qquad A(x^{\prime})\sim(1-\delta)Q^{\prime}+\delta Q^{\prime\prime},\]

_where for all \(\lambda\geq 1\), \(D_{\lambda}(P^{\prime}\|Q^{\prime})\leq\rho\lambda\) and \(D_{\lambda}(Q^{\prime}\|P^{\prime})\leq\rho\lambda\)._

We can also convert approximate differential privacy to approximate CDP and vice versa.

**Theorem 3** (Bun and Steinke [1]).: _If \(A\) is \((\varepsilon,\delta)\)-DP then it is \(\delta\)-approximate \(\varepsilon^{2}/2\)-CDP. If \(A\) is \(\delta\)-approximate \(\rho\)-CDP then it is \((\rho+2\sqrt{\rho\log(1/\delta^{\prime})},\delta^{\prime}+\delta)\)-DP for any \(\delta^{\prime}>0\)._

The classical CDP mechanism is the Gaussian Mechanism. Note that the Gaussian Mechanism was originally introduced as satisfying approximate DP, but it was then shown to satisfy pure CDP in later work [8, 1].

**Theorem 4** (Bun and Steinke [1]).: _Let \(f:\mathcal{X}\to\mathbb{R}^{d}\) have \(\ell_{2}\)-sensitivity \(\Delta_{2}(f)\); then the mechanism \(M:\mathcal{X}\to\mathbb{R}^{d}\) where \(M(x)=f(x)+(Z_{1},\cdots,Z_{d})\) with \(\{Z_{i}\}\stackrel{{ i.i.d.}}{{\sim}}\mathrm{N}(0,\frac{\Delta_{2}(f)^{2}}{2\rho})\) is \(\rho\)-CDP for \(\rho>0\)._

Note that we can apply Theorem 3 to conclude that the Exponential Mechanism, being \(\varepsilon\)-DP, is \(\varepsilon^{2}/2\)-CDP, but work from Cesar and Rogers [2] showed that the Exponential Mechanism actually has a better CDP parameter.3

Footnote 3: Also see the previous blog post at [https://differentialprivacy.org/exponential-mechanism-bounded-range/](https://differentialprivacy.org/exponential-mechanism-bounded-range/)

**Theorem 5** (Cesar and Rogers [2]).: _Let \(q:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\) be a quality score with range \(\Delta(q)\); then the mechanism \(M:\mathcal{X}\to\mathcal{Y}\) is \(\varepsilon^{2}/8\)-CDP for any \(\varepsilon>0\), where_

\[\Pr[M(x)=y]\propto\exp\left(\frac{\varepsilon q(x,y)}{\Delta(q)}\right)\]

We also state the composition property of CDP, showing how the overall privacy parameters degrade when multiple CDP algorithms are run on a dataset.
**Theorem 6**.: _Let \(A_{1}:\mathcal{X}\rightarrow\mathcal{Y}\) be \(\delta_{1}\)-approximate \(\rho_{1}\)-CDP and \(A_{2}:\mathcal{X}\times\mathcal{Y}\rightarrow\mathcal{Z}\) be such that \(A_{2}(\cdot,y)\) is \(\delta_{2}\)-approximate \(\rho_{2}\)-CDP for all \(y\in\mathcal{Y}\). Then \(A:\mathcal{X}\rightarrow\mathcal{Z}\) where \(A(x)=A_{2}(x,A_{1}(x))\) is \((\delta_{1}+\delta_{2}-\delta_{1}\cdot\delta_{2})\)-approximate \((\rho_{1}+\rho_{2})\)-CDP._

## 3 Unifying Framework

We now present a general framework that can be used to unify some of the previous analyses for proving that various algorithms are approximate DP (CDP). If we want to show that an algorithm \(A\) is approximate CDP, we need to consider the randomness in \(A\) that can generate differentiating outcomes: given two inputs \(x\) and \(x^{\prime}\), these are outcomes of \(A\) that could have only come from one of them. We want to show that the chance that the randomness in \(A\) generates these bad outcomes is at most \(\delta\). Furthermore, we want to show that for all other, non-differentiating outcomes, there are related mechanisms that match \(A\) on inputs \(x\) and \(x^{\prime}\). Lastly, we need to show that these related mechanisms satisfy pure CDP. To be more precise, the following lemma can be used to prove directly that many different algorithms known to be approximate DP are also approximate CDP, without needing to resort to the general approximate DP to approximate CDP conversion.

**Lemma 3.1**.: _Let \(A:\mathcal{X}\rightarrow\mathcal{Y}\) be a randomized algorithm and fix parameters \(\rho,\delta\geq 0\). If for each pair of neighboring datasets \(x,x^{\prime}\) the algorithm \(A\) satisfies the following three conditions, then it is \(\delta\)-approximate \(\rho\)-CDP:_

1. _There exist events_ \(E,E^{\prime}\) _such that_ \[\Pr_{A(x)}[E],\Pr_{A(x^{\prime})}[E^{\prime}]\geq 1-\delta.\] _Let_ \(S\) _be the corresponding set of outcomes of both_ \(A(x)\) _conditioned on the event_ \(E\) _and_ \(A(x^{\prime})\) _conditioned on_ \(E^{\prime}\)_._

2. _There exist distributions_ \(P^{\prime}\) _and_ \(Q^{\prime}\) _with common support_ \(S\)_, such that for all_ \(y\in S\) _we have the following, where_ \(P\)_,_ \(Q\) _are the distributions of_ \(A(x)\) _and_ \(A(x^{\prime})\)_, respectively:_4

Footnote 4: We use \(P(y),P^{\prime}(y),P^{\prime\prime}(y),Q(y),Q^{\prime}(y),Q^{\prime\prime}(y)\) to denote the Radon-Nikodym derivative of the corresponding distribution with respect to some base measure (see a similar note in [19])

\[P(y)=\mathbb{1}\left\{y\in S\right\}\cdot\Pr_{A(x)}[E]P^{\prime}(y)\qquad Q(y)=\mathbb{1}\left\{y\in S\right\}\cdot\Pr_{A(x^{\prime})}[E^{\prime}]Q^{\prime}(y)\]

3. _We also have_ \(D_{\lambda}(P^{\prime}||Q^{\prime})\leq\lambda\rho\) _and_ \(D_{\lambda}(Q^{\prime}||P^{\prime})\leq\lambda\rho\) _for all_ \(\lambda\geq 1\)_._

Proof.: Fix neighbors \(x,x^{\prime}\). Let \(P,Q\) be the distributions of \(A(x)\) and \(A(x^{\prime})\), respectively.
Consider the following distribution \(P^{\prime\prime}\), where we write \(P(\cdot\mid\neg E)\) to denote the conditional distribution of \(P\) given the complement event \(\neg E\):

\[P^{\prime\prime}(y)=\frac{1}{\delta}\left(P(y\mid\neg E)\Pr[\neg E]+P^{\prime}(y)\left(\Pr_{A(x)}[E]-(1-\delta)\right)\right)\]

We then have that for \(y\in S\)

\[(1-\delta)P^{\prime}(y)+\delta P^{\prime\prime}(y)=(1-\delta)P^{\prime}(y)+\delta\cdot\frac{1}{\delta}\left(P^{\prime}(y)\cdot(\Pr[E]-(1-\delta))\right)=P^{\prime}(y)\Pr[E]=P(y).\]

Furthermore, for \(y\notin S\) we have

\[(1-\delta)P^{\prime}(y)+\delta P^{\prime\prime}(y)=\delta\cdot\frac{1}{\delta}\left(P(y\mid\neg E)\Pr[\neg E]\right)=P(y\mid\neg E)\Pr[\neg E]=P(y).\]

Hence, we have \(P(y)=(1-\delta)P^{\prime}(y)+\delta P^{\prime\prime}(y)\) for all outcomes \(y\). A similar argument works to show \(Q(y)=(1-\delta)Q^{\prime}(y)+\delta Q^{\prime\prime}(y)\), with \(Q^{\prime\prime}(y)\) defined similarly to \(P^{\prime\prime}(y)\). By assumption, we know that \(D_{\lambda}(P^{\prime}||Q^{\prime})\leq\lambda\rho\) and \(D_{\lambda}(Q^{\prime}||P^{\prime})\leq\lambda\rho\) for all \(\lambda\geq 1\). Hence, \(A\) is \(\delta\)-approximate \(\rho\)-CDP.

Although a similar lemma can be used for approximate DP, this will typically lead to unnecessarily loose privacy parameters, as we will see later. Hence, providing an approximate CDP analysis of each algorithm will be useful in any privacy system that combines these algorithms, because the CDP privacy parameters simply add up and can be converted to approximate DP parameters at the end, if necessary.

## 4 Unknown Domain Algorithms

We will denote a histogram as \(h\in\mathcal{H}=\{(i,c_{i}):(i,c_{i})\in[d]\times\mathbb{N}\}\), which consists of a set of pairs with a label \(i\) in \([d]\) and its corresponding count \(c_{i}\). When we define neighboring histograms, we will need to consider how much a user can modify a histogram. If we remove a user's contributions from a histogram \(h\), this can both remove items entirely and decrease counts by some amount. We will say that a histogram \(h\) is \((\ell_{0},\ell_{\infty})\)-sensitive if removing or adding a user's data to histogram \(h\) can change at most \(\ell_{0}\) distinct elements, and each count in \(h\) can differ by at most \(\ell_{\infty}\). We now turn to some applications of Lemma 3.1 for some existing mechanisms.

### Positive Count Histograms

We start with the setting where we want to return a histogram subject to CDP when we are only given positive-count items, so each count present in the histogram is at least 1. It is straightforward to extend this analysis to the case where the histogram only has counts above some known value larger than 1. This is a natural setting for data analytics, as GROUP BY queries in SQL only provide items that exist in the dataset, so no zero counts are returned. This is problematic for privacy, as a neighboring histogram can have fewer results, resulting in a differing set of counts to add noise to. We present a general template for the private algorithm in Algorithm 1, where we deliberately leave the noise distribution and the threshold arbitrary.

```
Require: Histogram \(h\), noise distribution Noise, and threshold \(T>0\)
Ensure: Noisy histogram \(\tilde{h}\) with counts above \(T\)
  Initialize \(\tilde{h}=\emptyset\).
  for each item \(i\) where \((i,c_{i})\in h\) such that \(c_{i}>0\) do
    Set \(\tilde{c}_{i}=c_{i}+Z_{i}\) where \(Z_{i}\sim\text{Noise}\).
    if \(\tilde{c}_{i}>T\) then
      \(\tilde{h}=\tilde{h}\cup\{(i,\tilde{c}_{i})\}\)
    end if
  end for
```
**Algorithm 1** Unknown Domain Histogram
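A direct transcription of this template in Python might look as follows; this is an illustrative sketch (not code from the survey), instantiated with Laplace noise and with the scale and threshold left as parameters.

```python
# Sketch of Algorithm 1 with Laplace noise; illustrative, not reference code.
import numpy as np

def unknown_domain_histogram(hist, scale, threshold):
    """hist: dict mapping item label -> count.
    Adds Laplace(scale) noise to each positive count and releases only the
    (item, noisy count) pairs whose noisy count exceeds the threshold T."""
    out = {}
    for item, count in hist.items():
        if count > 0:  # only items actually present in the data
            noisy = count + np.random.laplace(0.0, scale)
            if noisy > threshold:
                out[item] = noisy
    return out
```

Theorem 7 below specifies how the scale and threshold should be set, for both the Laplace and Gaussian variants, to obtain an approximate CDP guarantee.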
Previous mechanisms follow this template, specifically those of Korolova et al. [13] and Wilson et al. [22], who used Laplace noise, and of Swanberg et al. [20], who used Gaussian noise. We now prove that Algorithm 1 is approximate CDP using Lemma 3.1.

**Theorem 7**.: _Assume input histograms \(h\) are \((\Delta_{0},\Delta_{\infty})\)-sensitive. If we use \(\mathrm{Noise}\) being the distribution of \(|h|\) many i.i.d. \(\mathrm{Lap}(\Delta_{\infty}/\varepsilon)\) random variables and threshold_

\[T=\Delta_{\infty}+\frac{\Delta_{\infty}}{\varepsilon}\log(\tfrac{\Delta_{0}}{2\delta}),\]

_then Algorithm 1 is \(\delta\)-approximate \(\Delta_{0}\cdot\varepsilon^{2}/2\)-CDP. Furthermore, if we use \(\mathrm{Noise}=\mathrm{N}(0,\Delta_{\infty}^{2}/\varepsilon^{2}\cdot I_{|h|})\) and threshold_

\[T=\Delta_{\infty}+\frac{\Delta_{\infty}}{\varepsilon}\Phi^{-1}(1-\delta/\Delta_{0}),\]

_where \(\Phi^{-1}(z)\) is the inverse CDF of a standard normal, then Algorithm 1 is \(\delta\)-approximate \(\Delta_{0}\cdot\varepsilon^{2}/2\)-CDP._

Proof.: We will rely on Lemma 3.1 to prove this result. Consider neighbors \(h,h^{\prime}\) where, without loss of generality, \(h\) contains one additional user's data compared to \(h^{\prime}\). By assumption, we know that \(h\) can differ in at most \(\Delta_{0}\) counts for different labels. Furthermore, we know that the differing counts can differ by at most \(\Delta_{\infty}\). Consider the set \(S\) of labels that are common between \(h\) and \(h^{\prime}\), along with their corresponding counts. Since we assume that \(h\) has one additional user's data compared to \(h^{\prime}\), this set \(S\) must include all labels of \(h^{\prime}\). Note that the counts for the items that are present in \(h\) but not in \(h^{\prime}\) must be at most \(\Delta_{\infty}\). We now cover each item in Lemma 3.1.

1. We define the event \(E\) to be all the randomness in \(A(h)\) that can generate outcomes in \(S\); that is, \(E\) includes the noise that is added to items that are common between \(h\) and \(h^{\prime}\), while the noise added to the counts of items in \(h\) but not in \(h^{\prime}\) must keep them below the threshold. We then lower bound the probability of the event \(E\). Note that for every item \(j\) that is in \(h\) but not in \(S\), its count can be no more than \(\Delta_{\infty}\). \[\Pr_{A(h)}\left[E\right]=\prod_{j:(j,\cdot)\in h\setminus h^{\prime}}\Pr[c_{j}+\mathrm{Noise}\leq T]\geq\prod_{j:(j,\cdot)\in h\setminus h^{\prime}}\Pr[\Delta_{\infty}+\mathrm{Noise}\leq T]\] We then consider the two scenarios, with either Laplace or Gaussian noise.
 * (Laplace) With Laplace noise of scale \(b>0\) and \(T=\Delta_{\infty}+b\log(\frac{\Delta_{0}}{2\delta})\), we have \[\Pr_{A(h)}[E]\geq\left(1-\frac{1}{2}\exp\left(-\frac{T-\Delta_{\infty}}{b}\right)\right)^{\Delta_{0}}\geq 1-\delta.\]
 * (Gaussian) Next we consider the Gaussian noise version with standard deviation \(\sigma>0\) and \(T=\Delta_{\infty}+\sigma\Phi^{-1}(1-\delta/\Delta_{0})\): \[\Pr_{A(h)}\left[E\right]\geq\Phi\left(\frac{T-\Delta_{\infty}}{\sigma}\right)^{\Delta_{0}}\geq 1-\delta.\]

In either case, the event \(E^{\prime}\) is all randomness in \(A(h^{\prime})\) that can generate outcomes in \(S\), which includes all the randomness in \(A(h^{\prime})\) because \(h^{\prime}\) is a subset of \(h\).

2.
We then consider the mechanism \(A^{\prime}\) whose domain is the set of common items between \(h\) and \(h^{\prime}\), with noise added to each count so that only noisy counts above \(T\) are returned with their corresponding item labels. We write the distribution of \(A^{\prime}(h)\) as \(P^{\prime}\) and the distribution of \(A^{\prime}(h^{\prime})\) as \(Q^{\prime}\). Because we add independent noise to each histogram count, we can separate out the noise terms added to the counts of labels that are not common between \(h\) and \(h^{\prime}\); hence we know that \(P(y)=\Pr[E]P^{\prime}(y)\) for \(y\in S\), by design. Furthermore, \(Q\equiv Q^{\prime}\).

3. Note that \(A^{\prime}\) is either the Laplace mechanism or the Gaussian mechanism over the common items between \(h\) and \(h^{\prime}\). We cover each variant separately.
 * (Laplace) We first consider the case where we apply Laplace noise with noise parameter \(b>0\). Note that \(A^{\prime}\) is a Laplace mechanism over a histogram that can change in at most \(\Delta_{0}\) counts, each of which can differ by at most \(\Delta_{\infty}\). Ignoring the threshold in \(A^{\prime}\), as this is a post-processing of the noisy histogram and does not impact the privacy analysis, we can then say that the Laplace mechanism is being applied to a histogram with \(\ell_{1}\)-sensitivity \(\Delta_{0}\Delta_{\infty}\). Hence, we know that the Laplace mechanism is \(\Delta_{0}\Delta_{\infty}/b\)-(pure) DP and hence \(\frac{\Delta_{0}^{2}\Delta_{\infty}^{2}}{2b^{2}}\)-(pure) CDP. However, this will not give us the result we want, because it would result in \(\Delta_{0}^{2}\varepsilon^{2}/2\)-CDP with \(b=\Delta_{\infty}/\varepsilon\). This was due to using the \(\ell_{1}\)-sensitivity of the histogram and then applying Theorem 1. We now consider the Laplace mechanism only on the common items between \(h\) and \(h^{\prime}\) that also have differing counts. We call the corresponding mechanism \(\hat{A}\) and denote the distribution of \(\hat{A}(h)\) by \(\hat{P}\) and the distribution of \(\hat{A}(h^{\prime})\) by \(\hat{Q}\). Note that each noisy count in \(\hat{A}\) is generated independently, so we can say that each count in \(\hat{A}\) is a single univariate Laplace mechanism. Each such Laplace mechanism is then \(\Delta_{\infty}/b\)-(pure) DP and hence \((\Delta_{\infty}/b)^{2}/2\)-(pure) CDP. Applying composition from Theorem 6 over each univariate Laplace mechanism implies that \(\hat{A}\) is \(\frac{\Delta_{0}\Delta_{\infty}^{2}}{2b^{2}}\)-(pure) CDP. Let \(\tilde{A}\) denote the Laplace mechanism applied to the counts that were unchanged between \(h\) and \(h^{\prime}\), with distribution \(\tilde{P}\) for \(\tilde{A}(h)=\tilde{A}(h^{\prime})\). For ease of notation, let \(h\) and \(h^{\prime}\) match on the first \(d^{\prime}\leq d\) indices and let the first \(k\leq\Delta_{0}\) indices of \(h\) and \(h^{\prime}\) have differing counts. Hence, we have for an outcome \(y=(y_{1},\cdots,y_{d^{\prime}})\) that \(P^{\prime}(y)=\hat{P}(y_{1},\cdots,y_{k})\cdot\tilde{P}(y_{k+1},\cdots,y_{d^{\prime}})\). Similarly, we have \(Q^{\prime}(y)=\hat{Q}(y_{1},\cdots,y_{k})\cdot\tilde{P}(y_{k+1},\cdots,y_{d^{\prime}})\). This gives us the following for \(\lambda\geq 1\): \[D_{\lambda}(P^{\prime}||Q^{\prime})=D_{\lambda}\left(\hat{P}||\hat{Q}\right)\leq\frac{\lambda\Delta_{0}\Delta_{\infty}^{2}}{2b^{2}}\] and similarly, \[D_{\lambda}(Q^{\prime}||P^{\prime})\leq\frac{\lambda\Delta_{0}\Delta_{\infty}^{2}}{2b^{2}}.\]
 * (Gaussian) We next consider the Gaussian noise variant.
We will denote by \(A^{\prime}\) the Gaussian mechanism whose domain is the set of common items between \(h\) and \(h^{\prime}\), with noise standard deviation \(\sigma\), where only counts above \(T\) are returned with their corresponding item labels. We write the distribution of \(A^{\prime}(h)\) as \(P^{\prime}\) and the distribution of \(A^{\prime}(h^{\prime})\) as \(Q^{\prime}\). Note that \(A^{\prime}\) is a Gaussian mechanism over a histogram that can change in at most \(\Delta_{0}\) counts, each of which can differ by at most \(\Delta_{\infty}\). Ignoring the threshold in \(A^{\prime}\) (again, this does not impact the privacy analysis), we can then say that the Gaussian mechanism is being applied to a histogram with \(\ell_{2}\)-sensitivity \(\Delta_{\infty}\sqrt{\Delta_{0}}\). Hence, we know that the Gaussian mechanism is \(\frac{\Delta_{\infty}^{2}\Delta_{0}}{2\sigma^{2}}\)-(pure) CDP from Theorem 4. Furthermore, post-processing the Gaussian mechanism is also CDP with the same parameters, so restricting the noisy counts to be larger than \(T\) gets us back to the distributions \(P^{\prime}\) and \(Q^{\prime}\). This gives us \(D_{\lambda}(P^{\prime}||Q^{\prime})\leq\lambda\frac{\Delta_{\infty}^{2}\Delta_{0}}{2\sigma^{2}}\) and \(D_{\lambda}(Q^{\prime}||P^{\prime})\leq\lambda\frac{\Delta_{\infty}^{2}\Delta_{0}}{2\sigma^{2}}\), for all \(\lambda\geq 1\).

Hence, using Laplace noise with scale \(b>0\) in the Unknown Domain Histogram algorithm is \(\delta\)-approximate \(\frac{\Delta_{0}\Delta_{\infty}^{2}}{2b^{2}}\)-CDP, and setting \(b=\Delta_{\infty}/\varepsilon\) completes the proof for Laplace noise. Furthermore, using Gaussian noise with standard deviation \(\sigma\) is \(\delta\)-approximate \(\frac{\Delta_{\infty}^{2}\Delta_{0}}{2\sigma^{2}}\)-CDP, and setting \(\sigma=\frac{\Delta_{\infty}}{\varepsilon}\) completes the proof.
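The two thresholds in Theorem 7 are easy to compute in practice; the following sketch (illustrative helper functions, not from the survey) writes them out, using `scipy` for the standard normal quantile.

```python
# Thresholds from Theorem 7, as code (illustrative helper functions).
import math
from scipy.stats import norm

def laplace_threshold(eps, delta, d0, d_inf):
    # T = Delta_inf + (Delta_inf / eps) * log(Delta_0 / (2 * delta))
    return d_inf + (d_inf / eps) * math.log(d0 / (2.0 * delta))

def gaussian_threshold(eps, delta, d0, d_inf):
    # T = Delta_inf + (Delta_inf / eps) * Phi^{-1}(1 - delta / Delta_0)
    return d_inf + (d_inf / eps) * norm.ppf(1.0 - delta / d0)

print(laplace_threshold(1.0, 1e-6, 1, 1))   # ~ 14.1
print(gaussian_threshold(1.0, 1e-6, 1, 1))  # ~ 5.8
```

Note the qualitative difference: the Gaussian threshold grows like \(\Phi^{-1}(1-\delta/\Delta_{0})\approx\sqrt{2\log(\Delta_{0}/\delta)}\), while the Laplace threshold grows like \(\log(\Delta_{0}/\delta)\), so the Gaussian variant can release items with much smaller counts at the same \(\delta\).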
We want to highlight the improvement we get from the CDP analysis. With a similar analysis based on approximate DP, we would remove the items that cannot be returned under both neighboring datasets and would be left with the Laplace mechanism over the common items. We could then use the \(\ell_{1}\)-sensitivity of the resulting histogram, but then adding Laplace noise with scale \(b=\Delta_{\infty}/\varepsilon\) would result in \((\varepsilon\Delta_{0},\delta)\)-DP, which can be converted to CDP using Theorem 3 to get \(\delta\)-approximate \(\varepsilon^{2}\Delta_{0}^{2}/2\)-CDP, whereas we get \(\delta\)-approximate \(\varepsilon^{2}\Delta_{0}/2\)-CDP in our analysis. Furthermore, if we convert approximate CDP guarantees to approximate DP guarantees too early, this leads to loose privacy parameters when developing privacy systems that use these algorithms. For example, if we use the Gaussian noise variant in Theorem 7 with \(\Delta_{0}=1\), we can conclude that it is \((\varepsilon^{2}/2+\varepsilon\sqrt{2\log(1/\delta^{\prime})},\delta+\delta^{\prime})\)-DP for any \(\delta^{\prime}>0\). If we only use this DP guarantee and combine it with another Unknown Domain Histogram with Gaussian noise, basic composition gives \((\varepsilon^{\prime},2\delta+2\delta^{\prime})\)-DP for any \(\delta^{\prime}>0\), where

\[\varepsilon^{\prime}=\varepsilon^{2}+2\varepsilon\sqrt{2\log(1/\delta^{\prime})}.\]

However, if we had not converted to approximate DP until after composing both Unknown Domain Histogram mechanisms, we could have gotten an overall \((\varepsilon^{\prime\prime},2\delta+\delta^{\prime})\)-DP guarantee, where

\[\varepsilon^{\prime\prime}=\varepsilon^{2}+2\varepsilon\sqrt{\log(1/\delta^{\prime})}.\]

### Top-\((\bar{k}+1)\) Count Histograms

We now turn to a slight variant of releasing a private histogram over a histogram with positive counts. In this setting, we assume that only a limited part of the histogram is available, perhaps due to an existing data analytics system that cannot return all counts. This setting first appeared in [6] and is especially important when designing DP algorithms on top of existing systems that cannot provide the full histogram of counts [18]. We will refer to the histogram consisting of the top-\((\bar{k}+1)\) items as the histogram with the items and corresponding counts that are in the top-\((\bar{k}+1)\). Note that the top-\((\bar{k}+1)\) items can change between neighboring histograms. We now present a general algorithm template, similar to Algorithm 1, in Algorithm 2; it takes an arbitrary threshold \(T>0\) and a top-\((\bar{k}+1)\) histogram.

```
Require: Histogram \(h\), noise standard deviation \(\sigma\), threshold \(T>0\), and top-\((\bar{k}+1)\) histogram
Ensure: Noisy histogram \(\tilde{h}\) with at most \(\bar{k}\) counts
  Let \(h_{(\bar{k})}\) be the histogram consisting of the top-\(\bar{k}\) items, breaking ties arbitrarily.
  if \(h_{(\bar{k})}\) has fewer than \(\bar{k}\) items then
    Pad \(h_{(\bar{k})}\) with items \(\bot_{1},\cdots,\bot_{\bar{k}-|h_{(\bar{k})}|}\) with \(c_{\bot_{j}}=0\) until there are \(\bar{k}\) items in \(h_{(\bar{k})}\).
  end if
  Let \(c_{(\bar{k}+1)}\) be the count of the \((\bar{k}+1)\)-th item in \(h\), which might be zero.
  Set \(\tilde{T}=T+c_{(\bar{k}+1)}+\mathrm{N}(0,\sigma^{2})\)
  Initialize \(\tilde{h}=\emptyset\).
  for each item \(i\) where \((i,c_{i})\in h_{(\bar{k})}\) do
    Set \(\tilde{c}_{i}=c_{i}+\mathrm{N}(0,\sigma^{2})\)
    if \(\tilde{c}_{i}>\tilde{T}\) then
      \(\tilde{h}=\tilde{h}\cup\{(i,\tilde{c}_{i})\}\)
    end if
  end for
```
**Algorithm 2** Unknown Domain from Top-\((\bar{k}+1)\)
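In Python, Algorithm 2 might be sketched as follows (illustrative code, not the survey's reference implementation); the label `bot_j` stands in for the padding items \(\bot_{j}\).

```python
# Sketch of Algorithm 2: Gaussian noise with a data-dependent noisy threshold.
import numpy as np

def unknown_domain_top_k_bar(hist, sigma, T, k_bar):
    """hist: dict mapping item label -> count (the available top-(k_bar+1))."""
    ranked = sorted(hist.items(), key=lambda kv: kv[1], reverse=True)
    top = ranked[:k_bar]
    top += [(f"bot_{j}", 0) for j in range(k_bar - len(top))]  # pad with zeros
    c_next = ranked[k_bar][1] if len(ranked) > k_bar else 0    # (k_bar+1)-th count
    t_noisy = T + c_next + np.random.normal(0.0, sigma)        # noisy threshold
    out = {}
    for item, count in top:
        noisy = count + np.random.normal(0.0, sigma)
        if noisy > t_noisy:
            out[item] = noisy
    return out
```

The key difference from Algorithm 1 is that the threshold itself is noisy and anchored at the \((\bar{k}+1)\)-th count, so that items just below the visible part of the histogram are unlikely to distinguish neighboring datasets.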
We now show that it is indeed approximate CDP.

**Theorem 8**.: _Assume input histograms \(h\) are \((\Delta_{0},\Delta_{\infty})\)-sensitive. If we use \(\sigma=\frac{\Delta_{\infty}}{\varepsilon}\) and threshold_

\[T=\Delta_{\infty}+\frac{\sqrt{2}\cdot\Delta_{\infty}}{\varepsilon}\Phi^{-1}(1-\delta/\Delta_{0}),\]

_then Algorithm 2 is \(\delta\)-approximate \(\Delta_{0}\cdot\varepsilon^{2}/2\)-CDP._

Proof.: We follow the same analysis as in the proof of Theorem 7, which used Lemma 3.1. We again set \(S\) to be the set of labels that are common between neighbors \(h\) and \(h^{\prime}\). Note that we are only considering items with counts in the top-\(\bar{k}\) of each histogram and the items between \(h_{(\bar{k})}\) and \(h^{\prime}_{(\bar{k})}\), including the zero-count items with labels in \(\{\bot_{j}\}\). Hence, there might be different items between \(h_{(\bar{k})}\) and \(h^{\prime}_{(\bar{k})}\), as reducing some counts in \(h\) might change the order of the top-\((\bar{k}+1)\). From Lemma 5.2 in [6], we know that there can be at most \(\min\{\bar{k},\Delta_{0}\}\) many differing items between the top-\((\bar{k}+1)\) histograms \(h_{(\bar{k})}\) and \(h^{\prime}_{(\bar{k})}\). We then follow the three items that we need to show in order to apply Lemma 3.1.

1. We denote \(A\) as Algorithm 2 and the event \(E\) as comprising the noise added to the counts in outcomes \(S\) for \(A(h)\) and the noise added to the threshold \(T+c_{(\bar{k}+1)}\) to get \(\tilde{T}\), such that the noise added to the differing items not in \(S\) leaves their noisy counts no more than \(\tilde{T}\). We define \(E^{\prime}\) similarly for \(A(h^{\prime})\). Note that for any item \(j\) that is in \(h_{(\bar{k})}\) but cannot be returned in \(S\), we know \[c_{j}\leq c^{\prime}_{j}+\Delta_{\infty}\leq c^{\prime}_{(\bar{k}+1)}+\Delta_{\infty}\leq c_{(\bar{k}+1)}+\Delta_{\infty}.\] The analysis is straightforward because the difference of two Gaussians is itself Gaussian. Hence, with \(\hat{T}=\sqrt{2}\sigma\Phi^{-1}(1-\delta/\Delta_{0})\) so that \(T=\hat{T}+\Delta_{\infty}\), we have \[\Pr_{A(h)}\left[\neg E\right]\leq\sum_{i=1}^{\Delta_{0}}\Pr\left[c_{(\bar{k}+1)}+\Delta_{\infty}+\mathrm{N}(0,\sigma^{2})>c_{(\bar{k}+1)}+T+\mathrm{N}(0,\sigma^{2})\right]=\Delta_{0}\Pr[\mathrm{N}(0,2\sigma^{2})>\hat{T}]=\Delta_{0}\left(1-\Phi\left(\frac{\hat{T}}{\sqrt{2}\sigma}\right)\right)=\Delta_{0}\left(1-(1-\delta/\Delta_{0})\right)=\delta.\] The analysis to bound \(\Pr_{A(h^{\prime})}[\neg E^{\prime}]\) is similar.

2. We then take the distribution \(P^{\prime}(y)\) to be the distribution of \(A(h)\) conditioned on \(E\). Note that \(P(y)=\Pr[E]P^{\prime}(y)\) for each \(y\in S\), and similarly \(Q(y)=\Pr[E^{\prime}]Q^{\prime}(y)\) with \(Q^{\prime}(y)\) the distribution of \(A(h^{\prime})\) conditioned on \(E^{\prime}\). We will modify the mechanisms \(A(h)\) and \(A(h^{\prime})\) in a way that results in the same mechanisms when conditioned on the events \(E\) and \(E^{\prime}\), respectively. For each label that differs between \(h_{(\bar{k})}\) and \(h^{\prime}_{(\bar{k})}\), we change it to a common label among \(b_{1},\cdots,b_{\ell}\), where \(\ell=|\{j:(j,\cdot)\in\{h_{(\bar{k})}\setminus h^{\prime}_{(\bar{k})}\}\}|\leq\Delta_{0}\). Because we condition on the events \(E\), no outcome \(y\in S\) can include the indices \(b_{1},\cdots,b_{\ell}\). Furthermore, it was shown in [17][Lemma 6.4] that the resulting histograms with common labels will also have as many as \(\Delta_{0}\) items with differing counts, and those counts can change by at most \(\Delta_{\infty}\), regardless of how we assign the common labels \(\{b_{j}\}\). We then let \(P^{\prime}\) and \(Q^{\prime}\) be the resulting distributions after this relabeling. Next we will need to bound the Renyi divergence between \(P^{\prime}\) and \(Q^{\prime}\). We will make use of the following result from [12][Corollary 4.3].

**Lemma 4.1**.: _Suppose \(F,F^{\prime}\) are two \(\mu\)-strongly convex functions over \(\mathcal{K}\subseteq\mathbb{R}^{d}\), and \(F-F^{\prime}\) is \(G\)-Lipschitz over \(\mathcal{K}\)._
_For any \(m>0\), if we let \(P\propto e^{-mF}\) and \(Q\propto e^{-mF^{\prime}}\) be two probability distributions on \(\mathcal{K}\), then we have for all \(\lambda\geq 1\)_

\[D_{\lambda}(P||Q)\leq\frac{\lambda mG^{2}}{2\mu}\]

This result is useful because it allows us to condition on the outcomes of a joint Gaussian mechanism falling in some convex region, which will correspond to releasing only "good" outcomes, i.e. not allowing certain counts to go above some noisy value. For the Gaussian mechanism, we have \(F(z)=||z-h||_{2}^{2}\) and \(F^{\prime}(z)=||z-h^{\prime}||_{2}^{2}\), which are both \(2\)-strongly convex over any convex region. Furthermore, \(||z-h||_{2}^{2}-||z-h^{\prime}||_{2}^{2}\) is \(2\sqrt{\Delta_{0}}\cdot\Delta_{\infty}\)-Lipschitz. For the density of a Gaussian with variance \(\sigma^{2}\), we then use \(m=\frac{1}{2\sigma^{2}}\), so that Lemma 4.1 yields a Renyi divergence bound of \(\frac{\lambda mG^{2}}{2\mu}=\frac{\lambda\Delta_{0}\Delta_{\infty}^{2}}{2\sigma^{2}}\).

3. We now want to bound the Renyi divergence between \(P^{\prime}\) and \(Q^{\prime}\), which are conditioned on the events \(E\) and \(E^{\prime}\), respectively. To do this we will consider the joint Gaussian distribution that releases all counts over the histograms with common labels, including the \((\bar{k}+1)\)-th largest count with \(T\) added to it, labeled \(\bot\), but we do not enforce the threshold. We only want to consider events in which the items with labels in \(\{b_{j}\}\) do not attain noisy counts above the noisy count for \(\bot\). We then consider the convex region \(\mathcal{K}\) where these "bad" noisy counts do not go above the noisy count for \(\bot\). We then apply Lemma 4.1 to claim that the resulting mechanism conditioned on this region has a bound on the Renyi divergence that is the same as if it were the Gaussian mechanism not constrained to the region \(\mathcal{K}\). Dropping the items with counts lower than the noisy count for \(\bot\) is simply post-processing, which does not increase the Renyi divergence bound. Hence, we have \(D_{\lambda}(P^{\prime}||Q^{\prime})\leq\frac{\lambda\Delta_{0}\Delta_{\infty}^{2}}{2\sigma^{2}}\) and \(D_{\lambda}(Q^{\prime}||P^{\prime})\leq\frac{\lambda\Delta_{0}\Delta_{\infty}^{2}}{2\sigma^{2}}\) for all \(\lambda\geq 1\).

Setting \(\sigma=\Delta_{\infty}/\varepsilon\) completes the proof.

#### 4.2.1 Exponential Mechanism

For the previous applications, there was a crucial assumption that the input histograms were \((\Delta_{0},\Delta_{\infty})\)-sensitive, specifically that the histogram's \(\ell_{0}\)-sensitivity must be bounded by \(\Delta_{0}\). However, this might not always be the case, and such a bound would be difficult to enforce in practice, as it requires that each user contribute only a certain number of distinct items to the data. One way to limit the impact a single user can have on the result is to limit the number of items that can be returned. The Laplace/Gaussian-based algorithms can return an arbitrary number of items above the threshold, and in the setting where we only have access to the top-\((\bar{k}+1)\) items, they could return all \(\bar{k}\). Hence, the CDP parameter could be bounded in terms of \(\bar{k}\), but we would like to control how many items can be returned from the counts we have access to. In this case, we can return the top-\(k\) results, where \(k\leq\bar{k}\) is an input to the algorithm. Unfortunately, the previous analysis would still have the CDP parameter scale with \(\bar{k}\), despite only wanting to return \(k\).
To ensure that the privacy loss only scales with the number of items that are returned, we can use the classical Exponential Mechanism [15], as presented in Theorem 2. It was shown that the Exponential Mechanism is part of a general framework of DP mechanisms called _report noisy max_. This framework can be summarized as adding noise to the quality scores of all outcomes and returning the outcome with the noisy maximum. Note that it is critical that only the arg max is returned, not the actual noisy value. It turns out that adding Gumbel noise to the quality scores and returning the max item is equivalent to the Exponential Mechanism; see [6]. Furthermore, it was shown that iteratively applying the Exponential Mechanism to return the top-\(k\) outcomes is equivalent to adding Gumbel noise to all quality scores and returning the outcomes with the \(k\) largest noisy quality scores [6]. Other mechanisms in the report noisy max framework include adding Laplace noise to the quality scores [7] and adding Exponential noise to the quality scores [4], which turns out to be equivalent to the Permute-and-Flip Mechanism [14].5

Footnote 5: See the previous blog post on _one-shot_ top-\(k\) DP algorithms: [https://differentialprivacy.org/one-shot-top-k/](https://differentialprivacy.org/one-shot-top-k/)

We then present the Unknown Domain Gumbel algorithm in Algorithm 3 from [6], which takes an additional parameter \(k\leq\bar{k}\) and, importantly, returns a ranked list of items but not their counts. We then state the Unknown Domain Gumbel's privacy guarantee, which largely follows the analysis in [6], adapted for CDP.

**Theorem 9**.: _Assume input histograms \(h\) are \((\infty,1)\)-sensitive. If we use the noise scale \(\beta=1/\varepsilon\) and threshold_

\[T=1+\frac{1}{\varepsilon}\log(\frac{\Delta_{0}}{\delta}),\]

_then Algorithm 3 is \(\delta\)-approximate \(k\varepsilon^{2}/8\)-CDP._

Proof.: We will apply our general framework from Lemma 3.1 and some previous results from [6]. We will denote \(A\) as the Unknown Domain Gumbel mechanism. We assume WLOG that \(h,h^{\prime}\) differ by there being one additional user's data in \(h\) when compared to \(h^{\prime}\).

1. We denote \(S\) as the set of common outcomes that are possible between the top-\(\bar{k}\) histograms \(h_{(\bar{k})}\) and \(h^{\prime}_{(\bar{k})}\), and \(E\) as all the randomness in \(A(h)\) that can generate outcomes in \(S\). Hence, \(E\) must only include the Gumbel noise added to items that are common to \(h\) and \(h^{\prime}\), and the noise added to the differing counts must produce noisy counts either below the top-\(k\) or below the noisy threshold \(\tilde{T}\) that is set. From Lemma 5.5 in [6], we have the following bound when \(T=1+\beta\log\left(\Delta_{0}/\delta\right)\):

\[\Pr_{A(h)}[E]\geq 1-\delta.\]

We similarly define the event \(E^{\prime}\) for the randomness in \(A(h^{\prime})\) that can generate outcomes in \(S\), which gives us the same lower bound for \(\Pr_{A(h^{\prime})}[E^{\prime}]\).

2. From Lemma 5.6 in [6], we know that there exist distributions \(P^{\prime},Q^{\prime}\) such that for all outcomes \(y\in S\), we have

\[\Pr[A(h)=y]=\Pr[E]P^{\prime}(y)\qquad\Pr[A(h^{\prime})=y]=\Pr[E^{\prime}]Q^{\prime}(y).\]

3. These distributions \(P^{\prime},Q^{\prime}\) are also related to the Exponential Mechanism. Specifically, \(P^{\prime}\) is the distribution of iteratively applying the Exponential Mechanism on \(h_{(\bar{k})}\), restricted to the items that are common between \(h_{(\bar{k})}\) and \(h^{\prime}_{(\bar{k})}\). Similarly, \(Q^{\prime}\) is the distribution of iteratively applying the Exponential Mechanism on \(h^{\prime}_{(\bar{k})}\), restricted to the items that are common between \(h_{(\bar{k})}\) and \(h^{\prime}_{(\bar{k})}\). We can then use Theorem 5 to conclude that \(D_{\lambda}(P^{\prime}||Q^{\prime})\leq\frac{\lambda k}{8\beta^{2}}\) and \(D_{\lambda}(Q^{\prime}||P^{\prime})\leq\frac{\lambda k}{8\beta^{2}}\), for all \(\lambda\geq 1\). Setting \(\beta=\frac{1}{\varepsilon}\) completes the proof.
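The one-shot Gumbel trick at the heart of this analysis is compact enough to sketch directly. The code below is illustrative and covers only the plain known-domain top-\(k\) selection, without the extra noisy threshold that the Unknown Domain variant adds.

```python
# Sketch: one-shot top-k Exponential Mechanism via Gumbel noise.
# Only the ranking is released, never the noisy scores themselves.
import numpy as np

def gumbel_top_k(hist, beta, k):
    """hist: dict mapping item -> count (the quality scores).
    Equivalent to k iterative applications of the Exponential Mechanism."""
    noisy = {item: c + np.random.gumbel(0.0, beta) for item, c in hist.items()}
    return sorted(noisy, key=noisy.get, reverse=True)[:k]
```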
### Continual Observation

We now present an approach for continually releasing a running counter over various domain elements while ensuring differential privacy. The _continual observation_ privacy model was introduced in [3, 11] and is meant to ensure strong privacy guarantees in the setting where we want to continually provide a running counter on a stream of events, denoted as \(\omega^{(1:\ell)}=\left(\omega^{(1)},\cdots,\omega^{(\ell)}\right)\) for \(\ell=1,\cdots,L\), where \(\omega^{(i)}\in\{0,1\}\). It is then straightforward to extend continual observation counters to the setting of releasing a running histogram over a known set of items, i.e. \(\omega^{(i)}\subseteq[d]\), where a user can contribute a limited set of items, \(|\omega^{(i)}|\leq\Delta_{0}\), at each round \(i\) of the stream. Typically we want to ensure _event-level_ privacy, where we consider the change in outcomes when one event in the stream can change. Recent work from [17, 23] has also considered the continual observation setting, but in the case where we want to continually release histograms over an unknown set of items. Consider the motivating example where we want to provide a running counter for the drugs that are purchased at a pharmacy, where each customer can buy at most \(\Delta_{0}\) different drugs. There might be many different drugs to provide a counter for, and new drugs can emerge later that were not known about before. Hence, we would like algorithms that do not require a fixed set of items to provide counts over.

We describe the algorithm at a high level, as describing it formally would require some additional notation. The main subroutine is the Binary Mechanism from [3, 11], which adds at most \(\log_{2}(\ell)\) noise terms to the count of a stream of \(\ell\) events; the number of noise terms depends on the number of 1s in the binary representation of \(\ell\). This ensures that for a stream of length at most \(L\), we can release \(L\) counts, one after each event, each with Gaussian noise of standard deviation \(O(\log_{2}(L)/\varepsilon)\). Although we release \(L\) counts, the privacy analysis relies on the fact that we form a table of _partial sums_, so that one event in the stream can modify at most \(\log_{2}(L)\) partial sums, and we can view the Binary Mechanism as a Gaussian mechanism on the table of partial sums for all common items between \(\omega^{(1:L)}\) and \(\omega^{\prime(1:L)}\), which has \(\ell_{2}\)-sensitivity \(\sqrt{\Delta_{0}\cdot\log_{2}(L)}\).

The Unknown Domain Binary Mechanism follows the same approach as the classical Binary Mechanism, which we present at a high level. The idea of the Binary Mechanism is to split a stream of items \(\omega^{(1:\ell)}\) into overlapping partial sums, guaranteeing that each \(\omega^{(i)}\) is part of no more than \(\left\lceil\log_{2}(L+1)\right\rceil\) partial sums. We create separate partial sums for each item in \(\omega^{(1:\ell)}\). For instance, with \(\ell=8\) there is a single 1 in the binary representation of \(\ell\), so we need only add noise to the single partial sum \(\sum_{j=1}^{8}\mathbbm{1}\left\{u\in\omega^{(j)}\right\}\) for each \(u\in\omega^{(1:\ell)}\); we add \(\mathrm{N}(0,\sigma^{2})\) noise to this partial sum, and the same noisy partial sum is then reused for any later prefix that utilizes \(\sum_{j=1}^{8}\mathbbm{1}\left\{u\in\omega^{(j)}\right\}\). For instance, when \(\ell=10\), we will use the two partial sums \(\sum_{j=1}^{8}\mathbbm{1}\left\{u\in\omega^{(j)}\right\}\) and \(\sum_{j=9}^{10}\mathbbm{1}\left\{u\in\omega^{(j)}\right\}\) for each \(u\in\omega^{(1:10)}\), each with its own noise added to it. Note that for each \(u\in\omega^{(1:\ell)}\setminus\omega^{(1:\ell-1)}\), we add fresh noise to each prior partial sum for the new item \(u\). We then only release an item with its corresponding noisy count if the count is larger than a fixed threshold \(T>0\).
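The dyadic decomposition that drives the Binary Mechanism is easy to compute; the sketch below (illustrative code) returns the partial-sum intervals whose union is the prefix \([1,\ell]\), one interval per 1-bit in the binary representation of \(\ell\).

```python
def dyadic_blocks(ell):
    """Partial-sum intervals (a, b) covering [1, ell], one per 1-bit of ell."""
    blocks, hi = [], ell
    while hi > 0:
        size = hi & (-hi)                 # lowest set bit of the remaining prefix
        blocks.append((hi - size + 1, hi))
        hi -= size
    return blocks[::-1]

print(dyadic_blocks(8))    # [(1, 8)]          -- the ell = 8 example above
print(dyadic_blocks(10))   # [(1, 8), (9, 10)] -- the ell = 10 example above
```

Each interval carries its own Gaussian noise term, so a released count at round \(\ell\) is the true count plus at most \(\lceil\log_{2}(\ell+1)\rceil\) independent noise terms.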
We now present the privacy analysis for the Unknown Domain Binary Mechanism from [17].

**Theorem 10**.: _Assume that \(|\omega^{(\ell)}|\leq\Delta_{0}\) for all \(\ell\in[L]\). Setting \(\sigma=1/\varepsilon\) and the threshold \(T\) to be the following for any \(\delta>0\),_

\[T=1+\sigma\cdot\sqrt{\left\lceil\log_{2}(L+1)\right\rceil}\cdot\Phi^{-1}\left(1-\frac{\delta}{\Delta_{0}\cdot L}\right),\]

_ensures that the Unknown Domain Binary Mechanism is \(\delta\)-approximate \(\Delta_{0}\cdot\left\lceil\log_{2}(L+1)\right\rceil\varepsilon^{2}/2\)-CDP under event-level adjacent streams._

Proof.: We follow the same analysis as in the earlier theorems, mainly leveraging Lemma 3.1. Let \(\omega^{(1:L)}\) contain an event where the neighboring stream \(\omega^{\prime(1:L)}\) has an empty set: say that round \(\ell\) is where they differ, so that \(\omega^{\prime(\ell)}=\emptyset\) and \(|\omega^{(\ell)}|=\Delta_{0}\). Let \(A\) denote the Unknown Domain Binary Mechanism. We denote \(S\) as the set of all outcomes that both \(A(\omega^{(1:L)})\) and \(A(\omega^{\prime(1:L)})\) can return. Note that at round \(\ell\), the stream event \(\omega^{(\ell)}\) can introduce as many as \(\Delta_{0}\) previously unseen items into the stream. We will write the distribution of \(A(\omega^{(1:L)})\) as \(P\) and the distribution of \(A(\omega^{\prime(1:L)})\) as \(Q\).

1. We will write \(E\) for the randomness in \(A(\omega^{(1:L)})\) that can generate outcomes that are common between \(A(\omega^{(1:L)})\) and \(A(\omega^{\prime(1:L)})\). We need to ensure that no noisy count on any new item that appears in \(\omega^{(\ell)}\) but not in \(\omega^{(1:\ell-1)}\) can go above the threshold \(T\). At round \(\ell\) we can add together as many as \(\lceil\log_{2}(\ell+1)\rceil\leq\lceil\log_{2}(L+1)\rceil\) independent Gaussian noise terms, whose sum is itself Gaussian. We then apply a union bound over all possible \(\Delta_{0}\) items in \(\omega^{(\ell)}\), and a further union bound over all \(L\) possible rounds, to get the following when \(T=1+\sigma\cdot\sqrt{\left\lceil\log_{2}(L+1)\right\rceil}\cdot\Phi^{-1}(1-\frac{\delta}{\Delta_{0}\cdot L})\):

\[\Pr[\neg E]\leq\Delta_{0}\cdot L\cdot\Pr[\mathrm{N}(0,\lceil\log_{2}(L+1)\rceil\sigma^{2})\geq T-1]=\delta.\]

2. We will write the distribution of the Binary Mechanism evaluated on \(\omega^{(1:L)}\), restricted to the items common with \(\omega^{\prime(1:L)}\) and whose counts are positive, as \(P^{\prime}\).
We then have for all outcomes \(y\in S\) that

\[P(y)=\Pr[E]P^{\prime}(y).\]

Since \(Q\) is the distribution for the neighboring input \(\omega^{\prime(1:L)}\) where \(\omega^{\prime(\ell)}=\emptyset\), the set \(S\) contains all of its outcomes, so that \(Q(y)=Q^{\prime}(y)\) for all \(y\in S\).

3. Note that \(P^{\prime}\) and \(Q^{\prime}\) are simply post-processing functions of the partial-sums table with Gaussian noise added to each cell of the table. Hence, we need to consider the Renyi divergence between the noisy partial-sum tables on the common items between \(\omega^{(1:L)}\) and \(\omega^{\prime(1:L)}\). We then have \(D_{\lambda}(P^{\prime}||Q^{\prime})\leq\lambda\Delta_{0}\lceil\log_{2}(L+1)\rceil/(2\sigma^{2})\) and \(D_{\lambda}(Q^{\prime}||P^{\prime})\leq\lambda\Delta_{0}\lceil\log_{2}(L+1)\rceil/(2\sigma^{2})\) for all \(\lambda\geq 1\), because this is a randomized post-processing (including the additional noise terms) of the common items in each partial-sum table, the same as in the Binary Mechanism. Setting \(\sigma=1/\varepsilon\) completes the proof.

## 5 Conclusion

We have presented a unified framework for proving that several different algorithms over unknown domain histograms are approximate CDP. In many settings, practitioners want a way to incorporate private algorithms with minimal onboarding. A major bottleneck for incorporating private algorithms into existing systems is requiring a fixed list of items for which we want to release counts. Furthermore, product teams might be comfortable with noise added to counts, but not with displaying counts for items that never appeared in the dataset. We wanted to show how the privacy analyses of many existing DP algorithms can be unified by fixing neighboring datasets and considering not just the outcomes that can occur under both neighboring inputs, but also the related distributions that can only generate these _good_ outcomes. We think that approximate CDP provides the easiest way to combine these algorithms while obtaining tight privacy loss bounds, as the privacy analyses of many of them rely on improved pure CDP bounds rather than pure DP bounds. For example, we showed how using a CDP analysis of the Laplace mechanism can improve the CDP privacy parameter by considering composition over the differing counts, rather than relying on an \(\ell_{1}\)-sensitivity bound as would be the case for DP. We can also use the tighter connection between the Exponential Mechanism and CDP, rather than using the pure DP parameters of the Exponential Mechanism. Lastly, the Gaussian mechanism does not satisfy a pure DP bound, so CDP is a natural fit, and converting to approximate DP would result in a lossy DP parameter. We hope that this unified framework will help demystify some previous analyses and can be leveraged in designing future private algorithms.

## 6 Acknowledgements

Special thanks to David Durfee and Thomas Steinke for their helpful comments, which improved the quality of this survey.
2309.14647
State-Compute Replication: Parallelizing High-Speed Stateful Packet Processing
With the slowdown of Moore's law, CPU-oriented packet processing in software will be significantly outpaced by emerging line speeds of network interface cards (NICs). Single-core packet-processing throughput has saturated. We consider the problem of high-speed packet processing with multiple CPU cores. The key challenge is state--memory that multiple packets must read and update. The prevailing method to scale throughput with multiple cores involves state sharding, processing all packets that update the same state, i.e., flow, at the same core. However, given the heavy-tailed nature of realistic flow size distributions, this method will be untenable in the near future, since total throughput is severely limited by single core performance. This paper introduces state-compute replication, a principle to scale the throughput of a single stateful flow across multiple cores using replication. Our design leverages a packet history sequencer running on a NIC or top-of-the-rack switch to enable multiple cores to update state without explicit synchronization. Our experiments with realistic data center and wide-area Internet traces show that state-compute replication can scale total packet-processing throughput linearly with cores, deterministically and independent of flow size distributions, across a range of realistic packet-processing programs.
Qiongwen Xu, Sebastiano Miano, Xiangyu Gao, Tao Wang, Adithya Murugadass, Songyuan Zhang, Anirudh Sivaraman, Gianni Antichi, Srinivas Narayana
2023-09-26T03:55:46Z
http://arxiv.org/abs/2309.14647v2
# State-Compute Replication: Parallelizing High-Speed Stateful Packet Processing

###### Abstract

With the slowdown of Moore's law, CPU-oriented packet processing in software will be significantly outpaced by emerging line speeds of network interface cards (NICs). Single-core packet-processing throughput has saturated. We consider the problem of high-speed packet processing with multiple CPU cores. The key challenge is state--memory that multiple packets must read and update. The prevailing method to scale throughput with multiple cores involves state sharding, processing all packets that update the same state, i.e., flow, at the same core. However, given the heavy-tailed nature of realistic flow size distributions, this method will be untenable in the near future, since total throughput is severely limited by single core performance. This paper introduces _state-compute replication_, a principle to scale the throughput of a single stateful flow across multiple cores using replication. Our design leverages a _packet history sequencer_ running on a NIC or top-of-the-rack switch to enable multiple cores to update state without explicit synchronization. Our experiments with realistic data center and wide-area Internet traces show that state-compute replication can scale total packet-processing throughput linearly with cores, deterministically and independent of flow size distributions, across a range of realistic packet-processing programs.

## 1 Introduction

Designing software to handle high raw packet-processing loads is crucial in networked systems. For example, software load balancers, CDN nodes, DDoS mitigators, and many other middleboxes depend on it. Yet, with the slowdown of Moore's law, software packet processing has struggled to catch up with line speeds of network interface cards (NICs), with emerging speeds of 200 Gbit/s and beyond. Consequently, there have been significant efforts to speed up packet processing through better network stack design, removing user-kernel crossings, running software at lower layers of the stack, and designing better host interconnects. We consider the problem of scaling software packet processing by using multiple cores on a server. The key challenge is that many packet-processing applications are _stateful_, maintaining and updating regions of memory across many packets. If multiple cores contend to access the same memory regions, there is significant memory contention and cache bouncing, resulting in poor performance. Hence, the classic approach to multicore scaling is to process packets touching distinct states, _i.e._ flows, on different cores, thereby removing memory contention and synchronization. For example, a load balancer that maintains a separate backend server for each 5-tuple may send all packets of a given 5-tuple to a fixed core, but process different 5-tuples on different cores, hence scaling performance with multiple cores. Many prior efforts have looked into optimizing such sharding-oriented solutions [29, 39, 44, 56], including the recent application of automatic code parallelization technology [51]. However, projecting into the future, we believe that the existing approaches to multi-core scaling have run their course (§2). Realistic traffic workloads have heavy-tailed flow size distributions and are highly skewed. Large "elephant flows" updating state on a single core will reduce total throughput and inflate tail latencies for all packets, since they are limited by the performance of a single CPU core.
With emerging 200 Gbit/s--1 Tbit/s NICs, a single packet processing core may be too slow to keep up even with a single elephant flow. Additionally, with the growing scales of volumetric resource exhaustion attacks, packet processors must gracefully handle attacks where adversaries force packets into a single flow [36]. In this paper, we introduce a principle that enables scaling software processing for a _single, stateful flow_ across multiple cores, while avoiding shared memory and contention.

**State-Compute Replication (informal)**. In a system bottlenecked by per-packet CPU work and software dispatch, _replicating_ the state and the computation of a stateful flow across cores increases throughput _linearly_ with cores, so long as we preserve the total number of packets traversing the system.

Figure 1 shows how state-compute replication (SCR) scales the processing throughput of a single TCP connection for a TCP connection state tracker [34] using multiple cores, when other techniques fail. Note that a connection tracker may change its internal state after every packet. We present the scaling principle more formally in §3.1, and it applies to any packet processing program that may be abstracted as a deterministic finite state machine. Intuitively, as long as each core can reconstruct the flow state by processing _all_ the packets of the flow, multiple cores can successfully process a single stateful flow with zero cross-core synchronization. However, we need to meet two requirements to actually improve performance. First, despite replication, we must preserve the total number of packets moving through the system. Second, we must divide up _dispatch_--the CPU software labor of presenting the packets to the "interesting" packet-processing computation--across multiple cores (§3.1). To achieve the two requirements for high performance, we design a _packet history sequencer_ (§3.2), an entity which sees every packet and sprays packets across cores in a round-robin fashion, while piggybacking the relevant information from packets missed by a core on the next packet sent to that core. The sequencer must maintain a small bounded history of packet headers relevant to the packet processing program. Today, a high-speed packet-processing pipeline, running either on a programmable NIC or a top-of-the-rack switch, may serve as a sequencer. We present sequencer designs for two hardware platforms (§3.3), a Tofino switch, and a Verilog module we integrated into the NetFPGA-PLUS reference pipeline. We also show how to rewrite existing packet-processing programs to take advantage of state-compute replication (§3.4). No technique can continue scaling indefinitely. In §3.1, we outline when the scaling benefits of SCR stop. We evaluated the scaling efficacy of SCR using a suite of realistic programs and traffic (§4). SCR is the only technique we are aware of that linearly scales the total processing throughput across multiple cores regardless of the skewness in the arriving flow size distribution. We also show that there are significant scaling benefits to be enjoyed before hitting the limits of SCR on today's hardware.

## 2 Background and Motivation

### High Speed Packet Processing

This paper considers packet-processing programs that must work at heavy network loads with quick turnarounds for packets.
We are specifically interested in applications implementing a "hairpin" traffic flow: receiving a packet, processing it as quickly as possible through lightweight computations, and transmitting the packet right out, all at the highest possible rate. We say that the computation is "lightweight" in comparison to full-fledged applications (running in user space) or transport protocols (TCP stacks) running at endpoints. Examples of the kinds of applications we consider include (i) middlebox workloads (network functions), such as firewalls, connection trackers, and intrusion detection/prevention systems; and (ii) high-volume compute-light applications such as key-value stores, telemetry systems, and stream-based logging systems, which process many requests with a small amount of computation per request. A key characteristic of such applications is their need for high packet-processing rate (which is more important than byte-processing rate) and the fact that their performance is primarily bottlenecked by CPU usage [37, 57, 61]. The performance of such applications is mission-critical in many production systems. Even small performance improvements matter in Internet services with large server fleets and heavy traffic. As a specific example, Meta's Katran layer-4 load balancer [7] and CloudFlare's DDoS protection solution [37] process every packet sent to those respective services, and must withstand not only the high rate of request traffic destined to those services but also large denial-of-service attacks. More generally, the academic community has undertaken significant efforts for performance optimization of network functions, including optimization of the software frameworks [39], designing language-based high-performance isolation [50], and developing custom hardware offload solutions [52]. In this paper, we consider applications developed within high-speed packet processing software frameworks.

Given the slowdown of Moore's law and the end of Dennard scaling, the software packet-processing performance of single CPU cores has saturated. Even expert developers must work meticulously hard to improve per-core throughput by small margins (like 10%) [41, 42, 55, 31]. The community has pursued various efforts to improve performance, such as re-architecting the software stack to make efficiency gains [45, 57, 17], introducing stateless hardware offloads working in conjunction with the software stack [25, 16], full-stack hardware offloads [28, 26], and kernel extensions [2]. This paper considers software frameworks that modify the device driver to enable programmable and custom functionality to be incorporated by developers at high performance with minimal intervention from the existing kernel software stack. Specifically, we study performance and evaluate our techniques in the context of kernel extensions implemented using the eXpress Data Path (XDP/eBPF [41]) within the Linux kernel. In §3, we will discuss how our observations and principles apply more generally to other high-speed software packet-processing frameworks, including those written with user-space libraries like DPDK [17].

Figure 1: Scaling the throughput of a TCP connection state tracker for a _single TCP connection_ across multiple cores. Sharing state across cores degrades performance beyond 2 cores due to contention. Sharding state (_e.g._ using RSS) cannot improve throughput beyond a single CPU core (§2). In contrast, State-Compute Replication (§3) provides linear scale-up in throughput with cores.
### Parallelizing Stateful Packet Processing

This paper considers the problem of parallelizing high-speed packet processing programs across multiple cores. The key challenge is handling _state_: memory that must be updated in a sequential order upon processing each packet to produce a correct result. Consider the example of the connection tracker [34], a program which identifies the TCP connection state (_e.g._ SYN sent, SYN/ACK, _etc._) using packets observed from both directions of a TCP connection. Each packet in the connection may modify the internal connection state maintained by the program. There are two main techniques used to parallelize such programs across cores.

_Shared state parallelism._ One could conceive a parallel implementation that (arbitrarily) splits packets across multiple cores, with explicit synchronization guarding access to the shared memory, _i.e._ the TCP connection state, to ensure a correct result. Shared-state parallelism works well when the contention to shared memory is low. Specifically, shared-memory scaling could work well when (i) packets of a single flow arrive slowly enough, _e.g._ if there are a large number of connections with a roughly-equal packet arrival rate, or (ii) when there are efficient implementations available for synchronization or transactional updates in software [40] or hardware [21, 15]. However, neither of these conditions are generally applicable. Many flow size distributions encountered in practical networks follow heavy-tailed distributions [63, 30] or exhibit highly bursty behavior [58], resulting in significant memory contention if packets from the heavier flows are spread across cores. Further, the state update operations in many programs, including the TCP connection tracker, are too complex to be implemented directly on transactional hardware, since the latter only supports individual arithmetic and logic operations (like fetch-add-write). Our evaluation results (§4) show that the performance of shared-state multicore processing plummets with more cores under realistic flow size distributions.

_Sharded (shared-nothing) parallelism._ Today, the predominant technique to scale stateful packet processing across multiple cores is to process packets that update the same memory at the same core, sharding the overall state of the program across cores. Such sharding is usually achieved through techniques like Receive Side Scaling (RSS [4]), available on modern NICs, to direct packets from the same flow to the same core, and using shared-nothing data structures on each core. However, sharding suffers from a few disadvantages. First, it is not always possible to avoid coordination through sharding. There may be parts of the program state that are shared across all packets, such as a list of free external ports in a Network Address Translation (NAT) application. On the practical side, RSS implementations on today's NICs partition packets across cores using a limited number of combinations of packet header fields. For example, a NIC may be configured to steer packets with a given source and destination IP address to a fixed core. However, the granularity at which the application wants to shard its state--for example, a key-value cache may seek to shard state by the key requested in the payload--could be infeasible to implement with the packet headers that are usable by RSS at the NIC [51].
Second, sharding state may create load imbalance across cores if some flows are heavier or more bursty than others, creating bottlenecks on single CPU cores. Heavy-tailed flow size distributions [30], bursty flow transmission patterns [58], and denial of service attacks [36] create conditions ripe for such imbalance. The research community has investigated solutions to balance the packet processing load across shards [29]. However, the efficacy of re-balancing is limited by the granularity at which flows can be migrated across cores. As we show in our evaluation (§4), the throughput of the heaviest and most bursty flows is still limited by a single CPU core, which in turn limits the total achieved throughput. Another alternative is to evenly spray incoming packets across cores [59, 35], assuming that only a small number of packets in each flow need to update the flow's state. If a core receives a packet that must update the flow state, the packet is redirected to a designated core that performs all writes to the state for that flow. However, the assumption that state is mostly read-only is not universal, _e.g._ a TCP connection tracker may update state potentially on every packet. Further, packet reordering at the designated core can lead to incorrect results [29].

### Goals

Given the drawbacks of existing approaches for multi-core scaling discussed above (§2.2), we seek a scaling technique that achieves the following goals:

1. _Generic stateful programming._ The technique must produce correct results for general stateful updates, eschewing solutions that only work for "simple" updates (_i.e._ fitting hardware transactional instructions), or only programs compatible with NIC configuration, or only update state for a small number of packets per flow.
2. _State access independence._ The scaling technique should improve performance independent of the flow size distribution of the incoming traffic or how the flows share or access the state in the program.
3. _Monotonic performance gain._ Performance should in general improve, not degrade, with additional cores in the system.

## 3 State-Compute Replication (SCR)

In §3.1, we present scaling principles that meet the goals in §2.3 for multi-core high-speed stateful packet processing. In §3.2 through §3.4, we show how to operationalize these principles.

### Scaling Principles

Suppose the packet-processing program is _deterministic_, _i.e._ in every execution, it produces the same output state and packet given a fixed input state and packet.

_Principle #1 (replication for correctness)._ Sending every packet to every core, and replicating the state and computation on every core, produces the correct output state and packet on every core _with no explicit cross-core synchronization_, regardless of how the program state is accessed by packets.

This principle asks us to treat each core as one replica of a replicated state machine running the deterministic packet-processing program. Each core processes packets in the same order, without missing any packets. With each incoming packet, each core updates a private copy of its state, which is equal to the private copy on every other core. There is no need to synchronize. Further, replication provides the benefit that the workload across cores is even regardless of how the state is accessed by packets, _i.e._ independent of workload skew. Naively, one way to operationalize this principle is to broadcast every packet received externally on the machine to every core.
However, with \(k\) cores, for each external packet, the system will be required to process \(k\) internal packets, due to \(k\)-fold packet duplication.

_Principle #2 (State-Compute Replication)._ Piggybacking a _bounded recent packet history_ on each packet sent to a core allows us to use replication (#1) while equalizing the external and internal packets-per-second workloads on the system.

In CPU-bound packet processing, smaller packets typically require the same computation that larger packets do. That is, the total amount of work performed by a packet-processing system is proportional to the packets-per-second offered to it, rather than the bits-per-second [41, 57]. Further, there are two parts to the CPU processing that must be done per packet even after it reaches the CPU core where it will be ultimately processed: (i) _dispatch_, that is, the CPU/software work of presenting the packet to the user-developed packet-processing program, and signaling the packet(s) output by the program for transmission by the NIC; and (ii) the computation running within the packet-processing program itself. Dispatch often dominates the per-packet work [41]. While these observations are known in the context of high-speed packet processing, we also benchmarked a simple application on our own test machine to validate them. Consider Figure 2, where we show the throughput (packets/second (a), bytes/second (b)) and latency (c) of a simple packet forwarder written in the XDP framework running on a single CPU core. Our testbed setup is described in much more detail later (§4), but we briefly note here that our device under test is an Intel Ice Lake CPU configured to run at a fixed frequency of 3.6 GHz and attached to a 100 Gbit/s NIC. At each packet size below 1024 bytes, the CPU core is fully utilized (at 1024 bytes the bottleneck is the NIC bandwidth). The achieved packets/second is stable across all packet sizes at which the program is CPU bound. With a processing latency of roughly 14 nanoseconds at all packet sizes (measured only for the XDP forwarding program), back-to-back packet processing should "ideally" process \(10^{9}\) / 14 \(\approx\) 71 million packets/second. However, the achieved best-case throughput (\(\approx\) 18 million packets/second) is much smaller--implying that significant CPU work is expended in presenting the input packets to and extracting the output packets from the forwarder and setting up the NIC to transmit and receive those packets. This is not merely a feature of the framework we used (XDP); the DPDK framework has similar dispatch characteristics [41]. Principle #2 states that replication (principle #1) need not impose an untenable increase in the total internal packet rate processed by the system. Suppose it is possible to spray the incoming packets across cores in a round-robin fashion. If there are \(k\) cores, each core receives every \(k^{th}\) packet. Then: 1. It is unnecessary for each core to have the most up-to-the-packet private state at all times. It is sufficient if a core has the latest state just in time to make a decision on the current packet that it is processing. 2. With each new packet, suppose the core that receives it also sees (as metadata on that packet) all the \(k-1\) packets from the last time it processed a packet, _i.e._, a _recent packet history_.
The core can simply "catch up" on the computations required to obtain the most up-to-the-packet value for its private state. 3. If packets are sprayed round-robin across cores, the number of historic packets needed to ensure that the most updated state is available to process the current packet is _bounded_ by the number of cores. Further, just those packet bits required to update the shared state are necessary from each historic packet.

Figure 2: The nature of CPU work in high-speed packet processing: Consider the throughput of a simple packet forwarding application (packets/second (a), bytes/second (b)) running on a single CPU core clocked at 3.6 GHz, as the size of the incoming packets varies. The average latency to execute the XDP program is also shown in nanoseconds (c). CPU usage is tied to the number of packets processed per second, not the bytes processed per second. Further, significant time elapses in _dispatch_: CPU work needed to present the input packet to and retrieve the output packet from the "interesting" packet-processing computation.

As a crude model, suppose a system has \(k\) cores, and each core can dispatch a single packet in \(d\) cycles and runs a packet-processing program that computes over a single packet in \(c\) cycles. For each dispatched packet carrying its piggybacked history, the total processing time is \(d+(k\times c)\). When dispatch time dominates the total compute time, \(d\gg k\times c\). With \(k\) cores, the total rate at which externally-arriving packets can be processed is \(k\times\frac{1}{d+(k\times c)}\approx k/d\). Hence, it is possible to scale the packet-processing rate linearly with the number of cores \(k\). Intuitively, this principle scales packet dispatch across cores with a little extra computation in the program, while maintaining correctness (a short numerical sketch of this model appears below).

_Principle #3 (Scaling limits)._ Leveraging principle #2 can enable a linear scale-up in the total packets-per-second throughput with the number of cores, so long as the per-packet work is dominated by dispatch.

The system's achievable packet rate will not scale linearly if dispatch no longer dominates the per-packet work. This can be seen easily from the fact that the approximation in our simple model above will no longer hold. Some concrete examples where dispatch time can be overtaken as the primary bottleneck include (i) the compute time \(k\times c\) for each packet and its piggybacked history becomes sizable; (ii) the per-packet compute time \(c\) itself increases due to overheads in the system, _e.g._ larger memory access time when a core's copy of the state spills into a larger memory; or (iii) other components such as the PCIe bus become the bottleneck rather than the CPU.
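To make the crude model concrete, the following is a minimal numerical sketch. It is our illustration rather than an artifact of the paper, and the cycle counts for \(d\) and \(c\) are invented for the example; only the formula \(k/(d+k\times c)\) comes from the model above.

```c
#include <stdio.h>

/* Evaluate the crude scaling model: with k cores, each dispatched
 * packet costs d dispatch cycles plus k*c compute cycles (k-1
 * historic packets plus the current one), so the externally visible
 * rate is k / (d + k*c) packets per cycle per machine. */
int main(void) {
    const double freq_hz = 3.6e9; /* clock rate used in Section 4.1 */
    const double d = 150.0;       /* invented dispatch cycles/packet */
    const double c = 10.0;        /* invented compute cycles/packet  */
    for (int k = 1; k <= 16; k++) {
        double mpps = freq_hz * k / (d + k * c) / 1e6;
        printf("k=%2d cores: %6.1f Mpps (d >> k*c %s)\n",
               k, mpps, d > 10.0 * k * c ? "holds" : "weakens");
    }
    return 0;
}
```

With these (made-up) numbers, the predicted rate stays close to the ideal \(k/d\) line for small \(k\) and visibly flattens as \(k\times c\) approaches \(d\), which is exactly the regime change that Principle #3 describes.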
### Operationalizing SCR

Operationalizing the scaling principles discussed above (§3.1) conceptually requires two pieces.

_A reliable packet history sequencer (§3.3)._ We require an additional entity in the system, which we call a _sequencer_, to (i) steer packets across cores in round-robin fashion, (ii) maintain the most recent packet history across all packets arriving at the machine, and (iii) piggyback the packet history on each packet steered to the cores. The act of stripping off the piggybacked history from the packet after it is processed by the program can be implemented either at the CPU core or the sequencer. The NIC hardware or the top-of-the-rack switch are natural points to introduce the sequencer functionality, since they observe all the packets flowing into and out of the machine. Today's existing fixed-function NICs do not implement the functionality necessary to construct and piggyback a reliable packet history. However, we have identified three possible instantiations that could, in the near future, achieve this: (i) emerging NICs with programmable pipelines [1, 3, 23]; or (ii) a combination of a NIC implementing round-robin packet steering [4, 6] and a programmable top-of-the-rack switch pipeline [14, 19, 32] for maintaining and piggybacking the packet history; or (iii) a fixed-function NIC pipeline directly incorporating the functionality of the reliable sequencer. We believe that any of these instantiations may be realistic and applicable given the context: for example, high-speed programmable NICs are already common in some large production systems [38], as are programmable switch pipelines [49]. We will show two possible hardware designs in §3.3. Hereafter, for brevity, we refer to all of these designs as simply sequencers.

_An SCR-aware packet-processing program (§3.4)._ The packet-processing program must be developed to replicate the program state and keep private copies per core. Further, the program must process the packet history first before computing over the current packet. We discuss how to transform a single-threaded program to its SCR-aware variant in §3.4. Assuming the packet-processing program is deterministic (§3.1), an SCR-aware program is guaranteed to produce the correct output state and packet if every CPU core is guaranteed to receive the packets sent to it by the sequencer.

_An example showing scaling principles in action._ Consider Figure 3, where a sequencer and three cores are used to run a packet-processing program. As shown in Figure 3(a), the sequencer sprays packets (_i.e._\(p_{i},p_{i+1},\ldots\)) in a round-robin fashion across \(k=3\) cores (_i.e._\(core_{1},core_{2},core_{3}\)). Further, the sequencer stores the recent packet history consisting of the packet fields from the last \(k\) packets which are relevant to evolving the shared state. We denote the relevant part of a packet \(p_{i}\) by \(f(p_{i})\). For example, in a TCP connection tracking program, this includes the TCP 4-tuple, the TCP flags, and sequence and ACK numbers. Note that this packet history is updated only by the sequencer and is never written to by the cores. In the example in Figure 3(a), the packet history supplied to \(core_{1}\) processing packet \(p_{i}\) is \(f(p_{i-2}),f(p_{i-1})\). As shown in Figure 3(b), each core updates its local private state by first fast-forwarding the state using the packet history \(f(p_{i-2}),f(p_{i-1})\), and then processing the packet \(p_{i}\) sprayed to it.

Figure 3: An example illustrating the scaling principles. \(p_{i}\) is the \(i^{th}\) packet received by the sequencer, \(f(p_{j})\) are relevant fields from \(p_{j}\), and \(S_{i}\) is the state after processing packets \(p_{1},\ldots,p_{i}\) in order.
We describe the rest of the sequencer's functions in terms of the following: (i) designing a packet format that modifies existing packets to piggyback history from the sequencer to the CPU cores; (ii) designing a hardware data structure that maintains a recent bounded packet history at the sequencer, and enables reading out the history into metadata on the packet. The packet fields that are maintained in the sequencer history depend on the specific fields used by the packet-processing application. The number of historic packets that must be tracked depends on the degree of parallelism that is sought, _e.g._ the number of available CPU cores over which scaling is implemented. We have implemented sequencing hardware data structures on two platforms, the Tofino programmable switch pipeline [19] and a Verilog module that we integrated into the NetFPGA-PLUS project [10].

#### 3.3.1 Packet format

The key question answered in this subsection is: given a packet, what is the best place to put the packet history on it? While this may appear "just an engineering detail", designing the right packet format has important implications for the design of hardware data structures on the sequencer and the SCR-aware program. As shown in Figure 4(a), we choose to place the packet history close to the beginning of the packet, before the entirety of the original packet. Relative to placing the packet history between headers of the original packet, this placement simplifies the hardware logic that writes the history into the packet, as the write must always occur at a fixed address (0) in the packet buffer. Further, for reasons explained in §3.3.2, we include a pointer to the metadata of the packet that arrived the earliest among the ones in the piggybacked history, which does not always correspond to the first piece of metadata when reading the packet in order. Keeping all the bytes of the original packet together in one place also simplifies developing an SCR-aware packet-processing program. The packet parsing logic of the original program can remain unmodified if the program starts parsing from the location in the modified packet buffer which contains all the bytes of the original packet in order. Finally, we also prefix a dummy Ethernet header to the packet in instantiations of the sequencer which run outside of the NIC, _i.e._ our Tofino switch instantiation. Adding this header helps the NIC process the packet correctly--without it, the packet appears to have an ill-formatted Ethernet MAC header to the NIC. Our setup also uses this Ethernet header to force RSS on the NIC [4] to spray packets across CPU cores.
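As one way to visualize this layout in code, here is a C sketch of the on-wire format. The field names, the particular history fields (chosen for a connection tracker), and the slot count are our illustrative assumptions, not the paper's exact encoding.

```c
#include <stdint.h>

#define NUM_META 16  /* history slots; sized to the core count */

/* Bits of one historic packet relevant to a connection tracker
 * (illustrative: 4-tuple, flags, sequence/ACK numbers). */
struct meta {
    uint32_t saddr, daddr;
    uint16_t sport, dport;
    uint8_t  tcp_flags;
    uint32_t seq, ack;
} __attribute__((packed));

/* A packet leaving the sequencer: dummy Ethernet header first (only
 * when the sequencer runs off-NIC, so the NIC parses the frame and
 * RSS can spray it), then the history slots and the index of the
 * earliest slot, then the original packet bytes, contiguous and in
 * order so the program's parser can run on them unmodified. */
struct scr_frame {
    uint8_t     dummy_eth[14];
    struct meta history[NUM_META];
    uint32_t    index;             /* offset of the earliest slot */
    uint8_t     original_packet[]; /* flexible array member */
} __attribute__((packed));
```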
#### 3.3.2 Hardware data structures for packet history

We show how to design data structures to maintain and update a recent packet history on two high-speed platforms, a Tofino programmable switch pipeline [19] and a Verilog module integrated into the NetFPGA-PLUS platform [10]. These designs are specific to the platform where they are implemented, and hence we describe them independently. A key unifying principle between the two designs is that although the items in the maintained packet history change after each packet, we only want to update a small part of the data structure for each packet. Conceptually, a ring buffer data structure is appropriate to maintain such histories. Hence, in both designs, we use an _index pointer_ to refer to the current data item that must be updated, which corresponds to the head pointer of the abstract ring buffer where data is written.

_Tofino._ We use Tofino registers [22], which are stateful memories to hold data on the switch, to record the bits of each historic packet relevant to the computation in the packet-processing program. Suppose the pipeline has \(s\) match-action table stages, \(N\) registers per stage, and \(b\) bits per register. For simplicity in this description, we assume there is exactly one packet field of size \(b\) bits used in the computation in the packet-processing program. Our data structure can maintain a maximum of \((s-1)\times N\times b\) bits of recent packet history, _i.e._ history for \((s-1)\times N\) packets, as shown in Figure 4(b). We have successfully compiled the design to the Tofino ASIC. First, we use a single register in the first stage to store the index pointer. The pointer refers to the specific register in the subsequent stages that must be updated with a header field from the current packet. The index pointer is incremented by 1 for each packet, and is rounded back to 0 when it reaches the maximum number of fields required in the history. The pointer is also carried on a metadata field on the packet through the remaining pipeline stages. Next, register ALUs in subsequent stages are programmed to read out the values stored in them into pre-designated metadata fields on the packet. If the index pointer points to this register, an additional action occurs: rewrite the stored contents of the register by the pre-designated history field from the current packet. Finally, all the metadata fields, consisting of the packet history fields and the index pointer, are deparsed and serialized into the packet in the format shown in Figure 4(a). We also add a dummy Ethernet header to ensure that the server NIC can receive the packets correctly (§3.3.1). Recent work explored the design of ring buffers to store packet histories in the context of debugging [43], reading out the histories from the control plane when a debugging action is triggered. A key difference in our design is that reading out histories into the packet is a data plane operation, occurring on every packet.

_NetFPGA._ To show the possibility of developing high-speed fixed-function hardware for sequencing, we also present a sequencer design developed in Verilog in Figure 4(c). Suppose we wish to maintain a history of \(N\) packets, each packet contributing \(b\) bits of information. A simple design, for small values of \(N\) and \(b\) (we used \(N=16\) and \(b=112\)), uses a memory which has \(N\) rows, each containing a tuple of \(b\) bits. We also maintain a register containing the index pointer (\(p\) bits), initialized to zero. At the beginning the memory is initialized with all zeroes. When a packet arrives, it is parsed to extract the bits relevant to the packet history. Then the entire memory is read and put in front of the packet (moving the packet contents by a fixed size known a priori, \(N\times b+p\) bits). The information relevant to the packet history from the current packet is put into the memory row pointed to by the index pointer, and the index pointer is incremented (modulo the memory size). We have integrated this design into the NetFPGA-PLUS platform.
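For concreteness, the per-packet behavior that both hardware designs implement can be modeled in a few lines of C. This is only a software sketch of the logic, not the authors' artifact; the actual designs are the Tofino program and the Verilog module described above, and the sizes below mirror the stated Verilog parameters (\(N=16\) rows of 112 bits).

```c
#include <stdint.h>
#include <string.h>

#define N_ROWS  16  /* history rows, as in the Verilog design */
#define ROW_LEN 14  /* 112 bits of relevant fields per packet  */

static uint8_t  rows[N_ROWS][ROW_LEN]; /* ring of recent history */
static uint32_t index_ptr;             /* next row to overwrite  */

/* Per-packet step: copy the whole history plus the current index in
 * front of the packet, then overwrite the row the index points to
 * with the current packet's relevant bits and advance the index
 * modulo N_ROWS. The copied-out index marks the earliest of the
 * N_ROWS piggybacked slots, since that row is about to be recycled. */
void sequence_packet(const uint8_t fields[ROW_LEN], uint8_t *out)
{
    memcpy(out, rows, sizeof(rows));              /* read out history */
    memcpy(out + sizeof(rows), &index_ptr,
           sizeof(index_ptr));                    /* earliest-slot index */
    memcpy(rows[index_ptr], fields, ROW_LEN);     /* record current pkt */
    index_ptr = (index_ptr + 1) % N_ROWS;         /* advance ring head */
}
```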
### SCR-Aware Multi-Core Programming

Consider a packet-processing program developed assuming single-threaded execution on a single CPU core. The question we tackle in this subsection is: how should the program be changed to take advantage of multi-core scaling with state-compute replication?

We walk through the process of adapting a program written in the eBPF/XDP framework [41], but we believe it is conceptually similar to adapt programs written in other frameworks such as DPDK. We describe the program transformations necessary through a running example. Suppose we have a port-knocking firewall [24] with the state machine shown in Figure 5. The program runs a copy of this state machine per source IP address. If a source transmits IPv4/TCP packets with the correct sequence of TCP destination ports, then all further communication is permitted from that source. All other packets are dropped. Any transition not shown in the figure leads to the default CLOSED_1 state, and only the OPEN state permits packets to traverse the firewall successfully. A simplified XDP implementation of this single-threaded firewall is shown below.

```
/* Definition of program state */
struct map states {
  /* Assume we define a dictionary with keys as
     source IP addresses and values as firewall
     states among CLOSED_{1,2,3} and OPEN. */
}

/* State transition function. See Figure 5. */
int get_new_state(int curr_state, int dport) {
  /* A function that implements the state machine
     for the port knocking firewall. */
  if (curr_state == CLOSED_1 && dport == PORT_1)
    return CLOSED_2;
  if (curr_state == CLOSED_2 && dport == PORT_2)
    return CLOSED_3;
  if (curr_state == CLOSED_3 && dport == PORT_3)
    return OPEN;
  if (curr_state == OPEN)
    return OPEN;
  return CLOSED_1;
}

/* The main function */
int simple_port_knocking(...) {
  /* Assume the packet is laid out as a byte array
     starting at the address pkt_start. Suppose the
     packet is long enough to include headers up to
     layer 4. First, parse IPv4/TCP pkts. */
  struct ethhdr* eth = pkt_start;  // parse Ethernet
  int l3proto = eth->proto;        // layer-3 protocol
  int off = sizeof(struct ethhdr);
  struct iphdr* iph = pkt_start + off;
  int l4proto = iph->protocol;     // layer-4 protocol
  if (l3proto != IPv4 || l4proto != TCP)
    return XDP_DROP;               // drop non IPv4/TCP pkts
  int srcip = iph->src;            // source IP addr
  off += sizeof(struct iphdr);
  struct tcphdr* tcp = pkt_start + off;
  int dport = tcp->dport;          // TCP dst port

  /* Extract & update firewall state for this src. */
  int state = map_lookup(states, srcip);
  int new_state = get_new_state(state, dport);
  map_update(states, srcip, new_state);

  /* Final packet verdict */
  if (new_state == OPEN)
    return XDP_TX;                 // allow traversal
  return XDP_DROP;                 // drop everything else
}
```

Figure 4: Hardware data structures. (a) Packets modified to propagate history from the sequencer to CPU cores. The sequencer prefixes the packet history to the original packet, which allows for a simpler implementation in hardware (§3.3) and simpler transformations to make a packet-processing program SCR-aware (§3.4). In instantiations where the sequencer is partly implemented on a top-of-the-rack switch (§3.2), we further prefix a dummy Ethernet header to ensure that the NIC can process the packet correctly. (b) The data structure used to maintain and propagate packet history on the Tofino programmable switch pipeline (§3.3.2). Inset shows the specific actions performed on each Tofino register. (c) The data structure used to maintain and propagate packet history on our Verilog module integrated into NetFPGA-PLUS (§3.3.2).

The program's state is a key-value dictionary mapping source IP addresses to an automaton state described in Figure 5. The function get_new_state implements the state transitions. The main function, simple_port_knocking, first parses the input packet, dropping all packets other than IPv4/TCP packets.
Then the program fetches the recorded state corresponding to the source IP on the packet, and performs the state transition corresponding to the TCP destination port. If the final state is OPEN, all subsequent packets of that source IP may traverse the firewall to the other side. All other packets are dropped. To enable this program to use state-compute replication across cores, this program should be transformed in the following ways. We believe that these transformations may be automated by developing suitable compiler passes, but we have not yet developed such a compiler.

(1) _Define per-core state data structures and per-packet metadata structures._ First, the program's state must be replicated across cores. To achieve this, we must define per-core state data structures that are identical to the global state data structures, except that they are not shared among CPU cores. Packet-processing frameworks provide APIs to define such per-core data structures [13]. Additionally, we must define a per-packet metadata structure that includes any part of the packet that is used by the program--through either control or data flow--to update the state corresponding to that packet. For the port-knocking firewall, the per-packet metadata should include the l3proto, l4proto, srcip, and dport. The data structures that maintain packet history on the sequencer correspond to this per-packet metadata (§3.3).

(2) _Fast-forward the state machine using the packet history._ The SCR-aware program must prepend a loop to "catch up" the state machine for each packet missed by the CPU core where the current packet is being processed. By leveraging the recent history piggybacked on each packet, at the end of this loop, the CPU core has the most up-to-the-packet state.

```
/* Assume the pointer 'data' locates where the per-pkt
   metadata begins in the byte array of the packet
   (Figure 4(a)). Suppose 'index' is the offset of the
   earliest packet in the history (§3.3.2), and NUM_META
   is the number of packets in the piggybacked history. */
for (int i = 0; i < NUM_META; i++) {
  struct meta* pkt = data
      + ((index + i) % NUM_META) * sizeof(struct meta);
  l3proto = pkt->l3proto;
  l4proto = pkt->l4proto;
  srcip = pkt->srcip;
  dport = pkt->dport;
  if (l3proto != IPv4 || l4proto != TCP)
    continue;  // no state txns or pkt verdicts
  /* Update state for this srcip and dport:   */
  /*   map_lookup; get_new_state; map_update. */
  /* Note: No pkt verdicts for historic pkts. */
}
pkt_start = data + NUM_META * sizeof(struct meta)
    + sizeof(index);
```

A few salient points about the code fragment above. First, the semantics of the ring buffer of packet history (§3.3) are implemented by looping over the packet history metadata starting at offset index rather than at offset 0. The decision to implement the ring buffer semantics in software makes the hardware significantly easier to design, since only a small part of the hardware data structure needs to be updated for each packet (§3.3.2). Second, the loop must implement appropriate control flow before the state update to ensure that only packets that should indeed update the flow state do. Note that the metadata includes parts of the packet that are not only the data dependencies for the state transition (srcip, dport) but also the control dependencies (l3proto, l4proto). Third, no packet verdicts are given out for packets in the history: we want the program to return a judgment for the "current" packet, not the historic packets used merely to fast-forward the state machines. Finally, the code fragment conveniently adjusts pkt_start to the position in the packet buffer (Figure 4(a)) corresponding to where the "original" packet begins. The rest of the original program--unmodified--may process this packet to completion and assign a verdict. What is excluded in our code transformations is also crucial. This program avoids locking and explicit synchronization, despite the fact that it runs on many cores, even if there is global state maintained across all packets. With these transformations, in principle, a packet processing program is able to scale its performance using state-compute replication across multiple cores.
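To make transformation (1) concrete, here is a minimal sketch of the per-packet metadata and a per-core state map in libbpf's BTF map style. The paper implements the dictionary as a custom cuckoo hash table; we use the kernel's built-in per-CPU hash map type here only as the simplest way to express per-core replication, and the capacity and field sizes are our assumptions.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Per-packet metadata: the control dependencies (l3proto, l4proto)
 * and data dependencies (srcip, dport) of the firewall's state
 * update; one such record per history slot (Figure 4(a)). */
struct meta {
    __u16 l3proto;
    __u8  l4proto;
    __u32 srcip;
    __u16 dport;
} __attribute__((packed));

/* Per-core replica of the firewall state, keyed by source IP, so no
 * cache line is ever shared or bounced across CPU cores. */
struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_HASH);
    __uint(max_entries, 65536);  /* illustrative capacity */
    __type(key, __u32);          /* source IP address */
    __type(value, __u32);        /* CLOSED_1..3 or OPEN */
} states SEC(".maps");
```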
Finally, the code fragment conveniently adjusts pkt_start to the position in the packet buffer (Figure 3(a)) corresponding to where the "original" packet begins. The rest of the original program--unmodified--may process this packet to completion and assign a verdict. What is excluded in our code transformations is also crucial. This program avoids locking and explicit synchronization, despite the fact that it runs on many cores, even if there is global state maintained across all packets. With these transformations, in principle, a packet processing program is able to scale its performance using state-compute replication across multiple cores. ## 4 Evaluation We seek to answer two main questions through the experiment setup described in SS4.1. (1) Does state-compute replication provide better multi-core scaling than existing techniques (SS4.2)? (2) How practical is sequencer hardware (SS4.3)? ### Experiment Setup _Machines and configurations._ Our experiment setup consists of two server machines connected back-to-back over a 100 Gbit/s Nvidia/Mellanox ConnectX-5 NIC on each machine. Our servers run Intel Ice Lake processors (Xeon Gold 6334) with 16 physical cores (32 hyperthreads) and 256 GB DDR4 physical memory spread over two NUMA nodes. The system bus is PCIe 4.0 16x. We run Ubuntu 22 on them with a v6.2 Linux kernel. One of the two machines serves as a packet replayer/generator, running a DPDK burst-replay program which can transmit packets from a traffic trace that is provided. We have tested that the traffic generator can replay large traces (1 million packets) at speeds of \(\sim\) 120 million packets/second (Mpps), for sufficiently small packets (so that the NIC bandwidth isn't saturated first). The traffic generator can be directed to transmit packets at a fixed transmission (TX) rate and measure the corresponding received (RX) packet rate. Our second server is the Device Under Test (DUT), which runs on identical hardware and operating system as the first server. Additionally, we implement standard configurations to benchmark high-speed packet processing: hyperthreading is disabled; the processor C-states, DVFS, and TurboBoost are disabled; dynamic IRQ balancing is disabled; and the clock frequency is set to a fixed 3.6 GHz. We enable PCIe descriptor compression and use 256 in-flight PCIe descriptors. Receive-side scaling (RSS [4]) is configured according to the baseline and application being benchmarked (see below). _The definition of throughput._ We use the standard _maximum loss-free forwarding rate_ (MLFFR [5]) methodology to benchmark packet-processing throughput. Our threshold for packet loss is in fact larger than zero (we count \(<\) 4% loss as "loss-free"), since at high speeds we have observed that the software typically always incurs a small amount of bursty packet loss. We use binary search to expedite the search for the MLFFR, stopping the search when the bounds of the search interval are separated by less than 0.4 Mpps. We ensure that the computations are bottlenecked by CPU in all of our throughput measurement experiments. Experimentally, we observe that MLFFR is a stable throughput metric: we get highly repeatable results across multiple runs. We only report throughput measurements from a single run of the MLFFR binary search. _Traces._ We are interested in understanding whether SCR provides better multi-core scaling than existing techniques on realistic traffic workloads. 
We have set up and used three traces for throughput comparison: a university data center trace [30], a wide-area Internet backbone trace from CAIDA [9], and a synthetic trace with flows whose sizes and inter-arrivals were sampled from a hyperscalar's data center flow characteristics [27]. These traces are highly dynamic, with flow states being created and destroyed throughout--an aspect that we believe is crucial to handle in real deployment environments (rather than simply holding a steady set of flow states). Further, we ensure that the trace segments are 'complete,' _i.e._ all flows that begin in the trace also end, so that the trace may be replayed any number of times with correct semantics. The flow size distributions of these traces are shown in Figure 6. The eBPF framework limits our implementations in terms of the number of concurrent flows that our data structures for state can include. This is not a limitation of the techniques, but an artifact of the current packet-processing framework we use (eBPF/XDP). To account for this limitation, specifically for the CAIDA trace, we have sampled flows from the trace's empirical flow size distribution to faithfully reflect the underlying distribution, without over-running the limit on the number of concurrent flows that any of our baseline programs may hold across the lifetime of the experiment.

Figure 6: Flow size distributions of the packet traces we used. We used real packet traces captured at (a) university data center [30] and (b) wide-area Internet backbone by CAIDA [9]. We also synthesized (c) a packet trace with real TCP flows whose sizes are drawn from Microsoft's data center flow size distribution [27].

_Baselines._ We compare state-compute replication against (i) state sharing, an approach that uses hardware transactional instructions when the stateful update is simple enough or eBPF spinlocks [8] to share state across CPU cores; (ii) state sharding using classic RSS; and (iii) sharding using a state-of-the-art CPU load balancing technique, RSS++ [29]. Both SCR and state sharing spray packets evenly across CPU cores. The packets sent to each core for the sharding techniques depend on the configuration of RSS, which varies across the applications we evaluated (see below and Table 1).

_Applications._ We tested five packet-processing applications developed in eBPF/XDP, including (i) a heavy hitter monitor, (ii) DDoS mitigator, (iii) TCP connection state tracker, (iv) port-knocking firewall, and (v) a token bucket policer. Table 1 summarizes these applications. Each program maintains state across packets in the form of a key-value dictionary, whose size and contents are listed in the table. We developed a cuckoo hash table to implement the functionality of this dictionary with a single BPF helper call [12] and use it across all the baselines (sharding, sharing, SCR). The packet fields in the key determine how RSS must be configured: packets having the same key fields must be sent to the same CPU core. However, today's NICs do not allow RSS to steer packets on arbitrary sets of packet fields [51]. For example, the source and destination IP addresses may be used together to hash a packet to a core, but not separately. To be fair to sharding-oriented baselines, we process our input traces to ensure that a superset of the key fields may be used to configure RSS to steer packets correctly for our trace, indeed sharding the program state. For the connection tracker, since both directions of the connection must go to the same CPU core, we use the hash configuration key prescribed by symmetric RSS [62].
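As a rendering of the MLFFR methodology in code, the sketch below mirrors the search parameters stated in §4.1 (under 4% loss counted as loss-free, 0.4 Mpps stopping width). The measurement hook is a hypothetical stand-in; a real harness would drive the DPDK burst-replay generator and compare RX to TX counts.

```c
#include <stdio.h>

/* Hypothetical stand-in for one measurement run: replay the trace at
 * tx_mpps and report the loss percentage. Here we pretend the device
 * sustains 30 Mpps; a real harness would measure RX vs. TX counts. */
static double measure_loss_pct(double tx_mpps)
{
    return tx_mpps <= 30.0 ? 0.0 : 100.0 * (tx_mpps - 30.0) / tx_mpps;
}

/* Binary search for the maximum loss-free forwarding rate, counting
 * < 4% loss as "loss-free" and stopping once the search interval is
 * narrower than 0.4 Mpps. */
static double find_mlffr(double lo_mpps, double hi_mpps)
{
    while (hi_mpps - lo_mpps >= 0.4) {
        double mid = 0.5 * (lo_mpps + hi_mpps);
        if (measure_loss_pct(mid) < 4.0)
            lo_mpps = mid;  /* sustainable: try higher rates */
        else
            hi_mpps = mid;  /* lossy: try lower rates */
    }
    return lo_mpps;
}

int main(void)
{
    printf("MLFFR = %.1f Mpps\n", find_mlffr(0.0, 120.0));
    return 0;
}
```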
### Multi-core throughput scaling

In this section, we compare the MLFFR throughput (§4.1) of several packet-processing programs (Table 1) scaled across multiple cores using three baseline techniques: SCR (§3), state sharing with packets sprayed evenly across all cores, and sharding using RSS (§2). Since the TCP connection tracking application requires packets from the two directions of the connection to be aligned and lossless, we evaluated it on a synthetic but realistic hyperscalar data center trace (§4.1). For the rest of the applications, we report results from real university data center and Internet backbone traces. We have ensured that these experiments reflect a fair comparison of CPU packet-processing efficacy. First, we truncated the packets in the traces to a size smaller than the full MTU, to stress the applications with a high packets/second (Mpps) workload rather than saturate the NIC quickly (§3.1). Further, we fix the packet sizes used across all baselines for a given application, since feeding packets of different sizes to the program for the same fixed packets/second arrival rate may induce bottlenecks other than the CPU (we show such experiments later). We used a fixed packet size of 256 bytes for the connection tracker and 192 bytes for the others. The packet size limits the number of items of packet history metadata that can be piggybacked on each packet. Since the metadata size varies with the application (Table 1), the number of cores we evaluate on also varies by application: we scale up to 6 cores for token bucket and heavy hitter monitoring, 7 for connection tracking, and 14 for the DDoS mitigation and port knocking firewall.

_Throughput results._ Figure 7(a) and Figure 8 show the throughput as we increase the number of packet-processing cores. SCR is the only multi-core scaling technique that can gracefully scale the throughput of all the stateful packet-processing programs we evaluated across multiple cores, regardless of the flow size distribution. SCR is a generic, access-independent, and monotonically-improving scaling technique for multiple cores (§2.3). The throughput for SCR increases linearly across cores in all of the configurations we tested. Somewhat surprisingly, SCR provides even better absolute performance than hardware transactional instructions in the case of the heavy hitter and DDoS mitigation programs. However, the performance of lock-based sharing falls off catastrophically with 3 or more cores. The throughput of sharding using RSS depends on the vagaries of how the RSS hash function steers flows to cores. RSS can neither split a single elephant flow, nor does it intelligently redistribute elephant flows to balance load across CPU cores.

_Comparison with RSS++._ We compared SCR against RSS++ [29] to evaluate whether a state-of-the-art sharding solution scales better by rebalancing work across cores upon heavy load. Due to a configuration issue in RSS++, we could only run RSS++ on CPU cores that were on a NUMA node different from that of the NIC. We were only able to compare SCR and RSS++ both running on these far-NUMA cores, with one program, TCP connection tracking.
Figure 9 shows the results. Fundamentally, flow-level sharding is ineffective when the workload is highly skewed with elephant flows, as many real traffic workloads tend to be (Figure 6).

Figure 9: Comparing TCP connection tracking parallelized using SCR and RSS++ [29] on the hyperscalar data center trace (§4.1).

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c}
\hline
**Application** & \multicolumn{2}{c|}{**State**} & **Metadata size** & **RSS hash** & **Packet traces** & **Atomic HW** & **Lines of code** \\
\cline{2-7}
& **Key** & **Value** & **(bytes/packet)** & **fields** & **evaluated** & **vs. Locks** & **(shard/RSS)** \\
\hline
DDoS mitigator & source IP & count & 4 & src \& dst IP & CAIDA, Univ DC & Atomic HW & 168 \\
Heavy hitter monitor & 5-tuple & flow size & 17 & 5-tuple & CAIDA, Univ DC & Atomic HW & 141 \\
TCP connection state tracking & 5-tuple & TCP state, timestamp, seq \# & 30 & 5-tuple & Hyperscalar DC & Locks & 1029 \\
Token bucket policer & 5-tuple & last packet timestamp, \# tokens & 17 & 5-tuple & CAIDA, Univ DC & Locks & 169 \\
Port-knocking firewall & source IP & knocking state (CLOSED_1..3 or OPEN) & 4 & src \& dst IP & CAIDA, Univ DC & Locks & 123 \\
\hline
\end{tabular}
\end{table}
Table 1: The packet-processing applications we evaluated.

Figure 7: (a) Comparing the throughput of a TCP connection state tracker parallelized using three techniques, SCR (§3), shared state and sharding (§2) on the hyperscalar data center trace (§4.1). The packet sizes used by all baselines are the same. (b) Comparing the throughput of a token bucket policer parallelized using the same three techniques on the university data center trace (§4.1), while truncating all packets in the trace to 64 bytes, and having only SCR add metadata to packets before feeding them to the NIC.

Figure 8: Throughput (§4.1) in millions of packets per second (Mpps) of four stateful packet-processing programs implemented using state-compute replication (§3), shared state, and sharding (§2). Packet traffic is replayed from data center and Internet backbone traces (§4.1).

_Why does SCR scale better than the other techniques?_ Figure 10 shows detailed performance metrics measured from Intel's performance counter monitor (PCM [20]) and BPF profiling [11]. We measure the L2 cache hit ratios, instructions retired per cycle (IPC), and the program's computation latency (only the XDP portion, excluding the dispatch functionality in the driver), as the load offered to the system increases, when the token bucket policer program is run across different numbers of cores (2, 4, or 8). The numbers show the averages for these metrics across the cores running the program. Error bars for IPC show the min and max values across cores. Lock-based sharing in general suffers from lower L2 cache hit ratios ((a)-(c)) and higher latencies ((g)-(i)) due to lock and cache line contention across cores--a trend that holds as the offered load increases and also with additional cores at the same offered load. As we might expect, IPC increases with the offered load ((d)-(f)), since the cores get busier with packet processing. While the sharding approach effectively uses the CPU with a high average IPC value for 2 cores, its average IPC drops significantly--but with very high variation (see error bars)--with additional cores, indicating an imbalance of CPU work. Flow-affinity-based sharding is unable to balance packet processing effectively across cores, leaving some cores idle and others heavily used. However, SCR has a consistently high IPC with more cores and higher offered loads. SCR has higher packet-processing latency ((g)-(i)) than sharding since it needs to process the history for each packet (§3.4). However, its more effective usage of the CPU cores results in better throughput (Figure 8g).

Figure 10: Performance metrics drawn from Intel PCM counters and a BPF profiler while executing the token bucket application with varying offered loads (labeled "TX rate") on different numbers of cores (2, 4, and 8). Metrics shown are averages across the cores running the program. Error bars (shown only for IPC) are the min and max values across cores. Packet traffic is from the university data center trace (§4.1).

_Are there limits to SCR scaling?_ To test how far SCR can continue scaling with additional cores, we measured throughput with the university data center trace where we truncated the base packet to 64 bytes for the shared state and sharding (RSS) programs, with SCR alone adding additional metadata to the packets before feeding them into the NIC. Figure 7b shows the throughput of the token bucket program as the number of cores varies. Increasing the packet size specifically for SCR has at least three performance consequences: it may saturate the NIC earlier for SCR than other baselines, it may result in additional L3 cache memory pressure due to higher DDIO cache occupancy [18], and it may result in PCIe bottlenecks [48] due to additional PCIe transactions and consumption of PCIe bandwidth. Indeed, after 9 cores, the CPU is no longer the bottleneck, and the throughput of SCR saturates at a point much higher than the other baselines.

### Are sequencers practical?

We integrated our Verilog module implementing the sequencer (§3.3.2) into the NetFPGA-PLUS [10] reference switch, which is clocked at 250 MHz with a data bus width of 1024 bits. We use the Alveo U250 board, which contains 1728000 lookup tables (LUTs) and 3456000 flip-flops. We synthesized our sequencer design with different numbers of memory rows (§3.3.2), corresponding to the size of the recent packet history (in number of packets). In our design, each row is 112 bits long, enough to maintain a TCP 4-tuple and an additional 16-bit value (_e.g._ a counter, timestamp, _etc._) for each packet in the history. Table 2 shows the resource usage for our design after synthesis. Our design meets timing at 250 MHz, implying an achievable bandwidth of more than 200 Gbit/s. If each packet history metadata is smaller than a row (112 bits), parallelizing across \(N\) cores requires \(N\) rows. For such applications, our design can support parallelization across 128 cores. The LUT and flip-flop hardware usage is negligible compared to the capacity of the FPGA across all row counts we measured. Further, we have checked that our register-based design on Tofino (§3.3.2) can hold the packet history necessary to parallelize all of the programs we tested up to 16 cores, except connection tracking (11 cores).

## 5 Related Work

High-performance packet processing is a deeply studied research area. We covered the works most closely related to SCR in §2. Here, we discuss other related work.
## 5 Related Work

High-performance packet processing is a deeply studied research area. We covered the works most closely related to SCR in §2. Here, we discuss other related work.

_Frameworks for network function performance._ The problem of scaling out packet processing is prominent in network function virtualization (NFV), with frameworks such as split/merge [56], OpenNF [39], and Metron [44] enabling elastic scaling of flow-oriented processing across many flows. There have also been efforts to parallelize network functions automatically [51] and to design data structures that minimize cross-core contention [40]. In all of these efforts, the finest granularity of managing state and compute is a single stateful flow. SCR scales across cores at a per-packet granularity.

_General techniques for software parallelism._ Among the canonical frameworks for implementing software parallelism [46], our scaling principles are most reminiscent of Single Program Multiple Data (SPMD) parallelism, with the program being identical on each core but the data being distinct. However, the inputs to each core are closely related (_i.e._ overlapping packet history sequences). In SCR, the data is made distinct for each core by the sequencer.

_Parallelizing finite state machines._ A natural model of stateful packet-processing programs is as finite state automata (the state space is the set of flow states) making transitions on events (packets). There have been significant efforts to parallelize FSM execution using speculation [53, 54] and data parallelism [47]. In contrast, SCR exploits replication.

_Parallel network software stacks._ There has been recent interest in abstractions and implementations that take advantage of parallelism in network stacks, for TCP [60] and for end-to-end data transfers to/from user space [33]. SCR takes a complementary approach, using replication rather than decomposing the program into smaller parallelizable computations.

## 6 Conclusion

It is now more crucial than ever to investigate techniques to scale packet processing across multiple cores. This paper presented state-compute replication (SCR), a principle that enables scaling stateful packet-processing programs across cores by leveraging a packet history sequencer, even for a single stateful flow. SCR provides throughput benefits with linear scaling as more cores are added, for realistic packet traffic, packet-processing benchmarks, and system configurations.
2309.13128
Emergent inflation of the Efimov spectrum under three-body spin-exchange interactions
We resolve the unexpected and long-standing disagreement between experiment and theory in the Efimovian three-body spectrum of Li-7, commonly referred to as the lithium few-body puzzle. Our results show that the discrepancy arises out of the presence of strong non-universal three-body spin-exchange interactions, which enact an effective inflation of the universal Efimov spectrum. This conclusion is obtained from a thorough numerical solution of the quantum mechanical three-body problem, including precise interatomic interactions and all spin degrees of freedom for three alkali-metal atoms. Our results show excellent agreement with the experimental data regarding both the Efimov spectrum and the absolute rate constants of three-body recombination, and in addition reveal a general product propensity for such triatomic reactions in the Paschen-Back regime, stemming from Wigner's spin conservation rule.
J. van de Kraats, D. J. M. Ahmed-Braun, J. -L. Li, S. J. J. M. F. Kokkelmans
2023-09-22T18:27:45Z
http://arxiv.org/abs/2309.13128v2
# Emergent inflation of the Efimov spectrum under three-body spin-exchange interactions

###### Abstract

One of the most fascinating predictions of few-body quantum physics is the Efimov effect, a universal accumulation of an infinite geometric series of three-body bound states at a two-body scattering resonance. Ever since the first experimental observation of such an Efimov state, the precise characterization of their physical properties has continued to challenge few-body theory. This is demonstrated most strongly by the lithium few-body puzzle, a remarkable theoretical discrepancy with the observed Efimov spectrum in \({}^{7}\)Li. Here, we resolve this long-standing puzzle, demonstrating that the discrepancy arises out of the presence of strong non-universal three-body spin-exchange interactions. This conclusion is obtained from a thorough numerical solution of the quantum mechanical three-body problem, including precise interatomic interactions and all spin degrees of freedom for three alkali-metal atoms. Our results show excellent agreement with the experimental data regarding both the Efimov spectrum and the absolute rate constants of three-body recombination, and in addition reveal a general product propensity for such triatomic reactions in the Paschen-Back regime, stemming from Wigner's spin conservation rule.

There exists a general desire in physics to formulate accurate descriptions of nature from a minimal number of adjustable parameters, thus uncovering the presence of _universal_ behavior. A paradigmatic example of a system where this ideal picture is realized is the scattering of two particles at low energy. Here, the wave function delocalizes to the point that the observable properties of the system become insensitive to the exact microscopic detail of the interaction, allowing for a description purely in terms of the \(s\)-wave scattering length \(a\) [1]. This remarkable universality carries through to systems of more than two particles, most strikingly exemplified at the three-body level by virtue of the Efimov effect. At a two-body scattering resonance, where \(a\to\infty\), the Efimov effect induces a universal emergence of an infinite tower of geometrically spaced three-body bound states [2; 3]. The resulting spectrum is fully determined by a single length scale, the three-body parameter, typically expressed as the negative scattering length \(a_{-}\) at which the ground-state trimer dissociates into the three-body scattering continuum [1; 4; 5; 6]. In turn, the Efimov effect and three-body parameter induce universal properties in few-body clusters of four or more particles, further extending the applicability of universal theory [5].

The vast majority of experimental studies of the Efimov effect utilize ultracold atomic gases, where the scattering length can be directly controlled by means of a magnetic Feshbach resonance [7; 8]. Near such a resonance, the three-body parameter can be extracted from a characteristic log-periodic modulation of the rate of three-body recombination [1; 4; 9]. Interestingly, although the precise value of \(a_{-}\) is typically sensitive to non-universal short-range physics, the Efimov spectrum in atomic systems possesses an additional van der Waals universality \(a_{-}\approx-9.7\ r_{\rm vdW}\) [10; 11; 12], where \(r_{\rm vdW}\) gives the characteristic length scale associated with the two-body interaction.
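For context, the geometric spacing of this tower and the role of \(a_{-}\) are quantified by the standard universal zero-range relations for three identical bosons, quoted here for the reader (these are textbook results, not derived in this excerpt):

\[E_{T}^{(n+1)}=e^{-2\pi/s_{0}}\,E_{T}^{(n)},\qquad a_{-}^{(n+1)}=e^{\pi/s_{0}}\,a_{-}^{(n)},\qquad e^{\pi/s_{0}}\approx 22.7,\]

where \(E_{T}^{(n)}\) is the binding energy of the \(n\)-th trimer at resonance and \(s_{0}\approx 1.00624\) is the universal Efimov scaling exponent.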
Theoretical analyses have shown that this universality is robust for Efimov states near broad Feshbach resonances, and originates from a universal suppression of three-body probability in the short range [13; 14]. For Feshbach resonances of intermediate to narrow width, both theoretical and experimental works have demonstrated an increase of \(|a_{-}|\), arising from the associated growth of the two-body effective range scale [15; 16; 17; 18; 19; 20; 21; 22]. However, a series of experiments in this regime with the lightest bosonic alkali, \({}^{7}\)Li, have failed to observe this behavior, and in fact measured values for \(|a_{-}|\) that, remarkably, recede slightly below the universal van der Waals value [23; 24; 25; 26; 27]. While similar behavior can be obtained in some theoretical scenarios [28; 20], it is generally unclear how to connect these to \({}^{7}\)Li, and sophisticated numerical models have so far failed to reproduce the data. The long-standing challenge to explain this unexpected mismatch between theory and experiment is now referred to as the _lithium few-body puzzle_ [29; 30; 31].

In this work, we investigate the connection between the anomalous value of \(|a_{-}|\) in \({}^{7}\)Li and the presence of three-body spin-exchange interactions. Here, _spin-exchange_ refers to a process in which the internal spin state of the atoms is altered by coupling of the valence electrons, and thus necessarily occurs at short length scales. Due to the aforementioned suppression of three-body probability in this regime, a useful distinction can be made between two-body spin-exchange, where the third particle only spectates and its state is conserved, and three-body spin-exchange, where all particles partake. As we illustrate in Fig. 1, this distinction can also naturally be applied to three-body recombination. For many applications, the contributions of three-body spin-exchange are negligible, which significantly simplifies the three-body problem [15; 16; 17; 18; 19; 20; 32; 33; 34]. Recently, however, studies have found three-body spin-exchange to contribute significantly to three-body observables in \({}^{39}\)K [35] and \({}^{7}\)Li [36], both at relatively large magnetic fields. Motivated by these findings, we study the Efimov spectrum in \({}^{7}\)Li, using a recently developed numerical approach to the quantum mechanical three-body problem which, together with high-performance computing facilities, allows us to include _all_ coupled three-body channels in the Hamiltonian [35; 37] (see Methods).

## II Inflation of the Efimov Spectrum

Following the experiment of Ref. [26], we analyze two high-field Feshbach resonances in spin-polarized ultracold \({}^{7}\)Li gases, in the hyperfine states \(\left|f,m_{f}\right\rangle_{\text{in}}=\left|1,1\right\rangle\) and \(\left|f,m_{f}\right\rangle_{\text{in}}=\left|1,0\right\rangle\), respectively. Both resonances have similar negative background scattering lengths on the order of \(r_{\text{vdW}}=32.4863\ a_{0}\), and both are of intermediate to narrow resonance width [26; 38]. In line with experiment, we study the rate of three-body loss of the trapped gas density \(n\), typically expressed in terms of a recombination rate constant \(L_{3}\) as \(\mathrm{d}n/\mathrm{d}t=-L_{3}\,n^{3}\).
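For reference, the universal zero-range prediction to which such recombination data is fitted on the \(a<0\) side has the standard log-periodic form (a well-known effective-field-theory result in the spirit of Ref. [1]; the dimensionless prefactor \(C\) is universal and its precise value is not needed here):

\[L_{3}(a)=C\,\frac{\sinh(2\eta_{-})}{\sin^{2}\!\left[s_{0}\ln(a/a_{-})\right]+\sinh^{2}(\eta_{-})}\,\frac{\hbar\,a^{4}}{m},\qquad a<0,\]

so that \(a_{-}\) fixes the positions of the Efimov resonances in \(L_{3}\) and \(\eta_{-}\) sets their width; these are precisely the two parameters extracted by the fits described below.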
We calculate \(L_{3}\) for a range of scattering lengths on the attractive (\(a<0\)) side of the Feshbach resonance, and subsequently extract the values of the three-body parameter \(a_{-}\) and trimer width \(\eta_{-}\) by fitting the data to universal predictions from effective field theory [1]. To highlight the role of three-body spin-exchange, we compare a full multichannel spin (FMS) model with a fixed spectating spin (FSS) model [35], where the latter is constrained to purely two-body spin-exchange (i.e. the upper pathway in Fig. 1; see Methods for more detail). To model the pairwise interactions, we use realistic singlet and triplet Born-Oppenheimer interaction potentials [38]. It follows that the FMS model is quantum mechanically rigorous, save for the omission of nonadditive three-body forces, which are generally assumed to be negligible in ultracold atomic gases [4].

Our results are shown in Fig. 2. In an FSS calculation, the value of \(\left|a_{-}\right|\) is significantly larger than both the universal van der Waals value and the experimental data, suggesting a significant squeezing of the spectrum for this narrow resonance, in line with the majority of multichannel three-body models in the current literature [17; 18; 22]. Our main result is that upon including three-body spin-exchange processes, the additional accessible states induce non-trivial multichannel physics that acts to cancel the increase of \(\left|a_{-}\right|\), and can even decrease \(\left|a_{-}\right|\) to below the universal van der Waals value, thus effecting an inflation of the spectrum. Consequently, our FMS calculations significantly improve on the FSS results with respect to the experimental data, and for the \(\left|f,m_{f}\right\rangle_{\text{in}}=\left|1,1\right\rangle\) state specifically, our FMS results for both \(a_{-}\) and \(\eta_{-}\) fall within the experimental uncertainty of Ref. [26]. Due to the increased number of coupled channels, the FMS calculations in the \(\left|f,m_{f}\right\rangle_{\text{in}}=\left|1,0\right\rangle\) state are too numerically intensive to converge fully, but the similarity in behavior to the \(\left|f,m_{f}\right\rangle_{\text{in}}=\left|1,1\right\rangle\) state strongly indicates that a similar match with the experiment is achievable if numerical resources allow (see Supplementary Material).

Previous three-body studies have shown that the universal increase in \(\left|a_{-}\right|\) near narrow resonances arises from a repulsive barrier in the three-body potential scaling with the effective range, which progressively squeezes the Efimov spectrum [39]. Recently, however, the unexpected observation of a trimer state above the atom-dimer dissociation threshold has prompted further theoretical analysis, which indicates that the Efimov state of \({}^{7}\)Li may actually exist _behind_ the universal repulsive barrier [40]. While the universal effects of the barrier are evident in our FSS results, the observed sensitivity of the value of \(\left|a_{-}\right|\) to short-range three-body spin-exchange processes is in fact consistent with the presence of a non-universal trimer state in the inner potential well. In this sense, our FMS results can serve as important numerical confirmation of this novel trimer binding mechanism. Such an identification furthermore indicates that three-body spin-exchange couplings induce an effective attractive interaction that can tug the trimer state into the inner potential well, thus causing the inflation of the Efimov spectrum.
In the future, it may be interesting to analyze the exact nature of this trimer and the corresponding potential in more detail, for which an approach similar to Ref. [22] could prove a useful starting point.

Figure 1: Schematic representation of three-body recombination through two distinct spin-exchange pathways, where the color of particles represents their spin state and two connected particles represent a molecule. The upper pathway contains purely two-body spin-exchange, where one of the particles conserves its spin throughout the recombination. In the bottom pathway, all three particles partake in spin-exchange, such that no single spin is conserved. We note that there is an additional pathway, not pictured here, in which all particles preserve their initial spin state. This pathway is included in all our calculations.

## Propensity rule for three-body recombination

Next to the excellent match with the three-body parameter, our calculations also show good agreement with the individual measurements of \(L_{3}\). As three-body recombination is an important and ubiquitous chemical process, relevant far beyond the specific context of Efimov physics, this agreement motivates us to analyze the recombination rates more closely [41]. Specifically, we aim to characterize the nature of the spin-exchange pathways, for which we decompose \(L_{3}\) into partial recombination rates to all distinct atom-dimer channels, providing a measure of the population distribution of product states following three-body recombination. Interestingly, our calculations show that, even though the FSS approximation fails, the actual number of strongly coupled channels remains remarkably small, with the vast majority of product channels having near-negligible relative recombination rates (see Supplementary Material for more detail).

We will now show that this behavior results from a manifestation of Wigner's electronic-spin conservation rule [42; 43] for three atoms, originating from the relatively weak coupling between the electronic and nuclear spins of the atoms at large magnetic fields. In this Paschen-Back regime [44], the spins independently precess around the magnetic field direction, such that single-particle states are best described by the individual projection quantum numbers \(m_{s}\) and \(m_{i}\). For simplicity, let us briefly neglect the subdominant contribution from the nuclear spin, whose coupling to the magnetic field is relatively weak. Then both incoming single-particle states studied in this work may be written as

\[\ket{f,m_{f}}_{\rm in}\sim\ket{\downarrow}+\delta\ket{\uparrow}, \tag{1}\]

where \(\ket{\downarrow}\) and \(\ket{\uparrow}\) represent the down and up electronic spin states \(m_{s}=-1/2\) and \(m_{s}=1/2\), respectively, and \(\delta\) scales with the ratio of the hyperfine and Zeeman energies [45]. In the Paschen-Back regime, \(\delta\) is a small number, which motivates us to expand the incoming three-body state into four distinct components scaling as \(\delta^{n}\), where \(n\) equals the number of electronic spins pointing up. Each component can be uniquely identified with a definite value of the total electronic spin projection \(M_{S}=m_{s_{1}}+m_{s_{2}}+m_{s_{3}}=-3/2+n\), which is rigorously conserved in this basis as it is fully uncoupled from the nuclear spin. Hence, if it is possible to identify a dominant incoming projection \(M_{S}\), then the outgoing product state distribution will show a propensity to states that conserve this projection.
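Concretely, taking the product of three single-particle states of the form (1) and grouping terms by powers of \(\delta\) gives the four components referred to above (a sketch; normalization omitted):

\[\ket{\rm in}\sim\ket{\downarrow\downarrow\downarrow}+\delta\big(\ket{\uparrow\downarrow\downarrow}+\ket{\downarrow\uparrow\downarrow}+\ket{\downarrow\downarrow\uparrow}\big)+\delta^{2}\big(\ket{\uparrow\uparrow\downarrow}+\ket{\uparrow\downarrow\uparrow}+\ket{\downarrow\uparrow\uparrow}\big)+\delta^{3}\ket{\uparrow\uparrow\uparrow},\]

where the component scaling as \(\delta^{n}\) carries total electronic projection \(M_{S}=-3/2+n\).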
To determine the dominant incoming component, we have to consider both the scaling with the small parameter \(\delta\) and the amplitude of the associated wave functions at small nuclear separations, where recombination processes typically take place. As we illustrate in Figs. 3(a) and 3(b), the dominant short-range components of the incoming two-body wave functions are in the singlet two-body state \(\left|\downarrow\uparrow\right\rangle\), corresponding to the spin character of the resonantly coupled Feshbach level. It follows that recombination preferably occurs through three-body states with (partial) singlet character, of which the state \(\left|\downarrow\downarrow\uparrow\right\rangle\) (\(n=1\)) has the dominant scaling with \(\delta\). Thus, we finally deduce that three-body recombination will show a propensity to product channels with \(M_{S}=-\frac{1}{2}\).

To confirm the presence of this propensity in our numerics, we introduce the quantity \(\mathcal{L}_{3}(M_{S})\) as the average of the partial recombination rates with respect to the projection of the associated outgoing state on a three-body spin state with definite total electronic spin projection (see Methods section). In this way, \(\mathcal{L}_{3}(M_{S})\) provides a measure for the relative prevalence of a given value of \(M_{S}\) in the product state distribution following recombination. As shown in Figs. 3(c) and 3(d), the electronic spin-conservation propensity is clearly present for both Feshbach resonances we study. Note that in the argument outlined above, all three particles are treated equally, and no reference is made to a special role for the spectating particle. Indeed, we observe that the \(\left|\downarrow\downarrow\uparrow\right\rangle\) component of \(\mathcal{L}_{3}\) is split almost equally between channels in which the spectating spin is conserved or changed, with the latter being slightly larger.

Figure 2: Results of our three-body calculations, compared directly with the experimental data of Ref. [26]. Table (a) shows the three-body parameter \(a_{-}\) and Efimov trimer width \(\eta_{-}\) obtained using FSS and FMS models. The associated values of the three-body recombination rate coefficient \(L_{3}\) as a function of the scattering length \(a\) are shown in Figs. (b) and (c), for the two distinct incoming hyperfine states. The fits of \(L_{3}\) to universal theory giving the results in table (a) are shown as dash-dotted lines in matching color. Insets show the two-body scattering length as a function of magnetic field surrounding the Feshbach resonance, where the green shaded region of field values matches the range of scattering lengths in the enclosing figure. We note that next to Ref. [26], an additional independent measurement of the Efimov trimer in the \(\ket{1,1}\) state obtained \(a_{-}/r_{\rm vdW}=-7.76(31)\) and \(\eta_{-}=0.17\) [27].

## Outlook

Our findings suggest several new avenues for future research. First, the excellent match between our \({}^{7}\)Li results and the experimental data now provides a new benchmark for the theoretical description of strongly interacting few-body systems. Our method thus shows great promise for studying other systems where measurements deviate from the current theoretical predictions [21].
Aside from these experimental concerns, there is also a more conceptual challenge: to further characterize the physical mechanism underpinning the formation of the non-universal Efimov trimer observed in this work, which will require untangling the exact reshaping of the three-body potential in the presence of three-body spin-exchange [22; 39; 40].

Our results also have interesting implications beyond the realm of Efimov physics. The uncovered spin-propensity rule in the rate of three-body recombination provides a remarkably simple picture of triatomic chemical reactions in large magnetic fields, which can now aid in the understanding and possible experimental control of state-to-state quantum chemistry in these regimes [33; 34]. Further studies in this direction may also seek to elucidate the more subtle role of the individual nuclear spins, which should have a similar propensity to be conserved in the Paschen-Back regime.

Figure 3: Analysis of the electronic spin-propensity rule for three-body recombination. In Figs. (a) and (b) we show the components of the incoming two-body radial wave functions \(r\psi_{\mathrm{in}}(r)\), highlighting the electronic spin components \(\left|\downarrow\downarrow\right\rangle\) in green and \(\left|\downarrow\uparrow\right\rangle\) in dash-dotted orange. The singlet \(\left|\downarrow\uparrow\right\rangle\) components are resonantly enhanced by the Feshbach level and thus dominate the wave function at short distance. In Figs. (c) and (d) we show the averaged partial recombination rate \(\mathcal{L}_{3}(M_{S})\) to channels with definite total electronic projection \(M_{S}\), which automatically fixes the total nuclear projection \(M_{I}\). The fraction in blue originates from atom-dimer states accessible by purely two-body spin-exchange, while the red portion comes from states only accessible by three-body spin-exchange.

###### Acknowledgements.

We thank Jose D'Incao for fruitful discussions. J.v.d.K. and S.J.J.M.F.K. acknowledge financial support from the Dutch Ministry of Economic Affairs and Climate Policy (EZK), as part of the Quantum Delta NL program. D.J.M.A.B. acknowledges financial support from the Netherlands Organisation for Scientific Research (NWO) under Grant No. 680-47-623. The results presented in this work were obtained on the Dutch national supercomputer Snellius, with support from TU/e HPC Lab and SURF.
2309.04454
An upper bound on geodesic length in 2D critical first-passage percolation
We consider i.i.d. first-passage percolation (FPP) on the two-dimensional square lattice, in the critical case where edge-weights take the value zero with probability $1/2$. Critical FPP is unique in that the Euclidean lengths of geodesics are superlinear, rather than linear, in the distance between their endpoints. This fact was speculated by Kesten in 1986 but not confirmed until 2019 by Damron and Tang, who showed a lower bound on geodesic length that is polynomial with degree strictly greater than $1$. In this paper we establish the first non-trivial upper bound. Namely, we prove that for a large class of critical edge-weight distributions, the shortest geodesic from the origin to a box of radius $R$ uses at most $R^{2+\epsilon}\pi_3(R)$ edges with high probability, for any $\epsilon > 0$. Here $\pi_3(R)$ is the polychromatic 3-arm probability from classical Bernoulli percolation; upon inserting its conjectural asymptotic, our bound converts to $R^{4/3 + \epsilon}$. In any case, it is known that $\pi_3(R) \lesssim R^{-\delta}$ for some $\delta > 0$, and so our bound gives an exponent strictly less than $2$. In the special case of Bernoulli($1/2$) edge-weights, we replace the additional factor of $R^\epsilon$ with a constant and give an expectation bound.
Erik Bates, David Harper, Xiao Shen, Evan Sorensen
2023-09-08T17:25:58Z
http://arxiv.org/abs/2309.04454v1
# An upper bound on geodesic length in 2D critical first-passage percolation

###### Abstract.

We consider i.i.d. first-passage percolation (FPP) on the two-dimensional square lattice, in the critical case where edge-weights take the value zero with probability \(1/2\). Critical FPP is unique in that the Euclidean lengths of geodesics are superlinear--rather than linear--in the distance between their endpoints. This fact was speculated by Kesten in 1986 but not confirmed until 2019 by Damron and Tang, who showed a lower bound on geodesic length that is polynomial with degree strictly greater than \(1\). In this paper we establish the first non-trivial upper bound. Namely, we prove that for a large class of critical edge-weight distributions, the shortest geodesic from the origin to a box of radius \(R\) uses at most \(R^{2+\varepsilon}\pi_{3}(R)\) edges with high probability, for any \(\varepsilon>0\). Here \(\pi_{3}(R)\) is the polychromatic \(3\)-arm probability from classical Bernoulli percolation; upon inserting its conjectural asymptotic, our bound converts to \(R^{4/3+\varepsilon}\). In any case, it is known that \(\pi_{3}(R)\lesssim R^{-\delta}\) for some \(\delta>0\), and so our bound gives an exponent strictly less than \(2\). In the special case of Bernoulli(\(1/2\)) edge-weights, we replace the additional factor of \(R^{\varepsilon}\) with a constant and give an expectation bound.

Key words and phrases: critical first-passage percolation, geodesic length, square lattice. 2020 Mathematics Subject Classification: 60K35, 60K37, 82B27, 82B43.

E.B. was partially supported by NSF grants DMS-1902734 and DMS-2246616. D.H. was partially supported by NSF grant DMS-2054559. X.S. was partially supported by the Wylie Research Fund at the University of Utah. E.S. was partially supported by the Fernholz Foundation. This work was partly performed while E.S. was a PhD student at the University of Wisconsin-Madison, where he was partially supported by Timo Seppalainen under NSF grants DMS-1854619 and DMS-2152362.

## 1. Introduction

### The model of critical first-passage percolation (FPP)

Let \(E(\mathbb{Z}^{2})\) denote the edge set of the square lattice \(\mathbb{Z}^{2}\). Consider a family of i.i.d. random variables \((t_{e})_{e\in E(\mathbb{Z}^{2})}\) defined on some probability space \((\Omega,\mathcal{F},\mathbb{P})\) such that

\[\mathbb{P}(t_{e}<0)=0\quad\text{and}\quad\mathbb{P}(t_{e}=0)=1/2. \tag{1.1}\]

We say that \(t_{e}\) is the _weight_ of edge \(e\). For each pair \(x,y\in\mathbb{Z}^{2}\), let \(\mathcal{P}(x,y)\) denote the collection of all self-avoiding nearest-neighbor paths starting at \(x\) and ending at \(y\). The _passage time_ between \(x\) and \(y\) is the random quantity

\[T(x,y)=\inf_{\gamma\in\mathcal{P}(x,y)}T(\gamma),\quad\text{where}\quad T(\gamma)=\sum_{e\in\gamma}t_{e}. \tag{1.2}\]

When \(x=y\), we allow the empty path so that \(T(x,x)=0\). The map \(T(\cdot\,,\cdot)\) is thus a pseudometric on \(\mathbb{Z}^{2}\), and it naturally extends to sets: for \(\mathcal{A},\mathcal{B}\subset\mathbb{Z}^{2}\), we define

\[T(\mathcal{A},\mathcal{B})=\inf_{x\in\mathcal{A},\,y\in\mathcal{B}}T(x,y).\]

A path \(\gamma\) is said to be a _geodesic_ between \(\mathcal{A}\) and \(\mathcal{B}\) if \(\gamma\) starts at a vertex in \(\mathcal{A}\), ends at a vertex in \(\mathcal{B}\), and achieves the minimal passage time \(T(\gamma)=T(\mathcal{A},\mathcal{B})\). It is known that with probability one, geodesics exist between every pair of points [28, Cor. 1.3].
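As an aside from us (not part of the paper): since all edge-weights are nonnegative, passage times such as \(T(0,\partial B_{R})\) can be computed with Dijkstra's algorithm. The following minimal Python sketch does this for the Bernoulli(\(1/2\)) weights of (1.4) below. Restricting the search to the box \(B_{R}\) is harmless, because the portion of any path from \(0\) up to its first visit to \(\partial B_{R}\) stays inside \(B_{R}\).

```python
import heapq
import random

def passage_time(R, seed=0):
    """T(0, boundary of B_R) for i.i.d. Bernoulli(1/2) edge-weights on Z^2,
    computed by Dijkstra's algorithm inside the box B_R = [-R, R]^2."""
    rng = random.Random(seed)
    weight = {}  # i.i.d. edge-weights, sampled lazily, keyed by sorted vertex pair

    def w(u, v):
        e = (u, v) if u <= v else (v, u)
        if e not in weight:
            weight[e] = rng.randint(0, 1)  # P(t_e = 0) = P(t_e = 1) = 1/2
        return weight[e]

    dist = {(0, 0): 0}
    heap = [(0, (0, 0))]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist[x]:
            continue  # stale heap entry
        if max(abs(x[0]), abs(x[1])) == R:
            return d  # first boundary vertex settled realizes T(0, dB_R)
        for y in ((x[0]+1, x[1]), (x[0]-1, x[1]), (x[0], x[1]+1), (x[0], x[1]-1)):
            if max(abs(y[0]), abs(y[1])) <= R:
                nd = d + w(x, y)
                if nd < dist.get(y, float("inf")):
                    dist[y] = nd
                    heapq.heappush(heap, (nd, y))

print([passage_time(R) for R in (8, 16, 32, 64)])
```

Running Dijkstra instead on the pairs (passage time, edge count) with lexicographic order additionally yields a Monte Carlo estimate of the shortest geodesic length \(\mathcal{N}_{R}\) studied below.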
Geodesics therefore exist between any two finite sets \(\mathcal{A}\) and \(\mathcal{B}\).

This model can be projected to classical Bernoulli percolation by declaring that all edges \(e\) with \(t_{e}=0\) are _open_, while those with \(t_{e}>0\) are _closed_. The assumption (1.1) means that the resulting projection is _critical_ percolation; in particular, there is no infinite connected cluster of open edges. Consequently, for \(x\) and \(y\) far apart, a geodesic between \(x\) and \(y\) will typically use a large number of open edges without penalty, but will also need to traverse a small number of closed edges to go between distinct open clusters.

Under the criticality assumption (1.1), Kesten [14, Sec. 9.24] conjectured that the length of a geodesic (i.e. the total number of edges it contains) between \(0\) and \(x\) grows superlinearly in \(\|x\|\). This conjecture was verified by Damron and Tang [7]. To state their result precisely, we let \(\mathcal{N}_{0,x}\) denote the minimum length of a geodesic between \(0\) and \(x\).

**Theorem A**.: [7, Thm. 1] _Assume (1.1). Then there exist \(c>0\) and \(\beta>1\) such that_

\[\mathbb{P}(\mathcal{N}_{0,x}\leq\|x\|_{1}^{\beta})\leq(1/c)\exp(-\|x\|_{1}^{c})\quad\text{for all }x\in\mathbb{Z}^{2}. \tag{1.3}\]

It should be emphasized that this result is special to the critical case, and at least superficially to \(d=2\). Indeed, if \(\mathbb{P}(t_{e}=0)\neq 1/2\), then \(\mathcal{N}_{0,x}\) grows linearly in \(\|x\|\) and is known in some cases to even satisfy a law of large numbers if \(x\) is brought to infinity along a fixed direction (see [2, Sec. 1.5] and references therein, including [1, Thm. 4.9] for the subcritical case, and [29, Thm. 4] for supercritical). What remains unsettled by Theorem A is the exact growth rate of \(\mathcal{N}_{0,x}\) in critical FPP, as there is no matching upper bound. In fact, until now, no upper bound whatsoever has been established.

### Main results for Bernoulli weights

The most well-studied case of critical FPP is that of Bernoulli weights:

\[\mathbb{P}(t_{e}=0)=\mathbb{P}(t_{e}=1)=\frac{1}{2}. \tag{1.4}\]

Our results are strongest in this case. The estimates we provide for geodesic length are given in terms of arm events, which are of fundamental interest in the study of Bernoulli percolation and not immediately connected to FPP. We now recall the relevant definitions.

Define the dual lattice \(\widehat{\mathbb{Z}}^{2}=\mathbb{Z}^{2}+(\frac{1}{2},\frac{1}{2})\). For clarity, we will often refer to \(\mathbb{Z}^{2}\) as the primal lattice. The shift by \((\frac{1}{2},\frac{1}{2})\) means that for each primal edge \(e\in E(\mathbb{Z}^{2})\), there is a unique dual edge \(\hat{e}\in E(\widehat{\mathbb{Z}}^{2})\) that bisects it. We say that the endpoints of \(\hat{e}\) are the _dual neighbors_ of \(e\), and similarly, the endpoints of \(e\) are the dual neighbors of \(\hat{e}\). We also define the dual neighbors of a _vertex_ \(v\), which are the four points on the dual lattice closest to \(v\). Once the edges of \(\mathbb{Z}^{2}\) are given open or closed status, their dual edges are given the same status. That is, we declare \(\hat{e}\) to be open if \(e\) is open, or closed if \(e\) is closed. We can then speak of open or closed paths; all open paths we consider are on the primal lattice, while all closed paths are on the dual lattice. Furthermore, it will often be useful to identify self-avoiding paths with the simple curves their edges trace out in the plane.
For instance, this identification makes defining disjointness very intuitive: two paths are _disjoint_ if their associated curves are disjoint.¹

Footnote 1: If the two paths are on the same lattice, then disjointness is equivalent to sharing no vertices; if on different lattices, it is equivalent to sharing no dual pair of edges (i.e. never crossing).

Consider the box \(B_{R}=[-R,R]^{2}\cap\mathbb{Z}^{2}\). The boundary of \(B_{R}\) is written \(\partial B_{R}=B_{R}\setminus B_{R-1}\). Let \(\pi_{3}(R)\) denote the probability of the following event, depicted in Figure 1: there exist two primal paths and one dual path that are disjoint and satisfy the following conditions:

* The two primal paths are open, start at \((0,0)\) and \((1,0)\) respectively, and both end at \(\partial B_{R}\).
* The dual path is closed, starts at either \((\frac{1}{2},\frac{1}{2})\) or \((\frac{1}{2},-\frac{1}{2})\), and eventually reaches a dual neighbor of a point inside \(\partial B_{R}\).

In the percolation literature, this is called a \(3\)-arm event, and \(\pi_{3}(R)\) is the \(3\)-arm probability at distance \(R\). Our bounds are given in terms of this quantity. Let \(\mathcal{N}_{R}\) denote the length of the shortest geodesic from \(0\) to \(\partial B_{R}\).

**Theorem 1.1**.: _Assume (1.4). Then there exists a constant \(C\) such that for all \(R\geq 1\),_

\[\mathbb{E}[\mathcal{N}_{R}]\leq CR^{2}\pi_{3}(R). \tag{1.5}\]

Figure 1. The polychromatic \(3\)-arm event at edge \(e=\{(0,0),(1,0)\}\): two open primal paths (shown solid) and one closed dual path (shown dashed).

Our next result considers a more general setting. For finite sets \(\mathcal{A},\mathcal{B}\subset\mathbb{Z}^{2}\), let \(\mathcal{N}_{\mathcal{A},\mathcal{B}}\) denote the minimum length of a geodesic from \(\mathcal{A}\) to \(\mathcal{B}\). When \(\mathcal{A}\) and \(\mathcal{B}\) are single points, the following gives a result for point-to-point geodesics.

**Theorem 1.2**.: _Assume (1.4). Let \(\mathcal{A}\) and \(\mathcal{B}\) be disjoint finite connected sets of vertices, and let \(d=\mathrm{dist}(\mathcal{A},\mathcal{B})\). There exist constants \(C,c>0\), independent of \(d\), \(|\mathcal{A}|\), and \(|\mathcal{B}|\), such that_

\[\mathbb{P}(\mathcal{N}_{\mathcal{A},\mathcal{B}}\geq\lambda d^{2}\pi_{3}(d))\leq C(|\mathcal{A}|+|\mathcal{B}|)^{3}\lambda^{-c}\quad\text{for all }\lambda>0. \tag{1.6}\]

The exact decay rate of \(\pi_{3}(R)\) as \(R\to\infty\) has not been established on the square lattice, but on the triangular lattice, it is known that \(\pi_{3}(R)=R^{-2/3+o(1)}\) [24, Thm. 4] (see also [21, Thm. 21]). It is widely believed that the same asymptotic holds on the square lattice, although we are not aware of any rigorous estimates other than those inferred from bounds for the one-arm and five-arm exponents; see Remark 3.6.

**Remark 1.3**.: In the case \(|\mathcal{A}|=|\mathcal{B}|=1\), Theorem 1.2 is analogous to [3, Cor. 2] concerning chemical distance in critical percolation. We do not obtain a precise value of \(c\) in the proof, but we explain here why \(c\) cannot exceed the value \(2\) when \(\mathcal{A}=\{(0,0)\}\) and \(\mathcal{B}=\{(1,0)\}\). Indeed, it was shown in [3, Prop. 3] that

\[\mathbb{E}\big[(\mathcal{N}_{(0,0),(1,0)})^{2}\;\big|\;(0,0)\leftrightarrow(1,0)\big]=\infty,\]

where \((0,0)\leftrightarrow(1,0)\) denotes the event that the two points are connected by an open path. In particular, \(\mathbb{E}\big[(\mathcal{N}_{(0,0),(1,0)})^{2}\big]=\infty\). But if \(c\) in (1.6) were greater than \(2\), then \(\mathcal{N}_{(0,0),(1,0)}\) would have a finite second moment.
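For orientation, inserting the conjectural asymptotic \(\pi_{3}(R)=R^{-2/3+o(1)}\) into (1.5) recovers the exponent quoted in the abstract:

\[\mathbb{E}[\mathcal{N}_{R}]\leq CR^{2}\pi_{3}(R)=R^{2-2/3+o(1)}=R^{4/3+o(1)}.\]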
### Main results for general edge-weights

To obtain bounds for geodesic length beyond the Bernoulli case, we require either of two possible assumptions on the distribution function \(F\) of the edge-weights. These are (1.8) and (1.9), stated below. These conditions involve quantities \((p_{R})_{R\geq 1}\) which are standard in near-critical percolation. We now recall their definition.

Consider i.i.d. random variables \((U_{e})_{e\in E(\mathbb{Z}^{2})}\), each uniformly distributed on \([0,1]\). For a specific choice of distribution function \(F\), we can realize the edge-weights as \(t_{e}=F^{-1}(U_{e})\), where

\[F^{-1}(u)=\inf\{t\in\mathbb{R}:\,F(t)\geq u\}.\]

We say a self-avoiding path \(\gamma\) is \(p\)_-open_ if, for each edge \(e\in\gamma\), we have \(U_{e}\leq p\). With \(p_{\mathrm{c}}=1/2\), the open paths we have defined previously are the same as \(p_{\mathrm{c}}\)-open paths defined here. For positive integers \(n,m\) and \(p\in(p_{\mathrm{c}},1]\), let

\[\sigma(n,m,p)=\mathbb{P}(\text{there is a $p$-open left-right crossing of $[0,n]\times[0,m]$}),\]

where "left-right crossing of \([0,n]\times[0,m]\)" means a path in \([0,n]\times[0,m]\) that starts in \(\{0\}\times[0,m]\) and ends in \(\{n\}\times[0,m]\). For \(\varepsilon>0\) and \(p>p_{\mathrm{c}}\), define

\[L(p,\varepsilon)=\min\{R\geq 1:\sigma(R,R,p)>1-\varepsilon\}.\]

This \(L(p,\varepsilon)\) is called the (finite-size scaling) correlation length. It is shown in [15, Thm. 4] that there exists \(\varepsilon_{1}>0\) such that for all \(0<\varepsilon,\varepsilon^{\prime}\leq\varepsilon_{1}\), one has \(L(p,\varepsilon)\asymp L(p,\varepsilon^{\prime})\) as \(p\searrow p_{\mathrm{c}}\). We therefore write \(L(p)=L(p,\varepsilon_{1})\) with this fixed \(\varepsilon_{1}\), as is customary. For \(R\geq 1\), define

\[p_{R}=\inf\{p>p_{\mathrm{c}}:L(p)\leq R\}. \tag{1.7}\]

The two maps \(R\mapsto p_{R}\) and \(p\mapsto L(p)\) should be thought of as inverse to each other.

**Theorem 1.4**.: _Assume that the edge-weight distribution function \(F\) satisfies (1.1) and one of the following two conditions:_

\[\limsup_{n\to\infty}F^{-1}(p_{2^{n+1}})/F^{-1}(p_{2^{n}})<1, \tag{1.8}\]

\[\text{or}\quad\liminf_{n\to\infty}F^{-1}(p_{2^{2n}})/F^{-1}(p_{2^{n}})>0. \tag{1.9}\]

_Then for any \(\varepsilon>0\), there exist constants \(C,c,s>0\) such that for all \(\lambda\geq 1\) and \(R\geq 1\),_

\[\mathbb{P}(\mathcal{N}_{R}\geq\lambda R^{2+\varepsilon}\pi_{3}(R))\leq Ce^{-c(\log\lambda R)^{s}}. \tag{1.10}\]

Since \(p_{R}\searrow\frac{1}{2}\) as \(R\to\infty\) (for instance, see (4.6)), the conditions (1.8) and (1.9) control the behavior of \(F\) near its atom at \(0\). Similar conditions have been used in [6] for characterizing the asymptotic behavior of \(T(0,\partial B_{R})\) as \(R\to\infty\), but what is striking here is that (1.8) and (1.9) represent opposite (indeed, mutually exclusive) regimes for the behavior of \(F\) near \(0\). Intuitively, (1.8) suggests that as \(\varepsilon\searrow 0\), the value of \(F^{-1}(1/2+\varepsilon)\) does not decrease too rapidly towards zero. Meanwhile, (1.9) indicates that the graph of \(F\) becomes very flat to the right of \(0\), so that \(F^{-1}(1/2+\varepsilon)\) rapidly vanishes as \(\varepsilon\searrow 0\). Somewhat surprisingly, either of these opposite possibilities is enough to prove the same upper bound (1.10).
In either case, the strategy is to control the number of nonzero edges along the geodesics. Under condition (1.8), we control the number of nonzero edges in two ways. When the maximum edge-weight along the geodesics is less than or equal to \(F^{-1}(1/2+\varepsilon)\), we employ an argument based on the work of Kiss [17, 18]; in particular, we cite a result from [7] that uses this machinery. Otherwise, we show that it is rare for the maximum edge-weight along a geodesic to be greater than \(F^{-1}(1/2+\varepsilon)\). It is worth noting that we need this estimate to hold for all large boxes, which translates to all small values of \(\varepsilon\). If \(F^{-1}(1/2+\varepsilon)\) decays too quickly, the tail estimate for the maximum edge-weight breaks down. This is where condition (1.8) comes into play: it controls the decay rate of \(F^{-1}(1/2+\varepsilon)\) as \(\varepsilon\searrow 0\).

On the other hand, the condition (1.9) gives a different decomposition. It ensures that edges with small positive values are infrequent across the entire box \(B_{R}\); consequently, there cannot be many of these edges along the geodesic. Additionally, edges with larger values do not appear on the geodesic too often, as their presence would again result in a higher first-passage value.

The following collection of examples demonstrates that conditions (1.8) and (1.9) allow for a broad class of edge-weight distributions. Figure 2 provides an illustration.

**Example 1.5**.: In each example below, it is assumed that \(F(t)=0\) for \(t<0\) and \(F(0)=p_{\mathrm{c}}\). We specify how \(F(t)\) behaves for small positive \(t\).

1. Suppose there are constants \(C,\alpha,h>0\) such that \(F(t)=p_{\mathrm{c}}+Ct^{\alpha}\) for all \(t\in[0,h]\). Then \(F\) satisfies (1.8).
2. Any distribution whose support has a gap at \(0\) (i.e. \(F(h)=p_{\mathrm{c}}\) for some \(h>0\)) satisfies (1.9) because the ratio is \(1\) for all \(n\). In particular, the Bernoulli distribution (1.4) satisfies (1.9).
3. Suppose there are constants \(C,\alpha,h>0\) such that \(F(t)=p_{\mathrm{c}}+Ce^{-t^{-\alpha}}\) for all \(t\in(0,h]\). Then \(F\) satisfies (1.9).
4. Not all distributions satisfy one of the conditions. For example, since \(p_{R}\searrow p_{\mathrm{c}}\) as \(R\to\infty\), there exists a distribution \(F\) with \(F(0)=p_{\mathrm{c}}\) and \(F^{-1}(p_{2^{n}})=e^{-\sqrt{n}}\) for every \(n\geq 1\). Such a distribution satisfies neither (1.8) nor (1.9).

The proof that these examples have the stated properties is given in Section 4.1.

Figure 2. An illustration of conditions (1.8) and (1.9). _Left:_ Three distribution functions that behave near \(t=0\) as \(1/2+t^{1/10}\), \(1/2+t\), and \(1/2+t^{10}\). All three distributions satisfy (1.8). _Right:_ A distribution function that is constant on \([0,h]\) for some \(h>0\). Any such distribution satisfies (1.9).

### Three-arm heuristic and organization of the paper

The appearance of \(\pi_{3}(R)\) in (1.5), (1.6), and (1.10) can be intuitively explained by the following heuristic. From every edge \(e\) in a geodesic \(\gamma\) between two sets, the geodesic itself provides two paths: one to the starting set and another to the ending set. These subpaths of \(\gamma\) consist mostly of open edges, making it plausible that they resemble open arms. By duality, there is also a dual path \(\zeta\) from the starting set to the ending set that consists mostly of closed edges. We show that if \(\gamma\) is chosen "as close as possible" to \(\zeta\), then for each edge \(e\) in \(\gamma\), there must exist a closed path from (a dual neighbor of) \(e\) to \(\zeta\). Joining this closed path with \(\zeta\) itself, we obtain a path resembling a closed arm. Hence \(e\) resembles an edge admitting a 3-arm event.

Carrying out this argument has two main challenges.
The first is to overcome the obvious issue that all three "arms" described above have some defect edges of the wrong type. This is dealt with in two essentially independent ways. In the Bernoulli case, Section 3 employs a patching argument that stitches together arm events on different scales in exchange for only constant factors in front of probabilities. The main technical devices are found in the proof of Lemma 3.10. For more general edge-weight distributions, Section 4 provides a quite different approach. Instead of avoiding the defects by patching as before, we allow our three arms to actually use the defects. Each defect incurs a small probabilistic penalty thanks to a result of [21] (this is the reason for the additional factor of \(R^{\varepsilon}\) in (1.10)). In order for these penalties to not accumulate too much, we must control the number and location of closed edges used by a geodesic (Sections 4.3 and 4.5), which in turn requires some understanding of the passage time across annuli (Section 4.2). These intermediate results are interesting in their own right, as they shed light on the structure of geodesics in critical FPP.

The second main challenge is to supply all of these arguments with rigorous topological constructions. The paths involved in arm events are arrived at by particular interactions of the geodesic with open and closed circuits. Proving these interactions do occur requires a careful treatment of the relevant topological objects. So as to not distract from the probabilistic arguments, we state the requisite definitions in Section 2 and postpone the finer topological details until Sections 5, 6, and 7. Section 5 recalls a result from [13] regarding percolation on planar graphs, and then collects various consequences for bond percolation on \(\mathbb{Z}^{2}\). For instance, a key fact is that a bounded cluster of open edges is enclosed by a closed dual circuit (Lemma 5.6). This and other results in Section 5 are well-known but not conveniently quotable from the literature, hence the inclusion of this section for completeness. Section 6 provides some general facts about Jordan curves. Finally, Section 7 constructs the desired geodesic. The outcome is summarized in Proposition 2.5, which gives a circuit-based description of this distinguished geodesic in any critical FPP model on \(\mathbb{Z}^{2}\). This and other topological results we develop are quite general and could be useful for future studies. It should be noted that our construction performs several modifications of the percolation environment (in order to use the separation result following from [13]) that are similar to techniques used in [16] on the triangular lattice. In that setting, topological considerations are somewhat simpler because there is no auxiliary dual lattice. We work on the square lattice to highlight the general applicability of our approach, as our methods work just as well on the triangular lattice.

### Related literature and methodology

The study of the shortest geodesic in critical FPP is similar to the concept of chemical distance.
The chemical distance \(\operatorname{dist}_{\mathrm{c}}(x,y)\) between \(x,y\in\mathbb{Z}^{2}\) is the minimum number of edges in an open path starting at \(x\) and ending at \(y\). If no such open path exists, the chemical distance is considered to be infinite. A natural question is how the chemical distance scales with the Euclidean distance. For instance, if we condition on the event \(\{x\leftrightarrow y\}\), then how does \(\mathbb{E}[\operatorname{dist}_{\mathrm{c}}(x,y)\,|\,x\leftrightarrow y]\) depend on \(\|x-y\|\)? It is generally believed that there is some \(\beta>1\) such that

\[\mathbb{E}[\operatorname{dist}_{\mathrm{c}}(x,y)\,|\,x\leftrightarrow y]\asymp\|x-y\|^{\beta}.\]

Establishing the existence of \(\beta\), let alone computing its value, remains a very challenging open problem. In fact, it was posited in [23] that even on the triangular lattice, the value of \(\beta\) is not obtainable through SLE methods. Currently, there are not even many non-rigorous arguments for the value of \(\beta\). Nevertheless, numerical results [10, 11, 30] have suggested that \(\beta\approx 1.13\).

Another version of the chemical distance question asks about the minimal length of an open left-right crossing of the box \(B_{R}\), conditional on the event \(\mathsf{C}_{R}\) that such a crossing exists. It turns out that a more amenable quantity is the length \(L_{R}\) of the _lowest_ such crossing. This is because the lowest crossing consists exclusively of \(3\)-arm edges. Using this fact, Morrow and Zhang [20] showed (for critical site percolation on the triangular lattice) that

\[\mathbb{E}[L_{R}\,|\,\mathsf{C}_{R}]=R^{4/3+o(1)}=R^{2+o(1)}\pi_{3}(R). \tag{1.11}\]

Since the length of the lowest crossing is an upper bound on the length of the shortest crossing, this result effectively provides an upper bound on the exponent \(\beta\), namely \(\beta\leq 4/3\). Our arguments in the Bernoulli case (1.4) can be viewed as a generalization of this approach, where "lowest" is replaced by "closest to a certain closed dual path". More recently, the upper bound was improved to \(\beta<4/3\) in [4, 5], by creating shortcuts along the lowest crossing. This use of shortcuts has also been applied to give the same strict inequality for radial chemical distance [25], improving on [3].

### Open problems

We enumerate some open problems suggested by our work.

1. Can the result of Theorem 1.4 be generalized to apply to any critical FPP edge-weight distribution?
2. Does the rate of growth of geodesic length in critical FPP depend on the edge-weight distribution?
3. Find the exact order of growth of geodesic length in critical FPP. Does there exist \(s>0\) so that \(\mathcal{N}_{R}\asymp R^{1+s+o(1)}\)? The existence of such an exponent \(s\) governing the growth of geodesic length is unknown even on the triangular lattice, even though the arm exponents are known there.
4. Our results give length bounds for a specific choice of geodesic that is constructed in Proposition 2.5. Is \(R^{2}\pi_{3}(R)\) optimal for this geodesic, as in (1.11)? With a different choice of geodesic, can the ideas of [4, 5, 25] be adapted to FPP in order to replace the bound \(R^{2+\varepsilon}\pi_{3}(R)\) with \(R^{2-\delta}\pi_{3}(R)\) for some small \(\delta>0\)?
5. Related to Remark 1.3, [3] asks whether the expected chemical distance between two points is finite.
In our notation, this is the problem of determining whether, in the setting (1.4), we have \(\mathbb{E}\big[\mathcal{N}_{(0,0),(1,0)}\bigm{|}(0,0)\leftrightarrow(1,0)\big]<\infty\).

## 2. Topological preliminaries: circuits and associated regions

Critical FPP is intimately connected to the structure of open and closed circuits.

**Definition 2.1**.: A _circuit_ is a path of length at least \(4\) that starts and ends at the same vertex but is otherwise self-avoiding. An _open circuit_ is a circuit consisting of open primal edges. A _closed circuit_ is a circuit consisting of closed dual edges.

A circuit \(\mathcal{C}\) can be naturally identified with the Jordan curve its edges trace out in the plane; in particular, we can speak of its interior and exterior, and we will denote these two disjoint sets by \(\mathsf{int}(\mathcal{C})\) and \(\mathsf{ext}(\mathcal{C})\).

**Definition 2.2**.: We say a circuit \(\mathcal{C}\) _encloses_ a set of vertices \(\mathcal{A}\) (i.e. either \(\mathcal{A}\subseteq\mathbb{Z}^{2}\) or \(\mathcal{A}\subset\widehat{\mathbb{Z}}^{2}\)) if \(\mathcal{A}\subseteq\mathsf{int}(\mathcal{C})\). We say a circuit \(\mathcal{C}\) _encloses_ another circuit \(\mathcal{C}^{\prime}\) if \(\mathsf{int}(\mathcal{C}^{\prime})\subseteq\mathsf{int}(\mathcal{C})\).

We note that each circuit can also be considered as a collection of vertices. From this perspective, the two notions of enclosure are not the same, but for technical topological reasons, it is essential that we establish both definitions in the present paper. In all applications, it should be clear from context whether we are enclosing a circuit or enclosing a set of vertices. Notice that if the two circuits live on the same lattice, enclosure still allows them to intersect. So we will often impose _edge-disjointness_, meaning the two circuits have no common edges. A stronger condition is _vertex-disjointness_, which means the two circuits share no vertices.

Under the criticality assumption (1.1), it is well-known that the following event, which we shall call \(\Omega_{\infty}\), occurs with probability one.

**Definition 2.3**.: Let \(\Omega_{\infty}\) be the full-probability event on which, for every \(n\geq 1\), there exist both an open circuit and a closed circuit containing \(B_{n}\) in their interiors.

In the context of planar FPP, circuits are useful objects to consider for the simple reason that paths must cross them to go from their interior to their exterior. However, when we speak of paths going from one circuit to another, there is potential for confusion depending on which lattice--primal or dual--each object belongs to. Instead of exhausting the reader with clarifications every time, we simply rely on the conventions specified in Definition 2.4 to maintain precision.

**Definition 2.4**.: A primal path \(\gamma\) is _open_ if all its edges are open.

* If \(\mathcal{A}\subset\mathbb{Z}^{2}\), then we say \(\gamma\) starts (ends) at \(\mathcal{A}\) if its first (last) vertex is an element of \(\mathcal{A}\).
* If \(\mathcal{E}\) is a collection of primal edges, then we say \(\gamma\) starts (ends) at \(\mathcal{E}\) if its first (last) vertex is an endpoint of some element of \(\mathcal{E}\).
* If \(\widehat{\mathcal{E}}\) is a collection of dual edges, then we say \(\gamma\) starts (ends) at \(\widehat{\mathcal{E}}\) if its first (last) vertex is a dual neighbor of some element of \(\widehat{\mathcal{E}}\).

A dual path \(\zeta\) is _closed_ if all its edges are closed.
* If \(\mathcal{A}\subset\mathbb{Z}^{2}\), then we say \(\zeta\) starts (ends) at \(\mathcal{A}\) if its first (last) vertex is equal to \(x\pm(\frac{1}{2},\frac{1}{2})\) or \(x\pm(\frac{1}{2},-\frac{1}{2})\) for some \(x\in\mathcal{A}\).
* If \(\mathcal{E}\) is a collection of primal edges, then we say \(\zeta\) starts (ends) at \(\mathcal{E}\) if its first (last) vertex is a dual neighbor of some element of \(\mathcal{E}\).
* If \(\widehat{\mathcal{E}}\) is a collection of dual edges, then we say \(\zeta\) starts (ends) at \(\widehat{\mathcal{E}}\) if its first (last) vertex is an endpoint of some element of \(\widehat{\mathcal{E}}\).

To streamline the exposition, certain topological facts regarding circuits and paths are presented in Sections 5 and 6, near the end of the present paper.

### Constructing the geodesic

In the present paper, we are concerned with bounding the length of the shortest geodesic. This is achieved by obtaining an upper bound for the length of a specific, carefully constructed choice of geodesic. Proposition 2.5 below states the properties of the construction that we need. Rigorously constructing this geodesic requires technical topological care, which is done in Section 7. Here, we outline how this geodesic is constructed.

Let \(\mathcal{A}\) and \(\mathcal{B}\) be finite, connected, disjoint sets of vertices in \(\mathbb{Z}^{2}\). We first construct a sequence of edge-disjoint circuits, each of which encloses either \(\mathcal{A}\) or \(\mathcal{B}\). Let \(\mathcal{I}_{1}\) be the innermost open circuit enclosing \(\mathcal{A}\) such that \(\mathcal{B}\) is contained in \(\mathsf{ext}(\mathcal{I}_{1})\) (if such a circuit exists). Next, let \(\mathcal{I}_{2}\) be the innermost open circuit enclosing, and edge-disjoint from, \(\mathcal{I}_{1}\) such that \(\mathcal{B}\) is contained in \(\mathsf{ext}(\mathcal{I}_{2})\). Continue this way, obtaining a finite sequence \(\mathcal{I}_{1},\ldots,\mathcal{I}_{L}\) of nested open circuits, until we can no longer create any more such circuits. Then, let \(\mathcal{I}_{L+1}\) be the outermost open circuit, edge-disjoint from \(\mathcal{I}_{L}\), that encloses \(\mathcal{B}\) and keeps \(\mathsf{int}(\mathcal{I}_{L})\) in its exterior. From there, let \(\mathcal{I}_{L+2}\) be the outermost open circuit enclosing \(\mathcal{B}\) that is enclosed by, and edge-disjoint from, \(\mathcal{I}_{L+1}\). Continue this way, obtaining a nested sequence of open circuits \(\mathcal{I}_{L+1},\mathcal{I}_{L+2},\ldots,\mathcal{I}_{P}\), each enclosing \(\mathcal{B}\). Section 7 proves that this sequence is well-defined on the full-probability event \(\Omega_{\infty}\).

With this sequence constructed, there exists a dual path \(\zeta\) from \(\mathcal{A}\) to \(\mathcal{B}\) that contains exactly \(P\) open edges. These open edges are seen exactly when the path crosses each of the open circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{P}\). We start with any geodesic \(\gamma\) between \(\mathcal{A}\) and \(\mathcal{B}\). We then argue that for any closed edge \(e\) along the geodesic, the dual edge \(e^{\star}\) belongs to a closed circuit that either encloses \(\mathcal{A}\) and keeps \(\mathcal{B}\) in its exterior, or encloses \(\mathcal{B}\) and keeps \(\mathcal{A}\) in its exterior. By rerouting the paths \(\gamma\) and \(\zeta\) around the open and closed circuits if needed, we can choose \(\zeta\) and \(\gamma\) to be disjoint.
Between any two closed edges on \(\gamma\), we may choose an alternate path for \(\gamma\) consisting of open edges. We choose \(\gamma\) to be the closest geodesic (in the appropriate sense) to \(\zeta\). By this choice, for each open edge \(e\in\gamma\), there exists a closed dual path from \(e\) to \(\zeta\). Then, the edge \(e\) satisfies a \(3\)-arm event: one closed arm to \(\zeta\) and two open arms (by following the geodesic in each direction to the next closed edges). Quantitatively controlling the size of this arm event is done in the next section. The properties of this construction are summarized in the following proposition, whose proof may be found in Section 7.

**Proposition 2.5**.: _Let \(\mathcal{A}\) and \(\mathcal{B}\) be finite disjoint connected subsets of \(\mathbb{Z}^{2}\). On the event \(\Omega_{\infty}\), there exists a (possibly empty) sequence of edge-disjoint open circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{L},\mathcal{I}_{L+1},\ldots,\mathcal{I}_{P}\) satisfying the following:_

* (i) \(\mathcal{A}\subseteq\mathsf{int}(\mathcal{I}_{1})\subseteq\mathsf{int}(\mathcal{I}_{2})\subseteq\cdots\subseteq\mathsf{int}(\mathcal{I}_{L})\subseteq\mathsf{int}(\mathcal{I}_{L})\cup\mathcal{I}_{L}\subseteq\mathcal{B}^{\mathrm{c}}\).
* (ii) \(\mathsf{int}(\mathcal{I}_{L})\cap\mathsf{int}(\mathcal{I}_{L+1})=\varnothing\).
* (iii) \(\mathcal{A}^{\mathrm{c}}\supseteq\mathsf{int}(\mathcal{I}_{L+1})\cup\mathcal{I}_{L+1}\supseteq\mathsf{int}(\mathcal{I}_{L+1})\supseteq\mathsf{int}(\mathcal{I}_{L+2})\supseteq\cdots\supseteq\mathsf{int}(\mathcal{I}_{P})\supseteq\mathcal{B}\).
* (iv) For \(j\in\{1,\ldots,P-2\}\), the circuits \(\mathcal{I}_{j}\) and \(\mathcal{I}_{j+2}\) are vertex-disjoint.
* (v) For \(j\in\{1,\ldots,P\}\) and every \(e\in\mathcal{I}_{j}\), there exists a dual path \(\zeta_{e}\) from \(e\) to \(\mathcal{A}\) that has exactly \(j-1\) open edges, one crossing each of the circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{j-1}\).

_Furthermore, there exists a geodesic \(\gamma\) from \(\mathcal{A}\) to \(\mathcal{B}\) and a disjoint dual path \(\zeta\) from \(\mathcal{A}\) to \(\mathcal{B}\) satisfying the following properties (here we note that the paths \(\zeta_{e}\) in Item (v) are not necessarily disjoint from \(\gamma\)):_

* (vi) \(\zeta\) has exactly \(P\) open edges, one crossing each of the circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{L},\mathcal{I}_{L+1},\ldots,\mathcal{I}_{P}\).
* (vii) For each circuit \(\mathcal{I}_{j}\), let \(x\) and \(y\) be the first and last vertices from \(\gamma\) on that circuit. Then the portion of \(\gamma\) between \(x\) and \(y\) lies entirely on \(\mathcal{I}_{j}\). If \(\mathcal{I}_{j}\) and \(\mathcal{I}_{j+1}\) are not vertex-disjoint, then the path \(\gamma\) crosses from \(\mathcal{I}_{j}\) to \(\mathcal{I}_{j+1}\) via a common vertex, and \(\gamma\) contains no edges lying strictly between the circuits.
* (viii) If \(\gamma_{R}\) is the geodesic corresponding to \(\mathcal{A}=\{0\}\) and \(\mathcal{B}_{R}=\partial B_{R}\) for \(R\geq 0\), then the sequence \(\gamma_{R}\) can be chosen so that the portion of \(\{\gamma_{R}\}_{R\in\mathbb{Z}}\) between successive open circuits does not depend on \(R\) (while the portion between the last circuit and \(\partial B_{R}\) does depend on \(R\)).
* (ix) From every open edge \(e\in\gamma\) with \(e\notin\mathcal{I}_{1}\cup\cdots\cup\mathcal{I}_{L}\cup\mathcal{I}_{L+1}\cup\cdots\cup\mathcal{I}_{P}\), there exists a closed path from \(e\) to \(\zeta\).
(x) The dual of each closed edge along \(\gamma\) belongs to a closed circuit \(\mathcal{U}\) that either contains \(\mathcal{A}\) in its interior and \(\mathcal{B}\) in its exterior, or vice versa. The circuit \(\mathcal{U}\) does not contain the dual of any other edge along \(\gamma\).
(xi) With \(\{0,1\}\)-valued edge-weights, the closed circuits \(\mathcal{U}\) from Item (x) can be chosen to form an edge-disjoint collection \(\mathcal{U}_{1},\ldots,\mathcal{U}_{V}\). The union of the circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{P}\) and the circuits \(\mathcal{U}_{1},\ldots,\mathcal{U}_{V}\) forms a sequence \(\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\), which is ordered so that, for some index \(W\in\{0,\ldots,K\}\), \(\mathsf{int}(\mathcal{C}_{W})\cap\mathsf{int}(\mathcal{C}_{W+1})=\varnothing\) (with the convention \(\mathsf{int}(\mathcal{C}_{0})=\mathsf{int}(\mathcal{C}_{K+1})=\varnothing\)), and we have the following inclusions
\[\mathcal{A}\subseteq\mathsf{int}(\mathcal{C}_{1})\subseteq\cdots\subseteq\mathsf{int}(\mathcal{C}_{W}),\quad\text{and}\quad\mathsf{int}(\mathcal{C}_{W+1})\supseteq\cdots\supseteq\mathsf{int}(\mathcal{C}_{K})\supseteq\mathcal{B}.\]

Figure 3 shows an example of two circuits \(\mathcal{I}_{i},\mathcal{I}_{i+1}\) that are not vertex-disjoint. The only way this can happen is if the two circuits touch at a corner. Since each vertex of \(\mathbb{Z}^{2}\) is incident to only four edges, \(\mathcal{I}_{i}\) and \(\mathcal{I}_{i+2}\) must always be vertex-disjoint, as stated in Item (iv).

## 3. Proofs for Bernoulli edge-weights

In this section, we prove Theorems 1.1 and 1.2, so we assume the edge-weights have the Bernoulli\((\frac{1}{2})\) distribution from (1.4). In this case, all closed edges have weight \(1\), and so the passage time along any path is simply the number of closed edges it includes. Hence the passage time between two sets \(\mathcal{A},\mathcal{B}\subset\mathbb{Z}^{2}\) is the minimal number of closed edges that are encountered by a path starting at \(\mathcal{A}\) and ending at \(\mathcal{B}\). By Proposition 2.5(x), that number of closed edges is also the number of edge-disjoint closed circuits \(\mathcal{U}_{1},\ldots,\mathcal{U}_{V}\) that separate \(\mathcal{A}\) and \(\mathcal{B}\).

Throughout our arguments, the symbols \(c\) and \(C\) will denote positive constants that are chosen sufficiently small or sufficiently large, respectively. Their values may change from line to line, but they will never depend on the radii of arm events.

### Arm events along the modified geodesic

We start by defining the open and closed arms which form the arm events. For an edge \(e\), let \(x_{e}\) denote the endpoint of \(e\) which has the smaller norm, and define \(B_{b}(e)=x_{e}+B_{b}\). For integers \(a<b\), define
\[A[a,b)(e)=\{x\in\mathbb{R}^{2}:a\leq|x-x_{e}|_{\infty}<b\}.\]
Fix two boxes \(B_{a}\) and \(B_{b}\), where \(a,b\) are integers with \(1\leq a\leq b\). Recalling the definitions of "start" and "end" from Definition 2.4, we define:
* a (primal) open arm between \(\partial B_{a}\) and \(\partial B_{b}\) to be a primal open path inside \(A[a,b)\) (except for the ending point) that starts on \(\partial B_{a}\) and ends on \(\partial B_{b}\), and
* a (dual) closed arm between \(\partial B_{a}\) and \(\partial B_{b}\) to be a dual closed path inside \(A[a,b)\) that starts at \(\partial B_{a}\) and ends at \(\partial B_{b}\).
Note that \(\partial B_{a}\) and \(\partial B_{b}\) are both on the primal lattice, so we invoke Definition 2.4 here.
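For readers who want a computational picture, the following rough Monte Carlo sketch (our own; it uses a simplified vertex-path convention rather than the precise edge and dual conventions above) estimates the simplest such quantity, the probability of a single open arm crossing an annulus:

```python
import itertools
import random
from collections import deque

def has_open_arm(a, b, rng):
    """One trial: is there an open primal path, within the annulus
    a <= |x|_inf < b, from the inner boundary |x|_inf = a out to
    |x|_inf = b - 1?  Edges are open independently with probability 1/2."""
    sup = lambda v: max(abs(v[0]), abs(v[1]))
    verts = [v for v in itertools.product(range(-b, b + 1), repeat=2)
             if a <= sup(v) < b]
    vset = set(verts)
    is_open = {}

    def open_edge(u, v):
        key = frozenset((u, v))
        if key not in is_open:          # sample each edge once per trial
            is_open[key] = rng.random() < 0.5
        return is_open[key]

    seen = {v for v in verts if sup(v) == a}
    dq = deque(seen)
    while dq:
        x, y = v = dq.popleft()
        for u in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if u in vset and u not in seen and open_edge(v, u):
                seen.add(u)
                dq.append(u)
    return any(sup(v) == b - 1 for v in seen)

rng = random.Random(1)
trials = 2000
hits = sum(has_open_arm(2, 16, rng) for _ in range(trials))
print(hits / trials)  # rough Monte Carlo estimate of a one-arm probability
```

The multi-arm probabilities defined next can be estimated by the same breadth-first-search idea, at the cost of more bookkeeping.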
For \(k\in\mathbb{Z}_{>0}\), a \(k\)-arm event between \(\partial B_{a}\) and \(\partial B_{b}\) is the event that there exist \(k\) open or closed arms between \(\partial B_{a}\) and \(\partial B_{b}\). If the arms are all open or all closed, we call this a _monochromatic \(k\)-arm event_. Otherwise, it will be called a _polychromatic \(k\)-arm event_. We will also use polychromatic \(3\)-arm events starting from a single edge instead of a box. Fix the edge \(e=\{(0,0),(1,0)\}\); recall that a \(3\)-arm event starting from \(e\) up to \(\partial B_{b}\) means there exist two open arms that start at \((0,0)\) and \((1,0)\) and end at \(\partial B_{b}\), and a closed arm inside \(B_{b}\) starting at \((\frac{1}{2},\frac{1}{2})\) or \((\frac{1}{2},-\frac{1}{2})\) and ending at \(\partial B_{b}\). Note that the above definitions immediately extend to the translates \(x+B_{a}\), \(x+B_{b}\) and to the edges \(\{x,x+(1,0)\}\) and \(\{x,x+(0,1)\}\) for any \(x\in\mathbb{Z}^{2}\).

Next, let us introduce notation for the probabilities of arm events. For \(a<b\), let \(\pi^{\prime}_{k}(a,b)\) and \(\pi^{\prime\prime}_{k}(a,b)\) denote the probability of a \(k\)-arm event between \(\partial B_{a}\) and \(\partial B_{b}\) where all the arms are open or closed, respectively. The probability of the polychromatic \(k\)-arm event between \(\partial B_{a}\) and \(\partial B_{b}\) with \(k-1\) open arms and \(1\) closed arm will be denoted as \(\pi_{k}(a,b)\). We only define this single probability for polychromatic \(k\)-arm events because the probabilities for different polychromatic color sequences are comparable up to absolute constants [22]. Lastly, the probability of the polychromatic \(3\)-arm event described above, from the edge \(e\) to \(\partial B_{b}\), will be denoted as \(\pi_{3}(b)\).

We start by stating three propositions for the arm events. The first proposition is known as quasi-multiplicativity, which is widely used in near-critical percolation.

Figure 3. Two primal circuits (one in solid, the other dashed) which are edge-disjoint but not vertex-disjoint, and such that one encloses the other.

**Proposition 3.1**.: _[_21_, Prop. 17]_ _Fix \(j\geq 1\) and let \(n_{0}=n_{0}(j)\) be the smallest integer such that \(\pi_{j}(n_{0},n_{0}+1)>0\). Then there exists \(C=C(j)>0\) such that for all \(n_{0}(j)\leq n_{1}<n_{2}<n_{3}\),_
\[\pi_{j}(n_{1},n_{3})\leq\pi_{j}(n_{1},n_{2})\pi_{j}(n_{2},n_{3})\leq C\pi_{j}(n_{1},n_{3}). \tag{3.1a}\]
_In the case \(j=3\) we have, for \(1\leq n_{1}\leq n_{2}\),_
\[\pi_{3}(n_{2})\leq\pi_{3}(n_{1})\pi_{3}(n_{1},n_{2})\leq C\pi_{3}(n_{2}). \tag{3.1b}\]

**Remark 3.2**.: It follows from the Russo-Seymour-Welsh (RSW) theorem [13, Chap. 6] that for any fixed constant \(\alpha>1\), there is some constant \(c=c(\alpha,j)>0\) such that \(\pi_{j}(n,\lceil\alpha n\rceil)\geq c\) for all \(n\geq 1\). Combining this observation with quasi-multiplicativity, one obtains
\[\pi_{j}(1,n)\leq\pi_{j}(1,n)\frac{\pi_{j}(n,\lceil\alpha n\rceil)}{c}\overset{(3.1a)}{\leq}C\pi_{j}(1,\lceil\alpha n\rceil). \tag{3.2}\]

The second result is an upper bound on the \(1\)-arm probability. It follows from the RSW theorem.

**Proposition 3.3**.: _[_13_, Lem. 8.5]_ _There exists \(c>0\) such that_
\[\pi_{1}(a,b)\leq(b/a)^{-c}\quad\text{for all }b\geq a\geq 1. \tag{3.3}\]

Meanwhile, the polychromatic \(5\)-arm exponent is known.

**Proposition 3.4**.: _[_21_, Thm.
24(3)]_ _There exist constants \(c,C>0\) such that_
\[c(b/a)^{-2}\leq\pi_{5}(a,b)\leq C(b/a)^{-2}\quad\text{for all }b\geq a\geq 1. \tag{3.4}\]

**Remark 3.5**.: It should be noted that [21] works on the triangular lattice instead of the square lattice. On the triangular lattice, the polychromatic arm exponents are known exactly: \(\pi_{1}(1,R)=R^{-5/48+o(1)}\) [19], while \(\pi_{j}(1,R)=R^{-(j^{2}-1)/12+o(1)}\) for \(j\geq 2\) [24]. But unlike the derivations of the other arm exponents on the triangular lattice that use conformal invariance, the proof of [21, Thm. 24(3)] works on the square lattice as well; see the discussion above [21, Thm. 24]. Alternatively, one can use the argument outlined in [27, Chap. 1] together with the equivalence of polychromatic arm probabilities [22].

**Remark 3.6**.: Propositions 3.3 and 3.4 imply that for \(j\in\{2,3,4\}\), there exists \(c>0\) such that
\[c(b/a)^{-(2-c)}\leq\pi_{j}(a,b)\leq(b/a)^{-c}\quad\text{for all }b\geq a\geq 1.\]
Indeed, the upper bound is immediate from (3.3) since \(\pi_{j}(a,b)\leq\pi_{1}(a,b)\). For the lower bound, first use the Van den Berg-Kesten (BK) inequality to determine that \(\pi_{5}(a,b)\leq\pi_{1}(a,b)\pi_{4}(a,b)\), or equivalently \(\pi_{4}(a,b)\geq\pi_{5}(a,b)/\pi_{1}(a,b)\). Then use (3.3) and (3.4) to conclude \(\pi_{4}(a,b)\geq c(b/a)^{-(2-c)}\). The same lower bound follows for \(j=2\) and \(j=3\) since then \(\pi_{j}(a,b)\geq\pi_{4}(a,b)\). In the case \(j=4\), the upper bound can be improved to \((b/a)^{-(1+c)}\) for some \(c>0\) [26], but we will not need this.

**Lemma 3.7**.: _There exists \(C>0\) such that for all \(L\in\mathbb{Z}_{>0}\),_
\[\sum_{i=1}^{L}L\pi_{3}(i)\leq CL^{2}\pi_{3}(L).\]

Proof.: By quasi-multiplicativity (3.1b),
\[\sum_{i=1}^{L}L\pi_{3}(i)\leq CL\pi_{3}(L)\sum_{i=1}^{L}\pi_{3}(i,L)^{-1}.\]
By [4, Lemma 3.1] (in the first inequality below), there exists \(\alpha\in(0,1)\) such that
\[\sum_{i=1}^{L}\pi_{3}(i,L)^{-1}\leq C\sum_{i=1}^{L}(i/L)^{\alpha-1}\leq C\frac{\int_{1}^{L}x^{\alpha-1}\,dx}{L^{\alpha-1}}\leq CL.\qed\]

Our final preparatory result gives an upper bound for the probability of a sequence of arm events by a single polychromatic \(3\)-arm event.

**Proposition 3.8**.: _Fix any \(L\in\mathbb{Z}_{>0}\). Let \(k_{1},\ldots,k_{L}\) be any sequence of integers greater than or equal to \(4\). There exists a constant \(C>0\), depending only on \(L\), such that for all \(M\in\mathbb{Z}_{>0}\),_
\[\sum_{j_{1}=0}^{\lfloor\log_{2}M\rfloor}\sum_{j_{2}=j_{1}+1}^{\lfloor\log_{2}M\rfloor}\cdots\sum_{j_{L}=j_{L-1}+1}^{\lfloor\log_{2}M\rfloor}\pi_{3}(2^{j_{1}})\pi_{k_{1}}(2^{j_{1}+1},2^{j_{2}})\ldots\pi_{k_{L}}(2^{j_{L}+1},M)\leq C\pi_{3}(M).\]
_Furthermore, if \(k_{1},\ldots,k_{L-1}\geq 4\) and \(k_{L}=3\), then for each \(0<\varepsilon<1\), there exists a positive constant \(C_{\varepsilon}\) such that_
\[\sum_{j_{1}=0}^{\lfloor\log_{2}M\rfloor}\sum_{j_{2}=j_{1}+1}^{\lfloor\log_{2}M\rfloor}\cdots\sum_{j_{L}=j_{L-1}+1}^{\lfloor\log_{2}M\rfloor}\pi_{3}(2^{j_{1}})\pi_{k_{1}}(2^{j_{1}+1},2^{j_{2}})\ldots\pi_{k_{L-1}}(2^{j_{L-1}+1},2^{j_{L}})\pi_{3}(2^{j_{L}+1},M)\Big{(}\frac{2^{j_{L}+1}}{M}\Big{)}^{\varepsilon}\leq C_{\varepsilon}\pi_{3}(M).\]

Proof.: We prove the first statement, as the second statement essentially follows from the same proof.
By the BK inequality, \(\pi_{4}(k)\leq\pi_{3}(k)\pi_{1}(k)\). Then, using quasi-multiplicativity (3.1) and Proposition 3.3 (whose exponent produces the \(\varepsilon>0\) below), we have
\[\begin{split}&\sum_{j_{1}=0}^{\lfloor\log_{2}M\rfloor}\sum_{j_{2}=j_{1}+1}^{\lfloor\log_{2}M\rfloor}\cdots\sum_{j_{L}=j_{L-1}+1}^{\lfloor\log_{2}M\rfloor}\pi_{3}(2^{j_{1}})\pi_{k_{1}}(2^{j_{1}+1},2^{j_{2}})\cdots\pi_{k_{L}}(2^{j_{L}+1},M)\\ &\leq\sum_{j_{1}=0}^{\lfloor\log_{2}M\rfloor}\sum_{j_{2}=j_{1}+1}^{\lfloor\log_{2}M\rfloor}\cdots\sum_{j_{L}=j_{L-1}+1}^{\lfloor\log_{2}M\rfloor}\pi_{3}(2^{j_{1}})\pi_{4}(2^{j_{1}+1},2^{j_{2}})\cdots\pi_{4}(2^{j_{L}+1},M)\\ &\leq\pi_{3}(M)\sum_{j_{1}=0}^{\lfloor\log_{2}M\rfloor}\sum_{j_{2}=j_{1}+1}^{\lfloor\log_{2}M\rfloor}\cdots\sum_{j_{L}=j_{L-1}+1}^{\lfloor\log_{2}M\rfloor}\pi_{1}(2^{j_{1}+1},2^{j_{2}})\cdots\pi_{1}(2^{j_{L}+1},M)\\ &\leq C\pi_{3}(M)\sum_{j_{1}=0}^{\lfloor\log_{2}M\rfloor}\sum_{j_{2}=j_{1}+1}^{\lfloor\log_{2}M\rfloor}\cdots\sum_{j_{L}=j_{L-1}+1}^{\lfloor\log_{2}M\rfloor}\Big{(}\frac{2^{j_{1}+1}}{M}\Big{)}^{\varepsilon}\\ &\leq C2^{L}\pi_{3}(M)\sum_{j_{1}=0}^{\lfloor\log_{2}M\rfloor}(\lfloor\log_{2}M\rfloor-j_{1}+1)^{L-1}\Big{(}\frac{2^{j_{1}+1}}{M}\Big{)}^{\varepsilon}\\ &\leq C2^{L}\pi_{3}(M)\int_{0}^{\lfloor\log_{2}M\rfloor}(\lfloor\log_{2}M\rfloor-x+1)^{L-1}\Big{(}\frac{2^{x+1}}{M}\Big{)}^{\varepsilon}\,dx\quad(\text{then substitute }x=\log_{2}M-z-1)\\ &=C2^{L}\pi_{3}(M)\int_{-1}^{\lfloor\log_{2}M\rfloor-1}(z+2)^{L-1}\Big{(}\frac{1}{2^{z}}\Big{)}^{\varepsilon}\,dz\\ &\leq C2^{L}\pi_{3}(M)\int_{-1}^{\infty}(z+2)^{L-1}\Big{(}\frac{1}{2^{z}}\Big{)}^{\varepsilon}\,dz\leq C_{L}\pi_{3}(M).\qed\end{split}\]

We state one last needed result from [25]. Recall that \(\pi^{\prime}\) indicates an arm event where all arms are open, and \(\pi^{\prime\prime}\) indicates an arm event where all arms are closed.

**Proposition 3.9**.: _[_25_, Lem. 2.3]_ _On the square lattice \(\mathbb{Z}^{2}\), there exist \(\varepsilon>0\) and a large positive integer \(r_{0}\) such that for any \(1\leq R_{1}<R_{2}\),_
\[\pi^{\prime}_{r_{0}}(R_{1},R_{2})\leq\pi_{3}(R_{1},R_{2})\Big{(}\frac{R_{1}}{R_{2}}\Big{)}^{\varepsilon}.\]
_By duality, it also holds that_
\[\pi^{\prime\prime}_{r_{0}}(R_{1},R_{2})\leq\pi_{3}(R_{1},R_{2})\Big{(}\frac{R_{1}}{R_{2}}\Big{)}^{\varepsilon}.\]

Notice that the \(3\)-arm event in (1.5) and (1.6) requires arms that reach all the way to distance \(R\). But the arms in Proposition 2.5 only extend as far as the nearby circuits. To actually furnish the quantity \(\pi_{3}(R)\) from these shorter arms, we define a larger family of arm events between annuli, and then make a patching argument. The goal is to establish the following theorem.

**Theorem 3.10**.: _Assume (1.4). Let \(\mathcal{A}\) and \(\mathcal{B}\) be finite, connected, disjoint subsets of \(\mathbb{Z}^{2}\). Let \(\gamma\) be the geodesic between \(\mathcal{A}\) and \(\mathcal{B}\) defined in Proposition 2.5. There exists a positive constant \(C\) such that for each edge \(e\in E(\mathbb{Z}^{2})\),_
\[\mathbb{P}(e\in\gamma)\leq C\pi_{3}(M),\]
_where \(M=\min\{\operatorname{dist}(e,\mathcal{A}),\operatorname{dist}(e,\mathcal{B})\}\)._

To prove this inequality, we split the event \(\{e\in\gamma\}\) into various disjoint events listed below in Lemma 3.11. First define the event
\[D_{0}=\{\text{the set of circuits }\mathcal{C}_{1},\dots,\mathcal{C}_{K}\text{ in Proposition 2.5(xi) is nonempty}\}.\]

**Lemma 3.11**.: _Assume (1.4).
Let \(M=\min\{\operatorname{dist}(e,\mathcal{A}),\operatorname{dist}(e,\mathcal{B})\}\). There exists an absolute positive constant \(C\) (independent of \(\operatorname{dist}(\mathcal{A},\mathcal{B})\), \(|\mathcal{A}|\), and \(|\mathcal{B}|\)) such that_
\[\mathbb{P}(\{e\in\gamma\}\cap D_{0}^{\mathrm{c}})\leq C\pi_{3}(M), \tag{3.5}\]
\[\mathbb{P}(\{e\in\gamma\}\cap\{e\text{ is between two open circuits }\mathcal{C}_{k},\,\mathcal{C}_{k+1}\}\cap D_{0})\leq C\pi_{3}(M), \tag{3.6}\]
\[\mathbb{P}(\{e\in\gamma\}\cap\{e\in\cup_{k=1}^{K}\mathcal{C}_{k}\}\cap\{\tau_{e}=0\}\cap D_{0})\leq C\pi_{3}(M), \tag{3.7}\]
\[\begin{split}&\mathbb{P}(\{e\in\gamma\}\cap\{\tau_{e}=1\}\cap D_{0})\leq C\pi_{3}(M),\\ &\mathbb{P}(\{e\in\gamma\}\cap\{e\text{ is between two closed circuits }\mathcal{C}_{k},\,\mathcal{C}_{k+1}\}\cap D_{0})\leq C\pi_{3}(M),\\ &\mathbb{P}(\{e\in\gamma\}\cap\{e\text{ is between two circuits }\mathcal{C}_{k},\,\mathcal{C}_{k+1}\text{, one open, one closed}\}\cap D_{0})\leq C\pi_{3}(M),\end{split} \tag{3.8}\]
\[\begin{split}&\mathbb{P}(\{e\in\gamma\}\cap\{e\text{ is between }\mathcal{A},\,\mathcal{C}_{1}\}\cap D_{0})\leq C\pi_{3}(M),\\ &\mathbb{P}(\{e\in\gamma\}\cap\{e\text{ is between }\mathcal{C}_{K},\,\mathcal{B}\}\cap D_{0})\leq C\pi_{3}(M).\end{split} \tag{3.9}\]

In the remainder of this subsection, we prove Lemma 3.11.

Proof of Lemma 3.11.: By quasi-multiplicativity (3.1b), we have \(\pi_{3}(M)\asymp\pi_{3}(1,M)\), and
\[\pi_{3}(1,2^{\lfloor\log_{2}M\rfloor})\leq C\pi_{3}(1,M).\]
Thus, it suffices to bound the probabilities of the events in our lemma by a constant times \(\pi_{3}(1,2^{\lfloor\log_{2}M\rfloor})\) instead of \(\pi_{3}(M)\).

We can start by observing that (3.5) is a direct consequence of Proposition 2.5: on the event where \(e\in\gamma\) and the sequence \(\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\) is empty, Proposition 2.5(x) implies that we can obtain two open arms to distance \(2^{\lfloor\log_{2}M\rfloor}\) by following the geodesic from \(e\) in both directions until reaching \(\mathcal{A}\) and \(\mathcal{B}\), respectively. On the other hand, the closed arm to distance \(2^{\lfloor\log_{2}M\rfloor}\) can be obtained using Item (ix) of the same proposition. Specifically, we take a closed dual path from \(e\) to \(\zeta\) and then follow the closed dual path \(\zeta\) to either \(\mathcal{A}\) or \(\mathcal{B}\).

Next, we show (3.6). For an illustration of the argument, see Figure 4. Take an open edge \(e\in\gamma\) lying between two consecutive open circuits \(\mathcal{C}_{k},\mathcal{C}_{k+1}\). Since \(e\) lies strictly between these circuits, Proposition 2.5(vii) implies that \(\mathcal{C}_{k}\) and \(\mathcal{C}_{k+1}\) are vertex-disjoint. Two arms to distance \(2^{\lfloor\log_{2}M\rfloor}\) are obtained by following \(\gamma\) in different directions until reaching \(\mathcal{C}_{k},\mathcal{C}_{k+1}\), respectively, then following those circuits around the origin. Proposition 2.5(xi) guarantees that all closed edges on \(\gamma\) cross one of the closed circuits in the sequence \(\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\), so since \(\mathcal{C}_{k}\) and \(\mathcal{C}_{k+1}\) are open, these two arms to distance \(2^{\lfloor\log_{2}M\rfloor}\) must be open. Next, let \(\zeta\) be the dual path from \(\mathcal{A}\) to \(\mathcal{B}\) constructed in Proposition 2.5. By Proposition 2.5(ix), there exists a closed dual path which connects \(\zeta\) and \(e\). Furthermore, the edges along \(\zeta\) are mostly closed; the only open edges are places where \(\zeta\) crosses the open circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{P}\) among the circuits \(\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\) defined in Proposition 2.5.

Figure 4.
An illustration of the case when \(e\in\gamma\) is between two open circuits \(\mathcal{C}_{k}\) and \(\mathcal{C}_{k+1}\) (light/thick). There exists a dual path \(\zeta\) (dark/thin) between \(\mathcal{A}\) and \(\mathcal{B}\). The open dual edges in \(\zeta\) are contained in the collection of annuli \(\{A_{i_{j}}\}\), which are shown in light gray.

Now, consider a sequence of annuli around \(e\), labelled as
\[A_{i}^{(e)}=A[2^{i},2^{i+1})(e),\text{ where }i=0,\ldots,\lfloor\log_{2}M\rfloor-1, \tag{3.10}\]
with the convention that a subset of \(\mathbb{R}^{2}\) contains an edge if it contains the midpoint of the edge. We now extract a (possibly empty) subsequence \(\{i_{j}\}_{j=1}^{J}\) such that each of the annuli \(A_{i_{j}}^{(e)}\) contains at least one open dual edge along \(\zeta\). In addition, by ignoring the last few terms of this subsequence, we may require the random variable \(J\) to be no greater than \(100r_{0}\), where \(r_{0}\) is the large positive integer from Proposition 3.9.

First, suppose \(J=0\), meaning the collection \(\{i_{j}\}\) is empty. Then, there is a polychromatic 3-arm event from \(e\) up to distance \(2^{\lfloor\log_{2}M\rfloor}\). Recall the two open arms exist because \(e\) is connected to \(\mathcal{C}_{k}\) and \(\mathcal{C}_{k+1}\) via the geodesic \(\gamma\). The closed dual arm is obtained by following the closed dual path from \(e\) to \(\zeta\), then following \(\zeta\) towards \(\mathcal{A}\). Thus,
\[\mathbb{P}(\{e\in\gamma\}\cap\{e\text{ is between open circuits }\mathcal{C}_{k},\,\mathcal{C}_{k+1}\}\cap\{J=0\})\leq\pi_{3}(2^{\lfloor\log_{2}M\rfloor}). \tag{3.11}\]

Next, suppose that \(J=1\). In this case, we have a polychromatic 3-arm event from \(\partial B_{1}(e)\) to \(\partial B_{2^{i_{1}}}(e)\), where the configuration of these three arms is the same as in the case \(J=0\), except that the arms only reach until the dual arm meets an open edge inside \(A_{i_{1}}^{(e)}\). Let us consider the arm events outside of \(\partial B_{2^{i_{1}+1}}(e)\). By assumption, \(\zeta\) contains an open dual edge inside \(B_{2^{i_{1}+1}}(e)\). That open edge is contained in an open circuit \(\mathcal{I}_{j}\) for some \(j\). Then, between \(\partial B_{2^{i_{1}+1}}(e)\) and \(\partial B_{2^{\lfloor\log_{2}M\rfloor}}(e)\), there are two open paths formed by the open circuit(s) which meet \(\zeta\) in \(B_{2^{i_{1}+1}}(e)\), and two closed paths from \(\zeta\). Using the asymptotic equivalence of polychromatic arm events [22], there exists a constant \(C\) (changing from line to line) so that
\[\begin{split}&\mathbb{P}(\{e\in\gamma\}\cap\{e\text{ is between open circuits }\mathcal{C}_{k},\,\mathcal{C}_{k+1}\}\cap\{J=1\})\\ &\leq C\sum_{i_{1}=0}^{\lfloor\log_{2}M\rfloor}\pi_{3}(1,2^{i_{1}})\pi_{4}(2^{i_{1}+1},2^{\lfloor\log_{2}M\rfloor})\\ &\leq C\sum_{i_{1}=1}^{\lfloor\log_{2}M\rfloor}\pi_{3}(1,2^{i_{1}})\pi_{4}(2^{i_{1}},2^{\lfloor\log_{2}M\rfloor})\leq C\pi_{3}(1,2^{\lfloor\log_{2}M\rfloor}),\end{split} \tag{3.12}\]
where the last two inequalities follow from quasi-multiplicativity (3.1a) and Proposition 3.8, respectively.

Now, suppose \(J\geq 2\). Again, we have a polychromatic 3-arm event from \(B_{1}(e)\) to \(\partial B_{2^{i_{1}}}(e)\), just as in the case when \(J=1\). We now describe arm events between the successive annuli. Fix \(j\) with \(1\leq j\leq J-1\); we show there are at least \(\max\{j,4\}\) polychromatic arms between \(\partial B_{2^{i_{j}+1}}(e)\) and \(\partial B_{2^{i_{j+1}}}(e)\).
To see this, by the definition of \(M\) in Theorem 3.10 and the construction of the subsequence \(\{i_{j}\}\) below (3.10), \(B_{2^{i_{j}+1}}(e)\) intersects at least \(j\) edge-disjoint open circuits. And each pair of circuits gives at least two open arms to distance \(M\), since \(\mathcal{I}_{i}\) and \(\mathcal{I}_{i+2}\) are always disjoint (Proposition 2.5(iv)). The remaining two closed arms are obtained from \(\zeta\) going from \(\partial B_{2^{i_{j}+1}}(e)\) toward \(\mathcal{A}\) and \(\mathcal{B}\). They form two closed dual paths that connect to \(\partial B_{2^{i_{j+1}}}(e)\). This uses again the fact that \(\zeta\) must intersect \(B_{2^{i_{j}+1}}(e)\), by the definition of the subsequence \(\{i_{j}\}\).

To summarize, we have shown that on the event
\[\{e\in\gamma\}\cap\{e\text{ is between open circuits }\mathcal{C}_{k},\,\mathcal{C}_{k+1}\}\cap\{J\geq 2\},\]
the following holds: there is a polychromatic 3-arm event between \(B_{1}(e)\) and \(\partial B_{2^{i_{1}}}(e)\), and
\[\text{there are at least }\max\{j,4\}\text{ polychromatic arms between }\partial B_{2^{i_{j}+1}}(e)\text{ and }\partial B_{2^{i_{j+1}}}(e)\text{ for }1\leq j\leq J-1. \tag{3.13}\]

Finally, we split the situation into two subcases: whether \(J\leq 10r_{0}\) or otherwise. If \(J\leq 10r_{0}\), there cannot be any open dual edges in the portion of \(\zeta\) between \(B_{2^{i_{J}+1}}(e)\) and \(B_{2^{\lfloor\log_{2}M\rfloor}}(e)\). Hence, there must be two closed dual paths from \(B_{2^{i_{J}+1}}(e)\) to \(B_{2^{\lfloor\log_{2}M\rfloor}}(e)\), obtained by following \(\zeta\) toward \(\mathcal{A}\) and \(\mathcal{B}\). At the same time, there must also be two open paths from \(B_{2^{i_{J}+1}}(e)\) to \(B_{2^{\lfloor\log_{2}M\rfloor}}(e)\), formed by an open circuit, as \(J\geq 2\). Hence, we have the following, where the first inequality comes from (3.13), and the second inequality comes from Proposition 3.8:
\[\begin{split}&\mathbb{P}(\{e\in\gamma\}\cap\{e\text{ is between open }\mathcal{C}_{k},\mathcal{C}_{k+1}\}\cap\{2\leq J\leq 10r_{0}\})\\ &\leq\sum_{i_{1}=0}^{\lfloor\log_{2}M\rfloor}\sum_{i_{2}=i_{1}+1}^{\lfloor\log_{2}M\rfloor}\cdots\sum_{i_{J}=i_{J-1}+1}^{\lfloor\log_{2}M\rfloor}\pi_{3}(1,2^{i_{1}})\pi_{4}(2^{i_{1}+1},2^{i_{2}})\ldots\pi_{4}(2^{i_{J-1}+1},2^{i_{J}})\pi_{4}(2^{i_{J}+1},2^{\lfloor\log_{2}M\rfloor})\\ &\leq C\pi_{3}(1,2^{\lfloor\log_{2}M\rfloor}).\end{split} \tag{3.14}\]

In the other case, when \(10r_{0}<J\leq 100r_{0}\), the number of open circuits intersecting the box \(B_{2^{i_{J}+1}}(e)\) is at least \(10r_{0}\). Thus, we have
\[\begin{split}&\mathbb{P}(\{e\in\gamma\}\cap\{e\text{ is between open }\mathcal{C}_{k},\mathcal{C}_{k+1}\}\cap\{10r_{0}<J\leq 100r_{0}\})\\ &\leq\sum_{i_{1}=0}^{\lfloor\log_{2}M\rfloor}\sum_{i_{2}=i_{1}+1}^{\lfloor\log_{2}M\rfloor}\cdots\sum_{i_{J}=i_{J-1}+1}^{\lfloor\log_{2}M\rfloor}\pi_{3}(1,2^{i_{1}})\pi_{4}(2^{i_{1}+1},2^{i_{2}})\ldots\pi_{4}(2^{i_{J-1}+1},2^{i_{J}})\pi_{r_{0}}^{\prime}(2^{i_{J}+1},M)\\ &\leq\sum_{i_{1}=0}^{\lfloor\log_{2}M\rfloor}\sum_{i_{2}=i_{1}+1}^{\lfloor\log_{2}M\rfloor}\cdots\sum_{i_{J}=i_{J-1}+1}^{\lfloor\log_{2}M\rfloor}\pi_{3}(1,2^{i_{1}})\pi_{4}(2^{i_{1}+1},2^{i_{2}})\ldots\pi_{4}(2^{i_{J-1}+1},2^{i_{J}})\pi_{3}(2^{i_{J}+1},M)\Big{(}\frac{2^{i_{J}+1}}{M}\Big{)}^{\varepsilon}\\ &\leq C\pi_{3}(1,M),\end{split} \tag{3.15}\]
where the second inequality follows from Proposition 3.9, and the third inequality follows from Proposition 3.8. Combining (3.11), (3.12), (3.14), and (3.15) proves (3.6).

Next, we show (3.7).
Suppose \(e\in\gamma\cap\mathcal{C}_{k}\) for some \(k\in\{1,\ldots,K\}\). Since \(\gamma\) lives on the primal lattice, \(\mathcal{C}_{k}\) is open. By Proposition 2.5(v), there exists a dual path \(\zeta_{e}\) between \(e\) and \(\mathcal{A}\) which consists of closed edges, except for the places where it crosses the open circuits among \(\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\). If \(k=1\), or \(k>1\) and \(\mathcal{C}_{k-1}\) is closed, then there is a polychromatic 3-arm event around \(e\) to distance \(M\). The two open arms come from following \(\mathcal{C}_{k}\) in both directions, and the closed arm is obtained by following \(\zeta_{e}\) to either \(\mathcal{A}\), or to \(\mathcal{C}_{k-1}\) and then around either \(\mathcal{A}\) or \(\mathcal{B}\). Otherwise, suppose \(k\geq 2\) and \(\mathcal{C}_{k-1}\) is open. We will further split the estimate into two cases, depending on \(\mathcal{C}_{k+1}\):
1. \(\mathcal{C}_{k+1}\) does not exist, or it is open, or
2. \(\mathcal{C}_{k+1}\) is closed.
The arguments for these cases are similar to the proof of (3.6). However, unlike \(\zeta\), which connects \(\mathcal{A}\) and \(\mathcal{B}\), the path \(\zeta_{e}\) only goes from \(e\) to the set \(\mathcal{A}\). Nevertheless, the combination of \(\zeta_{e}\) and the portion of \(\gamma\) between \(e\) and \(\mathcal{B}\) forms a connection between \(\mathcal{A}\) and \(\mathcal{B}\), and this combination will essentially play the role of \(\zeta\) in the previous estimate (3.6).

In case (1), let us again define the sequence of annuli \(\{A_{i}^{(e)}\}\) as in (3.10). Then, we extract a subsequence \(\{i_{j}\}_{j=1}^{J}\) such that each of these annuli contains at least one open dual edge along \(\zeta_{e}\). Again, we truncate the random variable \(J\) to be no greater than \(100r_{0}\). If \(J=0\), then we get a polychromatic 3-arm event from \(B_{1}(e)\) to distance \(2^{\lfloor\log_{2}M\rfloor}\): two open arms from following \(\mathcal{C}_{k}\), and a closed dual arm from following \(\zeta_{e}\) to \(B_{2^{\lfloor\log_{2}M\rfloor}}(e)\).

Next, we will assume \(J\geq 1\). By construction, there is a polychromatic 3-arm event from \(B_{1}(e)\) to \(\partial B_{2^{i_{1}}}(e)\), constructed from the open circuit \(\mathcal{C}_{k}\) and the closed dual arm \(\zeta_{e}\). Next, we show that there must be a polychromatic 4-arm event between \(\partial B_{2^{i_{1}+1}}(e)\) and \(\partial B_{2^{i_{2}}}(e)\) (or \(\partial B_{2^{\lfloor\log_{2}M\rfloor}}(e)\) if \(J=1\)). First, because of \(\zeta_{e}\), there must be a closed arm. If we follow \(\gamma\) from \(e\) toward \(\mathcal{B}\), it will either encounter \(\mathcal{C}_{k+1}\), which is assumed to be open, or \(\mathcal{B}\) if \(\mathcal{C}_{k+1}\) does not exist. This gives us another open arm. Lastly, because \(B_{2^{i_{1}+1}}(e)\) contains an open dual edge along \(\zeta_{e}\), \(\partial B_{2^{i_{1}+1}}(e)\) must intersect an open circuit \(\mathcal{C}_{i}\) for some \(1\leq i\leq k-1\), and this gives us two more open arms. We note that the circuit \(\mathcal{C}_{i}\) in the previous sentence and the segment of \(\gamma\) must be disjoint by Proposition 2.5(vii); thus they indeed give three vertex-disjoint open arms. Finally, after this, for each \(2\leq j\leq J-1\), there will always be at least \(\max\{4,j\}\) polychromatic arms between \(\partial B_{2^{i_{j}+1}}(e)\) and \(\partial B_{2^{i_{j+1}}}(e)\).
The 4 appears inside the maximum simply because the three open arms from the previous step actually reach to distance \(M\); hence, paired with \(\zeta_{e}\), there must be at least \(4\) arms. The \(j\) appears because there must be at least \(j\) edge-disjoint open circuits intersecting the box \(B_{2^{i_{j}+1}}(e)\). With these, the rest of the argument can be read off starting at the argument after (3.13).

In case (2), compared to case (1), when choosing our subsequence \(\{i_{j}\}_{j=1}^{J}\), in addition to the annuli which cover the open edges along \(\zeta_{e}\), we also require them to cover the edge where \(\gamma\) crosses \(\mathcal{C}_{k+1}\), and we will denote this edge as \(\widetilde{e}\). Again, let us look at the arm event between \(\partial B_{2^{i_{1}+1}}(e)\) and \(\partial B_{2^{i_{2}}}(e)\). First, if \(\widetilde{e}\in A_{i_{1}}^{(e)}\), then there are three closed arms and two open arms between \(\partial B_{2^{i_{1}+1}}(e)\) and \(\partial B_{2^{i_{2}}}(e)\). The closed arms are from \(\zeta_{e}\) and \(\mathcal{C}_{k+1}\), and the open arms are from \(\mathcal{C}_{k}\). In addition, for \(2\leq j\leq J-1\), between \(\partial B_{2^{i_{j}+1}}(e)\) and \(\partial B_{2^{i_{j+1}}}(e)\), there must be at least \(\max\{j,4\}\) polychromatic arms. This gives the same statement as in case (1), and the rest of the argument follows as before. Otherwise, if \(\widetilde{e}\in A_{i_{j}}^{(e)}\) for some \(j\geq 2\), then we have one closed arm and three open arms between \(\partial B_{2^{i_{1}+1}}(e)\) and \(\partial B_{2^{i_{2}}}(e)\), which follows from the same argument as in case (1). And again, between \(\partial B_{2^{i_{j}+1}}(e)\) and \(\partial B_{2^{i_{j+1}}}(e)\), there must be at least \(\max\{j,4\}\) polychromatic arms, because \(B_{2^{i_{j}+1}}(e)\) must intersect either at least \(j\) edge-disjoint open circuits, or \(j-1\) edge-disjoint open circuits and a closed circuit. This is again similar to (3.13), and the exact same argument applies.

Next, we will show (3.8). The argument is again similar to (3.6). The first difference is that \(\gamma\) will now play the role of \(\zeta\) from the proof of (3.6). Let us again fix a sequence of annuli around \(e\) just like in (3.10). We extract a (possibly empty) subsequence \(\{i_{j}\}_{j=1}^{J}\) such that each of these annuli contains at least one closed edge along \(\gamma\), and again truncate \(J\) to be no larger than \(100r_{0}\). The second difference is how we obtain our polychromatic 3-arm event from \(B_{1}(e)\) to \(\partial B_{2^{i_{1}}}(e)\), or from \(B_{1}(e)\) to \(\partial B_{2^{\lfloor\log_{2}M\rfloor}}(e)\) if \(J=0\). In this case, the two open paths are from \(\gamma\). To see the closed path, note that if the edge \(e\) is closed, then its dual lies on a closed circuit and the claim is clear. Otherwise, by Proposition 2.5(ix), \(e\) is connected to \(\zeta\) by a closed dual path; we can then follow \(\zeta\) to \(\partial B_{2^{i_{1}}}(e)\), or to \(\partial B_{2^{\lfloor\log_{2}M\rfloor}}(e)\) if \(J=0\). We note that if \(J=0\), we have the polychromatic \(3\)-arm event just described. If \(J\geq 1\), the polychromatic \(3\)-arm event from \(B_{1}(e)\) to \(\partial B_{2^{i_{1}}}(e)\) is also as described above, and all the arm events outside of the box \(B_{2^{i_{1}+1}}(e)\) are exactly the same as in (3.6), except that the words "open" and "closed" are exchanged (because \(\gamma\) now plays the role of \(\zeta\) in (3.6)).
This means we again have (3.13), and the rest of the argument follows as in the argument after (3.13). We omit the details here.

For (3.9), without loss of generality, we will assume that \(e\) is between \(\mathcal{A}\) and \(\mathcal{C}_{1}\); the exact same argument can be applied to the other case. First, note that there exists at least one open path and one closed path from \(e\) to distance \(M\). The open path exists because \(e\) is connected to \(\mathcal{A}\) by \(\gamma\). The closed path exists because, by Proposition 2.5(ix), there is a closed dual path from \(e\) to \(\zeta\), and \(\zeta\) is connected to \(\mathcal{A}\) by a closed path. Now, if \(\mathcal{C}_{1}\) is open, then we are done, since this gives another open arm from \(e\) reaching distance \(M\). Otherwise, suppose \(\mathcal{C}_{1}\) is closed; then \(\gamma\) must cross \(\mathcal{C}_{1}\) at some edge \(\widetilde{e}\). Recall the annuli around \(e\) defined in (3.10). Suppose \(\widetilde{e}\not\in\cup_{i}A_{i}^{(e)}\); then there will be a third open arm from \(B_{1}(e)\) to distance \(2^{\lfloor\log_{2}M\rfloor}\), obtained by following \(\gamma\) toward \(\mathcal{C}_{1}\). By quasi-multiplicativity (3.1b), we are done. Otherwise, if \(\widetilde{e}\in A_{i^{*}}^{(e)}\) for some random index \(i^{*}\in[1,\lfloor\log_{2}M\rfloor]\), there will be a polychromatic \(3\)-arm event from \(B_{1}(e)\) to \(\partial B_{2^{i^{*}}}(e)\) and a polychromatic \(4\)-arm event from \(\partial B_{2^{i^{*}+1}}(e)\) to \(\partial B_{2^{\lfloor\log_{2}M\rfloor}}(e)\). The additional two arms come from \(\mathcal{C}_{1}\), which must intersect \(B_{2^{\lfloor\log_{2}M\rfloor}}(e)\). Then, the same calculation from (3.12) gives our desired estimate. This completes the proof of Lemma 3.11; since the events appearing there cover \(\{e\in\gamma\}\), Theorem 3.10 follows as well.

We finish this section by proving Theorems 1.1 and 1.2.

Proof of Theorem 1.1.: Let \(\gamma\) be the geodesic constructed in Proposition 2.5, where \(\mathcal{A}=\{(0,0)\}\) and \(\mathcal{B}=\partial B_{R}\). Let \(|\gamma|\) be the number of edges in \(\gamma\). Let \(E(B_{R})\) be the set of edges with at least one endpoint in the interior of \(B_{R}\). Then, by Theorem 3.10 (in the penultimate inequality), followed by Lemma 3.7,
\[\mathbb{E}[\mathcal{N}_{R}]\leq\mathbb{E}[|\gamma|]=\sum_{e\in E(B_{R})}\mathbb{P}(e\in\gamma)=\sum_{k=0}^{R}\sum_{\begin{subarray}{c}e\in E(B_{R})\\ \operatorname{dist}(e,\{0\})\wedge\operatorname{dist}(e,\partial B_{R})=k\end{subarray}}\mathbb{P}(e\in\gamma)\leq C\sum_{k=1}^{R}R\pi_{3}(k)\leq CR^{2}\pi_{3}(R).\]
In the penultimate inequality, we also simply subsumed the \(k=0\) term into the \(k=1\) term, bounding \(1\) by a constant times \(\pi_{3}(1)\).

Proof of Theorem 1.2.: Let \(\mathfrak{b}\) be a box of side length \(2(d+|\mathcal{A}|+|\mathcal{B}|)\) which encloses both \(\mathcal{A}\) and \(\mathcal{B}\), and let \(x\) be its center, so that \(\mathfrak{b}=B_{d+|\mathcal{A}|+|\mathcal{B}|}(x)\). Then, for \(k\geq 0\), let \(\mathfrak{b}_{k}=B_{2^{k}(d+|\mathcal{A}|+|\mathcal{B}|)}(x)\). For \(k\geq 1\), let \(D_{k}\) be the event on which
1. there exists an open circuit \(\mathcal{C}\) in \(\mathfrak{b}_{2k-1}\setminus\mathfrak{b}_{2k-2}\), and
2. there exists a closed dual circuit \(\mathcal{D}\) in \(\mathfrak{b}_{2k}\setminus\mathfrak{b}_{2k-1}\).
The events \(D_{k}\) are mutually independent, since they are determined by disjoint sets of edges, and the RSW theorem implies that there exists a constant \(c>0\), independent of \(d\), \(|\mathcal{A}|\), and \(|\mathcal{B}|\), so that
\[\mathbb{P}(D_{k})\geq 1-e^{-c}\qquad\forall k\geq 1.
\tag{3.16}\]

Next, we argue that on the event \(D_{k}\), all geodesics from \(\mathcal{A}\) to \(\mathcal{B}\) (and in particular, the geodesic in Proposition 2.5) are contained in \(\mathfrak{b}_{2k}\). To do this, let \(\mathcal{C}\) and \(\mathcal{D}\) be a choice of the open and closed circuits guaranteed by the event \(D_{k}\), respectively. Let \(\gamma\) be a geodesic from \(\mathcal{A}\) to \(\mathcal{B}\). If \(\gamma\) contains an edge \(e\) whose relative interior lies in \(\mathsf{ext}(\mathcal{C})\), it must be open. Otherwise, since \(\gamma\) both starts and ends in \(\mathsf{int}(\mathcal{C})\), the edge \(e\) lies along a portion of \(\gamma\) lying in \(\mathsf{ext}(\mathcal{C})\) and connecting two points of \(\mathcal{C}\). An alternate path between the two points is obtained by following the open circuit \(\mathcal{C}\) between those points. This path bypasses the closed edge \(e\), so \(\gamma\) cannot be a geodesic. Therefore, on the event \(D_{k}\), \(\gamma\) cannot move outside \(\mathfrak{b}_{2k}\), or else it must cross \(\mathcal{D}\) and therefore take a closed edge whose relative interior lies in \(\mathsf{ext}(\mathcal{C})\).

For \(\omega\in\Omega\), let \(T(\omega)=\min\{k:\omega\in D_{k}\}\). Then, for each \(\lambda>0\) and \(n\geq 1\),
\[\mathbb{P}(\mathcal{N}_{\mathcal{A},\mathcal{B}}\geq\lambda d^{2}\pi_{3}(d))\leq\mathbb{P}(\mathcal{N}_{\mathcal{A},\mathcal{B}}\geq\lambda d^{2}\pi_{3}(d),T\leq n)+\mathbb{P}(T>n). \tag{3.17}\]
By (3.16) and the independence of the events \(\{D_{k}\}_{k\geq 1}\), for any \(n\geq 1\),
\[\mathbb{P}(T>n)\leq\mathbb{P}\Big{(}\bigcap_{k=1}^{n}D_{k}^{c}\Big{)}=\prod_{k=1}^{n}(1-\mathbb{P}(D_{k}))\leq e^{-cn}. \tag{3.18}\]
On the other hand, letting \(\gamma\) be the geodesic from \(\mathcal{A}\) to \(\mathcal{B}\) in our construction and using Markov's inequality,
\[\mathbb{P}(\mathcal{N}_{\mathcal{A},\mathcal{B}}\geq\lambda d^{2}\pi_{3}(d),T\leq n)\leq\mathbb{P}(\mathcal{N}_{\mathcal{A},\mathcal{B}}\geq\lambda d^{2}\pi_{3}(d),\gamma\subseteq\mathfrak{b}_{2n})\leq\frac{\mathbb{E}[\mathcal{N}_{\mathcal{A},\mathcal{B}}\mathds{1}(\gamma\subseteq\mathfrak{b}_{2n})]}{\lambda d^{2}\pi_{3}(d)}. \tag{3.19}\]
Let \(\mathcal{S}(\mathfrak{b}_{k})\) be the set of edges in \(\mathfrak{b}_{k}\) that do not connect two vertices of \(\mathcal{A}\) or two vertices of \(\mathcal{B}\). Using Theorem 3.10,
\[\begin{split}&\mathbb{E}[\mathcal{N}_{\mathcal{A},\mathcal{B}}\mathds{1}(\gamma\subseteq\mathfrak{b}_{2n})]=\sum_{e\in\mathcal{S}(\mathfrak{b}_{2n})}\mathbb{P}(e\in\gamma)\leq C\sum_{e\in\mathcal{S}(\mathfrak{b}_{2n})}\pi_{3}(\min(\operatorname{dist}(e,\mathcal{A}),\operatorname{dist}(e,\mathcal{B})))\\ &=\sum_{k=1}^{4^{n+1}(d+|\mathcal{A}|+|\mathcal{B}|)}\sum_{e\in\mathcal{S}(\mathfrak{b}_{2n}):\operatorname{dist}(e,\mathcal{A})\wedge\operatorname{dist}(e,\mathcal{B})=k}\pi_{3}(k)\leq\sum_{k=1}^{4^{n+1}(d+|\mathcal{A}|+|\mathcal{B}|)}C(|\mathcal{A}|+|\mathcal{B}|)k\pi_{3}(k)\\ &\leq C(|\mathcal{A}|+|\mathcal{B}|)(4^{n}(d+|\mathcal{A}|+|\mathcal{B}|))^{2}\pi_{3}(4^{n+1}(d+|\mathcal{A}|+|\mathcal{B}|))\leq C(4^{n}d)^{2}(|\mathcal{A}|+|\mathcal{B}|)^{3}\pi_{3}(d),\end{split}\]
where \(C>0\) is a constant independent of \(d\), \(|\mathcal{A}|\), and \(|\mathcal{B}|\), and the second-to-last inequality follows by Lemma 3.7. In the last inequality, we have used \((d+|\mathcal{A}|+|\mathcal{B}|)\leq 2d(|\mathcal{A}|+|\mathcal{B}|)\), which holds because \(d\geq 1\) and \(|\mathcal{A}|+|\mathcal{B}|\geq 2\).
Then, using (3.17), (3.18), and (3.19),
\[\mathbb{P}(\mathcal{N}_{\mathcal{A},\mathcal{B}}\geq\lambda d^{2}\pi_{3}(d))\leq e^{-cn}+\frac{C(4^{n}d)^{2}(|\mathcal{A}|+|\mathcal{B}|)^{3}\pi_{3}(d)}{\lambda d^{2}\pi_{3}(d)}\leq e^{-cn}+C(|\mathcal{A}|+|\mathcal{B}|)^{3}\frac{16^{n}}{\lambda}.\]
The proof is complete upon setting \(n=\lfloor\frac{1}{2}\log_{16}\lambda\rfloor\).

## 4. Proofs for general edge-weights

Throughout this section, we assume the edge-weight distribution function \(F\) satisfies (1.1) and additionally one of the following conditions from Theorem 1.4:
\[\limsup_{n\to\infty}F^{-1}(p_{2^{n+1}})/F^{-1}(p_{2^{n}})<1, \tag{4.1}\]
\[\liminf_{n\to\infty}F^{-1}(p_{2^{2n}})/F^{-1}(p_{2^{n}})>0, \tag{4.2}\]
where \(p_{R}\) is defined in (1.7).

### Correlation length and verification of examples

The lemma below provides various properties of the correlation length \(L(\cdot)\) defined in Section 1.3.

**Lemma 4.1**.: _The following statements hold._
1. _There exists_ \(c>0\) _such that_
\[cR\leq L(p_{R})\leq R\quad\text{for all }R\geq 1. \tag{4.3}\]
2. _(Kesten's scaling relation) There exists_ \(c>0\) _such that_
\[c\leq L(p)^{2}\pi_{4}(1,L(p))(p-p_{\mathrm{c}})\leq 1/c\quad\text{for all }p>p_{\mathrm{c}}. \tag{4.4}\]
3. _There exists_ \(c>0\) _such that_
\[c\Big{(}\frac{R}{r}\Big{)}^{-(2-c)}\leq\frac{p_{R}-p_{\mathrm{c}}}{p_{r}-p_{\mathrm{c}}}\leq(1/c)\Big{(}\frac{R}{r}\Big{)}^{-c}\quad\text{for all }R\geq r\geq 1. \tag{4.5}\]
4. _There exists_ \(c>0\) _such that_
\[cR^{-(2-c)}\leq p_{R}-p_{\mathrm{c}}\leq(1/c)R^{-c}\quad\text{for all }R\geq 1. \tag{4.6}\]

Proof.: Part 1 follows from [12, eq. (2.10)]. Part 2 appears as [15, Prop. 34].

Part 3: Making use of the previous parts, we have
\[\frac{p_{R}-p_{\mathrm{c}}}{p_{r}-p_{\mathrm{c}}}\overset{(4.4)}{\leq}C\frac{L(p_{r})^{2}\pi_{4}(1,L(p_{r}))}{L(p_{R})^{2}\pi_{4}(1,L(p_{R}))}\overset{(4.3)}{\leq}C\frac{r^{2}\pi_{4}(1,cr)}{(cR)^{2}\pi_{4}(1,R)}\overset{(3.2)}{\leq}C\frac{r^{2}\pi_{4}(1,r)}{R^{2}\pi_{4}(1,R)}. \tag{4.7}\]
By analogous reasoning, we also have
\[\frac{p_{R}-p_{\mathrm{c}}}{p_{r}-p_{\mathrm{c}}}\geq c\frac{r^{2}\pi_{4}(1,r)}{R^{2}\pi_{4}(1,R)}. \tag{4.8}\]
Now study the ratio of \(\pi_{4}\) quantities:
\[\frac{1}{\pi_{1}(r,R)}\leq\frac{1}{\pi_{4}(r,R)}\overset{(3.1a)}{\leq}\frac{\pi_{4}(1,r)}{\pi_{4}(1,R)}\overset{(3.1a)}{\leq}\frac{C}{\pi_{4}(r,R)}\leq\frac{C\pi_{1}(r,R)}{\pi_{5}(r,R)},\]
where the final inequality is due to the BK inequality. Inserting these estimates into (4.7) and (4.8) yields
\[c\frac{r^{2}}{R^{2}\pi_{1}(r,R)}\leq\frac{p_{R}-p_{\mathrm{c}}}{p_{r}-p_{\mathrm{c}}}\leq C\frac{r^{2}\pi_{1}(r,R)}{R^{2}\pi_{5}(r,R)}.\]
Finally, use the bounds provided by Propositions 3.3 and 3.4:
\[\frac{r^{2}}{R^{2}\pi_{1}(r,R)}\overset{(3.3)}{\geq}\frac{r^{2}}{R^{2}(r/R)^{c}}=\Big{(}\frac{R}{r}\Big{)}^{-(2-c)},\quad\text{while}\quad\frac{r^{2}\pi_{1}(r,R)}{R^{2}\pi_{5}(r,R)}\overset{(3.3),(3.4)}{\leq}C\frac{r^{2}(R/r)^{-c}}{R^{2}(R/r)^{-2}}\leq(1/c)\Big{(}\frac{R}{r}\Big{)}^{-c}.\]
The two previous displays together yield (4.5). Part 4 follows from part 3 upon setting \(r=1\).

For completeness, we next check that the distributions in Example 1.5 have the claimed properties.

Proof of Example 1.5.: Part (a): \(F(t)=p_{\rm c}+Ct^{\alpha}\) for \(t\in[0,h]\), where \(C,\alpha,h>0\). For \(p>p_{\rm c}\) sufficiently close to \(p_{\rm c}\), we have \(F^{-1}(p)=\big{(}(p-p_{\rm c})/C\big{)}^{1/\alpha}\).
So by (4.5), there exists \(c>0\) such that
\[F^{-1}(p_{2^{n+1}})/F^{-1}(p_{2^{n}})\leq(1/c)^{1/\alpha}2^{-c/\alpha}<1\quad\text{for all }n\text{ sufficiently large.}\]
Hence (4.1) holds.

Part (b): \(F(t)=p_{\rm c}\) for \(t\in[0,h]\), for some \(h>0\). Choose \(h^{\prime}>h\) such that \(F(h^{\prime})>p_{\rm c}\). Then for all \(p>p_{\rm c}\) sufficiently close to \(p_{\rm c}\), we have \(h\leq F^{-1}(p)\leq h^{\prime}\). Hence (4.2) holds.

Part (c): \(F(t)=p_{\rm c}+Ce^{-t^{-\alpha}}\) for \(t\in(0,h]\), where \(C,\alpha,h>0\). Then for \(p>p_{\rm c}\) sufficiently close to \(p_{\rm c}\), we have
\[F^{-1}(p)=\Big{(}\frac{1}{\log(C/(p-p_{\rm c}))}\Big{)}^{1/\alpha}.\]
In particular,
\[\frac{F^{-1}(p_{2^{2n}})}{F^{-1}(p_{2^{n}})}=\Big{(}\frac{\log C-\log(p_{2^{n}}-p_{\rm c})}{\log C-\log(p_{2^{2n}}-p_{\rm c})}\Big{)}^{1/\alpha}.\]
Since \(p_{R}\searrow p_{\rm c}\) as \(R\to\infty\), we have
\[\liminf_{n\to\infty}\frac{F^{-1}(p_{2^{2n}})}{F^{-1}(p_{2^{n}})}=\Big{(}\liminf_{n\to\infty}\frac{\log(p_{2^{n}}-p_{\rm c})}{\log(p_{2^{2n}}-p_{\rm c})}\Big{)}^{1/\alpha}.\]
So define \(\beta_{n}\) implicitly by \(p_{2^{n}}-p_{\rm c}=(p_{2^{2n}}-p_{\rm c})^{\beta_{n}}\), which makes
\[\frac{\log(p_{2^{n}}-p_{\rm c})}{\log(p_{2^{2n}}-p_{\rm c})}=\beta_{n}.\]
Therefore, in order to establish (4.2), it suffices to show that
\[\liminf_{n\to\infty}\beta_{n}>0.\]
For this, observe that
\[\big{(}c(2^{2n})^{-(2-c)}\big{)}^{\beta_{n}}\overset{(4.6)}{\leq}(p_{2^{2n}}-p_{\rm c})^{\beta_{n}}=p_{2^{n}}-p_{\rm c}\overset{(4.6)}{\leq}(1/c)(2^{n})^{-c}.\]
Isolating just the first and last expressions, we see that
\[\beta_{n}(\log_{2}c-2n(2-c))\leq\log_{2}(1/c)-nc\implies\liminf_{n\to\infty}\beta_{n}\geq\frac{c}{2(2-c)}>0.\]

Part (d): \(F\) is such that \(F^{-1}(p_{2^{n}})=e^{-\sqrt{n}}\) for each \(n\geq 1\). We then have
\[\frac{F^{-1}(p_{2^{n+1}})}{F^{-1}(p_{2^{n}})}=\frac{e^{-\sqrt{n+1}}}{e^{-\sqrt{n}}}\to 1\quad\text{as }n\to\infty,\]
as well as
\[\frac{F^{-1}(p_{2^{2n}})}{F^{-1}(p_{2^{n}})}=\frac{e^{-\sqrt{2n}}}{e^{-\sqrt{n}}}\to 0\quad\text{as }n\to\infty.\]
Therefore, neither (4.1) nor (4.2) holds.

### Bounds for passage times of geodesics across annuli

We start this subsection by citing the following result.

**Lemma 4.2**.: _[_7_, Cor. 2.3]_ _Let \(F\) be the distribution function of a nonnegative random variable with \(F(0)=\frac{1}{2}\). Given an integer \(K\geq 2\), there exists \(C>0\) such that, for all \(n\) and \(p\) with \(L(p)\leq 2^{n}\),_
\[\mathbb{P}\Big{(}T_{(K)}(n)\geq\lambda F^{-1}(p)\Big{(}\frac{2^{n}}{L(p)}\Big{)}^{2}\Big{)}\leq e^{-C\lambda}+\exp\Bigl{(}-C\frac{2^{n}}{L(p)}\Bigr{)},\quad\text{ for }\lambda\geq 0,\]
_where \(T_{(K)}(n)\) is the minimal passage time between the left and right sides of \([-K2^{n},K2^{n}]\times[-2^{n},2^{n}]\), among all paths that remain in this rectangle._

In the next result, we use Lemma 4.2 to obtain a result about _maximal_ passage times of geodesics across annuli. Before making a definition, we make the following technical convention. For \(k\geq 1\), let \(E_{k}\) be the set of edges with both endpoints in the set \((B_{2^{k+1}}\setminus B_{2^{k}})\cup\partial B_{2^{k}}\). We also set \(E_{0}\) to be the edges with both endpoints in \(B_{2}\).
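The following short sketch (our own; the representation of paths and weights as Python objects is an assumption for illustration) makes the annular bookkeeping concrete: it classifies the edges of a path by the indices \(k\) with \(e\in E_{k}\) and tallies the weight that the definition below assigns to scale \(k\).

```python
def annulus_indices(e, n):
    """All k in {0,...,n} with the edge e = (u, v) in E_k: both endpoints w
    must satisfy 2^k <= |w|_inf <= 2^(k+1) (for k = 0: |w|_inf <= 2)."""
    sup = lambda w: max(abs(w[0]), abs(w[1]))

    def in_Ek(k):
        lo, hi = (0, 2) if k == 0 else (2 ** k, 2 ** (k + 1))
        return all(lo <= sup(w) <= hi for w in e)

    return [k for k in range(n + 1) if in_Ek(k)]


def T_k_of_path(path, weight, k, n):
    """Total weight of the edges of `path` lying in E_{k-1}, E_k, or E_{k+1},
    truncated at k = 0 and k = n as in the definition of T_k below."""
    window = {k} | ({k - 1} if k >= 1 else set()) | ({k + 1} if k < n else set())
    return sum(weight[frozenset((u, v))]
               for u, v in zip(path, path[1:])
               if window & set(annulus_indices((u, v), n)))


# tiny demo: a three-edge path with unit weight on each of its edges
path = [(0, 0), (1, 0), (2, 0), (3, 0)]
weight = {frozenset((u, v)): 1 for u, v in zip(path, path[1:])}
print(T_k_of_path(path, weight, k=0, n=4))  # prints 3: edges lie in E_0 or E_1
```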
Given a path \(\gamma\) connecting the origin to \(\partial B_{2^{n+1}}\), for \(k\in\{0,\dots,n\}\), we set
\[T_{k}(\gamma)=\begin{cases}\sum_{e\in\gamma\cap(E_{0}\cup E_{1})}t_{e}&k=0\\ \sum_{e\in\gamma\cap(E_{k-1}\cup E_{k}\cup E_{k+1})}t_{e}&1\leq k<n\\ \sum_{e\in\gamma\cap(E_{n-1}\cup E_{n})}t_{e}&k=n.\end{cases}\]
Then, for \(0\leq k\leq n\), define
\[T_{k,n}^{\max}=\sup_{\gamma\in\operatorname{Geo}(0,\partial B_{2^{n+1}})}T_{k}(\gamma). \tag{4.9}\]

**Lemma 4.3**.: _Let \(F\) be the distribution function of a nonnegative random variable with \(F(0)=\frac{1}{2}\). There exists \(c>0\) such that for all \(0\leq k\leq n\) and \(p>p_{\mathrm{c}}\) with \(L(p)\leq 2^{k}\),_
\[\mathbb{P}\Big{(}T_{k,n}^{\max}\geq\lambda\Big{(}\frac{2^{k}}{L(p)}\Big{)}^{2}F^{-1}(p)\Big{)}\leq e^{-c\lambda}+\frac{1}{c}\exp\Big{(}-c\frac{2^{k}}{L(p)}\Big{)}.\]

Proof.: We may assume that \(k\geq 2\) because \(L(p)\geq 1\), so for \(k=0,1\), the statement can be made trivial by making \(c\) small. To prove the bound, we will replace \(T_{k,n}^{\max}\) by the sum of nine first-passage values crossing nine different rectangles. For an illustration, see Figure 5. To describe the setup, we break the annulus \((B_{2^{k-1}}\setminus B_{2^{k-2}})\cup\partial B_{2^{k-2}}\) into four pieces (which are not mutually disjoint). These pieces below correspond to the top, right, bottom, and left of the annulus, numbered in that order.
1. \([-2^{k-1},2^{k-1}]\times[2^{k-2},2^{k-1}]\)
2. \([2^{k-2},2^{k-1}]\times[-2^{k-1},2^{k-1}]\)
3. \([-2^{k-1},2^{k-1}]\times[-2^{k-1},-2^{k-2}]\)
4. \([-2^{k-1},-2^{k-2}]\times[-2^{k-1},2^{k-1}]\)
Let \(\mathsf{T}_{1}\) be the minimal passage time of all paths crossing from the left boundary of \([-2^{k-1},2^{k-1}]\times[2^{k-2},2^{k-1}]\) to the right boundary and staying in the box. Let \(\mathsf{T}_{2}\) be the minimal passage time of all paths crossing from the top edge of \([2^{k-2},2^{k-1}]\times[-2^{k-1},2^{k-1}]\) to the bottom and staying in the box. Let \(\mathsf{T}_{3}\) and \(\mathsf{T}_{4}\) be defined similarly for the bottom and left parts of the annulus. If we consider paths that achieve these minima, we note that parts of these four paths can be concatenated to form a circuit in the annulus \((B_{2^{k-1}}\setminus B_{2^{k-2}})\cup\partial B_{2^{k-2}}\), which has passage time no greater than \(\sum_{i=1}^{4}\mathsf{T}_{i}\). We call this circuit \(\mathcal{C}_{1}\).

Next, we break \((B_{2^{k+3}}\setminus B_{2^{k+2}})\cup\partial B_{2^{k+2}}\) into four (not mutually disjoint) pieces in the same way and let \(\mathsf{T}_{5},\ldots,\mathsf{T}_{8}\) be defined similarly. In the cases \(k=n\) or \(k=n-1\), we disregard these four variables and set their values to \(0\). Again, the minimal paths form a circuit in the annulus \((B_{2^{k+3}}\setminus B_{2^{k+2}})\cup\partial B_{2^{k+2}}\), which has passage time no greater than \(\sum_{i=1}^{4}\mathsf{T}_{i+4}\). We call this circuit \(\mathcal{C}_{2}\).

Let \(\mathsf{T}_{9}\) be the minimal passage time of all paths crossing from the left side of the box \([2^{k-2},2^{k+3}]\times[-2^{k-2},2^{k-2}]\) to the right, and staying in the box. In the case \(k=n-1\), we replace this box with \([2^{n-3},2^{n+1}]\times[-2^{n-3},2^{n-3}]\), and in the case \(k=n\), we replace this box with \([2^{n-2},2^{n+1}]\times[-2^{n-2},2^{n-2}]\). Choose a minimal path achieving passage time \(\mathsf{T}_{9}\) and call it \(\gamma^{\prime}\).

We now claim that
\[T_{k,n}^{\max}\leq\sum_{i=1}^{9}\mathsf{T}_{i}.
\tag{4.10}\]

Suppose, by way of contradiction, that there exists a geodesic \(\gamma:0\to\partial B_{2^{n+1}}\) with \(T_{k}(\gamma)>\sum_{i=1}^{9}\mathsf{T}_{i}\). We claim that we can find a new path \(\tilde{\gamma}:0\to\partial B_{2^{n+1}}\) with passage time strictly smaller than \(T(\gamma)\), contradicting the optimality of \(\gamma\). The path \(\gamma\) must always cross \(\mathcal{C}_{1}\) and must also cross \(\mathcal{C}_{2}\) (when \(k<n-1\)) as it passes from \(0\) to \(\partial B_{2^{n+1}}\). Let \(x\in\mathcal{C}_{1}\) denote the first vertex that \(\gamma\) shares in common with \(\mathcal{C}_{1}\) when traversing along the vertices of \(\gamma\), beginning from the origin. Accordingly, let \(\gamma^{x}:0\to x\) denote this segment of \(\gamma\). Similarly, let \(y\in\mathcal{C}_{2}\) denote the last vertex that \(\gamma\) shares in common with \(\mathcal{C}_{2}\), and let \(\gamma^{y}:y\to\partial B_{2^{n+1}}\) denote the corresponding segment of \(\gamma\) (in the case \(k=n-1\) or \(k=n\), \(y\) is the last vertex of \(\gamma\), namely the unique vertex on \(\partial B_{2^{n+1}}\)). Let \(\gamma_{x,y}\) denote the segment of \(\gamma\) connecting \(x\) and \(y\).

Let \(w\) be the first point of \(\gamma\) on \(\partial B_{2^{k-1}}\), and let \(z\) be the last point of \(\gamma\) on \(\partial B_{2^{k+2}}\) (in the case \(k=n-1\) or \(k=n\), \(z\) is the last point of \(\gamma\)). Let \(\gamma_{w,z}\) denote the portion of \(\gamma\) connecting \(w\) and \(z\). Then, \(T_{k}(\gamma)\leq T(\gamma_{w,z})\), because all edges on \(\gamma\) that also lie in \(E_{k-1}\cup E_{k}\cup E_{k+1}\) must be contained in the portion of \(\gamma\) between \(w\) and \(z\). We claim now that \(\gamma_{w,z}\subseteq\gamma_{x,y}\), so that \(T_{k}(\gamma)\leq T(\gamma_{x,y})\). Equivalently, \(x\) precedes (or is equal to) \(w\), and \(z\) precedes (or is equal to) \(y\) in the path \(\gamma\) as it travels from \(0\) to \(\partial B_{2^{n+1}}\). This follows because \(\mathcal{C}_{1}\subseteq(B_{2^{k-1}}\setminus B_{2^{k-2}})\cup\partial B_{2^{k-2}}\), so Lemma 6.1 implies that the Jordan curve defined by \(\partial B_{2^{k-1}}\) encloses \(\mathcal{C}_{1}\).

Figure 5. Because the weights are all nonnegative, the sum of passage values \(\mathsf{T}_{1}+\cdots+\mathsf{T}_{9}\) is greater than or equal to \(T_{k,n}^{\max}\), which is shown as the thick dotted line along the geodesic (thin dotted line).

Since these two curves
It follows that \(\gamma^{\prime\prime}\) is a self-avoiding path consisting of edges in \(\mathcal{C}_{1},\mathcal{C}_{2}\), and \(\gamma^{\prime}\), so by our assumption, \[T(\gamma^{\prime\prime})\leq\sum_{i=1}^{9}\mathsf{T}_{i}<T_{k}(\gamma).\] Define \(\tilde{\gamma}\) to be the concatenation of \(\gamma_{x},\gamma^{\prime\prime}\), and \(\gamma_{y}\) (ignoring the \(\gamma_{y}\) part in the case \(k\geq n-1\)). Then, \[T(\tilde{\gamma})=T(\gamma_{x})+T(\gamma^{\prime\prime})+T(\gamma_{y})<T( \gamma_{x})+T_{k}(\gamma)+T(\gamma_{y})\leq T(\gamma_{x})+T(\gamma_{x,y})+T( \gamma_{y})=T(\gamma),\] giving the desired contradiction, and thus proving (4.10). Then, by a simple union bound, \[\mathbb{P}\Big{(}T_{k,n}^{\max}\geq\lambda\Big{(}\frac{2^{k}}{L(p)}\Big{)}^{2 }F^{-1}(p)\Big{)}\leq\sum_{i=1}^{9}\Big{(}\mathsf{T}_{i}\geq\frac{\lambda}{9} \Big{(}\frac{2^{k}}{L(p)}\Big{)}^{2}F^{-1}(p)\Big{)},\] and the result now follows from Lemma 4.2. **Corollary 4.4**.: _Let \(F\) be the distribution funciton of a nonnegative random variable with \(F(0)=\frac{1}{2}\). There exists \(c,C>0\) so that for for all \(0\leq k\leq n\) and all \(1\leq j\leq 2^{k}\),_ \[\mathbb{P}\Big{(}T_{k,n}^{\max}\geq j^{3}F^{-1}(p_{\lfloor 2^{k}/j\rfloor}) \Big{)}\leq Ce^{-cj}.\] Proof.: The result can be obtained by first making the substitutions \(p=p_{\lfloor 2^{k}/j\rfloor}\) and \(\lambda=j\) into the inequality given as the result of Lemma 4.3. Then one can use the bounds \(cR\leq L(p_{R})\leq R\) from (4.3) to obtain the desired result. **Lemma 4.5**.: _Assume (4.1). There exists \(r,a,c_{1},c_{2},C_{1}>0\) so that for all \(j,k\) satisfying \(2\leq j\leq c_{1}2^{c_{2}k}\) we have_ \[\mathbb{P}\Big{(}T_{k,n}^{\max}\geq F^{-1}(p_{\lfloor 2^{k}/j^{r}\rfloor}) \Big{)}\leq C_{1}e^{-aj}.\] Proof.: To prove the result we show that there exists \(r,c_{1},c_{2}>0\) so that for all \(k\) satisfying \(2\leq j\leq c_{1}2^{c_{2}k}\) we have \[F^{-1}(p_{\lfloor 2^{k}/j^{r}\rfloor})\geq j^{3}F^{-1}(p_{\lfloor 2^{k}/j \rfloor})\] Then the result follows from Corollary 4.4. In fact, our choice of constants will be so that \(c_{1},c_{2}<1\) and \(c_{2}r=1\) so that the condition \(j\leq c_{1}2^{c_{2}k}\) implies \(j^{r}\leq 2^{k}\), and \(p_{\lfloor 2^{k}/j^{r}\rfloor}\) is well-defined. By assumption (4.1), there exists \(L>1\) and \(N>0\) so that, for any \(m\geq N\), we have \(F^{-1}(p_{2^{m-1}})/F^{-1}(p_{2^{m}})\geq L\). Then for non-negative integers \(m\) and \(\ell\) satisfying \(m-\ell\geq N\), iterating this inequality yields \[F^{-1}(p_{2^{m-\ell}})/F^{-1}(p_{2^{m}})\geq L^{\ell}. \tag{4.11}\] We fix a large \(\alpha>0\) to be specified later, and choose the constants \(c_{1},c_{2}\) by \[c_{2}=\frac{1}{1+\alpha},\qquad-\frac{1}{c_{2}}\log_{2}c_{1}=N.\] With these choices of constants, whenever \(j\leq c_{1}2^{c_{2}k}\), we have \[k\geq\frac{\log_{2}j-\log_{2}c_{1}}{c_{2}}=\log_{2}j+\log_{2}j^{\alpha}+N\geq \lfloor\log_{2}j\rfloor+\lfloor\log_{2}j^{\alpha}\rfloor+N.\] Now, apply (4.11) with \(m=k-\lfloor\log_{2}j\rfloor\) and \(\ell=\lfloor\log_{2}j^{\alpha}\rfloor\), along with the simple estimate that \(x/2\leq\lfloor x\rfloor\leq x\) and Equation (4.3), which allows us to remove the floor function below. This yields \[F^{-1}(p_{\lfloor 2^{k}/j^{1+\alpha}\rfloor})\geq j^{\frac{\log_{2}L}{2}\alpha}F^ {-1}(p_{\lfloor 2^{k}/j\rfloor}).\] Now, choose \(\alpha\) so that \(j^{\frac{\log_{2}L}{2}\alpha}\geq j^{3}\) for all \(j\geq 2\). The result follows with \(r=1+\alpha\). 
We now prove a result of a similar type as Lemma 4.5, this time assuming (4.2).

**Lemma 4.6**.: _Assume (4.2). There exist constants \(c,a,C,u>0\) so that, for all \(n\geq 0\), \(0\leq k\leq n\), and \(1\leq\lambda\leq c8^{k}\),_
\[\mathbb{P}(T_{k,n}^{\max}\geq\lambda F^{-1}(p_{2^{k}}))\leq Ce^{-a\lambda^{u}}.\]

Proof.: By the assumption (4.2), as well as the fact that \(F\) is nondecreasing, there exist \(\rho>0\) and \(N\geq 1\) so that, for all \(k\geq N\),
\[\frac{F^{-1}(p_{2^{k+1}})}{F^{-1}(p_{2^{k}})}\geq\frac{F^{-1}(p_{2^{2k}})}{F^{-1}(p_{2^{k}})}\geq\rho.\]
Then, as long as \(1\leq\lambda\leq 2^{k-N}\), we have \(k-\lfloor\log_{2}\lambda\rfloor\geq N\), and
\[F^{-1}(p_{\lfloor 2^{k}/\lambda\rfloor})/F^{-1}(p_{2^{k}})\leq F^{-1}(p_{2^{k-\lfloor\log_{2}\lambda\rfloor}})/F^{-1}(p_{2^{k}})\leq 1/\rho^{\lfloor\log_{2}\lambda\rfloor}\leq\widetilde{C}\lambda^{s} \tag{4.12}\]
for constants \(\widetilde{C},s>0\). Then, using (4.12) followed by Corollary 4.4,
\[\mathbb{P}\Big{(}T_{k,n}^{\max}\geq\lambda^{3+s}F^{-1}(p_{2^{k}})\Big{)}\leq\mathbb{P}\Big{(}T_{k,n}^{\max}\geq(\widetilde{C}^{-1/3}\lambda)^{3}F^{-1}(p_{\lfloor 2^{k}/\lambda\rfloor})\Big{)}\leq Ce^{-a\lambda},\]
for constants \(C,a>0\). Replacing \(\lambda\) with \(\lambda^{1/(3+s)}\), we have
\[\mathbb{P}(T_{k,n}^{\max}\geq\lambda F^{-1}(p_{2^{k}}))\leq Ce^{-a\lambda^{1/(3+s)}},\]
and this holds so long as \(1\leq\lambda^{1/(3+s)}\leq 2^{k-N}\), which holds when \(1\leq\lambda\leq 2^{-N(3+s)}8^{k}\).

### Bounding the number of closed edges of a geodesic between annuli

For \(n\geq 1\), define
\[R(n)=(B_{2^{n+1}}\setminus B_{2^{n}})\cup\partial B_{2^{n}},\qquad\text{and}\qquad S(n)=(B_{2^{n+2}}\setminus B_{2^{n-1}})\cup\partial B_{2^{n-1}}.\]
We also define
\[S^{\prime}(n)=(B_{2^{n+1}}\setminus B_{2^{n-1}})\cup\partial B_{2^{n-1}}.\]
For an edge \(e\in E(\mathbb{Z}^{2})\) with both endpoints in \(R(n)\) and \(p>p_{\rm c}\), let \(A_{n}(p,e)\) be the event that all of the following occur:
1. \(U_{e}\in(p_{\rm c},p]\),
2. there are two vertex-disjoint \(p\)-open paths from \(e\) to \(\partial B_{2^{n-1}}(e)\) that remain in \(S(n)\), and
3. there are two vertex-disjoint \(p_{\rm c}\)-closed dual paths from \(e^{\star}\) to \(\partial B_{2^{n}}(e)\) (disjoint from the \(p\)-open paths above).
We also define events \(A^{\prime}_{n}(p,e)\) similarly, the only differences being that there is only one \(p\)-open path, and that this path, along with the two \(p_{\mathrm{c}}\)-closed paths, is required to remain inside \(B_{2^{n+1}}\). Define
\[N_{n}(p)=\sum_{e\subset R(n)}\mathbf{1}_{A_{n}(p,e)}\qquad\text{and}\qquad N^{\prime}_{n}(p)=\sum_{e\subset R(n)}\mathbf{1}_{A^{\prime}_{n}(p,e)}.\]
We borrow the following result from [7, Lem. 2.2], whose original idea goes back to the work of Kiss [17]. The only substantive difference is that our arm events are defined inside annuli while their arm events are defined inside rectangles. We also note that this lemma is a general result for critical two-dimensional percolation (i.e., not depending on the edge weights \(t_{e}\)) because it only makes reference to the uniform random variables \(U_{e}\).

**Lemma 4.7**.: _[_7_, Lem.
2.2]_ _There exists \(C>0\) such that for all \(n\geq 1\) and \(p>p_{\mathrm{c}}\) with \(L(p)\leq 2^{n}\), and all \(\lambda\geq 0\),_ \[\mathbb{P}\Big{(}N_{n}(p)\geq\lambda\Big{(}\frac{2^{n}}{L(p)}\Big{)}^{2}\Big{)} \leq e^{-C\lambda},\quad\text{and}\quad\mathbb{P}\Big{(}N^{\prime}_{n}(p)\geq \lambda\Big{(}\frac{2^{n}}{L(p)}\Big{)}^{2}\Big{)}\leq e^{-C\lambda}.\] As before, for \(k\geq 0\), let \(E_{k}\) be the set of edges with both endpoints in the set \(B_{2^{k+1}}\backslash B_{2^{k}}\cup\partial B_{2^{k}}\), with the convention that \(E_{0}\) is the set of edges with both endpoints in \(B_{2}\). We will let \(\gamma_{n}\) be the geodesic between \(\mathcal{A}=\{0\}\) and \(\mathcal{B}=\partial B_{2^{n+1}}\) as constructed in Proposition 2.5. For \(0\leq k\leq n\), let \[D_{k,n}=\#\{e\in\gamma_{n}\cap E_{k}:t_{e}>0\}. \tag{4.13}\] Furthermore, for \(1\leq k\leq n\), let \[M_{k,n}=\begin{cases}\max\{U_{e}:e\in\gamma_{n}\cap(E_{k-1}\cup E_{k}\cup E_{ k+1})\}&k<n\\ \max\{U_{e}:e\in\gamma_{n}\cap(E_{n-1}\cup E_{n})\}&k=n.\end{cases} \tag{4.14}\] We now prove the following corollary to Lemma 4.7. **Corollary 4.8**.: _Assume (1.1). Then there exists a constant \(c^{\prime}>0\) (independent of the choice of edge-weight distribution) so that, for all \(n\in\mathbb{Z}_{\geq 0}\), \(1\leq k\leq n\), \(\lambda\geq 0\), and \(1\leq j\leq 2^{k}\),_ \[\mathbb{P}(D_{k,n}\geq\lambda,M_{k,n}\leq p_{\lfloor 2^{k}/j\rfloor})\leq e^{- c^{\prime}\lambda/j^{2}}.\] Proof.: The statement is trivial for \(\lambda=0\), so since \(D_{k,n}\) is an integer, we take \(\lambda\geq 1\) without loss of generality. To prove the corollary, we show that \[\{D_{k,n}\geq\lambda,M_{k,n}\leq p_{\lfloor 2^{k}/j\rfloor}\}\subseteq \begin{cases}\{N_{k}(p_{\lfloor 2^{k}/j\rfloor})\geq\lambda\}&k<n\\ \{N^{\prime}_{k}(p_{\lfloor 2^{k}/j\rfloor})\geq\lambda\}&k=n.\end{cases} \tag{4.15}\] Before proving (4.15), we show here why it is sufficient to prove the corollary. Using (4.3), which tells us that \(L(p_{R})\geq cR\) for some constant \(c>0\), we have \[\mathbb{P}(N_{k}(p_{\lfloor 2^{k}/j\rfloor})\geq\lambda) =\mathbb{P}\Big{(}N_{k}(p_{\lfloor 2^{k}/j\rfloor})\geq\Big{(} \frac{L(p_{\lfloor 2^{k}/j\rfloor})}{2^{k}}\Big{)}^{2}\lambda\Big{(}\frac{2^{k}}{L(p_{ \lfloor 2^{k}/j\rfloor})}\Big{)}^{2}\Big{)}\] \[\leq\mathbb{P}\Big{(}N_{k}(p_{\lfloor 2^{k}/j\rfloor})\geq\frac{c \lambda}{j^{2}}\Big{(}\frac{2^{k}}{L(p_{\lfloor 2^{k}/j\rfloor})}\Big{)}^{2}\Big{)},\] and the same holds with \(N_{k}\) replaced with \(N^{\prime}_{k}\). The corollary then follows directly from Lemma 4.7. With this observation, we return to proving (4.15). We first consider the case \(k<n\). Assume that \(D_{k,n}\geq\lambda\) and \(M_{k,n}\leq p_{\lfloor 2^{k}/j\rfloor}\). For each edge \(e\in\gamma_{n}\cap E_{k}\) with \(t_{e}>0\), we verify that the event \(A_{k}(p,e)\) occurs with \(p=p_{\lfloor 2^{k}/j\rfloor}\). It must be the case that \(U_{e}\leq p\) by the assumption \(M_{k,n}\leq p_{\lfloor 2^{k}/j\rfloor}\). This assumption also guarantees there are two disjoint \(p\)-open paths formed from the two portions of the geodesic \(\gamma_{n}\) traveling from the endpoints of \(e\) to the first time they meet \(\partial B_{2^{k-1}}\cup\partial B_{2^{k+2}}\). These paths each travel a distance at least \(2^{k-1}\) and therefore must meet \(\partial B_{2^{k-1}}(e)\).
Since \(t_{e}>0\), Proposition 2.5(x) implies that \(e^{\star}\) belongs to a dual closed circuit that encloses the origin, and the closed circuit does not intersect \(\gamma_{n}\) at any location other than \(e^{\star}\). Hence, there exist two \(p_{\text{c}}\)-closed dual arms to distance at least \(2^{k}\), disjoint from the two \(p\)-open arms, and \(A_{k}(p,e)\) occurs. Thus, since there are at least \(\lambda\) of these edges \(e\), \[N_{k}(p_{\lfloor 2^{k}/j\rfloor})=\sum_{e\subseteq R(k)}\mathbf{1}_{A_{k}(p_{ \lfloor 2^{k}/j\rfloor},e)}\geq\lambda.\] The case \(k=n\) is similar, this time working with the events \(A^{\prime}_{n}(p,e)\) instead of \(A_{n}(p,e)\). Since the geodesic \(\gamma_{n}\) is near the boundary \(\partial B_{2^{n+1}}\) for edges \(e\in E_{n}\), we consider the single \(p\)-open path starting backward from \(e\) and traveling until the first time it reaches \(\partial B_{2^{n-1}}\). This path must stay inside \(B_{2^{n+1}}\). The two \(p_{\text{c}}\)-closed paths are again formed by the closed circuit containing \(e^{\star}\), and the requirement that they stay inside \(B_{2^{n+1}}\) is met by the construction of the circuit in Proposition 2.5(x). Before the next proposition, we make the following elementary observation. **Lemma 4.9**.: _Let \(\chi\) have the binomial distribution with parameters \(N\in\mathbb{N}\) and \(p\in(0,1)\), and assume \(Np<1\). Then, for all \(\lambda>0\),_ \[\mathbb{P}(\chi\geq\lambda)\leq e\times(Np)^{\lambda}.\] Proof.: This is a straightforward calculation: \[\mathbb{P}(\chi\geq\lambda)=\sum_{k=\lceil\lambda\rceil}^{N}{N\choose k}p^{k}( 1-p)^{N-k}\leq\sum_{k=\lceil\lambda\rceil}^{N}\frac{N^{k}}{k!}p^{k}\leq e\times (Np)^{\lceil\lambda\rceil}\leq e\times(Np)^{\lambda}.\qed\] **Proposition 4.10**.: _Assume either (4.1) or (4.2). Then, there exist \(C,c,s>0\) so that_ \[\mathbb{P}(D_{k,n}\geq\lambda)\leq Ce^{-c\lambda^{s}}\] _for all \(\lambda\geq 0\) and \(0\leq k\leq n\)._ Proof.: We start by noting that \(D_{k,n}\) is trivially upper bounded by the size of \(E_{k}\), which is bounded by \(C^{\prime}4^{k}\) for some constant \(C^{\prime}\). Combined with the fact that \(D_{k,n}\) is an integer, we may take \(1\leq\lambda\leq C^{\prime}4^{k}\). Since the total number of edges in the first \(K_{0}\) annuli is bounded by a deterministic \(O(1)\) term, we may, without loss of generality, prove the result for all \(k\geq K_{0}\), for some \(K_{0}\geq 1\) to be determined later. We handle the cases of assuming (4.1) and (4.2) separately, with the number \(K_{0}\) being chosen differently in each case. **Case 1: Assuming (4.1).** Let \(r,a,c_{1},c_{2},C_{1}\) be chosen as in Lemma 4.5, and fix \(\eta\in(0,c_{2})\). Observe that \[\begin{split}\mathbb{P}\Big{(}D_{k,n}\geq\lambda\Big{)}& =\mathbb{P}\Big{(}D_{k,n}\geq\lambda,M_{k,n}\leq p_{\lfloor 2^{k}/2^{r} \rfloor}\Big{)}\\ &+\sum_{j=2}^{\lfloor 2^{\eta k}\rfloor}\mathbb{P}\Big{(}D_{k,n} \geq\lambda,p_{\lfloor 2^{k}/j^{r}\rfloor}<M_{k,n}\leq p_{\lfloor 2^{k}/(j+1)^{r} \rfloor}\Big{)}\\ &+\mathbb{P}\Big{(}D_{k,n}\geq\lambda,M_{k,n}>p_{\lfloor 2^{k}/ \left(\lfloor 2^{\eta k}\rfloor+1\right)^{r}\rfloor}\Big{)}.\end{split} \tag{4.16}\] We bound each of the three terms in (4.16). By Corollary 4.8, there exists \(c>0\) so that \[\mathbb{P}\Big{(}D_{k,n}\geq\lambda,M_{k,n}\leq p_{\lfloor 2^{k}/2^{r} \rfloor}\Big{)}\leq e^{-c\lambda}.\] Since \(\eta<c_{2}\), there exists \(K_{0}\geq 0\) so that \((\lfloor 2^{\eta k}\rfloor+1)\leq c_{1}2^{c_{2}k}\) for all \(k\geq K_{0}\).
Next, recalling the definition of \(T_{k,n}^{\max}\) (4.9), if \(M_{k,n}>p\), then \(T_{k,n}^{\max}\geq F^{-1}(p)\). Then, by Lemma 4.5, for such \(k\), \[\begin{split}\mathbb{P}\Big{(}D_{k,n}\geq\lambda,M_{k,n}>p_{ \lfloor 2^{k}/\left(\lfloor 2^{\eta k}\rfloor+1\right)^{r}\rfloor}\Big{)}& \leq\mathbb{P}\Big{(}M_{k,n}>p_{\lfloor 2^{k}/\left(\lfloor 2^{\eta k }\rfloor+1\right)^{r}\rfloor}\Big{)}\\ &\leq\mathbb{P}\Big{(}T_{k,n}^{\max}\geq F^{-1}(p_{\lfloor 2^{k} /\left(\lfloor 2^{\eta k}\rfloor+1\right)^{r}\rfloor})\Big{)}\\ &\leq C_{1}e^{-a(\lfloor 2^{\eta k}\rfloor+1)}\leq Ce^{-c \lambda^{u}},\end{split}\] where the last inequality holds because we have assumed that \(\lambda\leq C^{\prime}4^{k}\) at the beginning of the proof, and \(u\) is a sufficiently small fixed number. It remains to bound the middle term on the right-hand side of (4.16). Using Hölder's inequality, \[\begin{split}&\mathbb{P}\Big{(}D_{k,n}\geq\lambda,p_{\lfloor 2 ^{k}/j^{r}\rfloor}<M_{k,n}\leq p_{\lfloor 2^{k}/(j+1)^{r}\rfloor}\Big{)}\\ &\leq\mathbb{P}\Big{(}D_{k,n}\geq\lambda,M_{k,n}\leq p_{\lfloor 2 ^{k}/(j+1)^{r}\rfloor}\Big{)}^{1/2}\mathbb{P}\Big{(}M_{k,n}>p_{\lfloor 2^{k}/j^{r} \rfloor}\Big{)}^{1/2}.\end{split} \tag{4.17}\] By Corollary 4.8, we may choose \(c>0\) so that \[\mathbb{P}\Big{(}D_{k,n}\geq\lambda,M_{k,n}\leq p_{\lfloor 2^{k}/(j+1)^{r} \rfloor}\Big{)}\leq e^{-c\lambda/(j+1)^{2r}}. \tag{4.18}\] On the other hand, because \(j\leq\lfloor 2^{\eta k}\rfloor<c_{1}2^{c_{2}k}\), Lemma 4.5 implies \[\mathbb{P}\Big{(}M_{k,n}>p_{\lfloor 2^{k}/j^{r}\rfloor}\Big{)}\leq\mathbb{P} \Big{(}T_{k,n}^{\max}\geq F^{-1}(p_{\lfloor 2^{k}/j^{r}\rfloor})\Big{)}\leq Ce^{- cj}. \tag{4.19}\] We now consider two cases. In the first case, suppose that \(\lfloor\lambda^{\frac{1}{2r+1}}\rfloor\leq\lfloor 2^{\eta k}\rfloor\). Then, using (4.17)-(4.19), and adjusting the constants \(C,c\) as needed due to the square root, we obtain \[\begin{split}&\sum_{j=2}^{\lfloor 2^{\eta k}\rfloor}\mathbb{P} \Big{(}D_{k,n}\geq\lambda,p_{\lfloor 2^{k}/j^{r}\rfloor}<M_{k,n}\leq p_{ \lfloor 2^{k}/(j+1)^{r}\rfloor}\Big{)}\leq C\sum_{j=2}^{\lfloor 2^{\eta k} \rfloor}e^{-c\lambda/(j+1)^{2r}}e^{-cj}\\ &\leq C\sum_{j=2}^{\lfloor\lambda^{\frac{1}{2r+1}}\rfloor-1}e^{- c\lambda/(j+1)^{2r}}e^{-cj}+C\sum_{j=\lfloor\lambda^{\frac{1}{2r+1}}\rfloor}^{ \lfloor 2^{\eta k}\rfloor}e^{-c\lambda/(j+1)^{2r}}e^{-cj}.\end{split} \tag{4.20}\] In the first sum on the right in (4.20), we bound each summand by substituting the largest value of \(j\). In the second sum, we discard the first factor, which is bounded above by \(1\). Then, we obtain the following bound for (4.20) by adjusting the constants as needed: \[Ce^{-c\lambda^{\frac{1}{2r+1}}}\sum_{j=2}^{\lfloor\lambda^{\frac{1}{2r+1}} \rfloor-1}e^{-cj}+C\sum_{j=\lfloor\lambda^{\frac{1}{2r+1}}\rfloor}^{\lfloor 2 ^{\eta k}\rfloor}e^{-cj}\leq Ce^{-c\lambda^{\frac{1}{2r+1}}}.\] This gives the desired bound. In the second case where \(\lfloor\lambda^{\frac{1}{2r+1}}\rfloor>\lfloor 2^{\eta k}\rfloor\), we directly have that \[C\sum_{j=2}^{\lfloor 2^{\eta k}\rfloor}e^{-c\lambda/(j+1)^{2r}}e^{-cj}\leq Ce ^{-c\lambda/(\lfloor 2^{\eta k}\rfloor+1)^{2r}}\leq Ce^{-c\lambda/(\lfloor\lambda^{ \frac{1}{2r+1}}\rfloor+1)^{2r}}\leq Ce^{-\frac{c}{2^{2r}}\lambda^{\frac{1}{2 r+1}}},\] where the last inequality holds because \(\lfloor\lambda^{\frac{1}{2r+1}}\rfloor+1\leq 2\lambda^{\frac{1}{2r+1}}\) for all \(\lambda\geq 1\). **Case 2: Assuming (4.2).** Let \(\beta\) be a fixed nonnegative integer, to be chosen later.
Recall the coupling with the uniform random variables \(U_{e}\), where \(t_{e}=F^{-1}(U_{e})\). Then, \(t_{e}>0\) if and only if \(U_{e}>p_{\rm c}\). Define \[D^{1}_{k,n} =\#\{e\in\gamma_{n}\cap E_{k}:U_{e}\in(p_{\rm c},p_{2^{\beta k}})\},\] \[D^{2}_{k,n} =\#\{e\in\gamma_{n}\cap E_{k}:U_{e}\geq p_{2^{\beta k}}\}.\] Then, \[\mathbb{P}(D_{k,n}\geq\lambda)\leq\mathbb{P}(D^{1}_{k,n}\geq\lambda/2)+ \mathbb{P}(D^{2}_{k,n}\geq\lambda/2). \tag{4.21}\] We handle the two terms on the right-hand side of (4.21) separately. For the first term, we trivially bound \(D^{1}_{k,n}\) by the total number of edges \(e\in E_{k}\) with \(U_{e}\in(p_{\rm c},p_{2^{\beta k}})\), which has the binomial distribution with parameters \(N=|E_{k}|\leq C_{1}4^{k}\) and \(p=p_{2^{\beta k}}-p_{\rm c}\leq C_{2}2^{-\alpha\beta k}\) (by (4.6)), for constants \(C_{1},C_{2},\alpha>0\). Fix \(\delta>0\), and choose the integer \(\beta\geq 2\) sufficiently large so that \(1-\alpha\beta\leq-\delta\). By Lemma 4.9, there exists a constant \(C>0\) so that \[\mathbb{P}(D^{1}_{k,n}\geq\lambda/2)\leq Ce^{-\delta k\lambda/2}\leq Ce^{- \delta\lambda/2},\] as long as we choose \(k\geq K_{0}\geq 1\). This gives an appropriate bound on the first term in (4.21). For the second term, we first argue that the assumption (4.2) implies that for any integer \(\beta\geq 1\), \[\liminf_{n\to\infty}\frac{F^{-1}(p_{2^{\beta n}})}{F^{-1}(p_{2^{n}})}>0. \tag{4.22}\] By monotonicity, if (4.22) holds for some \(\beta\), then it holds for every smaller positive integer. Thus, it suffices to prove the statement for \(\beta\) of the form \(2^{j}\). Note that \[\liminf_{n\to\infty}\frac{F^{-1}(p_{2^{2^{j}n}})}{F^{-1}(p_{2^{n}})}=\liminf_ {n\to\infty}\prod_{i=1}^{j}\frac{F^{-1}(p_{2^{2^{i}n}})}{F^{-1}(p_{2^{2^{i-1}n}})}\geq \Big{(}\liminf_{m\to\infty}\frac{F^{-1}(p_{2^{2m}})}{F^{-1}(p_{2^{m}})} \Big{)}^{j}>0.\] Hence, there exists a constant \(C_{\beta}>0\) so that \[F^{-1}(p_{2^{\beta k}})\geq C_{\beta}F^{-1}(p_{2^{k}}) \tag{4.23}\] for all \(k\geq 1\). Next, we make the definition \[Z_{k,n}=\max_{\gamma\in\operatorname{Geo}(0,\partial B_{2^{n+1}})}\#\{e\in\gamma \cap E_{k}:F^{-1}(U_{e})\geq F^{-1}(p_{2^{\beta k}})\},\] and observe that \(D_{k,n}^{2}\leq Z_{k,n}\). Observe further that \(F^{-1}(p_{2^{\beta k}})Z_{k,n}\leq T_{k,n}^{\max}\). Then, by (4.23) and Lemma 4.6, whenever \(C_{\beta}\lambda/2\leq c^{\prime}8^{k}\) for some fixed constant \(c^{\prime}\), \[\mathbb{P}(D_{k,n}^{2}\geq\lambda/2) \leq\mathbb{P}(Z_{k,n}\geq\lambda/2)\leq\mathbb{P}(T_{k,n}^{ \max}\geq\lambda F^{-1}(p_{2^{\beta k}})/2)\] \[\leq\mathbb{P}(T_{k,n}^{\max}\geq(C_{\beta}\lambda/2)F^{-1}(p_{2 ^{k}}))\leq Ce^{-c\lambda^{s}}.\] This gives an upper bound for the second term in (4.21). If we choose \(K_{0}\) so that for all \(k\geq K_{0}\), \(\frac{C_{\beta}C^{\prime}}{2}4^{k}\leq c^{\prime}8^{k}\), then the condition \(C_{\beta}\lambda/2\leq c^{\prime}8^{k}\) is guaranteed by the assumption \(\lambda\leq C^{\prime}4^{k}\).

### Spacings within and between circuits

Recall the sets \(E_{k}\) of edges having both endpoints in \(B_{2^{k+1}}\setminus B_{2^{k}}\cup\partial B_{2^{k}}\) for \(k\geq 1\), and that \(E_{0}\) is the set of edges with both endpoints in \(B_{2}\).
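As an aside, the elementary volume bound \(|E_{k}|\leq C^{\prime}4^{k}\) invoked at the start of the proof of Proposition 4.10 is easy to verify numerically. The short Python sketch below does so under the convention \(B_{r}=[-r,r]^{2}\cap\mathbb{Z}^{2}\), an assumption made here for concreteness; the paper's normalization may differ by constants.

```python
# Numerical check of |E_k| <= C' * 4^k, assuming B_r = [-r, r]^2 in Z^2.
def in_annulus(v, k):
    """Is the vertex v in the k-th dyadic annulus (boundary included)?"""
    return 2 ** k <= max(abs(v[0]), abs(v[1])) <= 2 ** (k + 1)

def edge_count(k):
    """Count nearest-neighbor edges with both endpoints in the k-th annulus."""
    R, count = 2 ** (k + 1), 0
    for x in range(-R, R + 1):
        for y in range(-R, R + 1):
            for dx, dy in ((1, 0), (0, 1)):
                if in_annulus((x, y), k) and in_annulus((x + dx, y + dy), k):
                    count += 1
    return count

for k in range(1, 7):
    print(k, edge_count(k) / 4 ** k)  # the ratio stabilizes near a constant
```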
Next, for a circuit \(\mathcal{C}\), define \[\mathfrak{i}(\mathcal{C})=\min\{k\geq 0:E_{k}\cap\mathcal{C}\neq\emptyset\}, \qquad\mathfrak{o}(\mathcal{C})=\max\{k\geq 0:E_{k}\cap\mathcal{C}\neq \emptyset\}.\] In each of the following lemmas, we let \(\mathcal{I}_{1},\mathcal{I}_{2},\ldots\) be the sequence of successive disjoint innermost open circuits enclosing the origin. On the event \(\Omega_{\infty}\), this sequence is infinite. The rigorous construction of such sequences follows just as in Section 7, except that we only consider circuits surrounding \(\{0\}\), without a second reference set. Alternatively, one can construct this set of circuits directly using Proposition 2.5 by considering the two sets \(\mathcal{A}=\{0\}\) and \(\mathcal{B}=\partial B_{R}\), constructing the set of successive edge-disjoint innermost open circuits surrounding \(0\) but keeping \(\partial B_{R}\) in the exterior, then letting \(R\to\infty\) (note that there can be no circuits with \(\partial B_{R}\) in the interior and \(0\) in the exterior). In other words, for \(R\geq 1\), let \[Y_{R}=\sup\{j\geq 1:\mathcal{I}_{j}\subseteq B_{R}\}. \tag{4.24}\] Then, since we chose each circuit to be the successive innermost circuit, \(\mathcal{I}_{1},\ldots,\mathcal{I}_{Y_{R}}\) is the sequence from Proposition 2.5, with \(\mathcal{A}=\{0\}\) and \(\mathcal{B}=\partial B_{R}\). The results in this subsection make no assumptions on the edge-weight distribution. In fact, they are general statements about critical percolation on \(\mathbb{Z}^{2}\), without reference to a distribution function \(F\). **Lemma 4.11**.: _For \(j\geq 1\), \(\mathfrak{o}(\mathcal{I}_{j})\) (and therefore also \(\mathfrak{i}(\mathcal{I}_{j})\)) is stochastically dominated by the sum of \(j\) i.i.d. nondegenerate geometric random variables supported on \(\{1,2,\ldots\}\). In particular, for each \(k\geq 1\), there exists a constant \(C=C(k)\) so that \(\mathbb{E}[\mathfrak{i}(\mathcal{I}_{j})^{k}]\leq\mathbb{E}[\mathfrak{o}( \mathcal{I}_{j})^{k}]\leq Cj^{k}\) for all \(j\geq 1\)._ Proof.: Consider the sequence of disjoint annuli \(A_{k}=B_{2^{k+1}}\setminus B_{2^{k}}\) for \(k\geq 1\). The events \[\Omega_{k}=\{\exists\text{ an open circuit, all of whose vertices lie in }A_{k}\}\] are mutually independent, and their probabilities are bounded away from \(0\) and \(1\) by the RSW theorem. If \(\Omega_{k}\) occurs for some \(k\geq 1\), choose exactly one circuit \(\mathcal{C}\) in some measurable fashion, and note that \(\mathfrak{o}(\mathcal{C})=k\). Let \(\mathcal{C}_{1},\mathcal{C}_{2},\ldots\) be the infinite sequence of circuits constructed in this manner. Note that each \(\mathcal{C}_{j+1}\) encloses and is vertex-disjoint (and therefore also edge-disjoint) from \(\mathcal{C}_{j}\). Since \(\mathcal{I}_{1},\mathcal{I}_{2},\ldots\) was defined as the sequence of successive innermost open circuits, it follows by induction that each \(\mathcal{I}_{j}\) is enclosed by \(\mathcal{C}_{j}\), so \(\mathfrak{o}(\mathcal{C}_{j})\geq\mathfrak{o}(\mathcal{I}_{j})\). Now, the circuit \(\mathcal{C}_{j}\) lies entirely in a single annulus \(B_{2^{k+1}}\setminus B_{2^{k}}\) for some \(k\), and the value \(\mathfrak{o}(\mathcal{C}_{j})\) is equal to that \(k\). Consider the \(\{0,1\}\)-valued sequence \(\{\eta_{k}\}_{k\geq 1}\) defined by \(\eta_{k}=1\) if \(\Omega_{k}\) occurs, and \(\eta_{k}=0\) otherwise.
Then, \(\mathfrak{o}(\mathcal{C}_{1})=\inf\{k\geq 1:\eta_{k}=1\}\), and \(\mathfrak{o}(\mathcal{C}_{j+1})=\inf\{k>\mathfrak{o}(\mathcal{C}_{j}):\eta_{k}=1\}\) for \(j\geq 1\). Letting \(p=\inf_{j}\mathbb{P}(\Omega_{j})>0\), the independence of the events \(\Omega_{j}\) implies that \(\mathfrak{o}(\mathcal{C}_{j})\) is stochastically dominated by the sum of \(j\) i.i.d. geometric random variables with success probability \(p\). The bound on the moments follows immediately. **Lemma 4.12**.: _There exist \(C,c>0\) so that, for all \(j\geq 1\) and \(t\geq 0\),_ \[\mathbb{P}(\mathfrak{o}(\mathcal{I}_{j})-\mathfrak{i}(\mathcal{I}_{j})\geq t )\leq Cje^{-ct}.\] Proof.: When \(\mathfrak{i}(\mathcal{I}_{j})=k\), and \(\mathfrak{o}(\mathcal{I}_{j})\geq t+k\), there exists an open path from \(\partial B_{2^{k+1}}\) to \(\partial B_{2^{k+t}}\) formed by following a portion of \(\mathcal{I}_{j}\). We use \(\{\mathcal{A}\leftrightarrow\mathcal{B}\}\) to denote the event that \(\mathcal{A}\) is connected to \(\mathcal{B}\) by an open path. Then, \[\mathbb{P}(\mathfrak{o}(\mathcal{I}_{j})-\mathfrak{i}(\mathcal{I }_{j})\geq t) =\sum_{k=0}^{\infty}\mathbb{P}(\{\mathfrak{o}(\mathcal{I}_{j})- \mathfrak{i}(\mathcal{I}_{j})\geq t\}\cap\{\mathfrak{i}(\mathcal{I}_{j})=k\})\] \[\leq\sum_{k=0}^{\infty}\mathbb{P}(\{\partial B_{2^{k+1}}\leftrightarrow \partial B_{2^{k+t}}\}\cap\{\mathfrak{i}(\mathcal{I}_{j})\geq k\}).\] Observe that \(\{\mathfrak{i}(\mathcal{I}_{j})\geq k\}\) is a decreasing event (in the sense that adding more open edges cannot change the event from not occurring to occurring), while each \(\{\partial B_{2^{k+1}}\leftrightarrow\partial B_{2^{k+t}}\}\) is an increasing event. So by the FKG inequality, followed by a bound on the \(1\)-arm probability (Proposition 3.3), we have \[\mathbb{P}(\mathfrak{o}(\mathcal{I}_{j})-\mathfrak{i}(\mathcal{I}_{j})\geq t )\leq\sum_{k=0}^{\infty}\mathbb{P}(\partial B_{2^{k+1}}\leftrightarrow\partial B _{2^{k+t}})\mathbb{P}(\mathfrak{i}(\mathcal{I}_{j})\geq k)\leq Ce^{-ct} \mathbb{E}\mathfrak{i}(\mathcal{I}_{j}).\] The result now follows from Lemma 4.11. **Lemma 4.13**.: _There exist \(C,c>0\) so that, for all \(j\geq 1\) and \(t\geq 0\),_ \[\mathbb{P}(\mathfrak{i}(\mathcal{I}_{j+1})-\mathfrak{o}(\mathcal{I}_{j})\geq t )\leq Cje^{-ct}.\] Proof.: Let \(\{\mathcal{A}\stackrel{{\rm cd}}{{\leftrightarrow}}\mathcal{B}\}\) denote the event that there exists a closed dual connection from the set \(\mathcal{A}\) to the set \(\mathcal{B}\). By Proposition 2.5(vi), for each \(j\geq 1\), there exists a closed dual connection from \(\mathcal{I}_{j}\) to \(\mathcal{I}_{j+1}\). Hence, using a bound on the critical \(1\)-arm probability (Proposition 3.3) together with the Cauchy-Schwarz inequality, \[\mathbb{P}(\mathfrak{i}(\mathcal{I}_{j+1})-\mathfrak{o}(\mathcal{I}_{j})\geq t)=\sum_{k=0}^{\infty}\mathbb{P}(\{\mathfrak{i}(\mathcal{I}_{j+1})- \mathfrak{o}(\mathcal{I}_{j})\geq t\}\cap\{\mathfrak{o}(\mathcal{I}_{j})=k\}) \leq\sum_{k=0}^{\infty}\mathbb{P}(\{\partial B_{2^{k+1}}\stackrel{{ \rm cd}}{{\leftrightarrow}}\partial B_{2^{k+t}}\}\cap\{\mathfrak{o}( \mathcal{I}_{j})=k\})\] \[\leq Ce^{-ct}\Big{(}\sum_{k=0}^{\infty}(k+1)^{2}\mathbb{P}( \mathfrak{o}(\mathcal{I}_{j})=k)\Big{)}^{1/2}\Big{(}\sum_{k=0}^{\infty}\frac{ 1}{(k+1)^{2}}\Big{)}^{1/2}\leq Ce^{-ct}\sqrt{\mathbb{E}\mathfrak{o}(\mathcal{ I}_{j})^{2}}.\] The result now follows by Lemma 4.11.
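Before proceeding, we illustrate the renewal structure underlying Lemma 4.11 with a small simulation. If the annulus events \(\Omega_{k}\) were exactly i.i.d. with success probability \(p\), the index of the \(j\)-th success would be a sum of \(j\) i.i.d. geometric variables with mean \(j/p\); the Python sketch below (with illustrative parameters, not values from the paper) estimates this mean empirically.

```python
import random

# Simulation of the domination in Lemma 4.11: the index of the j-th
# success among i.i.d. Bernoulli(p) trials eta_1, eta_2, ... is a sum of
# j i.i.d. Geometric(p) variables on {1, 2, ...}. The true events Omega_k
# are only bounded below in probability, so this i.i.d. case is extremal.
random.seed(0)
p, j, trials = 0.4, 5, 20000

def jth_success_index(p, j):
    k, successes = 0, 0
    while successes < j:
        k += 1
        if random.random() < p:
            successes += 1
    return k

mean = sum(jth_success_index(p, j) for _ in range(trials)) / trials
print(f"empirical mean index: {mean:.2f}; exact value j/p = {j / p:.2f}")
```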
**Lemma 4.14**.: _There exist \(C,c>0\) so that, for all \(j\geq 0\) and \(t\geq 0\),_ \[\mathbb{P}(\mathfrak{o}(\mathcal{I}_{j+1})-\mathfrak{i}(\mathcal{I}_{j})\geq t) \leq Cje^{-ct}.\] Proof.: This is a corollary of Lemmas 4.12 and 4.13: \[\mathbb{P}(\mathfrak{o}(\mathcal{I}_{j+1})-\mathfrak{i}(\mathcal{ I}_{j})\geq t)\] \[\leq\mathbb{P}(\mathfrak{o}(\mathcal{I}_{j+1})-\mathfrak{i}( \mathcal{I}_{j+1})+\mathfrak{i}(\mathcal{I}_{j+1})-\mathfrak{o}(\mathcal{I}_ {j})+\mathfrak{o}(\mathcal{I}_{j})-\mathfrak{i}(\mathcal{I}_{j})\geq t)\] \[\leq\mathbb{P}(\mathfrak{o}(\mathcal{I}_{j+1})-\mathfrak{i}(\mathcal{I}_{j+1})\geq t/3)+\mathbb{P}(\mathfrak{i}(\mathcal{I}_{j+1})- \mathfrak{o}(\mathcal{I}_{j})\geq t/3)+\mathbb{P}(\mathfrak{o}(\mathcal{I}_ {j})-\mathfrak{i}(\mathcal{I}_{j})\geq t/3).\qed\] The next result estimates the distance between the last circuit inside the box and the boundary of the box of radius \(2^{n+1}\). **Lemma 4.15**.: _Recall the notation \(Y_{R}\) from (4.24). Then, there exist constants \(C,c>0\) so that, for all \(n\geq 1\) and \(t\geq 0\),_ \[\mathbb{P}(n-\mathfrak{o}(\mathcal{I}_{Y_{2^{n+1}}})\geq t)\leq Ce^{-ct}.\] Proof.: On the event \(n-\mathfrak{o}(\mathcal{I}_{Y_{2^{n+1}}})\geq t\), Proposition 2.5(vi) implies that there must be a closed crossing from \(B_{2^{n-t+1}}\) to \(\partial B_{2^{n+1}}\), and so the bound follows from a bound on the \(1\)-arm probability from Proposition 3.3. **Lemma 4.16**.: _There exist \(C,c>0\) so that, for all \(n\geq 1\) and \(t\geq 0\),_ \[\mathbb{P}(n-\mathfrak{i}(\mathcal{I}_{Y_{2^{n+1}}})\geq t)\leq Cn^{5/2}e^{-ct}.\] Proof.: \[\mathbb{P}(n-\mathfrak{i}(\mathcal{I}_{Y_{2^{n+1}}})\geq t) =\mathbb{P}(n-\mathfrak{o}(\mathcal{I}_{Y_{2^{n+1}}})+\mathfrak{o}( \mathcal{I}_{Y_{2^{n+1}}})-\mathfrak{i}(\mathcal{I}_{Y_{2^{n+1}}})\geq t)\] \[\leq\mathbb{P}(n-\mathfrak{o}(\mathcal{I}_{Y_{2^{n+1}}})\geq t/2) +\mathbb{P}(\mathfrak{o}(\mathcal{I}_{Y_{2^{n+1}}})-\mathfrak{i}(\mathcal{I}_ {Y_{2^{n+1}}})\geq t/2).\] The first term on the right-hand side is bounded by \(Ce^{-ct}\) by Lemma 4.15. Using Lemma 4.12 and Hölder's inequality, \[\mathbb{P}(\mathfrak{o}(\mathcal{I}_{Y_{2^{n+1}}})-\mathfrak{i}( \mathcal{I}_{Y_{2^{n+1}}})\geq t/2)=\sum_{k}\mathbb{P}(\mathfrak{o}(\mathcal{ I}_{k})-\mathfrak{i}(\mathcal{I}_{k})\geq t/2,Y_{2^{n+1}}=k)\] \[\leq C\sum_{k}\sqrt{k}e^{-ct}\sqrt{\mathbb{P}(Y_{2^{n+1}}=k)}=C \sum_{k}\frac{\sqrt{k}}{k^{2}}e^{-ct}k^{2}\sqrt{\mathbb{P}(Y_{2^{n+1}}=k)}\] \[\leq Ce^{-ct}\Big{(}\sum_{k}k^{4}\mathbb{P}(Y_{2^{n+1}}=k)\Big{)} ^{1/2}=Ce^{-ct}(\mathbb{E}Y_{2^{n+1}}^{4})^{1/2}.\] We conclude the proof by showing that \(\mathbb{E}Y_{2^{n+1}}^{4}\leq Cn^{5}\). Decompose \(Y_{2^{n+1}}\) as \(\sum_{i=0}^{n}V_{i}\), where \(V_{i}\) is the number of circuits \(\mathcal{I}_{j}\) in the sequence \(\mathcal{I}_{1},\dots,\mathcal{I}_{Y_{2^{n+1}}}\) with \(\mathfrak{o}(\mathcal{I}_{j})=i\). There is a deterministic constant \(C\) so that \((\sum_{i=0}^{n}x_{i})^{4}\leq Cn^{4}\sum_{i=0}^{n}x_{i}^{4}\) for all \(n\geq 1\) and nonnegative numbers \(x_{0},\dots,x_{n}\). Then, \[\mathbb{E}Y_{2^{n+1}}^{4}=\mathbb{E}\Big{(}\sum_{i=0}^{n}V_{i}\Big{)}^{4}\leq Cn ^{4}\sum_{i=0}^{n}\mathbb{E}V_{i}^{4}\leq Cn^{5},\] where the last inequality uses the fact that \(\mathbb{E}[V_{i}^{4}]\) is a bounded sequence, which we now prove.
Define the event \[A_{i}=\{\exists\text{ a closed circuit whose vertices lie in }B_{2^{i}}\setminus B_{2^{i-1}},\text{ and }\exists\text{ a closed dual left-right crossing of }[2^{i-1},2^{i+2}]\times[-2^{i-1},2^{i-1}]\}.\] By the FKG inequality and the RSW theorem, \(\mathbb{P}(A_{i})\geq\varepsilon\) for some constant \(\varepsilon>0\). On the event \(A_{i}\), there exists no open circuit \(\mathcal{C}\) with \(\mathfrak{o}(\mathcal{C})=i\). To see this, note that the closed circuit in \(A_{i}\), which we call \(\mathcal{D}\), must be enclosed by \(\mathcal{C}\) by Lemma 6.9. The closed dual left-right crossing of \([2^{i-1},2^{i+2}]\times[-2^{i-1},2^{i-1}]\) must cross from the interior of \(\mathcal{C}\) to the exterior and must therefore cross the open circuit \(\mathcal{C}\), a contradiction. Hence \[\mathbb{P}(\exists\text{ an open circuit }\mathcal{C}\text{ with }\mathfrak{o}( \mathcal{C})=i\text{ })\leq\mathbb{P}(A_{i}^{c})\leq 1-\varepsilon.\] Then, using the BK inequality, for all \(i\geq 0\), \[\mathbb{P}(V_{i}\geq k)\leq\mathbb{P}(\exists k\text{ edge-disjoint open circuits }\mathcal{C}\text{ with }\mathfrak{o}(\mathcal{C})=i\text{ })\leq(1-\varepsilon)^{k}, \tag{4.25}\] from which the uniform upper bound for \(\mathbb{E}[V_{i}^{4}]\) follows.

### Controlling the number of closed edges between circuits

We recall here again that \(\gamma_{n}\) is the geodesic from \(\mathcal{A}=\{0\}\) to \(\mathcal{B}=\partial B_{2^{n+1}}\) from Proposition 2.5. For any \(j\geq 1\), define \[X_{j}=\#\{e\in\gamma_{n}\cap E(\operatorname{int}(\mathcal{I}_{j+1})\cap \operatorname{ext}(\mathcal{I}_{j})):t_{e}>0\}, \tag{4.26}\] where \(E(\operatorname{int}(\mathcal{I}_{j+1})\cap\operatorname{ext}(\mathcal{I}_{j}))\) is the set of edges with at least one endpoint in the open set \(\operatorname{int}(\mathcal{I}_{j+1})\cap\operatorname{ext}(\mathcal{I}_{j})\). We define \(X_{0}\) similarly, interpreting \(\mathcal{I}_{0}\) to consist of the single point \(0\). **Proposition 4.17**.: _Assume either (4.1) or (4.2).
Then, there exist \(C,c,s>0\) so that, for all \(j\geq 0\) and \(\lambda\geq 0\),_ \[\mathbb{P}(X_{j}\geq\lambda)\leq Cj^{5/3}e^{-c\lambda^{s}}.\] Proof.: Using the consistency of the geodesics \(\gamma_{n}\) between open circuits (Proposition 2.5(viii)), \[\mathbb{P}(X_{j}\geq\lambda) \leq\mathbb{P}\Big{(}\sum_{k=\mathfrak{i}(\mathcal{I}_{j})}^ {\mathfrak{o}(\mathcal{I}_{j+1})}D_{k,\mathfrak{o}(\mathcal{I}_{j+1})}\geq \lambda\Big{)}\] \[=\sum_{a=0}^{\infty}\sum_{b=0}^{\infty}\mathbb{P}\Big{(}\sum_{k=a }^{a+b}D_{k,a+b}\geq\lambda,\mathfrak{i}(\mathcal{I}_{j})=a,\mathfrak{o}( \mathcal{I}_{j+1})-\mathfrak{i}(\mathcal{I}_{j})=b\Big{)}\] \[\leq\sum_{a=0}^{\infty}\sum_{b=0}^{\infty}\mathbb{P}\Big{(}\sum_{ k=a}^{a+b}D_{k,a+b}\geq\lambda\Big{)}^{1/3}\mathbb{P}\Big{(}\mathfrak{i}( \mathcal{I}_{j})=a\Big{)}^{1/3}\mathbb{P}\Big{(}\mathfrak{o}(\mathcal{I}_{j+ 1})-\mathfrak{i}(\mathcal{I}_{j})=b\Big{)}^{1/3}.\] Then by a union bound on the sum inside the probability combined with Proposition 4.10 and Lemma 4.14, the above is bounded from above by \[Cj^{1/3}\Big{(}\sum_{a=0}^{\infty}\mathbb{P}\Big{(}\mathfrak{i}( \mathcal{I}_{j})=a\Big{)}^{1/3}\Big{)}\sum_{b=0}^{\infty}(b+1)^{1/3}e^{-c_{0} \left(\frac{\lambda}{b+1}\right)^{s}}e^{-c_{1}b}\] \[\leq Cj^{1/3}\Big{(}\sum_{a=0}^{\infty}\mathbb{P}\Big{(} \mathfrak{i}(\mathcal{I}_{j})\geq a\Big{)}^{1/3}\Big{)}\sum_{b=0}^{\infty} e^{-c_{0}\left(\frac{\lambda}{b+1}\right)^{s}}e^{-c_{2}b}, \tag{4.27}\] where the constants changed from line to line. To bound the sum in \(a\), we use Lemma 4.11. Namely, \(\mathfrak{i}(\mathcal{I}_{j})\leq\mathfrak{o}(\mathcal{I}_{j})\) is stochastically dominated by the sum of \(j\) i.i.d. geometric random variables, which we denote by \(G_{1},\ldots,G_{j}\) (we avoid the letter \(X\) here, as it is already in use in (4.26)). Then, by a union bound, \[\mathbb{P}(\mathfrak{i}(\mathcal{I}_{j})\geq a)^{1/3}\leq\mathbb{P}\Big{(} \sum_{i=1}^{j}G_{i}\geq a\Big{)}^{1/3}\leq j^{1/3}\mathbb{P}(G_{1}\geq a/j)^{1 /3}\leq Cj^{1/3}e^{-ca/j},\] for constants \(c,C>0\). Summing \(e^{-ca/j}\) in \(a\) yields \((1-e^{-c/j})^{-1}\), which is bounded by a constant times \(j\), as can be seen from a Taylor expansion. Thus, (4.27) is bounded from above by \[Cj^{5/3}\sum_{b=0}^{\infty}e^{-c_{0}\left(\frac{\lambda}{b+1}\right)^{s}}e^{-c _{2}b}. \tag{4.28}\] Now we bound the final sum using a similar approach as in the proof of Proposition 4.10: \[\begin{split}&\sum_{b=0}^{\infty}e^{-c_{0}\left(\frac{\lambda}{b +1}\right)^{s}}e^{-c_{2}b}=\sum_{b=0}^{\lceil\sqrt{\lambda}\rceil-1}e^{-c_{0} \left(\frac{\lambda}{b+1}\right)^{s}}e^{-c_{2}b}+\sum_{b=\lceil\sqrt{\lambda} \rceil}^{\infty}e^{-c_{0}\left(\frac{\lambda}{b+1}\right)^{s}}e^{-c_{2}b}\\ &\leq\sum_{b=0}^{\lceil\sqrt{\lambda}\rceil-1}e^{-c_{0}\left( \frac{\lambda}{\lceil\sqrt{\lambda}\rceil}\right)^{s}}e^{-c_{2}b}+\sum_{b= \lceil\sqrt{\lambda}\rceil}^{\infty}e^{-c_{2}b}\leq e^{-c_{3}\lambda^{s/2}}. \end{split} \tag{4.29}\] Substituting into (4.28) completes the proof. We next define a random variable that controls the number of nonzero edges on a geodesic between the last open circuit \(\mathcal{I}_{Y_{2^{n+1}}}\) and \(\partial B_{2^{n+1}}\): \[X^{n}=\#\{e\in\gamma_{n}\cap E(B_{2^{n+1}}\cap\operatorname{ext}(\mathcal{I}_ {Y_{2^{n+1}}})):t_{e}>0\}. \tag{4.30}\] In what follows, we give a tail bound for the variable \(X^{n}\). **Proposition 4.18**.: _Assume either (4.1) or (4.2).
There exist \(C,c,s>0\) so that, for each \(n\geq 1\) and \(\lambda\geq 0\),_ \[\mathbb{P}(X^{n}\geq\lambda)\leq Cn^{5/4}e^{-c\lambda^{s}}.\] Proof.: Using Proposition 4.10 and Lemma 4.16, \[\begin{split}\mathbb{P}(X^{n}\geq\lambda)&\leq \mathbb{P}\Big{(}\sum_{k=\mathfrak{i}(\mathcal{I}_{Y_{2^{n+1}}})}^{n}D_{k,n} \geq\lambda\Big{)}=\sum_{a=0}^{n}\mathbb{P}\Big{(}\sum_{k=n-a}^{n}D_{k,n}\geq \lambda,n-\mathfrak{i}(\mathcal{I}_{Y_{2^{n+1}}})=a\Big{)}\\ &\leq\sum_{a=0}^{n}\mathbb{P}\Big{(}\sum_{k=n-a}^{n}D_{k,n}\geq \lambda\Big{)}^{1/2}\mathbb{P}(n-\mathfrak{i}(\mathcal{I}_{Y_{2^{n+1}}})=a)^{1 /2}\\ &\leq C\sum_{a=0}^{\infty}(a+1)^{1/2}e^{-c\left(\frac{\lambda}{a +1}\right)^{s}}n^{5/4}e^{-ca}\leq Cn^{5/4}e^{-c_{1}\lambda^{s/2}},\end{split}\] where the last line follows from the computation in (4.29). We conclude this section by proving our last main result. Proof of Theorem 1.4.: As \(\pi_{3}(cR)\asymp\pi_{3}(R)\) for any positive constant \(c\) (see Remark 3.2), it suffices to prove the statement when \(R\) is a power of \(2\). Setting \(R=2^{n+1}\), it suffices to prove the statement for the geodesic \(\gamma_{n}\) constructed from Proposition 2.5. In this formulation, the statement to be proven is that there exist constants \(c,C,s>0\) so that for all \(\lambda\geq 1\) and \(n\geq 0\), \[\mathbb{P}(|\gamma_{n}|\geq\lambda(2^{n+1})^{2+\varepsilon}\pi_{3}(2^{n+1})) \leq Ce^{-c(\log(\lambda)+n)^{s}}. \tag{4.31}\] Let \((\mathcal{I}_{i})_{i=1}^{Y_{2^{n+1}}}\) denote the collection of open circuits from Proposition 2.5 with \(\mathcal{A}=\{0\}\) and \(\mathcal{B}=\partial B_{2^{n+1}}\). We let \(\zeta_{n}\) denote the dual path from Proposition 2.5 that is closed except for those edges which cross one of the circuits \(\mathcal{I}_{j}\). From the construction, \(\mathcal{I}_{1},\ldots,\mathcal{I}_{Y_{2^{n+1}}}\) are the successive innermost edge-disjoint open circuits in the box \(B_{2^{n+1}}\), and these satisfy \(0\in\mathsf{int}(\mathcal{I}_{i})\subset\mathsf{int}(\mathcal{I}_{i+1})\) for all \(i=1,\ldots,Y_{2^{n+1}}-1\). Next, recall (4.26) and (4.30), and define \[\mathfrak{X}_{n}=\max\left\{X^{n},\max_{1\leq i<Y_{2^{n+1}}}X_{i}\right\}\] to serve as an upper bound on the number of nonzero edges a geodesic will take between two consecutive open circuits \(\mathcal{I}_{i}\) and \(\mathcal{I}_{i+1}\) or between the last open circuit \(\mathcal{I}_{Y_{2^{n+1}}}\) and \(\partial B_{2^{n+1}}\). Finally, set \(x_{n}^{\lambda}=(\log(\lambda)+n)^{1/3}\). We prove (4.31) by bounding \(\mathbb{P}\Big{(}|\gamma_{n}|\geq\lambda(2^{n+1})^{2+\varepsilon}\pi_{3}(2^{n +1}),\mathfrak{X}_{n}\leq x_{n}^{\lambda}\Big{)}\) and \(\mathbb{P}(\mathfrak{X}_{n}\geq x_{n}^{\lambda})\) separately. For an edge \(e\subseteq B_{2^{n+1}}\), let \(M=\min\{\operatorname{dist}(e,\{0\}),\operatorname{dist}(e,\partial B_{2^{n+1}})\}\). We first claim that for every such edge, \[\mathbb{P}(\{e\in\gamma_{n}\}\cap\{\mathfrak{X}_{n}\leq x_{n}^{\lambda}\}) \leq C\pi_{3}^{(x_{n}^{\lambda})}(M) \tag{4.32}\] where \(\pi_{3}^{(k)}\) is the probability of a polychromatic \(3\)-arm event to distance \(M\) with at most \(k\) "defects" along each of the arms. This means that there exist three disjoint arms to distance \(M\); two of the arms are primal paths which are open except at at most \(k\) edges each, and the other arm is a dual arm that is closed except at at most \(k\) edges. The key estimate that will be used is [21, Prop.
18], which gives \[\pi_{3}^{(k)}(M)\leq(C(1+\log M))^{k}\pi_{3}(M) \tag{4.33}\] for all \(k\geq 0\) and some constant \(C>0\). For the benefit of the reader, we note that [21, Prop. 18] is stated for a fixed \(k\), so that \(\pi_{3}^{(k)}(M)\leq C_{k}(\log M)^{k}\pi_{3}(M)\), but (4.33) follows from the proof, as there it is shown that \(C_{k}=C_{k-1}C^{\prime}\) for some constant \(C^{\prime}\). To show (4.32), we will split it into several cases. Let \(A_{1}\) denote the event that \(e\) is on a segment of \(\gamma_{n}\) that lies between two successive open circuits, between \(0\) and the first circuit \(\mathcal{I}_{1}\), or between \(\mathcal{I}_{Y_{2^{n+1}}}\) and \(\partial B_{2^{n+1}}\), and that there are no closed edges along this segment. Let \(A_{2}\) denote the event that \(e\) lies on one of the open circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{Y_{2^{n+1}}}\). Let us also define \(A=A_{1}\cup A_{2}\). First, on the event \(A^{c}\), we show there must be a closed path from \(e\) to distance \(M\). To simplify the notation, we will refer to \((0,0)\) and \(\partial B_{2^{n+1}}\) as open circuits. By Proposition 2.5(x), the dual edge \(e^{\star}\) lies on a closed circuit that encloses the origin, and this closed circuit does not intersect the geodesic \(\gamma_{n}\) at any other edge. Since the dual circuit encloses the origin, it reaches to distance \(M\). There are two additional disjoint arms obtained by following \(\gamma_{n}\) from each endpoint of \(e\) toward the two open circuits, then circling around the origin. These paths contain at most \(\mathfrak{X}_{n}\) many closed edges. Hence, \[\mathbb{P}(\{e\in\gamma_{n}\}\cap\{\mathfrak{X}_{n}\leq x_{n}^{\lambda}\} \cap A^{c})\leq C\pi_{3}^{(x_{n}^{\lambda})}(M).\] Next, for the event \(A_{1}\), the estimate below follows from exactly the same proof as that of (3.6): \[\mathbb{P}(\{e\in\gamma_{n}\}\cap\{\mathfrak{X}_{n}\leq x_{n}^{\lambda}\}\cap A_ {1})\leq C\pi_{3}(M).\] On the event \(A_{2}\), we get \[\mathbb{P}(\{e\in\gamma_{n}\}\cap\{\mathfrak{X}_{n}\leq x_{n}^{\lambda}\}\cap A _{2})\leq C\pi_{3}(M),\] which essentially follows from the proof of (3.7). Specifically, Proposition 2.5(v) ensures the existence of a dual path \(\zeta_{e}\) from \(e\) to \(0\) which consists of closed edges, except for the open edges where \(\zeta_{e}\) crosses each of the circuits \(\mathcal{I}_{j}\). Here we choose the subsequence of annuli \(\{A_{i_{j}}\}_{j=0}^{J}\) to cover the open edges along \(\zeta_{e}\) and the first defect encountered by \(\gamma_{n}\) going from \(e\) to \(\mathcal{B}\). Note that by Proposition 2.5(x), this first defect must be on a closed circuit, thus the argument is the same as case (2) from the proof of (3.7).
Letting \(E(B_{2^{n+1}})\) be the set of edges with at least one endpoint in the interior of \(B_{2^{n+1}}\), we have that \[\mathbb{P}\Big{(}|\gamma_{n}|\geq\lambda(2^{n+1})^{2+\varepsilon} \pi_{3}(2^{n+1}),\mathfrak{X}_{n}\leq x_{n}^{\lambda}\Big{)}\] \[\leq\mathbb{P}\Big{(}\sum_{e\in E(B_{2^{n+1}})}\mathds{1}(e\in \gamma_{n},\mathfrak{X}_{n}\leq x_{n}^{\lambda})\geq\lambda(2^{n+1})^{2+ \varepsilon}\pi_{3}(2^{n+1})\Big{)}\] \[\leq(\lambda(2^{n+1})^{2+\varepsilon}\pi_{3}(2^{n+1}))^{-1}\sum_ {e\in E(B_{2^{n+1}})}\mathbb{P}(e\in\gamma_{n},\mathfrak{X}_{n}\leq x_{n}^{ \lambda})\] \[=(\lambda(2^{n+1})^{2+\varepsilon}\pi_{3}(2^{n+1}))^{-1}\sum_{k=0 }^{2^{n+1}}\sum_{\begin{subarray}{c}e\in E(B_{2^{n+1}})\\ \mathrm{dist}(e,\{0\})\wedge\mathrm{dist}(e,\partial B_{2^{n+1}})=k\end{subarray}} \mathbb{P}(e\in\gamma_{n},\mathfrak{X}_{n}\leq x_{n}^{\lambda})\] \[\leq(\lambda(2^{n+1})^{2+\varepsilon}\pi_{3}(2^{n+1}))^{-1}C\sum_ {k=0}^{2^{n+1}}2^{n+1}\pi_{3}^{(x_{n}^{\lambda})}(k)\] \[\leq(\lambda(2^{n+1})^{2+\varepsilon}\pi_{3}(2^{n+1}))^{-1}(Cn)^{ x_{n}^{\lambda}}\sum_{k=1}^{2^{n+1}}2^{n+1}\pi_{3}(k)\qquad\text{(by (4.33))}\] \[\leq\frac{(Cn)^{x_{n}^{\lambda}}}{\lambda 2^{(n+1)\varepsilon}}= \frac{(e^{\log(Cn)})^{x_{n}^{\lambda}}}{\lambda 2^{(n+1)\varepsilon}}\leq\frac{e^{C(\log( \lambda)+n)^{2/3}}}{\lambda 2^{(n+1)\varepsilon}}\leq C^{\prime}\exp\Bigl{(}-c^{\prime}(\log( \lambda)+n)\Bigr{)}.\] In the sixth line, we also absorbed the \(k=0\) term into the \(k=1\) term, bounding \(1\) by a constant times \(\pi_{3}(1)\). On the other hand, we have \[\mathbb{P}(\mathfrak{X}_{n}\geq x_{n}^{\lambda})\leq\mathbb{P}\Big{(}\mathfrak{X}_{ n}\geq x_{n}^{\lambda},Y_{2^{n+1}}\leq(\log(\lambda)+n)^{2}\Big{)}+\mathbb{P}\Big{(}Y_{2^{n+1}} \geq(\log(\lambda)+n)^{2}\Big{)}. \tag{4.34}\] We bound each of the terms on the right-hand side of (4.34), starting with the latter. As in the proof of Lemma 4.16, decompose \(Y_{2^{n+1}}\) as \(\sum_{i=0}^{n}V_{i}\), where \(V_{i}\) is the number of circuits \(\mathcal{I}_{j}\) in the sequence \(\mathcal{I}_{1},\ldots,\mathcal{I}_{Y_{2^{n+1}}}\) with \(\mathfrak{o}(\mathcal{I}_{j})=i\). By a union bound and (4.25), \[\mathbb{P}(Y_{2^{n+1}}\geq(\log(\lambda)+n)^{2}) \leq\sum_{i=0}^{n}\mathbb{P}\Big{(}V_{i}\geq\frac{(\log(\lambda) +n)^{2}}{n}\Big{)}\] \[\leq n\exp\Bigl{(}-c\frac{(\log(\lambda)+n)^{2}}{n}\Bigr{)} \leq Ce^{-c(\log(\lambda)+n)},\] where \(C,c>0\) are constants changing from line to line. We decompose the other term in (4.34) as \[\mathbb{P}(\mathfrak{X}_{n}\geq x_{n}^{\lambda},Y_{2^{n+1}}\leq(\log(\lambda)+n) ^{2})\leq\mathbb{P}(\max_{1\leq i\leq Y_{2^{n+1}}}X_{i}\geq x_{n}^{\lambda},Y_{ 2^{n+1}}\leq(\log(\lambda)+n)^{2})+\mathbb{P}(X^{n}\geq x_{n}^{\lambda}).\] First, Proposition 4.18 implies that \[\mathbb{P}(X^{n}\geq x_{n}^{\lambda})\leq C_{1}n^{5/4}e^{-c_{1}(\log(\lambda)+ n)^{s}}\leq C_{1}^{\prime}e^{-c_{1}^{\prime}(\log(\lambda)+n)^{s^{\prime}}}\] for constants \(C_{1}^{\prime},c_{1}^{\prime},s^{\prime}>0\).
On the other hand, we use Proposition 4.17 to get, for constants \(C,c,s\) that may change from line to line, \[\mathbb{P}(\max_{1\leq i\leq Y_{2^{n+1}}}X_{i}\geq(\log(\lambda)+ n)^{1/3},Y_{2^{n+1}}\leq(\log(\lambda)+n)^{2})\] \[\leq\mathbb{P}(\max_{1\leq i\leq(\log(\lambda)+n)^{2}}X_{i}\geq( \log(\lambda)+n)^{1/3})\] \[\leq\sum_{1\leq i\leq(\log(\lambda)+n)^{2}}\mathbb{P}(X_{i}\geq( \log(\lambda)+n)^{1/3})\leq Ce^{-c(\log(\lambda)+n)^{s}}\sum_{1\leq i\leq(\log (\lambda)+n)^{2}}i^{5/3}\] \[\leq C(\log(\lambda)+n)^{11/3}e^{-c(\log(\lambda)+n)^{s}}\leq Ce^ {-c(\log(\lambda)+n)^{s}}.\qed\]

## 5. Kesten's separation results and consequences

In the remaining three sections of the paper, we develop the topological theory needed to rigorously prove Proposition 2.5. Kesten [13] developed a general theory of percolation on lattices embedded in \(\mathbb{R}^{d}\) and satisfying appropriate conditions. Here we review how general separation theorems proved in [13] apply to the square lattice. A _mosaic_ (Definition 2 in Section 2.2 of [13]) is a graph \(\mathcal{M}\) embedded in \(\mathbb{R}^{2}\) satisfying the following three conditions. Here, we consider each edge as a set of points in \(\mathbb{R}^{2}\). 1. \(\mathcal{M}\) has no loops. 2. All edges of \(\mathcal{M}\) are bounded, and every compact set of \(\mathbb{R}^{2}\) intersects only finitely many edges of \(\mathcal{M}\). 3. Any two distinct edges of \(\mathcal{M}\) are either disjoint or their intersection consists of a single vertex of the graph. An important fact, as noted by Kesten, is that any mosaic is a planar graph. A mosaic \(\mathcal{M}\) splits \(\mathbb{R}^{2}\) into connected components called faces. For a face \(F\) of \(\mathcal{M}\), to _close-pack_ \(F\) means to add an edge between any pair of vertices on the boundary of \(F\) that are not yet connected ([13], Definition 3 in Section 2.2). Given a subset \(\mathcal{F}\) of faces, let \(\mathcal{G}\) and \(\mathcal{G}^{\star}\) be the graphs obtained from \(\mathcal{M}\) by close-packing all faces in \(\mathcal{F}\) and not in \(\mathcal{F}\), respectively. Then \((\mathcal{G},\mathcal{G}^{\star})\) is called a _matching pair_; we consider a specific example in Lemma 5.3, which is illustrated in Figures 6 and 7. **Remark 5.1**.: As an example, let \(\mathcal{M}\) be the triangular lattice. Every face in \(\mathcal{M}\) is a triangle, so no matter the choice of \(\mathcal{F}\), the induced matching pair is \((\mathcal{M},\mathcal{M})\). As we will see, this self-similarity does not hold for the primal lattice. This is one reason why working with the primal lattice is technically more challenging than working with the triangular lattice. The following theorem relates site percolation on the graph \(\mathcal{G}\) to site percolation on \(\mathcal{G}^{\star}\). In site percolation, the open cluster of an open vertex \(v\) is the set of all vertices connected to \(v\) by a path all of whose vertices are open. **Theorem 5.2**.: _[_13_, Cor. 2.2]_ _Let \((\mathcal{G},\mathcal{G}^{\star})\) be a matching pair constructed from a mosaic \(\mathcal{M}\) and a set of faces \(\mathcal{F}\). Consider site percolation on the graph \(\mathcal{G}\). Also, assume the following._ 1. _There exists_ \(z<\infty\) _such that there are at most_ \(z\) _edges of_ \(\mathcal{G}\) _incident to any vertex of_ \(\mathcal{G}\)_._ 2. _All edges of_ \(\mathcal{G}\) _have finite diameter, and every compact set of_ \(\mathbb{R}^{2}\) _intersects only finitely many edges of_ \(\mathcal{G}\)_._ 3.
\(\mathcal{G}\) _is connected._ _Then, if \(W(v)\), the open cluster of the vertex \(v\), is nonempty and bounded, there exists a closed circuit \(J\) on \(\mathcal{G}^{\star}\) surrounding \(v\)._ In the setting of the present paper, instead of site percolation, we study edge percolation on \(\mathbb{Z}^{2}\). However, we may still apply Theorem 5.2 in the appropriate setting by noting that edge percolation on a graph \(\mathcal{G}\) is equivalent to site percolation on its _covering graph_ \(\widetilde{\mathcal{G}}\) (the vertex set of \(\widetilde{\mathcal{G}}\) is the set of midpoints of the edges of \(\mathcal{G}\), and two vertices in \(\widetilde{\mathcal{G}}\) are connected by an edge if the corresponding edges in \(\mathcal{G}\) share an endpoint). This observation was first made by Fisher and Essam [8, 9], and is detailed in [13, Sec. 2.5]. In the setting of the primal lattice \(\mathbb{Z}^{2}\), let \(\mathcal{M}\) be the mosaic defined as follows and depicted in Figure 6. We place a vertex of \(\mathcal{M}\) at the midpoint of each edge of the primal lattice; that is, points of the form \((m,n+\frac{1}{2})\) and \((m+\frac{1}{2},n)\) for integers \(m,n\). Each vertex \(v\) of \(\mathcal{M}\) has edges connecting to four other vertices, namely \(v\pm(\frac{1}{2},\frac{1}{2})\) and \(v\pm(\frac{1}{2},-\frac{1}{2})\). Let \(\mathcal{F}\) be the set of faces of \(\mathcal{M}\) that contain a point of \(\mathbb{Z}^{2}\), and let \((\widetilde{\mathcal{G}},\widetilde{\mathcal{G}}^{\star})\) be the associated matching pair, shown in Figure 7. Then, we have the following lemma. **Lemma 5.3**.: \(\widetilde{\mathcal{G}}\) _is the covering graph for the primal lattice, and \(\widetilde{\mathcal{G}}^{\star}\) is the covering graph for the dual lattice._ Proof.: We prove that \(\widetilde{\mathcal{G}}\) is the covering graph for the primal lattice, and the other statement follows analogously. It is helpful to refer to Figure 7. Both graphs have the same vertex set by definition. In \(\mathcal{M}\), each vertex \(v\) is connected by an edge to four other vertices. The corresponding edge \(e\) in the primal lattice shares an endpoint with the four edges corresponding to these vertices; they are the edges connected to \(e\) that are perpendicular to \(e\). The two additional edges that share an
Similarly, for a set \(W^{\star}\) of dual edges, the primal boundary of the open cluster is defined as the set of primal edges whose dual edge shares at least one endpoint with an edge of \(\operatorname{Bd}(\hat{e})\). We say that a connected set of open edges \(W\) is a complete open cluster if no other open edges in \(\mathbb{Z}^{2}\) are connected to an endpoint of an edge in \(W\) by an open path. **Lemma 5.5**.: _If \(W\) is a complete open (closed) cluster of primal edges, then \(\operatorname{Bd}^{\star}(W)\) consists entirely of dual closed (open) edges. Similarly, if \(W^{\star}\) is a closed (open) cluster of dual edges, then \(\operatorname{Bd}(W^{\star})\) consists entirely of primal open (closed) edges._ Proof.: Let \(W\) be an open cluster of primal edges, and assume to the contrary that \(\operatorname{Bd}^{\star}(W)\) has a dual open edge \(\hat{e}\). Then, the associate primal edge \(e\) is open, and by definition \(e\) is connected to an edge of \(W\). Hence, \(e\in W\), a contradiction to the definition of \(\operatorname{Bd}^{\star}(W)\). Let \(\mathcal{G}\) be a graph and \(\widetilde{\mathcal{G}}\) its associated covering graph. Let \(\widetilde{r}=(\widetilde{v}_{1},\widetilde{e}_{2},\ldots,\widetilde{e}_{k}, \widetilde{v}_{k})\) be a path on \(\widetilde{\mathcal{G}}\). For each \(i\), let \(e_{i}\) be the associated edge on \(\mathcal{G}\) corresponding to the vertex \(\widetilde{v}_{i}\) on \(\widetilde{\mathcal{G}}\). As noted in Comment (iii) in Section 2.5 of [13], for suitable choices \(v_{0}\) and \(v_{k}\) of endpoints of \(e_{1}\) and \(e_{k}\), \(r=(v_{0},e_{1},\ldots,e_{k},v_{k})\) is a path on \(\mathcal{G}\). However, \(r\) is not necessarily a self-avoiding path, even if \(\widetilde{r}\) is. Nevertheless, if \(\widetilde{r}\) is a circuit, then the associated path \(r\) on \(\mathcal{G}\) contains a circuit. Finally, with \(\phi:\mathcal{G}\to\widetilde{\mathcal{G}}\) denoting the map sending edges of \(\mathcal{G}\) to vertices of \(\widetilde{\mathcal{G}}\), and \(W(e)\) the open cluster of the edge \(e\) in \(\mathcal{G}\), [13, Prop. 3.1] gives \(\phi(W(e))=W(\phi(e))\). We therefore obtain the following lemma as a corollary of Theorem 5.2 and Lemma 5.3. See Figure 8 for clarity. **Lemma 5.6**.: _Let \(W\subseteq\mathbb{Z}^{2}\) be a collection of vertices that are each connected to each other by an open path and such that for \(v\notin W\), \(v\) is not connected to any vertex in \(W\) by an open path. Then, if the set \(W\) is bounded, there exists a closed circuit \(\mathcal{D}\) on the dual lattice that encloses \(W\). If \(W\) consists of more than a single point, then we can consider \(W\) as a complete open cluster of edges, and the circuit can be chosen to consist only of edges of \(\operatorname{Bd}^{\star}(W)\). By duality and symmetry, the same holds with the roles of the primal and dual lattices reversed and/or with the roles of "open" and "closed" reversed._ Proof.: If \(W\) consists of a single vertex \(v\), then there are no open edges incident to \(v\). Then, the dual edges to each of the four closed edges incident to \(v\) forms a dual closed circuit surrounding \(v\). Now, assume that \(W\) has more than one point so that we can consider it as a complete open cluster of edges. We perform a modification of the environment as follows. Set all edges whose dual does not lie on \(\operatorname{Bd}^{\star}(W)\) to open. We claim that in the modified environment, \(W\) is still a complete open cluster of edges. 
By definition, all primal edges \(e\) sharing an endpoint with an edge in \(W\) are either already in \(W\), or their dual lies in \(\operatorname{Bd}^{\star}(W)\). If the dual edge \(\hat{e}\) lies in \(\operatorname{Bd}^{\star}(W)\), then by Lemma 5.5, \(e\) is closed. Thus, a vertex outside \(W\) cannot connect to a vertex in \(W\) by an open path without taking a closed edge in \(\operatorname{Bd}^{\star}(W)\). Since \(W\) is still a complete open cluster, the remarks preceding Lemma 5.6 imply that, in the modified environment, there exists a closed dual circuit surrounding \(W\). But these edges are also closed in the original environment. **Lemma 5.7**.: _Let \(\mathcal{D}\) be a closed circuit. Assume that there exist two disjoint connected sets \(\mathcal{A}\) and \(\mathcal{B}\) of dual vertices lying in \(\operatorname{int}(\mathcal{D})\) such that all edges connecting vertices of \(\mathcal{A}\) are closed, and similarly for edges connecting vertices of \(\mathcal{B}\). Let \(W_{\mathcal{A}}\) be the cluster of dual vertices connected by a closed dual path to \(\mathcal{A}\), and let \(W_{\mathcal{B}}\) be the cluster of dual vertices connected by a closed dual path to \(\mathcal{B}\). Then, if \(W_{\mathcal{A}}\neq W_{\mathcal{B}}\), there exists an open circuit \(\mathcal{C}\) contained in \(\operatorname{int}(\mathcal{D})\) that contains \(W_{\mathcal{A}}\) in its interior and \(W_{\mathcal{B}}\) in its exterior, or vice versa. By duality, this lemma also holds by interchanging the roles of open/closed and simultaneously interchanging the roles of primal/dual._

Figure 8. On the left is a picture of an open cluster of open edges in \(\mathcal{G}\) (blue/dark). On the right is the associated cluster of open vertices in \(\widetilde{\mathcal{G}}\), connected by a blue/dark path. By Theorem 5.2, there exists a closed circuit in \(\widetilde{\mathcal{G}}^{\star}\) surrounding this open cluster. This circuit of closed vertices is shown on the right (red/light). The set of edges on the left associated to the closed vertices on the right is not a circuit, but contains a circuit surrounding the original open cluster of edges.

Figure 9 gives an illustration of Lemma 5.7. Proof.: We modify the environment so that all dual edges with at least one endpoint in \(\mathsf{ext}(\mathcal{D})\) (along with their primal counterparts) are set to closed. Let \(W_{\mathcal{D}}\) be the infinite cluster of points connected to \(\mathcal{D}\) by a closed dual path in this modified environment, and adjust \(W_{\mathcal{A}}\) and \(W_{\mathcal{B}}\) to include these dual edges in the case that one of the clusters connects to \(\mathcal{D}\). We note that both clusters cannot connect to \(\mathcal{D}\), because then there would be a closed dual path from \(\mathcal{A}\) to \(\mathcal{D}\), then from \(\mathcal{D}\) to \(\mathcal{B}\), a contradiction. We consider two cases. **Case 1:** \(W_{\mathcal{A}}=W_{\mathcal{D}}\) or \(W_{\mathcal{B}}=W_{\mathcal{D}}\). Without loss of generality, say \(W_{\mathcal{B}}=W_{\mathcal{D}}\). Then, \(W_{\mathcal{A}}\) must be bounded because it cannot cross \(\mathcal{D}\). Lemma 5.6 implies that there exists an open circuit \(\mathcal{C}\) containing \(W_{\mathcal{A}}\) in its interior. Vertices in \(W_{\mathcal{B}}\) cannot lie on the circuit \(\mathcal{C}\) because \(\mathcal{C}\) is open (and therefore primal). \(W_{\mathcal{B}}\) contains points in the exterior of \(\mathcal{C}\) because \(W_{\mathcal{B}}=W_{\mathcal{D}}\) is unbounded by assumption.
Furthermore, \(W_{\mathcal{B}}\) cannot contain points in the interior of \(\mathcal{C}\); otherwise, the closed dual path from the points of \(W_{\mathcal{B}}\) in \(\mathsf{ext}(\mathcal{C})\) to the points of \(W_{\mathcal{B}}\) in \(\mathsf{int}(\mathcal{C})\) would cross \(\mathcal{C}\), a contradiction because \(\mathcal{C}\) is open. Hence, \(\mathcal{C}\) contains \(W_{\mathcal{A}}\) in its interior and \(W_{\mathcal{B}}\) in its exterior. **Case 2:** \(W_{\mathcal{B}}\neq W_{\mathcal{D}}\) and \(W_{\mathcal{A}}\neq W_{\mathcal{D}}\). We modify the environment further to obtain the previous case. We first argue that there exists an infinite closed dual path from either \(W_{\mathcal{A}}\) or \(W_{\mathcal{B}}\) that avoids the other. To see this, take any infinite dual path from \(W_{\mathcal{A}}\). There is a last vertex in \(W_{\mathcal{A}}\cup W_{\mathcal{B}}\): the portion of the path starting from that vertex is the desired path. Set all the edges on this path to closed. Thus, we have reduced to the previous case. Since edges are only being changed to closed in this modified environment, the open circuit we obtain is still open in the original environment. For the benefit of Section 6, we note that Theorem 5.2 and Lemmas 5.6 and 5.7 are deterministic statements that hold for any given configuration on the graphs.

Figure 9. Two disjoint clusters of closed edges, \(W_{\mathcal{A}}\) and \(W_{\mathcal{B}}\), which lie inside a circuit \(\mathcal{D}\) and are separated by an open circuit \(\mathcal{C}\) (blue/thin).

## 6. Topological details concerning circuits and paths

Here we provide rigorous justification of certain topological constructions used throughout the paper. The first four lemmas will be applications of the Jordan-Schoenflies theorem, which says that for every Jordan curve \(\mathcal{J}\subset\mathbb{R}^{2}\), there is a homeomorphism \(f\colon\mathbb{R}^{2}\to\mathbb{R}^{2}\) such that \(f(\mathcal{J})\) is the unit circle. In particular, because homeomorphisms preserve path-connectedness, the sets \(\mathsf{int}(\mathcal{J})\) and \(\mathsf{ext}(\mathcal{J})\) are separately path-connected, and for every \(z\in\mathcal{J}\), the set \(\{z\}\cup\mathsf{int}(\mathcal{J})\cup\mathsf{ext}(\mathcal{J})\) is path-connected. Recall our terminology that \(\mathcal{J}_{1}\) is _enclosed_ by \(\mathcal{J}_{2}\) if \(\mathsf{int}(\mathcal{J}_{1})\subseteq\mathsf{int}(\mathcal{J}_{2})\). **Lemma 6.1**.: _Let \(\mathcal{J}_{1},\mathcal{J}_{2}\) be two Jordan curves so that \(\mathcal{J}_{1}\subseteq\mathcal{J}_{2}\cup\mathsf{int}(\mathcal{J}_{2})\). Then, \(\mathcal{J}_{2}\) encloses \(\mathcal{J}_{1}\)._ Proof.: Suppose not. Then, there exists \(x\in\mathsf{int}(\mathcal{J}_{1})\cap\mathsf{int}(\mathcal{J}_{2})^{c}\), and there exists an infinite path from \(x\) that stays entirely in \(\mathsf{ext}(\mathcal{J}_{2})\) (except for possibly the initial point). Since \(x\in\mathsf{int}(\mathcal{J}_{1})\), this path must cross \(\mathcal{J}_{1}\subseteq\mathcal{J}_{2}\cup\mathsf{int}(\mathcal{J}_{2})\), a contradiction. **Lemma 6.2**.: _Let \(\mathcal{J}_{1}\) and \(\mathcal{J}_{2}\) be Jordan curves, neither of which encloses the other. If \(\mathsf{int}(\mathcal{J}_{1})\cap\mathsf{int}(\mathcal{J}_{2})\) is nonempty, then the following statements hold:_ 1. _Both_ \(\mathsf{int}(\mathcal{J}_{1})\cap\mathcal{J}_{2}\) _and_ \(\mathsf{ext}(\mathcal{J}_{1})\cap\mathcal{J}_{2}\) _are nonempty._ 2.
\(\mathcal{J}_{1}\cap\mathcal{J}_{2}\) _contains at least two distinct points._ Proof.: By assumption, we can find \(x_{1}\in\mathsf{int}(\mathcal{J}_{1})\setminus\mathsf{int}(\mathcal{J}_{2})\), \(x_{2}\in\mathsf{int}(\mathcal{J}_{2})\setminus\mathsf{int}(\mathcal{J}_{1})\), and \(y\in\mathsf{int}(\mathcal{J}_{1})\cap\mathsf{int}(\mathcal{J}_{2})\). Since \(x_{1}\) and \(y\) both belong to \(\mathsf{int}(\mathcal{J}_{1})\), there is a continuous curve in the plane starting at \(y\), ending at \(x_{1}\), and remaining entirely in \(\mathsf{int}(\mathcal{J}_{1})\). But since \(y\) belongs to \(\mathsf{int}(\mathcal{J}_{2})\) while \(x_{1}\) does not, this curve must contain a point lying on \(\mathcal{J}_{2}\). On the other hand, because \(x_{2}\notin\mathsf{int}(\mathcal{J}_{1})\), there is a continuous curve in the plane starting at \(x_{2}\), extending to infinity, and remaining entirely in \(\mathsf{ext}(\mathcal{J}_{1})\cup\{x_{2}\}\). Given that \(x_{2}\in\mathsf{int}(\mathcal{J}_{2})\), this curve must intersect \(\mathcal{J}_{2}\), hence \(\mathcal{J}_{2}\cap\mathsf{ext}(\mathcal{J}_{1})\) is nonempty. We have now seen that \(\mathcal{J}_{2}\) intersects both \(\mathsf{int}(\mathcal{J}_{1})\) and \(\mathsf{ext}(\mathcal{J}_{1})\), thereby proving part 1. Now take some \(z_{\mathrm{int}}\in\mathcal{J}_{2}\cap\mathsf{int}(\mathcal{J}_{1})\) and \(z_{\mathrm{ext}}\in\mathcal{J}_{2}\cap\mathsf{ext}(\mathcal{J}_{1})\). Any continuous curve between \(z_{\mathrm{int}}\) and \(z_{\mathrm{ext}}\) must intersect \(\mathcal{J}_{1}\), and by definition \(\mathcal{J}_{2}\) offers two such curves which are disjoint (except at \(z_{\mathrm{int}}\) and \(z_{\mathrm{ext}}\), of course). This observation implies part 2. In the following results, \(\mathsf{Ball}_{\varepsilon}(z)\) denotes the open ball centered at \(z\in\mathbb{R}^{2}\) with radius \(\varepsilon>0\). **Lemma 6.3**.: _Let \(\mathcal{J}\) be a Jordan curve. Consider two adjacent boxes on the square lattice, and let \(e\) be the edge connecting them. Assume that all of \(e\) lies on \(\mathcal{J}\), and further assume that \(\mathsf{int}(\mathcal{J})\) has empty intersection with the interiors of each of the two boxes. Then, exactly one of the dual neighbors of \(e\) lies in \(\mathsf{int}(\mathcal{J})\), and the other dual neighbor lies in \(\mathsf{ext}(\mathcal{J})\)._ Proof.: For such an edge \(e\), let \(z\) be its center point. It is immediate that the set \(\mathsf{Ball}_{1/4}(z)\setminus\mathcal{J}\) has two connected components. \(\mathcal{J}\) is the boundary of both its interior and its exterior, so any open set intersecting \(\mathcal{J}\) has nonempty intersection with both \(\mathsf{int}(\mathcal{J})\) and \(\mathsf{ext}(\mathcal{J})\). Hence, one of the components of \(\mathsf{Ball}_{1/4}(z)\setminus\mathcal{J}\) must be contained in \(\mathsf{int}(\mathcal{J})\) and the other is contained in \(\mathsf{ext}(\mathcal{J})\). Let \(z_{1}^{\star}\) and \(z_{2}^{\star}\) be the dual neighbors of \(e\). The dual edge \(e^{\star}\) connects \(z_{1}^{\star}\) to one of these connected components without crossing \(\mathcal{J}\), and also connects \(z_{2}^{\star}\) to the other connected component without crossing \(\mathcal{J}\). The following is similar to Lemma 6.3 but applies in a more general setting.
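The interior/exterior dichotomies used in Lemmas 6.1-6.3 are also easy to check computationally for the polygonal circuits arising from lattice paths, via the standard even-odd (ray-casting) rule. The following Python sketch is a generic illustration on a hypothetical square circuit, not a construction from the paper.

```python
# Even-odd (ray-casting) test for the interior of a polygonal Jordan curve.
def inside(point, polygon):
    """Return True if `point` lies in the interior of the closed polygon."""
    x, y = point
    crossings = 0
    for i in range(len(polygon)):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % len(polygon)]
        if (y1 > y) != (y2 > y):  # this edge straddles the horizontal ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                crossings += 1
    return crossings % 2 == 1

J = [(0, 0), (2, 0), (2, 2), (0, 2)]  # boundary of the box [0, 2]^2
print(inside((0.5, 0.5), J))          # True: a dual vertex in int(J)
print(inside((2.5, 0.5), J))          # False: a dual vertex in ext(J)
```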
The following is similar to Lemma 6.3 but applies in a more general setting. **Lemma 6.4**.: _For any Jordan curve \(\mathcal{J}\), any \(z\in\mathcal{J}\), and any \(\varepsilon>0\), there exists an open set \(U\subseteq\mathsf{Ball}_{\varepsilon}(z)\) containing \(z\) such that \(U\setminus\mathcal{J}\) has exactly two connected components._ Proof.: Let \(f\colon\mathbb{R}^{2}\to\mathbb{R}^{2}\) be a homeomorphism such that \(f(\mathcal{J})\) is the unit circle. Then \(f(\mathsf{Ball}_{\varepsilon}(z))\) is an open set containing \(f(z)\). Therefore, we can find \(\delta\in(0,2)\) small enough that \(\mathsf{Ball}_{\delta}(f(z))\subseteq f(\mathsf{Ball}_{\varepsilon}(z))\). Clearly \(\mathsf{Ball}_{\delta}(f(z))\setminus f(\mathcal{J})\) has two connected components, and so the claim is satisfied by taking \(U=f^{-1}\big(\mathsf{Ball}_{\delta}(f(z))\big)\). **Lemma 6.5**.: _Let \(\mathcal{J}\) be a Jordan curve, and let \(\varphi\colon[0,1]\to\mathbb{R}^{2}\) be a continuous curve such that \(\varphi(0)\) and \(\varphi(1)\) are distinct points on \(\mathcal{J}\), and \(\varphi(t)\in\mathsf{int}(\mathcal{J})\) for all \(t\in(0,1)\). Denote by \(\mathcal{J}_{1}\) the Jordan curve starting at \(\varphi(0)\), proceeding in some direction along \(\mathcal{J}\) until reaching \(\varphi(1)\), and then following \(\varphi\) back to \(\varphi(0)\). Let \(\mathcal{J}_{2}\) be the same but proceeding along \(\mathcal{J}\) in the other direction. We can then decompose the interior of \(\mathcal{J}\) as the disjoint union_ \[\mathsf{int}(\mathcal{J})=\mathsf{int}(\mathcal{J}_{1})\uplus\mathsf{int}( \mathcal{J}_{2})\uplus\{\varphi(t):\,t\in(0,1)\}. \tag{6.1}\] Figure 10 gives a pictorial representation of Lemma 6.5. Proof.: First we argue that the right-hand side of (6.1) is contained in the left-hand side. As a trivial first step, we assumed that \(\varphi(t)\in\mathsf{int}(\mathcal{J})\) for all \(t\in(0,1)\). Next, consider any \(x\in\mathsf{int}(\mathcal{J}_{1})\) and any continuous curve \(\phi\) starting at \(x\) and extending to infinity. We wish to show that \(\phi\) must intersect \(\mathcal{J}\). Now, \(\phi\) must intersect \(\mathcal{J}_{1}\); if this intersection is a point on \(\mathcal{J}\), then we are done. Otherwise the point of intersection is \(\varphi(t)\) for some \(t\in(0,1)\); since this point belongs to \(\mathsf{int}(\mathcal{J})\), we again conclude that \(\phi\) must eventually intersect \(\mathcal{J}\). We have thus argued that \(\mathsf{int}(\mathcal{J}_{1})\subseteq\mathsf{int}(\mathcal{J})\), and of course \(\mathsf{int}(\mathcal{J}_{2})\subseteq\mathsf{int}(\mathcal{J})\) also holds by symmetric reasoning. Furthermore, \(\mathsf{int}(\mathcal{J}_{1})\) and \(\mathsf{int}(\mathcal{J}_{2})\) must be disjoint by the following logic.
Since \(\mathcal{J}_{1}\) and \(\mathsf{int}(\mathcal{J}_{1})\) are disjoint, we have \[(\mathcal{J}_{1}\cup\mathcal{J})\cap\mathsf{int}(\mathcal{J}_{1})=\mathcal{J }\cap\mathsf{int}(\mathcal{J}_{1})\subseteq\mathcal{J}\cap\mathsf{int}( \mathcal{J})=\varnothing.\] Meanwhile, because \(\mathcal{J}\) and \(\mathsf{ext}(\mathcal{J})\) are disjoint, we have \[(\mathcal{J}_{1}\cup\mathcal{J})\cap\mathsf{ext}(\mathcal{J})=(\mathcal{J}_{1}\setminus\mathcal{J})\cap\mathsf{ext}(\mathcal{J})=\{\varphi(t):\,t\in(0,1)\}\cap\mathsf{ext}(\mathcal{J})\subseteq\mathsf{int}(\mathcal{J})\cap\mathsf{ext}(\mathcal{J})=\varnothing.\] Considering that \(\mathcal{J}_{2}\subseteq\mathcal{J}_{1}\cup\mathcal{J}\), it follows from these two observations that \[\mathcal{J}_{2}\cap\mathsf{int}(\mathcal{J}_{1})=\varnothing=\mathcal{J}_{2} \cap\mathsf{ext}(\mathcal{J}). \tag{6.2}\] Now take any \(x\in\mathsf{int}(\mathcal{J}_{1})\) and \(z\in\mathcal{J}\setminus\mathcal{J}_{2}\). Since \(z\) necessarily belongs to \(\mathcal{J}_{1}\), there exists a continuous path \(\phi\) from \(x\) to \(z\) remaining entirely in \(\mathsf{int}(\mathcal{J}_{1})\cup\{z\}\). By the first equality in (6.2), this path never intersects \(\mathcal{J}_{2}\). Next we extend \(\phi\) from \(z\) to infinity using a path remaining entirely in \(\mathsf{ext}(\mathcal{J})\cup\{z\}\); by the second equality in (6.2), this extension also avoids \(\mathcal{J}_{2}\). We have thus constructed a path disjoint from \(\mathcal{J}_{2}\) that starts at \(x\) and extends to infinity, implying that \(x\in\operatorname{\mathsf{ext}}(\mathcal{J}_{2})\). In particular, \(x\) does not belong to \(\operatorname{\mathsf{int}}(\mathcal{J}_{2})\). All that remains to show is that the left-hand side of (6.1) is contained in the right-hand side. So suppose \(x\in\operatorname{\mathsf{int}}(\mathcal{J})\) but belongs to neither \(\operatorname{\mathsf{int}}(\mathcal{J}_{2})\) nor \(\{\varphi(t):\,t\in(0,1)\}\). It follows that \(x\notin\mathcal{J}_{2}\) since \(\mathcal{J}_{2}\cap\operatorname{\mathsf{int}}(\mathcal{J})\) is equal to \(\{\varphi(t):\,t\in(0,1)\}\). This leaves only the possibility that \(x\in\operatorname{\mathsf{ext}}(\mathcal{J}_{2})\), meaning there is a continuous curve \(\phi\) starting at \(x\) and extending to infinity which never intersects \(\mathcal{J}_{2}\). But because \(x\in\operatorname{\mathsf{int}}(\mathcal{J})\), this curve must intersect \(\mathcal{J}\), necessarily at some point belonging to \(\mathcal{J}\setminus\mathcal{J}_{2}\subseteq\mathcal{J}_{1}\). Let \(z\) be the first intersection of \(\phi\) with \(\mathcal{J}\) upon leaving \(x\), and let \(\phi_{x\to z}\) be the portion of \(\phi\) from \(x\) to \(z\). In particular, \(z\) is the unique intersection point of \(\phi_{x\to z}\) with \(\mathcal{J}\). Furthermore, \(\phi_{x\to z}\) can only intersect \(\mathcal{J}_{1}\) at \(\mathcal{J}_{1}\setminus\mathcal{J}_{2}\subseteq\mathcal{J}\), meaning \(z\) is also the unique intersection point of \(\phi_{x\to z}\) with \(\mathcal{J}_{1}\). **Claim 6.6**.: _For all \(\varepsilon>0\) small enough (depending on \(z\)), we have_ \[\mathcal{J}\cap\operatorname{\mathsf{Ball}}_{\varepsilon}(z)=\mathcal{J}_{1} \cap\operatorname{\mathsf{Ball}}_{\varepsilon}(z). \tag{6.3}\] Proof.: Being a Jordan curve, \(\mathcal{J}_{2}\) is closed, and so \(\mathbb{R}^{2}\setminus\mathcal{J}_{2}\) is open.
Considering that \(z\in\mathbb{R}^{2}\setminus\mathcal{J}_{2}\), we can thus choose \(\varepsilon_{0}>0\) such that \(\mathsf{Ball}_{\varepsilon}(z)\cap\mathcal{J}_{2}\) is empty whenever \(\varepsilon\leq\varepsilon_{0}\). For such \(\varepsilon\), using the fact that \(\mathcal{J}\setminus\mathcal{J}_{2}=\mathcal{J}_{1}\setminus\mathcal{J}_{2}\), we now have \[\mathcal{J}\cap\operatorname{\mathsf{Ball}}_{\varepsilon}(z)=(\mathcal{J} \setminus\mathcal{J}_{2})\cap\operatorname{\mathsf{Ball}}_{\varepsilon}(z)=( \mathcal{J}_{1}\setminus\mathcal{J}_{2})\cap\operatorname{\mathsf{Ball}}_{ \varepsilon}(z)=\mathcal{J}_{1}\cap\operatorname{\mathsf{Ball}}_{\varepsilon} (z),\] where the first and last equalities use \(\operatorname{\mathsf{Ball}}_{\varepsilon}(z)\cap\mathcal{J}_{2}=\varnothing\). Now let \(\varepsilon>0\) be small enough that (6.3) holds, and then take \(U\) as in Lemma 6.4. In particular, we have \[\mathcal{J}\cap U=\mathcal{J}\cap\operatorname{\mathsf{Ball}}_{\varepsilon}(z )\cap U=\mathcal{J}_{1}\cap\operatorname{\mathsf{Ball}}_{\varepsilon}(z)\cap U =\mathcal{J}_{1}\cap U. \tag{6.4}\] Because \(U\) has nonempty intersection with both \(\operatorname{\mathsf{int}}(\mathcal{J})\) and \(\operatorname{\mathsf{ext}}(\mathcal{J})\), we conclude that the two connected components of \(U\setminus\mathcal{J}\) are precisely \(U\cap\operatorname{\mathsf{int}}(\mathcal{J})\) and \(U\cap\operatorname{\mathsf{ext}}(\mathcal{J})\). But \(U\setminus\mathcal{J}_{1}=U\setminus\mathcal{J}\) by (6.4), and so by the exact same logic, these two connected components are also expressible as \(U\cap\operatorname{\mathsf{int}}(\mathcal{J}_{1})\) and \(U\cap\operatorname{\mathsf{ext}}(\mathcal{J}_{1})\). In particular, \(U\cap\operatorname{\mathsf{int}}(\mathcal{J}_{1})\) is equal to either \(U\cap\operatorname{\mathsf{int}}(\mathcal{J})\) or \(U\cap\operatorname{\mathsf{ext}}(\mathcal{J})\). Considering that \(\operatorname{\mathsf{int}}(\mathcal{J}_{1})\subset\operatorname{\mathsf{int}}( \mathcal{J})\), it must be the former: \[U\cap\operatorname{\mathsf{int}}(\mathcal{J}_{1})=U\cap\operatorname{\mathsf{ int}}(\mathcal{J}). \tag{6.5}\] To complete the proof, we return to our consideration of \(\phi_{x\to z}\). This curve starts in \(\operatorname{\mathsf{int}}(\mathcal{J})\) and only encounters \(\mathcal{J}\) at the terminal point \(z\), and so it must otherwise remain in \(\operatorname{\mathsf{int}}(\mathcal{J})\). Since \(\phi_{x\to z}\) intersects \(U\cap\operatorname{\mathsf{int}}(\mathcal{J})\), it follows from (6.5) that \(\phi_{x\to z}\) intersects \(\operatorname{\mathsf{int}}(\mathcal{J}_{1})\), meaning that \(x\) is connected to some element of \(\operatorname{\mathsf{int}}(\mathcal{J}_{1})\) by a continuous curve avoiding \(\mathcal{J}_{1}\). Hence \(x\in\operatorname{\mathsf{int}}(\mathcal{J}_{1})\), as needed. **Lemma 6.7**.: _Let \(\mathcal{J}_{1}\) be a Jordan curve, and let \(\Phi\colon[0,1]\to\mathbb{R}^{2}\) be a continuous curve such that \(\Phi(0)\) and \(\Phi(1)\) are distinct points on \(\mathcal{J}_{1}\), and \(\Phi(t)\in\operatorname{\mathsf{ext}}(\mathcal{J}_{1})\) for all \(t\in(0,1)\). Let \(\varphi_{a}\) denote the portion of \(\mathcal{J}_{1}\) proceeding clockwise from \(\Phi(0)\) to \(\Phi(1)\), and let \(\varphi_{b}\) be the same but proceeding counterclockwise. We parameterize these functions \(\varphi_{a},\varphi_{b}\colon[0,1]\to\mathbb{R}^{2}\) such that \(\varphi_{a}(0)=\varphi_{b}(0)=\Phi(0)\) and \(\varphi_{a}(1)=\varphi_{b}(1)=\Phi(1)\).
Denote by \(\mathcal{J}_{a}\) the Jordan curve starting at \(\Phi(0)\), following \(\varphi_{a}\) until reaching \(\Phi(1)\), and then following \(\Phi\) back to \(\Phi(0)\). Let \(\mathcal{J}_{b}\) be the same but using \(\varphi_{b}\). Then either_ \[\operatorname{\mathsf{int}}(\mathcal{J}_{a})=\operatorname{\mathsf{int}}( \mathcal{J}_{1})\uplus\operatorname{\mathsf{int}}(\mathcal{J}_{b})\uplus\{ \varphi_{b}(t):\,t\in(0,1)\}\] (6.6a) _or_ \[\operatorname{\mathsf{int}}(\mathcal{J}_{b})=\operatorname{\mathsf{int}}( \mathcal{J}_{1})\uplus\operatorname{\mathsf{int}}(\mathcal{J}_{a})\uplus\{ \varphi_{a}(t):\,t\in(0,1)\}. \tag{6.6b}\] Figure 11 gives a graphical depiction of Lemma 6.7. Proof.: We divide our argument into four steps. **Step 1**: Show that \(\mathsf{int}(\mathcal{J}_{a})\cap\mathsf{ext}(\mathcal{J}_{1})\) is nonempty. For any \(y\in\mathcal{J}_{a}\setminus\mathcal{J}_{1}\), we have \(y\in\mathsf{ext}(\mathcal{J}_{1})\) by assumption. Since \(\mathsf{ext}(\mathcal{J}_{1})\) is open, there is \(\varepsilon>0\) small enough that \(\mathsf{Ball}_{\varepsilon}(y)\subseteq\mathsf{ext}(\mathcal{J}_{1})\). Because \(y\in\mathcal{J}_{a}\), \(\mathsf{Ball}_{\varepsilon}(y)\) has nonempty intersection with \(\mathsf{int}(\mathcal{J}_{a})\). This completes the first step. **Step 2**: Show that \(\mathsf{int}(\mathcal{J}_{a})\cap\mathsf{int}(\mathcal{J}_{b})\) is nonempty. By the first step, there exists \(x\in\mathsf{int}(\mathcal{J}_{a})\cap\mathsf{ext}(\mathcal{J}_{1})\). In particular, there exists a path \(\phi\) starting at \(x\), extending to infinity, and remaining entirely in \(\mathsf{ext}(\mathcal{J}_{1})\). But because \(x\in\mathsf{int}(\mathcal{J}_{a})\), this path must intersect \(\mathcal{J}_{a}\). Denote by \(z\in\mathcal{J}_{a}\) the last intersection point, and let \(\phi_{z\to\infty}\) be the portion of \(\phi\) starting at \(z\) and extending to infinity; with this choice, \(\phi_{z\to\infty}\) never again intersects \(\mathcal{J}_{a}\). Now note that \[\mathcal{J}_{a}\cap\mathsf{ext}(\mathcal{J}_{1})=\mathcal{J}_{b}\cap\mathsf{ ext}(\mathcal{J}_{1})=\{\Phi(t):\,t\in(0,1)\}. \tag{6.7}\] By construction we have \(z\in\mathsf{ext}(\mathcal{J}_{1})\), and so we can choose \(\varepsilon>0\) sufficiently small that \(\mathsf{Ball}_{\varepsilon}(z)\subseteq\mathsf{ext}(\mathcal{J}_{1})\), so as to guarantee \[\mathcal{J}_{a}\cap\mathsf{Ball}_{\varepsilon}(z)=\mathcal{J}_{a}\cap\mathsf{ ext}(\mathcal{J}_{1})\cap\mathsf{Ball}_{\varepsilon}(z)=\mathcal{J}_{b}\cap \mathsf{ext}(\mathcal{J}_{1})\cap\mathsf{Ball}_{\varepsilon}(z)=\mathcal{J}_ {b}\cap\mathsf{Ball}_{\varepsilon}(z). \tag{6.8}\] By Lemma 6.4, there exists an open set \(U\subseteq\mathsf{Ball}_{\varepsilon}(z)\) containing \(z\) such that \(U\setminus\mathcal{J}_{a}\) has exactly two components. Moreover, (6.8) can be specialized to this open set: \[\mathcal{J}_{a}\cap U=\mathcal{J}_{a}\cap\mathsf{Ball}_{\varepsilon}(z)\cap U =\mathcal{J}_{b}\cap\mathsf{Ball}_{\varepsilon}(z)\cap U=\mathcal{J}_{b}\cap U. \tag{6.9}\] Because \(U\) has nonempty intersection with both \(\mathsf{int}(\mathcal{J}_{a})\) and \(\mathsf{ext}(\mathcal{J}_{a})\), we conclude that the two connected components of \(U\setminus\mathcal{J}_{a}\) are precisely \(U\cap\mathsf{int}(\mathcal{J}_{a})\) and \(U\cap\mathsf{ext}(\mathcal{J}_{a})\).
But \(U\setminus\mathcal{J}_{a}=U\setminus\mathcal{J}_{b}\) by (6.9), and so by the exact same logic, these two connected components are also expressible as \(U\cap\mathsf{int}(\mathcal{J}_{b})\) and \(U\cap\mathsf{ext}(\mathcal{J}_{b})\). In particular, \(U\cap\mathsf{ext}(\mathcal{J}_{b})\) is equal to either \(U\cap\mathsf{int}(\mathcal{J}_{a})\) or \(U\cap\mathsf{ext}(\mathcal{J}_{a})\). We claim it must be the latter. Indeed, since \(\phi_{z\to\infty}\) extends to infinity yet never intersects \(\mathcal{J}_{a}\cup\mathcal{J}_{1}\supseteq\mathcal{J}_{b}\) except at its starting point \(z\), it must otherwise remain in \(\mathsf{ext}(\mathcal{J}_{a})\cap\mathsf{ext}(\mathcal{J}_{b})\). Hence \(U\cap\mathsf{ext}(\mathcal{J}_{a})\) and \(U\cap\mathsf{ext}(\mathcal{J}_{b})\) have nonempty intersection, forcing the two sets to be equal. This leaves us to conclude that \(U\cap\mathsf{int}(\mathcal{J}_{a})=U\cap\mathsf{int}(\mathcal{J}_{b})\); in particular, \(\mathsf{int}(\mathcal{J}_{a})\cap\mathsf{int}(\mathcal{J}_{b})\) is nonempty.

Figure 11. A depiction of Lemma 6.7: The curve \(\mathcal{J}_{1}\) is depicted in black/thick and the curve \(\Phi\) is depicted in gray/thin. In this case, (6.6a) holds.

**Step 3**: Argue that one of \(\mathcal{J}_{a}\) and \(\mathcal{J}_{b}\) must enclose the other. Consider the two portions of \(\mathcal{J}_{b}\) between \(\Phi(0)\) and \(\Phi(1)\). The portion formed by the curve \(\Phi\) is shared entirely with \(\mathcal{J}_{a}\). The other portion of \(\mathcal{J}_{b}\) intersects \(\mathcal{J}_{a}\) only at the endpoints \(\Phi(0)\) and \(\Phi(1)\), and so this portion otherwise remains either entirely in \(\mathsf{int}(\mathcal{J}_{a})\) or entirely in \(\mathsf{ext}(\mathcal{J}_{a})\). Putting these two observations together, we determine that either \(\mathcal{J}_{b}\subseteq\mathcal{J}_{a}\cup\mathsf{int}(\mathcal{J}_{a})\) or \(\mathcal{J}_{b}\subseteq\mathcal{J}_{a}\cup\mathsf{ext}(\mathcal{J}_{a})\). In either case, by (the contrapositive of) Lemma 6.2(a), one of \(\mathcal{J}_{a}\) and \(\mathcal{J}_{b}\) encloses the other. For concreteness, let us assume \(\mathcal{J}_{a}\) encloses \(\mathcal{J}_{b}\), since the reverse scenario would use exactly the same argument. **Step 4**: Appeal to Lemma 6.5. Recall that \(\varphi_{b}\) is the portion of \(\mathcal{J}_{b}\) which is not shared with \(\mathcal{J}_{a}\) except at the endpoints \(\varphi_{b}(0)=\Phi(0)\) and \(\varphi_{b}(1)=\Phi(1)\). Consider any \(y=\varphi_{b}(t)\) for \(t\in(0,1)\). Since \(y\notin\mathcal{J}_{a}\), we must have either \(y\in\mathsf{int}(\mathcal{J}_{a})\) or \(y\in\mathsf{ext}(\mathcal{J}_{a})\). We claim the former is true. Indeed, because \(y\in\mathcal{J}_{b}\), every open neighborhood of \(y\) must intersect both \(\mathsf{int}(\mathcal{J}_{b})\) and \(\mathsf{ext}(\mathcal{J}_{b})\). But we have assumed \(\mathsf{int}(\mathcal{J}_{b})\subseteq\mathsf{int}(\mathcal{J}_{a})\), and so every open neighborhood of \(y\) must intersect \(\mathsf{int}(\mathcal{J}_{a})\), forcing \(y\in\mathsf{int}(\mathcal{J}_{a})\). We have thus shown that \[\{\varphi_{b}(t):\,t\in(0,1)\}\subseteq\mathsf{int}(\mathcal{J}_{a}).\] Therefore, we are in the setting of Lemma 6.5 with \(\mathcal{J}=\mathcal{J}_{a}\), \(\mathcal{J}_{2}=\mathcal{J}_{b}\), and \(\varphi=\varphi_{b}\).
Indeed, in this notation, \(\mathcal{J}_{1}\) is the union of \(\varphi_{b}\) with some portion of \(\mathcal{J}_{a}\) (namely the clockwise arc \(\varphi_{a}\) of \(\mathcal{J}_{1}\) between \(\Phi(0)\) and \(\Phi(1)\)), while \(\mathcal{J}_{2}=\mathcal{J}_{b}\) is the union of \(\varphi_{b}\) with the complementary portion of \(\mathcal{J}_{a}\) (namely \(\Phi\) itself). See Figure 11 for reference. The desired conclusion now follows from (6.1). Recall Definition 2.1 for circuits. **Lemma 6.8**.: _If \(\mathcal{J}_{1}\) and \(\mathcal{J}_{2}\) are Jordan curves such that \(\mathsf{int}(\mathcal{J}_{1})\subsetneq\mathsf{int}(\mathcal{J}_{2})\), then the area of \(\mathsf{int}(\mathcal{J}_{1})\) is strictly less than that of \(\mathsf{int}(\mathcal{J}_{2})\). Furthermore, if \(\mathcal{C}\) is a circuit, then the area of \(\mathsf{int}(\mathcal{C})\) is a positive integer._ Proof.: For the first claim of the lemma, it suffices to find some open set \(U\subseteq\mathsf{ext}(\mathcal{J}_{1})\cap\mathsf{int}(\mathcal{J}_{2})\). By assumption there exists \(x\in\mathsf{int}(\mathcal{J}_{2})\setminus\mathsf{int}(\mathcal{J}_{1})\). If \(x\in\mathsf{ext}(\mathcal{J}_{1})\), then take \(U\) to be any open set which contains \(x\) and remains in \(\mathsf{ext}(\mathcal{J}_{1})\cap\mathsf{int}(\mathcal{J}_{2})\). Otherwise we must have \(x\in\mathcal{J}_{1}\), and then we take some open set \(V\subseteq\mathsf{int}(\mathcal{J}_{2})\) containing \(x\). This set \(V\) must contain some \(y\in\mathsf{ext}(\mathcal{J}_{1})\), and so we take \(U\) to be any open set containing \(y\) which remains in \(V\cap\mathsf{ext}(\mathcal{J}_{1})\subseteq\mathsf{ext}(\mathcal{J}_{1})\cap \mathsf{int}(\mathcal{J}_{2})\). For the second claim, let us consider the case when \(\mathcal{C}\) is a primal circuit; the case of a dual circuit is entirely analogous. For every \(x\in\mathbb{Z}^{2}\), the set \(U_{x}=x+(0,1)^{2}\) can have no intersection with \(\mathcal{C}\). (This is because \(\mathcal{C}\) consists entirely of edges between nearest-neighbor vertices.) And clearly \(U_{x}\) is connected, and so it must lie either entirely in \(\mathsf{int}(\mathcal{C})\) or entirely in \(\mathsf{ext}(\mathcal{C})\). Since each \(U_{x}\) has area \(1\) and the area of \(\mathbb{R}^{2}\setminus\biguplus_{x\in\mathbb{Z}^{2}}U_{x}\) is equal to \(0\), we conclude that the area of \(\mathsf{int}(\mathcal{C})\) is precisely the number of \(x\) such that \(U_{x}\subseteq\mathsf{int}(\mathcal{C})\). This number is positive because the interior of any Jordan curve is open. **Lemma 6.9**.: _Let \(\mathcal{P}\) be an open primal circuit and \(\mathcal{D}\) a closed dual circuit. If \(\mathsf{int}(\mathcal{P})\cap\mathsf{int}(\mathcal{D})\) is nonempty, then one must enclose the other._ Proof.: Suppose the conclusion were false, in which case Lemma 6.2(b) would guarantee that the two circuits intersect. But since \(\mathcal{P}\) is primal and \(\mathcal{D}\) is dual, this intersection must be the midpoint of some edge \(e\). Hence \(e\) is both open and closed, a contradiction.
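The integer-area claim of Lemma 6.8 is easy to verify by machine for any concrete circuit. The sketch below (ours, purely illustrative) computes the area of \(\mathsf{int}(\mathcal{C})\) for a primal circuit given by its corner vertices, using the shoelace formula; by Lemma 6.8 the result counts the unit boxes \(x+(0,1)^{2}\) contained in \(\mathsf{int}(\mathcal{C})\).

```python
# A machine check of the integer-area claim in Lemma 6.8 for primal circuits.
# The shoelace formula computes the enclosed area; for a lattice circuit it
# is a positive integer, the number of unit boxes inside int(C).

def circuit_area(vertices):
    """Area of int(C) for a circuit C listed as a closed cycle of vertices.

    Listing only the corner vertices suffices, since collinear unit steps
    contribute the same shoelace terms.
    """
    doubled = 0
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        doubled += x0 * y1 - x1 * y0
    assert abs(doubled) % 2 == 0      # integer area, as in Lemma 6.8
    area = abs(doubled) // 2
    assert area > 0                   # positive, as in Lemma 6.8
    return area

# An L-shaped primal circuit enclosing exactly three unit boxes:
ell = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
print(circuit_area(ell))  # prints 3
```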
**Lemma 6.10**.: _Consider any circuits \(\mathcal{C}_{1},\ldots,\mathcal{C}_{k}\) such that \(\bigcap_{i=1}^{k}\mathsf{int}(\mathcal{C}_{i})\) is nonempty. There exists a unique circuit \(\mathcal{C}\) such that_ \[\bigcup_{i=1}^{k}\mathsf{int}(\mathcal{C}_{i})\subseteq\mathsf{int}(\mathcal{C })\quad\text{and}\quad\mathcal{C}\subseteq\bigcup_{i=1}^{k}\mathcal{C}_{i}. \tag{6.10}\] Proof.: First we prove uniqueness. Consider two circuits \(\mathcal{C},\widetilde{\mathcal{C}}\) that both satisfy (6.10) with the given collection \((\mathcal{C}_{i})_{i=1}^{k}\). Suppose towards a contradiction that \(\mathcal{C}\neq\widetilde{\mathcal{C}}\). Then we claim there is some edge \(e\) belonging to one of the circuits whose midpoint is in the exterior of the other circuit. To see this claim, first consider the case that one circuit encloses the other, say \(\mathsf{int}(\widetilde{\mathcal{C}})\subseteq\mathsf{int}(\mathcal{C})\). Then every edge of \(\mathcal{C}\) either belongs to \(\widetilde{\mathcal{C}}\) or has its midpoint in \(\mathsf{ext}(\widetilde{\mathcal{C}})\). Since we assumed \(\mathcal{C}\neq\widetilde{\mathcal{C}}\), there must be at least one edge with the latter property. On the other hand, if neither circuit encloses the other, then we can simply appeal to Lemma 6.2(a). From the claim, we find some \(z\in\mathsf{ext}(\widetilde{\mathcal{C}})\) which is the midpoint of an edge \(e\) belonging to \(\mathcal{C}\). The edge \(e\) must belong to \(\mathcal{C}_{i}\) for some \(i\), and so every open neighborhood of \(e\) has nonempty intersection with both \(\mathsf{int}(\mathcal{C}_{i})\) and \(\mathsf{ext}(\mathcal{C}_{i})\). In particular, \[\mathsf{Ball}_{1/2}(z)\cap\mathsf{int}(\mathcal{C}_{i})\neq\varnothing.\] Next notice that because the midpoint \(z\) is in the exterior of \(\widetilde{\mathcal{C}}\), the open set \(\mathsf{Ball}_{1/2}(z)\) has no intersection with \(\widetilde{\mathcal{C}}\); it follows that \[\mathsf{Ball}_{1/2}(z)\subseteq\mathsf{ext}(\widetilde{\mathcal{C}}).\] But when read together, the two previous displays imply that \(\mathsf{int}(\mathcal{C}_{i})\) has nonempty intersection with \(\mathsf{ext}(\widetilde{\mathcal{C}})\), which contradicts the hypothesis that \(\mathsf{int}(\mathcal{C}_{i})\subseteq\mathsf{int}(\widetilde{\mathcal{C}})\). We are left to conclude that \(\widetilde{\mathcal{C}}\) must be equal to \(\mathcal{C}\). Having established uniqueness, we look to prove existence. We begin with just two circuits. That is, we wish to find some circuit \(\mathcal{C}\) such that \[\mathsf{int}(\mathcal{C}_{1})\cup\mathsf{int}(\mathcal{C}_{2})\subseteq \mathsf{int}(\mathcal{C})\quad\text{and}\quad\mathcal{C}\subseteq\mathcal{C} _{1}\cup\mathcal{C}_{2}. \tag{6.11}\] If \(\mathsf{int}(\mathcal{C}_{1})\subseteq\mathsf{int}(\mathcal{C}_{2})\) or \(\mathsf{int}(\mathcal{C}_{2})\subseteq\mathsf{int}(\mathcal{C}_{1})\), then we simply take \(\mathcal{C}\) to be \(\mathcal{C}_{2}\) or \(\mathcal{C}_{1}\), respectively. So let us assume neither containment is true, and then Lemma 6.2(a) implies the existence of some \(z_{\text{ext}}\in\mathsf{ext}(\mathcal{C}_{1})\cap\mathcal{C}_{2}\). From \(z_{\text{ext}}\) we follow the circuit \(\mathcal{C}_{2}\) in both directions; by Lemma 6.2(b), in each direction we will encounter a distinct first intersection with \(\mathcal{C}_{1}\). Let \(\Phi\) denote the subpath of \(\mathcal{C}_{2}\) between these intersection points and containing \(z_{\text{ext}}\), so that \(\Phi\) is entirely in \(\mathsf{ext}(\mathcal{C}_{1})\) except at its endpoints. We are thus in the setting of Lemma 6.7, with the possibility to complete \(\Phi\) to a full Jordan curve by following \(\mathcal{C}_{1}\) either clockwise or counterclockwise.
The circuit resulting from one of these directions will enclose the circuit resulting from the other direction; let \(\mathcal{C}\) be the enclosing circuit and \(\mathcal{C}_{b}\) the enclosed circuit, so that (6.6) gives \[\mathsf{int}(\mathcal{C})=\mathsf{int}(\mathcal{C}_{1})\uplus\mathsf{int}( \mathcal{C}_{b})\uplus\{\varphi(t):\,t\in(0,1)\}, \tag{6.12}\] where \(\varphi\) parameterizes the arc of \(\mathcal{C}_{1}\) shared with \(\mathcal{C}_{b}\) but not with \(\mathcal{C}\). Notice that the edges in \(\mathcal{C}\) belong to either \(\mathcal{C}_{1}\) or to \(\Phi\), which was a subpath of \(\mathcal{C}_{2}\). So if \(\mathcal{C}\) encloses \(\mathcal{C}_{2}\), then \(\mathcal{C}\) satisfies (6.11). Otherwise, we proceed inductively with \(\mathcal{C}\) replacing \(\mathcal{C}_{1}\). To complete the proof, we just need to argue that the procedure just performed can only be repeated finitely many times. Indeed, in light of (6.12), Lemma 6.8 tells us that the area of \(\mathsf{int}(\mathcal{C})\) is at least \(1\) unit greater than that of \(\mathsf{int}(\mathcal{C}_{1})\). At the same time, the area of a circuit using only edges in \(\mathcal{C}_{1}\cup\mathcal{C}_{2}\) has a finite upper bound, and so the argument can only be repeated finitely many times. Once it terminates, the resulting circuit \(\mathcal{C}\) must satisfy (6.11). Our final step is to prove existence for general \(k\) from the \(k=2\) case just handled. Given any two circuits \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) such that \(\mathsf{int}(\mathcal{C}_{1})\cap\mathsf{int}(\mathcal{C}_{2})\) is nonempty, let \(\mathcal{C}_{1}\vee\mathcal{C}_{2}\) denote the unique circuit \(\mathcal{C}\) satisfying (6.11). Then given a sequence \((\mathcal{C}_{i})_{i=1}^{k}\) such that \(\bigcap_{i=1}^{k}\mathsf{int}(\mathcal{C}_{i})\neq\varnothing\), we inductively define \[\mathcal{C}_{1}^{\prime}=\mathcal{C}_{1},\qquad\text{and}\qquad\mathcal{C}_{i}^{\prime}=\mathcal{C}_{i-1}^{\prime}\vee\mathcal{C}_{i}\quad\text{for }i\in\{2,\dots,k\}.\] By simple induction, the final circuit \(\mathcal{C}_{k}^{\prime}\) satisfies (6.10).
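Geometrically, the circuit produced by Lemma 6.10 is the outer boundary of the union of the closed regions bounded by \(\mathcal{C}_{1},\ldots,\mathcal{C}_{k}\): it uses only edges of the \(\mathcal{C}_{i}\), and its interior contains every \(\mathsf{int}(\mathcal{C}_{i})\). The following sketch illustrates this for two circuits using the third-party shapely package (the use of shapely is an assumption of this illustration only; the proof above is purely combinatorial).

```python
# Illustration of Lemma 6.10: the "join" of two overlapping lattice circuits
# is the exterior boundary of the union of their closed regions.
from shapely.geometry import Polygon

c1 = Polygon([(0, 0), (3, 0), (3, 2), (0, 2)])   # a 3x2 lattice rectangle
c2 = Polygon([(2, 1), (5, 1), (5, 3), (2, 3)])   # an overlapping 3x2 rectangle

join = c1.union(c2)   # connected, since int(C1) and int(C2) overlap
print(join.area)                   # 11.0 = 6 + 6 - 1, an integer (Lemma 6.8)
print(list(join.exterior.coords))  # boundary points drawn only from C1, C2
```

Heuristically, the circuit of Lemma 6.11 plays the dual role in the same picture: it bounds the connected component of \(\bigcap_{i}\mathsf{int}(\mathcal{C}_{i})\) that contains \(S\).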
**Lemma 6.11**.: _Let \(S\subset\mathbb{R}^{2}\) be a nonempty connected subset of \(\mathbb{R}^{2}\), and consider any collection of circuits \((\mathcal{C}_{i})_{i\in I}\) such that \(S\subseteq\mathsf{int}(\mathcal{C}_{i})\) for every \(i\). There exists a unique circuit \(\mathcal{C}\) such that_ \[S\subseteq\mathsf{int}(\mathcal{C})\subseteq\bigcap_{i\in I}\mathsf{int}( \mathcal{C}_{i})\quad\text{and}\quad\mathcal{C}\subseteq\bigcup_{i\in I} \mathcal{C}_{i}. \tag{6.13}\] Proof.: First we prove uniqueness. Consider two circuits \(\mathcal{C},\widetilde{\mathcal{C}}\) that both satisfy (6.13) with the given collection \((\mathcal{C}_{i})_{i\in I}\). Suppose towards a contradiction that \(\mathcal{C}\neq\widetilde{\mathcal{C}}\). Then we claim there is some edge \(e\) belonging to one of the circuits whose midpoint is in the interior of the other circuit. To see this claim, first consider the case that one circuit encloses the other, say \(\mathsf{int}(\mathcal{C})\subseteq\mathsf{int}(\widetilde{\mathcal{C}})\). Then every edge of \(\mathcal{C}\) either belongs to \(\widetilde{\mathcal{C}}\) or has its midpoint in \(\mathsf{int}(\widetilde{\mathcal{C}})\). Since we assumed \(\mathcal{C}\neq\widetilde{\mathcal{C}}\), there must be at least one edge with the latter property. On the other hand, if neither circuit encloses the other, then we can simply appeal to Lemma 6.2(a). From the claim, we find some \(z\in\mathsf{int}(\widetilde{\mathcal{C}})\) which is the midpoint of an edge \(e\) belonging to \(\mathcal{C}\). The edge \(e\) must belong to \(\mathcal{C}_{i}\) for some \(i\), and so every open neighborhood of \(e\) has nonempty intersection with both \(\mathsf{int}(\mathcal{C}_{i})\) and \(\mathsf{ext}(\mathcal{C}_{i})\). In particular, \[\mathsf{Ball}_{1/2}(z)\cap\mathsf{ext}(\mathcal{C}_{i})\neq\varnothing.\] Next notice that because the midpoint \(z\) is in the interior of \(\widetilde{\mathcal{C}}\), the open set \(\mathsf{Ball}_{1/2}(z)\) has no intersection with \(\widetilde{\mathcal{C}}\); it follows that \[\mathsf{Ball}_{1/2}(z)\subseteq\mathsf{int}(\widetilde{\mathcal{C}}).\] But when read together, the two previous displays imply that \(\mathsf{int}(\widetilde{\mathcal{C}})\) has nonempty intersection with \(\mathsf{ext}(\mathcal{C}_{i})\), which contradicts the hypothesis that \(\mathsf{int}(\widetilde{\mathcal{C}})\subseteq\mathsf{int}(\mathcal{C}_{i})\). We are left to conclude that \(\widetilde{\mathcal{C}}\) must be equal to \(\mathcal{C}\). Having established uniqueness, we look to prove existence. We begin with just two circuits. That is, we wish to find some circuit \(\mathcal{C}\) such that \[S\subseteq\mathsf{int}(\mathcal{C})\subseteq\mathsf{int}(\mathcal{C}_{1}) \cap\mathsf{int}(\mathcal{C}_{2})\quad\text{and}\quad\mathcal{C}\subseteq \mathcal{C}_{1}\cup\mathcal{C}_{2}. \tag{6.14}\] If \(\mathsf{int}(\mathcal{C}_{1})\subseteq\mathsf{int}(\mathcal{C}_{2})\) or \(\mathsf{int}(\mathcal{C}_{2})\subseteq\mathsf{int}(\mathcal{C}_{1})\), then we simply take \(\mathcal{C}\) to be \(\mathcal{C}_{1}\) or \(\mathcal{C}_{2}\), respectively. So let us assume neither containment is true, and then Lemma 6.2(a) implies the existence of some \(z_{\mathrm{int}}\in\mathsf{int}(\mathcal{C}_{1})\cap\mathcal{C}_{2}\). From \(z_{\mathrm{int}}\) we follow the circuit \(\mathcal{C}_{2}\) in both directions; by Lemma 6.2(b), in each direction we will encounter a distinct first intersection with \(\mathcal{C}_{1}\). Let \(\varphi\) denote the subpath of \(\mathcal{C}_{2}\) between these intersection points and containing \(z_{\mathrm{int}}\), so that \(\varphi\) is entirely in \(\mathsf{int}(\mathcal{C}_{1})\) except at its endpoints. We are thus in the setting of Lemma 6.5, with the possibility to complete \(\varphi\) to a full Jordan curve by following \(\mathcal{C}_{1}\) either clockwise or counterclockwise. We choose the direction that contains \(S\) in its interior and call the resulting circuit \(\mathcal{C}\). Because \(S\) is connected, the decomposition (6.1) shows that such a choice exists and is unique; we thus have \(S\subseteq\mathsf{int}(\mathcal{C})\subsetneq\mathsf{int}(\mathcal{C}_{1})\). Notice that the edges in \(\mathcal{C}\) belong to either \(\mathcal{C}_{1}\) or to \(\varphi\), which was a subpath of \(\mathcal{C}_{2}\). So if \(\mathcal{C}\) is enclosed by \(\mathcal{C}_{2}\), then we are done. Otherwise, we proceed inductively with \(\mathcal{C}\) replacing \(\mathcal{C}_{1}\). To complete the proof, we just need to argue that the procedure just performed can only be repeated finitely many times. Indeed, Lemma 6.8 tells us that the area of \(\mathsf{int}(\mathcal{C})\) is at least \(1\) unit less than that of \(\mathsf{int}(\mathcal{C}_{1})\). Since \(\mathsf{int}(\mathcal{C})\) has finite area, it is now evident that the argument can only be repeated finitely many times. Once it terminates, we are left with a circuit \(\mathcal{C}\) satisfying (6.14). Our final step is to prove existence for an arbitrary index set \(I\).
There are only countably many distinct circuits in the lattice, and so it suffices to assume the index set \(I\) is the set of positive integers. Given any two circuits \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) such that \(S\subseteq\mathsf{int}(\mathcal{C}_{1})\cap\mathsf{int}(\mathcal{C}_{2})\), let \(\mathcal{C}_{1}\wedge\mathcal{C}_{2}\) denote the unique circuit \(\mathcal{C}\) satisfying (6.14). Then given a sequence \((\mathcal{C}_{i})_{i=1}^{\infty}\) such that \(S\subseteq\mathsf{int}(\mathcal{C}_{i})\) for every \(i\), we inductively define \[\mathcal{C}_{1}^{\prime}=\mathcal{C}_{1},\qquad\text{and}\qquad\mathcal{C}_{i}^{\prime}=\mathcal{C}_{i-1}^{\prime}\wedge\mathcal{C}_{i}\quad\text{for }i\geq 2.\] By definition, we have \[S\subseteq\mathsf{int}(\mathcal{C}_{i}^{\prime})\subseteq\mathsf{int}(\mathcal{C}_{i-1}^{\prime})\cap\mathsf{int}(\mathcal{C}_{i})\subseteq\mathsf{int}(\mathcal{C}_{i-2}^{\prime})\cap\mathsf{int}(\mathcal{C}_{i-1})\cap\mathsf{int}(\mathcal{C}_{i})\subseteq\cdots\subseteq\mathsf{int}(\mathcal{C}_{1})\cap\cdots\cap\mathsf{int}(\mathcal{C}_{i}). \tag{6.15}\] Since there are only finitely many circuits enclosed by \(\mathcal{C}_{1}\), we must eventually have \(\mathcal{C}_{i}^{\prime}\) equal to some \(\mathcal{C}\) for all large \(i\). For this circuit \(\mathcal{C}\), (6.13) follows from the fact that (6.15) holds for every \(i\).

## 7. Proof of Proposition 2.5 (construction of geodesic)

For the benefit of the reader, we recall the conventions of Definitions 2.2 and 2.4 and the statement of Proposition 2.5. We say a circuit \(\mathcal{C}\)_encloses_ a set of vertices \(\mathcal{A}\) (i.e. either \(\mathcal{A}\subseteq\mathbb{Z}^{2}\) or \(\mathcal{A}\subset\widehat{\mathbb{Z}}^{2}\)) if \(\mathcal{A}\subseteq\mathsf{int}(\mathcal{C})\). We say a circuit \(\mathcal{C}\)_encloses_ another circuit \(\mathcal{C}^{\prime}\) if \(\mathsf{int}(\mathcal{C}^{\prime})\subseteq\mathsf{int}(\mathcal{C})\). Definition 2.4 states that a primal path \(\gamma\) is _open_ if all its edges are open. We also recall the conventions for dual neighbors: every edge \(e\) has two dual neighbors, namely the endpoints of the dual edge \(e^{\star}\). On the other hand, every vertex \(v\) has four dual neighbors. In both cases, dual neighbors are vertices on the dual lattice.

* If \(\mathcal{A}\subset\mathbb{Z}^{2}\), then we say \(\gamma\) starts (ends) at \(\mathcal{A}\) if its first (last) vertex is an element of \(\mathcal{A}\).
* If \(\mathcal{E}\) is a collection of primal edges, then we say \(\gamma\) starts (ends) at \(\mathcal{E}\) if its first (last) vertex is an endpoint of some element of \(\mathcal{E}\).
* If \(\widehat{\mathcal{E}}\) is a collection of dual edges, then we say \(\gamma\) starts (ends) at \(\widehat{\mathcal{E}}\) if its first (last) vertex is a dual neighbor of some element of \(\widehat{\mathcal{E}}\).

A dual path \(\zeta\) is _closed_ if all its edges are closed.

* If \(\mathcal{A}\subset\mathbb{Z}^{2}\), then we say \(\zeta\) starts (ends) at \(\mathcal{A}\) if its first (last) vertex is equal to \(x\pm(\frac{1}{2},\frac{1}{2})\) or \(x\pm(\frac{1}{2},-\frac{1}{2})\) for some \(x\in\mathcal{A}\).
* If \(\mathcal{E}\) is a collection of primal edges, then we say \(\zeta\) starts (ends) at \(\mathcal{E}\) if its first (last) vertex is a dual neighbor of some element of \(\mathcal{E}\).
* If \(\widehat{\mathcal{E}}\) is a collection of dual edges, then we say \(\zeta\) starts (ends) at \(\widehat{\mathcal{E}}\) if its first (last) vertex is an endpoint of some element of \(\widehat{\mathcal{E}}\).
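The enclosure relation recalled above is also mechanically checkable: whether a dual vertex lies in \(\mathsf{int}(\mathcal{C})\) for a primal circuit \(\mathcal{C}\) is a crossing-number computation, and it is degeneracy-free because a horizontal ray from a half-integer point never passes through a vertex of \(\mathcal{C}\). The following sketch is ours and purely illustrative.

```python
# Crossing-number test of whether a dual vertex z lies in int(C), for a
# primal circuit C given by its (integer) corner vertices. Illustrative only.

def encloses(circuit, z):
    """True iff the dual vertex z (half-integer coordinates) is in int(C).

    We count how many vertical edges of C are crossed by the horizontal
    ray going right from z; an odd count means z is in int(C).
    """
    zx, zy = z
    crossings = 0
    n = len(circuit)
    for i in range(n):
        (x0, y0), (x1, y1) = circuit[i], circuit[(i + 1) % n]
        if x0 == x1 and x0 > zx and min(y0, y1) < zy < max(y0, y1):
            crossings += 1
    return crossings % 2 == 1

square = [(0, 0), (2, 0), (2, 2), (0, 2)]  # corner list of a 2x2 circuit
assert encloses(square, (0.5, 0.5)) and not encloses(square, (2.5, 0.5))
```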
Proposition 2.5 states the following: let \(\mathcal{A}\) and \(\mathcal{B}\) be finite disjoint connected subsets of \(\mathbb{Z}^{2}\). On the event \(\Omega_{\infty}\), there exists a (possibly empty) sequence of edge-disjoint open circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{L},\mathcal{I}_{L+1},\ldots,\mathcal{I}_{P}\) satisfying the following:

(i) \(\mathcal{A}\subseteq\mathsf{int}(\mathcal{I}_{1})\subseteq\mathsf{int}( \mathcal{I}_{2})\subseteq\cdots\subseteq\mathsf{int}(\mathcal{I}_{L})\subseteq \mathsf{int}(\mathcal{I}_{L})\cup\mathcal{I}_{L}\subseteq\mathcal{B}^{C}\).
(ii) \(\mathsf{int}(\mathcal{I}_{L})\cap\mathsf{int}(\mathcal{I}_{L+1})=\varnothing\).
(iii) \(\mathcal{A}^{C}\supseteq\mathsf{int}(\mathcal{I}_{L+1})\cup\mathcal{I}_{L+1}\supseteq \mathsf{int}(\mathcal{I}_{L+1})\supseteq\mathsf{int}(\mathcal{I}_{L+2}) \supseteq\cdots\supseteq\mathsf{int}(\mathcal{I}_{P})\supseteq\mathcal{B}\).
(iv) For \(j\in\{1,\ldots,P-2\}\), the circuits \(\mathcal{I}_{j}\) and \(\mathcal{I}_{j+2}\) are vertex-disjoint.
(v) For \(j\in\{1,\ldots,P\}\) and every \(e\in\mathcal{I}_{j}\), there exists a dual path \(\zeta_{e}\) from \(e\) to \(\mathcal{A}\) that has exactly \(j-1\) open edges, one crossing each of the circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{j-1}\).

Furthermore, there exists a geodesic \(\gamma\) from \(\mathcal{A}\) to \(\mathcal{B}\) and a disjoint dual path \(\zeta\) from \(\mathcal{A}\) to \(\mathcal{B}\) satisfying the following properties (here we note that the paths \(\zeta_{e}\) in Item (v) are not necessarily disjoint from \(\gamma\)):

(vi) \(\zeta\) has exactly \(P\) open edges, one crossing each of the circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{L},\mathcal{I}_{L+1},\ldots,\mathcal{I}_{P}\).
(vii) For each circuit \(\mathcal{I}_{j}\), let \(x\) and \(y\) be the first and last vertices from \(\gamma\) on that circuit. Then, the portion of \(\gamma\) between \(x\) and \(y\) lies entirely on \(\mathcal{I}_{j}\). If \(\mathcal{I}_{j}\) and \(\mathcal{I}_{j+1}\) are not vertex-disjoint, then the path \(\gamma\) crosses from \(\mathcal{I}_{j}\) to \(\mathcal{I}_{j+1}\) via a common vertex, and \(\gamma\) contains no edges lying strictly between the circuits.
(viii) If \(\gamma_{R}\) is the geodesic corresponding to \(\mathcal{A}=\{0\}\) and \(\mathcal{B}_{R}=\partial B_{R}\) for \(R\geq 0\), then the sequence \(\gamma_{R}\) can be chosen so that the portion of \(\{\gamma_{R}\}_{R\in\mathbb{Z}}\) between successive open circuits does not depend on \(R\) (while the portion between the last circuit in \(B_{R}\) and the boundary does depend on \(R\)).
(ix) From every open edge \(e\in\gamma\) with \(e\notin\mathcal{I}_{1}\cup\cdots\cup\mathcal{I}_{L}\cup\mathcal{I}_{L+1}\cup \cdots\cup\mathcal{I}_{P}\), there exists a closed dual path from \(e\) to \(\zeta\).
(x) The dual of each closed edge along \(\gamma\) belongs to a closed circuit \(\mathcal{U}\) that either contains \(\mathcal{A}\) in its interior and \(\mathcal{B}\) in its exterior, or vice versa. The circuit \(\mathcal{U}\) does not contain the dual of any other edges along \(\gamma\).
(xi) With \(\{0,1\}\)-valued edge-weights, the closed circuits \(\mathcal{U}\) from Item (x) can be chosen to form an edge-disjoint collection \(\mathcal{U}_{1},\ldots,\mathcal{U}_{V}\).
The union of the circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{P}\) and the circuits \(\mathcal{U}_{1},\ldots,\mathcal{U}_{V}\) forms a sequence \(\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\), which is ordered so that, for some index \(W\in\{0,\ldots,K\}\), \(\mathsf{int}(\mathcal{C}_{W})\cap\mathsf{int}(\mathcal{C}_{W+1})=\varnothing\) (with the convention \(\mathsf{int}(\mathcal{C}_{0})=\mathsf{int}(\mathcal{C}_{K+1})=\varnothing\)), and we have the following inclusions \[\mathcal{A}\subseteq\mathsf{int}(\mathcal{C}_{1})\subseteq\cdots\subseteq \mathsf{int}(\mathcal{C}_{W}),\quad\text{and}\quad\mathsf{int}(\mathcal{C}_{W +1})\supseteq\cdots\supseteq\mathsf{int}(\mathcal{C}_{K})\supseteq\mathcal{B}.\] We recall that, on the event \(\Omega_{\infty}\), there exists a closed circuit enclosing both \(\mathcal{A}\) and \(\mathcal{B}\); choose one such circuit \(\widehat{\mathcal{D}}\) in a deterministic fashion. One way to do this is to let \(\widehat{\mathcal{D}}\) be the unique innermost such circuit from Lemma 6.11, but it is not necessary to make this particular choice.

### 7.1. Construction of the circuits and Proof of Items (i)-(iv)

Assuming that the set of circuits \[S_{0}=\{\text{Open circuits }\mathcal{C}:\mathcal{A}\subseteq\mathsf{int}( \mathcal{C})\text{ and }\mathcal{B}\subseteq\mathsf{ext}(\mathcal{C})\}\] is nonempty, let \(\mathcal{I}_{1}\) be the unique innermost circuit from Lemma 6.11 consisting of edges of circuits in \(S_{0}\) and satisfying \[\mathcal{A}\subseteq\mathsf{int}(\mathcal{I}_{1})\subseteq\bigcap_{\mathcal{C} \in S_{0}}\mathsf{int}(\mathcal{C}).\] Then, also, \[\mathcal{B}\subseteq\bigcup_{\mathcal{C}\in S_{0}}(\mathsf{ext}(\mathcal{C})\cup \mathcal{C})\subseteq\mathsf{ext}(\mathcal{I}_{1})\cup\mathcal{I}_{1},\] but points of \(\mathcal{B}\) cannot lie on \(\mathcal{I}_{1}\) because they do not lie on any of the \(\mathcal{C}\in S_{0}\) by assumption. Hence, \(\mathcal{B}\subseteq\mathsf{ext}(\mathcal{I}_{1})\). Inductively, for \(j\geq 1\), assume that \(\mathcal{I}_{j}\) has been constructed, and assume that the set \[S_{j}=\{\text{Open circuits }\mathcal{C}\text{ edge disjoint from and enclosing }\mathcal{I}_{j}:\mathcal{B}\subseteq\mathsf{ext}(\mathcal{C})\} \tag{7.1}\] is nonempty. Then, let \(\mathcal{I}_{j+1}\) be the unique innermost circuit chosen from the set \(S_{j}\) from Lemma 6.11. Again, \(\mathcal{A}\subseteq\mathsf{int}(\mathcal{I}_{j+1})\) and \(\mathcal{B}\subseteq\mathsf{ext}(\mathcal{I}_{j+1})\). Let \(L\) be the smallest index \(j\) such that \(S_{j}=\varnothing\). We see by induction that \[\mathcal{A}\subseteq\mathsf{int}(\mathcal{I}_{1})\subseteq\mathsf{int}( \mathcal{I}_{2})\subseteq\cdots\subseteq\mathsf{int}(\mathcal{I}_{L}) \subseteq\mathsf{int}(\mathcal{I}_{L})\cup\mathcal{I}_{L}\subseteq\mathcal{B} ^{C},\] so Item (i) holds. We now argue that \(L\) must be finite. Take \(\zeta\) to be any finite dual path from \(\mathcal{A}\) to \(\mathcal{B}\). The path \(\zeta\) must cross any circuit that encloses \(\mathcal{A}\) and keeps \(\mathcal{B}\) in its exterior. The path \(\zeta\) can only cross \(\mathcal{I}_{j}\) at the midpoint of an edge, and since the circuits \(\mathcal{I}_{j}\) are pairwise edge-disjoint, \(L\) can be no greater than the finite number of edges in \(\zeta\).
Next, define \(S^{\prime}_{L}\) to be \[S^{\prime}_{L}=\{\text{Open circuits }\mathcal{C}\text{ edge disjoint from }\mathcal{I}_{L}:\mathsf{int}(\mathcal{I}_{L})\subseteq\mathsf{ext}( \mathcal{C})\text{ and }\mathcal{B}\subseteq\mathsf{int}(\mathcal{C})\}.\] In the case \(L=0\), we replace \(\mathsf{int}(\mathcal{I}_{L})\) with \(\mathcal{A}\) in the definition of \(S^{\prime}_{L}\). In the proof below, we often refer to \(\mathsf{int}(\mathcal{I}_{L})\), but the reader should replace this with \(\mathcal{A}\) for the case \(L=0\). Assuming that \(S^{\prime}_{L}\) is nonempty, we show that \(S^{\prime}_{L}\) is finite. Lemma 6.9 implies that any open circuit in \(S^{\prime}_{L}\) either encloses or is enclosed by \(\widehat{\mathcal{D}}\) (their interiors share the points of \(\mathcal{B}\)). If an open circuit encloses \(\widehat{\mathcal{D}}\), it encloses both \(\mathcal{A}\) and \(\mathcal{B}\) and is therefore not in \(S^{\prime}_{L}\). Thus, \(S^{\prime}_{L}\) is a subset of open circuits enclosed by \(\widehat{\mathcal{D}}\) and is therefore finite. Let \(\mathcal{I}_{L+1}\) be the unique circuit from Lemma 6.10 satisfying \(\bigcup_{\mathcal{C}\in S^{\prime}_{L}}\mathsf{int}(\mathcal{C})\subseteq \mathsf{int}(\mathcal{I}_{L+1})\) and \(\mathcal{I}_{L+1}\subseteq\bigcup_{\mathcal{C}\in S^{\prime}_{L}}\mathcal{C}\). It follows immediately that \(\mathcal{B}\subseteq\mathsf{int}(\mathcal{I}_{L+1})\) and that \(\mathcal{I}_{L+1}\) is edge disjoint from \(\mathcal{I}_{L}\). We now prove the nontrivial fact that \(\mathsf{int}(\mathcal{I}_{L})\subseteq\mathsf{ext}(\mathcal{I}_{L+1})\), and in particular, \(\mathsf{int}(\mathcal{I}_{L})\cap\mathsf{int}(\mathcal{I}_{L+1})=\varnothing\), proving Item (ii). Refer to Figure 12 for an explanation of why this is nontrivial and a visual representation of the contradiction derived in what follows. Pick arbitrary \(\mathcal{C}_{1},\mathcal{C}_{2}\in S^{\prime}_{L}\), and let \(\mathcal{C}\) be the unique circuit in Lemma 6.10 satisfying \(\mathsf{int}(\mathcal{C}_{1})\cup\mathsf{int}(\mathcal{C}_{2})\subseteq \mathsf{int}(\mathcal{C})\) and \(\mathcal{C}\subseteq\mathcal{C}_{1}\cup\mathcal{C}_{2}\). By induction, it suffices to show that \(\mathsf{int}(\mathcal{I}_{L})\subseteq\mathsf{ext}(\mathcal{C})\). Assume, to the contrary, that this is false. The set \(\mathsf{int}(\mathcal{I}_{L})\) cannot contain points on \(\mathcal{C}\) because \(\mathcal{C}\) consists of edges of \(\mathcal{C}_{1},\mathcal{C}_{2}\), and \(\mathsf{int}(\mathcal{I}_{L})\) is contained in the exterior of both of these. Then, since \(\mathsf{int}(\mathcal{I}_{L})\) is path connected, it cannot contain points in both \(\mathsf{int}(\mathcal{C})\) and \(\mathsf{ext}(\mathcal{C})\) (if \(L=0\), we replace \(\mathsf{int}(\mathcal{I}_{L})\) with the path connected set consisting of vertices of \(\mathcal{A}\) and all edges connecting them). Hence, our assumption implies that \(\mathsf{int}(\mathcal{I}_{L})\subseteq\mathsf{int}(\mathcal{C})\). Thus, the set \[\widetilde{S}=\{\text{Open circuits }\widetilde{\mathcal{C}}\subseteq \mathcal{C}_{1}\cup\mathcal{C}_{2}\text{ enclosing }\mathsf{int}(\mathcal{I}_{L})\}\] is nonempty. Let \(\widetilde{\mathcal{C}}_{1}\) be the unique circuit from Lemma 6.11 satisfying \(\mathsf{int}(\mathcal{I}_{L})\subseteq\mathsf{int}(\widetilde{\mathcal{C}}_{1})\subseteq\bigcap_{\widetilde{\mathcal{C}}\in\widetilde{S}}\mathsf{int}(\widetilde{\mathcal{C}})\) and \(\widetilde{\mathcal{C}}_{1}\subseteq\bigcup_{\widetilde{\mathcal{C}}\in \widetilde{S}}\widetilde{\mathcal{C}}\).
By similar reasoning as before, \(\mathcal{B}\) must be contained in exactly one of \(\mathsf{int}(\widetilde{\mathcal{C}}_{1})\) or \(\mathsf{ext}(\widetilde{\mathcal{C}}_{1})\). If \(\mathcal{B}\subseteq\mathsf{ext}(\widetilde{\mathcal{C}}_{1})\), then \(\widetilde{\mathcal{C}}_{1}\in S_{L}\), a contradiction because \(S_{L}\) is empty. Therefore, \(\mathcal{B}\subseteq\mathsf{int}(\widetilde{\mathcal{C}}_{1})\). Consider two cases: **Case 1:** \(\mathcal{C}_{1}\) and \(\widetilde{\mathcal{C}}_{1}\) have one or fewer vertices in common. Then, since \(\widetilde{\mathcal{C}}_{1}\) consists only of edges of \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\), and any edge of \(\mathcal{C}_{1}\) would contribute two common vertices, we have \(\widetilde{\mathcal{C}}_{1}\subseteq\mathcal{C}_{2}\) and hence \(\widetilde{\mathcal{C}}_{1}=\mathcal{C}_{2}\). This is a contradiction because \(\widetilde{\mathcal{C}}_{1}\) encloses \(\mathsf{int}(\mathcal{I}_{L})\), but \(\mathsf{int}(\mathcal{I}_{L})\subseteq\mathsf{ext}(\mathcal{C}_{2})\). **Case 2:** \(\mathcal{C}_{1}\) and \(\widetilde{\mathcal{C}}_{1}\) have at least two vertices in common. We show here that there exists a point along \(\mathcal{C}_{1}\) in \(\mathsf{int}(\widetilde{\mathcal{C}}_{1})\). Once this is shown, we follow \(\mathcal{C}_{1}\) in each direction from this point until first hitting \(\widetilde{\mathcal{C}}_{1}\), at two distinct vertices. Lemma 6.5 then allows us to construct a smaller circuit built from edges of \(\mathcal{C}_{1},\mathcal{C}_{2}\) that encloses \(\mathsf{int}(\mathcal{I}_{L})\), a contradiction to the minimality of \(\widetilde{\mathcal{C}}_{1}\). We know \(\mathcal{C}_{1}\neq\widetilde{\mathcal{C}}_{1}\) by the same reasoning as in Case 1. Furthermore, the circuit \(\mathcal{C}_{1}\) cannot enclose \(\widetilde{\mathcal{C}}_{1}\) because then, it would enclose \(\mathsf{int}(\mathcal{I}_{L})\). We now have two subcases: **Subcase 2.1:** \(\widetilde{\mathcal{C}}_{1}\) encloses \(\mathcal{C}_{1}\). Then, all points of \(\mathcal{C}_{1}\) are on or in the interior of \(\widetilde{\mathcal{C}}_{1}\). Not all points on \(\mathcal{C}_{1}\) can be on \(\widetilde{\mathcal{C}}_{1}\) because no proper subset of a Jordan curve is a Jordan curve and \(\widetilde{\mathcal{C}}_{1}\neq\mathcal{C}_{1}\). **Subcase 2.2:** Neither of the circuits \(\widetilde{\mathcal{C}}_{1},\mathcal{C}_{1}\) encloses the other. Both circuits contain \(\mathcal{B}\) in their interior, so Lemma 6.2 provides the desired point along \(\mathcal{C}_{1}\) in the interior of \(\widetilde{\mathcal{C}}_{1}\). Lastly, we construct the circuits \(\mathcal{I}_{L+2},\ldots,\mathcal{I}_{P}\). Given \(\mathcal{I}_{j}\) for \(j\geq L+1\), consider the set \[S^{\prime}_{j}=\{\text{Open circuits }\mathcal{C}\text{ enclosing }\mathcal{B}\text{ that are also enclosed by and edge-disjoint from }\mathcal{I}_{j}\}.\] If \(S^{\prime}_{j}\) is nonempty, let \(\mathcal{I}_{j+1}\) be the unique outermost circuit constructed in Lemma 6.10 from the circuits in \(S^{\prime}_{j}\). Then, \(\mathcal{I}_{j+1}\) encloses \(\mathcal{B}\) and is edge-disjoint from \(\mathcal{I}_{j}\). The points along the edges of \(\mathcal{I}_{j+1}\) all lie on or in the interior of \(\mathcal{I}_{j}\), so \(\mathcal{I}_{j}\) must enclose \(\mathcal{I}_{j+1}\) by Lemma 6.1. The index \(P\) is the smallest index \(j\) so that \(S^{\prime}_{j}=\varnothing\). Then, \[\mathcal{A}^{C}\supseteq\mathcal{I}_{L+1}\cup\mathsf{int}(\mathcal{I}_{L+1}) \supseteq\mathsf{int}(\mathcal{I}_{L+1})\supseteq\mathsf{int}(\mathcal{I}_{L+ 2})\supseteq\cdots\supseteq\mathsf{int}(\mathcal{I}_{P})\supseteq\mathcal{B},\] thus proving Item (iii).
Figure 12. In this figure, \(\mathcal{I}_{L}\) is the circuit enclosing \(\mathcal{A}\) denoted in black/thin. Consider the red/medium thickness and blue/thickest circuits. These both enclose \(\mathcal{B}\) and both keep \(\mathcal{I}_{L}\) in their exteriors. However, the unique maximal circuit that joins the two circuits as in Lemma 6.10 (the outer dashed gray circuit) does not keep \(\mathcal{I}_{L}\) in its exterior. We show that such pairs of circuits cannot exist by establishing the existence of the inner dashed gray circuit built from the two circuits, which contradicts the maximality of \(\mathcal{I}_{L}\).

To get Item (iv), note that successive circuits \(\mathcal{I}_{j},\mathcal{I}_{j+1}\) are edge disjoint and either (i) one encloses the other or (ii) they have disjoint interiors. Hence, they can only touch at a corner (see Figure 3), so the circuits \(\mathcal{I}_{j}\) and \(\mathcal{I}_{j+2}\) must be vertex-disjoint.

### 7.2. Construction of the dual paths \(\zeta_{e}\) and \(\zeta\) (Proof of Items (v)-(vi))

We argue here that there exists a dual path from \(\mathcal{A}\) to \(\mathcal{B}\) that has exactly \(P\) open edges, one crossing each of the circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{L},\mathcal{I}_{L+1},\ldots,\mathcal{I}_{P}\). This is done by starting at \(\mathcal{B}\) and moving backwards to \(\mathcal{A}\). There are five stages of the construction:

(a) Show the existence of a closed dual path from \(\mathcal{B}\) to \(\mathcal{I}_{P}\).
(b) For \(j\in\{L+1,\ldots,P-1\}\), given a fixed dual neighbor \(x^{\star}\) of \(\mathcal{I}_{j+1}\) lying in \(\mathsf{ext}(\mathcal{I}_{j+1})\), show the existence of a closed dual path from \(x^{\star}\) to \(\mathcal{I}_{j}\).
(c) Given a fixed dual neighbor \(x^{\star}\) of \(\mathcal{I}_{L+1}\) lying in \(\mathsf{ext}(\mathcal{I}_{L+1})\), show the existence of a closed dual path from \(x^{\star}\) to \(\mathcal{I}_{L}\) (or to \(\mathcal{A}\) if \(L=0\)). If \(P=L\) so that there is no circuit \(\mathcal{I}_{L+1}\), then the path is from \(\mathcal{B}\) (without specifying the vertex) to \(\mathcal{I}_{L}\) (or to \(\mathcal{A}\) if \(L=0\)).
(d) For \(j\in\{1,\ldots,L-1\}\), given a fixed dual neighbor \(x^{\star}\) of \(\mathcal{I}_{j+1}\) lying in \(\mathsf{int}(\mathcal{I}_{j+1})\), show the existence of a closed dual path from \(x^{\star}\) to \(\mathcal{I}_{j}\).
(e) Given a fixed dual neighbor \(x^{\star}\) of \(\mathcal{I}_{1}\) lying in \(\mathsf{int}(\mathcal{I}_{1})\), show the existence of a closed dual path from \(x^{\star}\) to \(\mathcal{A}\).

These steps give us a map for constructing the path \(\zeta\) in Item (vi): Starting from \(\mathcal{B}\), we travel to a dual neighbor of \(\mathcal{I}_{P}\). Then, we can cross to the exterior of \(\mathcal{I}_{P}\) with a single open edge. From the endpoint of that edge in \(\mathsf{ext}(\mathcal{I}_{P})\), we take a path to a dual neighbor of \(\mathcal{I}_{P-1}\), then cross \(\mathcal{I}_{P-1}\) with a single edge. Continue this procedure until we reach \(\mathcal{A}\). The construction of the paths \(\zeta_{e}\) from Item (v) is the same, except that we now start at the edge \(e\) and go backwards to \(\mathcal{A}\). The choice of the geodesic \(\gamma\) will come later. In Section 7.5, we show that \(\gamma\) and \(\zeta\) can be chosen to be disjoint. **Part (a)**: If \(P=L\) (meaning the set \(\{\mathcal{I}_{L+1},\ldots,\mathcal{I}_{P}\}\) is empty), then this case is deferred to Part (c).
Otherwise, we start by showing there is a closed path from \(\mathcal{B}\) to \(\mathcal{I}_{P}\). Here, we interpret \(\mathcal{B}\) as a collection of vertices, and \(\mathcal{I}_{P}\) as a collection of edges, so this means that there exists a closed dual path from some dual neighbor of a _vertex_ in \(\mathcal{B}\) to some dual neighbor of an _edge_ in \(\mathcal{I}_{P}\). Since \(\mathcal{B}\subseteq\mathsf{int}(\mathcal{I}_{P})\), every dual neighbor of a point \(x\in\mathcal{B}\) lies in \(\mathsf{int}(\mathcal{I}_{P})\). To see this, note that dual vertices cannot lie on \(\mathcal{I}_{P}\), so they must either lie in \(\mathsf{int}(\mathcal{I}_{P})\) or \(\mathsf{ext}(\mathcal{I}_{P})\). If \(x\in\mathsf{int}(\mathcal{I}_{P})\) and a dual neighbor \(x^{\star}\) lies in \(\mathsf{ext}(\mathcal{I}_{P})\), then the diagonal path connecting the two vertices must cross \(\mathcal{I}_{P}\). As \(\mathcal{I}_{P}\) lies on the primal lattice, this is not possible. Now, we claim that the set of dual neighbors of vertices in \(\mathcal{B}\) is a connected set (connected by dual edges). The set of northwest dual neighbors is connected, since this is just a shifted version of the set \(\mathcal{B}\). Likewise, the sets of southwest, northeast, and southeast dual neighbors are all connected. But each of these sets is connected to each other, so the entire set of dual neighbors is connected. We perform a modification of the environment as follows. Figure 13 gives an illustration. Set all primal edges (along with their dual counterparts) that lie on \(\mathcal{I}_{P}\) or have at least one endpoint in the exterior of \(\mathcal{I}_{P}\) to closed. Set all dual edges connecting vertices dual to \(\mathcal{B}\) to closed (along with their primal counterparts). This latter set of edges is connected, and we let \(W\) be the cluster of closed edges that contains these. Then, there is no closed dual path from \(\mathcal{B}\) to \(\mathcal{I}_{P}\) in the original environment if and only if \(W\) is bounded in the modified environment. If \(W\) is bounded in this environment, then Lemma 5.6 guarantees the existence of an open circuit \(\mathcal{C}\) that contains \(W\) in its interior. The circuit \(\mathcal{C}\) is enclosed by and edge disjoint from \(\mathcal{I}_{P}\) because all edges on or in the exterior of \(\mathcal{I}_{P}\) are closed. The open circuit \(\mathcal{C}\) must also be open in the original environment. Since \(\mathcal{I}_{P}\) is the last circuit in the sequence, we derive a contradiction once we show that \(\mathcal{C}\) also contains \(\mathcal{B}\) in its interior. No vertex \(x\in\mathcal{B}\) can lie in \(\mathsf{ext}(\mathcal{C})\) because otherwise, the diagonal path from \(x\) to any of its dual neighbors (which lie in \(\mathsf{int}(\mathcal{C})\)) must cross \(\mathcal{C}\). But \(\mathcal{C}\) consists of primal edges and so this is not possible. The only remaining possibility for \(\mathcal{B}\not\subseteq\mathsf{int}(\mathcal{C})\) comes when some \(x\in\mathcal{B}\) lies on the circuit \(\mathcal{C}\). Then, exactly two edges of \(\mathcal{C}\) share the vertex \(x\). Take one of these edges \(e\). By Lemma 6.3, one of the two dual neighbors of \(e\) lies in \(\mathsf{ext}(\mathcal{C})\). But this dual neighbor of the edge \(e\) is a dual neighbor of the vertex \(x\), which is contained in \(\mathsf{int}(\mathcal{C})\) by assumption, a contradiction.
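The argument of Part (a), like Parts (b)-(e) below, turns on whether the cluster of closed dual edges grown from a seed set reaches a target set (equivalently, whether it is bounded after the modification); this is a plain breadth-first search. The sketch below is ours and purely illustrative: it takes an explicit finite list of closed dual edges rather than a random environment.

```python
# BFS over closed dual edges: does the closed cluster of `seeds` meet
# `targets`? Each dual edge is stored as a frozenset of its two endpoints.
from collections import deque

def cluster_reaches(closed_edges, seeds, targets):
    """Return True iff the closed dual cluster of `seeds` meets `targets`."""
    adj = {}
    for edge in closed_edges:
        u, v = tuple(edge)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, queue = set(seeds), deque(seeds)
    while queue:
        u = queue.popleft()
        if u in targets:
            return True
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# Toy example: a closed dual path from (0.5, 0.5) to (2.5, 0.5).
path = [frozenset({(0.5, 0.5), (1.5, 0.5)}), frozenset({(1.5, 0.5), (2.5, 0.5)})]
print(cluster_reaches(path, {(0.5, 0.5)}, {(2.5, 0.5)}))  # True
```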
**Part (b)**: Similarly as in Part (a), modify the environment so that all edges along \(\mathcal{I}_{j}\) or with at least one endpoint in \(\mathsf{ext}(\mathcal{I}_{j})\) are set to closed (along with their dual edges). Further, since \(x^{\star}\) is a dual neighbor of \(\mathcal{I}_{j+1}\), there exists a dual edge \(e^{\star}\) crossing \(\mathcal{I}_{j+1}\). Set \(e^{\star}\) and its corresponding primal edge to closed. If there is no closed path from \(x^{\star}\) to \(\mathcal{I}_{j}\) in the original environment, then in the modified environment, the closed cluster of dual edges connected to \(x^{\star}\) is bounded. Lemma 5.6 implies the existence of an open circuit \(\mathcal{C}\) that contains \(x^{\star}\) in its interior. \(\mathcal{C}\) is enclosed by and edge-disjoint from \(\mathcal{I}_{j}\) because all edges on or in the exterior of \(\mathcal{I}_{j}\) were set to closed. The other endpoint of \(e^{\star}\) also lies in \(\mathsf{int}(\mathcal{C})\) because \(e^{\star}\) was set to closed and therefore cannot cross \(\mathcal{C}\). But since \(x^{\star}\in\mathsf{ext}(\mathcal{I}_{j+1})\), this other endpoint lies in \(\mathsf{int}(\mathcal{I}_{j+1})\) by Lemma 6.3. By Lemma 6.10, there exists a circuit \(\mathcal{C}^{\prime}\) which (i) consists entirely of edges of \(\mathcal{I}_{j+1}\) and \(\mathcal{C}\) (and therefore is edge disjoint from and enclosed by \(\mathcal{I}_{j}\)) and (ii) encloses \(\mathcal{I}_{j+1}\). This contradicts the definition of \(\mathcal{I}_{j+1}\) as the outermost next circuit in the sequence after \(\mathcal{I}_{j}\).

Figure 13. Closed edges are denoted by red/thick, and open edges are denoted by blue/thin. The edges on \(\mathcal{I}_{P}\) have been set to closed, as well as all edges outside \(\mathcal{I}_{P}\) and all edges connecting vertices of \(\mathcal{B}\). In this example, the closed component \(W\) of \(\mathcal{B}\) is bounded, so \(\mathcal{B}\) is not connected to \(\mathcal{I}_{P}\) by a closed path, and there exists an open circuit enclosing \(W\).

**Part (c):** Recall the definition of the circuit \(\widehat{\mathcal{D}}\) at the beginning of this section and the observation that all the circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{L},\mathcal{I}_{L+1},\ldots,\mathcal{I}_{P}\) are contained in \(\mathsf{int}(\widehat{\mathcal{D}})\). We consider a modification of the environment as follows. If \(L=P\) (meaning there is no circuit \(\mathcal{I}_{L+1}\)), then consider the connected set of dual vertices that are neighbors of some vertex in \(\mathcal{B}\). Set all the dual edges connecting this set to closed (along with their primal counterparts). Call the cluster of closed dual edges connected to these edges \(W_{\mathcal{B}}\). If \(P>L\), then we are given a fixed dual neighbor \(x^{\star}\) of \(\mathcal{I}_{L+1}\) lying in \(\mathsf{ext}(\mathcal{I}_{L+1})\). The vertex \(x^{\star}\) is the endpoint of a dual edge crossing \(\mathcal{I}_{L+1}\): set that edge to closed. Set all primal edges with at least one endpoint in \(\mathsf{int}(\mathcal{I}_{L+1})\) to closed. Then, \(W_{\mathcal{B}}\) is the cluster of dual closed edges that connects to these edges inside \(\mathcal{I}_{L+1}\). We construct another closed cluster, denoted \(W_{\mathcal{A}}\), as follows. If \(L=0\) so that there is no circuit \(\mathcal{I}_{L}\), then we set all dual edges connecting dual neighbors of \(\mathcal{A}\) to closed and let \(W_{\mathcal{A}}\) be the closed dual cluster of these edges.
If \(L>0\), then we set all edges that lie on the circuit \(\mathcal{I}_{L}\) or have at least one endpoint in \(\mathsf{int}(\mathcal{I}_{L})\) to closed, and we let \(W_{\mathcal{A}}\) be the closed cluster of the associated dual edges in the modified environment. We also set all edges with at least one endpoint in \(\mathsf{ext}(\widehat{\mathcal{D}})\) to closed, and let \(W_{\widehat{\mathcal{D}}}\) be the infinite cluster of closed edges connected to \(\widehat{\mathcal{D}}\) in the modified environment. Now, assume, by way of contradiction, that in the original environment, there is no closed dual path from \(x^{\star}\) (or from \(\mathcal{B}\) if \(L=P\)) to \(\mathcal{I}_{L}\) (or to \(\mathcal{A}\) if \(L=0\)). Then, in the modified environment, \(W_{\mathcal{A}}\neq W_{\mathcal{B}}\). Lemma 5.7 implies the existence of an open circuit \(\mathcal{C}\), lying in the interior of \(\widehat{\mathcal{D}}\), that contains \(W_{\mathcal{A}}\) in its interior and \(W_{\mathcal{B}}\) in its exterior, or vice versa (it may be helpful to refer back to Figure 9). The circuit \(\mathcal{C}\) must also be open in the original environment. In the first case, \(\mathcal{C}\) is edge-disjoint from \(\mathcal{I}_{L}\) because all the edges on \(\mathcal{I}_{L}\) were set to closed in the modified environment. \(\mathcal{C}\) contains the point \(x^{\star}\), which lies in \(\mathsf{ext}(\mathcal{I}_{L+1})\) by assumption. \(\mathcal{C}\) and \(\mathcal{I}_{L+1}\) also share points in their interior because they both enclose \(\mathcal{B}\). Lemma 6.10 implies the existence of another open circuit \(\mathcal{C}^{\prime}\) that is edge-disjoint from \(\mathcal{I}_{L}\) and such that \(\mathsf{int}(\mathcal{I}_{L+1})\cup\mathsf{int}(\mathcal{C})\subseteq\mathsf{int}(\mathcal{C}^{\prime})\). This is a contradiction to our construction of the outermost circuit \(\mathcal{I}_{L+1}\) in Section 7.1. In the case that \(\mathcal{C}\) contains \(W_{\mathcal{B}}\) in its interior and \(W_{\mathcal{A}}\) in its exterior, since all edges on or inside \(\mathcal{I}_{L}\) were set to closed, \(\mathcal{C}\) is edge-disjoint from and encloses \(\mathcal{I}_{L}\). This contradicts the definition of \(\mathcal{I}_{L}\) as the last disjoint innermost circuit enclosing \(\mathcal{A}\) but containing \(\mathcal{B}\) in its exterior.

**Part (d):** The chosen dual vertex \(x^{\star}\) of \(\mathcal{I}_{j+1}\) is the endpoint of a dual edge crossing \(\mathcal{I}_{j+1}\). Set that edge (along with its primal counterpart) to closed. Additionally, set all primal edges with at least one endpoint in \(\mathsf{ext}(\mathcal{I}_{j+1})\) to closed (along with the dual counterparts). Set all edges on \(\mathcal{I}_{j}\) or with at least one endpoint in \(\mathsf{int}(\mathcal{I}_{j})\) to closed. Then, there is no closed dual path between \(x^{\star}\) and \(\mathcal{I}_{j}\) in the original environment if and only if, in the modified environment, the cluster of dual closed edges containing those edges passing through \(\mathcal{I}_{j}\) is bounded. In the latter case, Lemma 5.6 implies that there is an open circuit \(\mathcal{C}\) surrounding \(\mathcal{I}_{j}\) in the modified environment (which is also open in the original environment). This open circuit is edge-disjoint from \(\mathcal{I}_{j}\) and encloses \(\mathcal{I}_{j}\) by Lemma 6.1.
The point \(x^{\star}\) lies in \(\mathsf{ext}(\mathcal{C})\) since, in the modified environment, there is an infinite closed dual path starting from \(x^{\star}\), and this path cannot cross \(\mathcal{C}\). As in Part (b), Lemma 6.11 then contradicts the construction of \(\mathcal{I}_{j+1}\) as the next innermost circuit.

**Part (e):** This is almost the same as Part (d). The only difference is that, instead of setting edges on and inside \(\mathcal{I}_{j}\) to closed, we set all edges which connect vertices in \(\mathcal{A}\) to closed. The rest of the proof is the same. The proofs of Items (viii) and (ix) of Proposition 2.5 are postponed until the end of the section.

### Existence of closed circuits hitting the closed edges (Proof of Items (vii) and (xi))

Take \(\gamma\) to be any geodesic from \(\mathcal{A}\) to \(\mathcal{B}\). For every circuit \(\mathcal{I}_{j}\), let \(x\) and \(y\) be the first and last vertices of \(\gamma\) on \(\mathcal{I}_{j}\). Then, we modify the portion of \(\gamma\) between \(x\) and \(y\) to follow the open circuit \(\mathcal{I}_{j}\) (in either direction) if it does not already. We can similarly force \(\gamma\) to take no edges between the circuits \(\mathcal{I}_{j}\) and \(\mathcal{I}_{j+1}\) if these two are not vertex-disjoint. This is Item (vii). To prove Item (xi), enumerate the closed edges along the path, in order starting from \(\mathcal{A}\), as \(e_{1},\ldots,e_{V}\), and choose one of the edges \(e_{i}\). We continue to work on the event \(\Omega_{\infty}\) and this time make use of a deterministically chosen open circuit \(\widehat{\mathcal{C}}\) that encloses both \(\mathcal{A}\) and \(\mathcal{B}\). Modify the environment in the following manner: set all edges with at least one endpoint in the exterior of \(\widehat{\mathcal{C}}\) to open. Set all edges along \(\gamma\) to open, except for the edge \(e_{i}\). Set all edges connecting two vertices in \(\mathcal{A}\) to open, as well as all edges connecting two vertices in \(\mathcal{B}\). This creates three open clusters of edges: the cluster \(W_{\mathcal{A}}\) consisting of all open edges with an open path to \(\mathcal{A}\), the cluster \(W_{\mathcal{B}}\) of open edges with a path to \(\mathcal{B}\), and the unbounded cluster \(W_{\widehat{\mathcal{C}}}\) of the circuit \(\widehat{\mathcal{C}}\). Since \(\gamma\) is a geodesic, \(W_{\mathcal{A}}\neq W_{\mathcal{B}}\). Otherwise, there would be an open path from \(\mathcal{A}\) to \(\mathcal{B}\) in the modified environment. In the original environment, this path may still take some of the closed edges \(e_{j}\), but it avoids the closed edge \(e_{i}\), and thus it has a strictly lower passage time than the geodesic \(\gamma\), a contradiction. Now that we have established \(W_{\mathcal{A}}\neq W_{\mathcal{B}}\), the dual counterpart of Lemma 5.7 implies the existence of a closed circuit \(\mathcal{U}\) lying in \(\mathsf{int}(\widehat{\mathcal{C}})\) that contains \(W_{\mathcal{A}}\) (and therefore also \(\mathcal{A}\)) in its interior and \(W_{\mathcal{B}}\) (therefore also \(\mathcal{B}\)) in its exterior, or vice versa. The geodesic from \(\mathcal{A}\) to \(\mathcal{B}\) thus passes through vertices in \(\mathsf{ext}(\mathcal{U})\) and vertices in \(\mathsf{int}(\mathcal{U})\). The path between these must then cross \(\mathcal{U}\). Since all edges on the geodesic \(\gamma\) except for \(e_{i}\) were set to open, the edge \(e_{i}\) is the only edge on \(\gamma\) whose dual belongs to \(\mathcal{U}\).
### Separate argument in the Bernoulli case (Proof of Item (xi))

In the case of Bernoulli weights, the dual version of the construction of the circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{L},\mathcal{I}_{L+1},\ldots,\mathcal{I}_{P}\) allows us to successively construct edge-disjoint innermost closed circuits \(\mathcal{U}_{1},\ldots,\mathcal{U}_{U}\) surrounding \(\mathcal{A}\), followed by successive outermost closed circuits \(\mathcal{U}_{U+1},\ldots,\mathcal{U}_{V}\) surrounding \(\mathcal{B}\). Since the circuits are edge-disjoint, any geodesic \(\gamma\) must pass through each circuit at least once. By a symmetric argument to the construction in Section 7.2, we may construct a primal path \(\gamma\) that crosses each of the circuits \(\mathcal{U}_{1},\ldots,\mathcal{U}_{U},\mathcal{U}_{U+1},\ldots,\mathcal{U}_{V}\) exactly once. This path is therefore a geodesic. By Lemma 6.9, for each pair of open and closed circuits in this sequence, either their interiors are disjoint or one encloses the other. This gives a natural ordering of the combined sequence of circuits \(\mathcal{C}_{1},\ldots,\mathcal{C}_{K}\): we start with the circuits that enclose \(\mathcal{A}\), starting from the innermost, then move to the circuits that enclose \(\mathcal{B}\), starting from the outermost. This implies that, for some index \(W\in\{0,\ldots,K\}\), \(\mathsf{int}(\mathcal{C}_{W})\cap\mathsf{int}(\mathcal{C}_{W+1})=\varnothing\). Furthermore, \[\mathcal{A}\subseteq\mathsf{int}(\mathcal{C}_{1})\subseteq\cdots\subseteq\mathsf{int}(\mathcal{C}_{W}),\quad\text{and}\quad\mathsf{int}(\mathcal{C}_{W+1})\supseteq\cdots\supseteq\mathsf{int}(\mathcal{C}_{K})\supseteq\mathcal{B}.\]

### Modification of \(\gamma\) and \(\zeta\) to be disjoint

Recall that \(\gamma\) is a primal path while \(\zeta\) is a dual path. If \(\gamma\) and \(\zeta\) meet at the midpoint of an open edge, then that edge must lie along one of the circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{L},\mathcal{I}_{L+1},\ldots,\mathcal{I}_{P}\), and so \(\gamma\) contains an edge \(e\) that lies on this circuit. Call the circuit at which the intersection occurs \(\mathcal{C}\). There exists an open path between the endpoints of \(e\) that goes the opposite direction around \(\mathcal{C}\), thus avoiding \(e\). Since \(\zeta\) only crosses \(\mathcal{C}\) once, by rerouting \(\gamma\) in this manner, we have avoided intersection with \(\zeta\) at \(e\) without incurring additional passage time. If \(\gamma\) and \(\zeta\) intersect at the midpoint of a closed edge, then we showed that this edge of \(\gamma\) is the primal counterpart of an edge on a closed circuit. We similarly reroute \(\zeta\) to avoid this edge. We make the observation here that, because the circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{P}\) (as well as the dual circuits containing the closed edges along \(\gamma\)) are not necessarily vertex-disjoint, this procedure could have created paths \(\gamma\) and/or \(\zeta\) that are not self-avoiding. If this is the case, we can simply create self-avoiding paths by deleting some of the edges along the paths.
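The rerouting step above, replacing the shared edge by the arc that goes the other way around the circuit, can be made concrete. The following Python sketch is illustrative only; encoding a circuit as a cyclic list of vertices is our own convention:

```python
def arc_avoiding_edge(circuit, e):
    """Return the path between the endpoints of `e` that travels the
    opposite way around `circuit`, so it uses every circuit edge except `e`.

    `circuit` is a cyclic list of vertices; `e` joins two consecutive entries.
    """
    k = len(circuit)
    for i in range(k):
        if {circuit[i], circuit[(i + 1) % k]} == set(e):
            # start at the far endpoint and walk forward all the way around
            return [circuit[(i + 1 + t) % k] for t in range(k)]
    raise ValueError("edge does not lie on the circuit")
```

As noted above, substituting the returned arc for the offending portion of \(\gamma\) avoids the meeting point without incurring additional passage time, since the arc consists entirely of open circuit edges.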
### Modifying the geodesic \(\gamma\) between any two of its closed edges

We perform a final modification of the geodesic \(\gamma\) to obtain the precise results we need. With the circuits \(\mathcal{I}_{1},\ldots,\mathcal{I}_{L},\mathcal{I}_{L+1},\ldots,\mathcal{I}_{P}\) and the geodesic \(\gamma\) constructed thus far, for \(i\in\{0,\ldots,P\}\), let \(e_{i,1},\ldots,e_{i,J_{i}}\) be an enumeration of the closed edges of the geodesic between the \(i\)th and \((i+1)\)st circuit, in order, starting from the set \(\mathcal{A}\). For each \(e_{i,j}\), let \(x^{1}_{i,j}\) be its first vertex along the path and let \(x^{2}_{i,j}\) be the second vertex. If \(x^{2}_{i,j}=x^{1}_{i,j+1}\), this means that \(e_{i,j}\) and \(e_{i,j+1}\) are successive edges on the geodesic. In this case, we perform no modification, so assume \(x^{2}_{i,j}\neq x^{1}_{i,j+1}\). Let \(\gamma_{i,j}\) be the portion of \(\gamma\) between the two vertices. We may replace \(\gamma_{i,j}\) with _any_ open path between \(x^{2}_{i,j}\) and \(x^{1}_{i,j+1}\), and the modified path is still a geodesic. Instead of vertices, we may also consider portions of the geodesic to the next circuit or the next set. There are, in total, nine possibilities for the open paths that could lie along the geodesic, but by careful use of definitions, we will reduce the number of total cases.

1. Path from \(\mathcal{A}\) to \(\mathcal{B}\) (if the set \(\mathcal{I}_{1},\ldots,\mathcal{I}_{L},\mathcal{I}_{L+1},\ldots,\mathcal{I}_{P}\) is empty and there are no closed edges on \(\gamma\)).
2. Path from \(\mathcal{A}\) to \(\mathcal{I}_{1}\) (if there are no closed edges on \(\gamma\) between \(\mathcal{A}\) and \(\mathcal{I}_{1}\)).
3. Path from \(\mathcal{A}\) to \(x^{1}_{0,1}\).
4. Path from \(\mathcal{I}_{P}\) to \(\mathcal{B}\) (if there are no closed edges of \(\gamma\) from \(\mathcal{I}_{P}\) to \(\mathcal{B}\)).
5. Path from \(\mathcal{I}_{i}\) to \(\mathcal{I}_{i+1}\) (if \(\mathcal{I}_{i}\) and \(\mathcal{I}_{i+1}\) are disjoint and there are no closed edges of \(\gamma\) between the two circuits).
6. Path from \(\mathcal{I}_{i}\) to \(x^{1}_{i,1}\).
7. Path from \(x^{2}_{P,J_{P}}\) to \(\mathcal{B}\).
8. Path from \(x^{2}_{i,J_{i}}\) to \(\mathcal{I}_{i+1}\).
9. Path from \(x^{2}_{i,j}\) to \(x^{1}_{i,j+1}\).

When the path starts at \(\mathcal{A}\) or \(\mathcal{I}_{i}\), we say the first end of the path is _free_. We may move the starting point to a different vertex of \(\mathcal{A}\) or a different vertex of the open circuit \(\mathcal{I}_{i}\) and still have a geodesic. Similarly, when the path ends at \(\mathcal{B}\) or \(\mathcal{I}_{i}\), we say the second end is _free_. Otherwise, we say the end is _fixed_. This notation will greatly simplify the construction, with fewer cases to handle. For our chosen pair of ends (either free or fixed), we let \(\Pi\) be the set of open paths between them. Our task is to choose an element \(\pi\in\Pi\) in a manner so that each edge \(e\) will have a closed dual connection to \(\zeta\). For each \(\pi\in\Pi\), we associate a Jordan curve \(\mathcal{J}_{\pi}\) that lives on the union of the primal and dual lattices. By a simple modification of Lemma 6.8, the area of each such Jordan curve is an integer multiple of \(\frac{1}{4}\), so we choose a path from the set \(\operatorname*{arg\,min}_{\pi}\operatorname{area}(\mathcal{J}_{\pi})\) according to some deterministic total ordering of such circuits. Then, we argue that our chosen path satisfies the desired properties. We construct the Jordan curve for four separate and exhaustive cases.
The following observation will be used several times: any circuit that contains \(\mathcal{A}\) in its interior and \(\mathcal{B}\) in its exterior (or vice versa) must intersect the path \(\zeta\) between \(\mathcal{A}\) and \(\mathcal{B}\). Figure 14 illustrates the case when one end is free and one end is fixed.

**Case 1: Both ends are free.** For this case, we assume that if \(\pi\) is a path from \(\mathcal{I}_{i}\) to \(\mathcal{I}_{i+1}\), then the two circuits are disjoint; if not, Item (vii) states that the geodesic passes from \(\mathcal{I}_{i}\) to \(\mathcal{I}_{i+1}\) without ever leaving the union of the circuits. We also assert that the only vertices along \(\pi\) that intersect \(\mathcal{I}_{i}\), \(\mathcal{I}_{i+1}\), \(\mathcal{A}\), or \(\mathcal{B}\) are the ends. Otherwise, we can take a subset of the path that achieves the same purpose. Traverse the path \(\pi\) between its ends. The latter end connects either to an open circuit or to \(\mathcal{B}\). Follow either the open circuit (in any direction) or a self-avoiding path of edges connecting vertices in \(\mathcal{B}\) until we reach \(\zeta\). Then, we can connect the two paths with half of a primal edge and then half of a dual edge. Follow \(\zeta\) backwards until we reach either \(\mathcal{A}\) or the open circuit containing the first end of \(\pi\). Again, we connect this path onto the primal lattice with a half dual edge and a half primal edge. From there, we follow the open circuit in either direction, or a self-avoiding path of edges connecting vertices of \(\mathcal{A}\), until reaching the first end. A circuit has now been formed. This is a well-defined circuit because \(\zeta\) and \(\gamma\) are disjoint, and \(\zeta\) can only intersect each of the open circuits once.

**Case 2: One end is free.** Without loss of generality, assume that the free end comes second as we traverse between \(\mathcal{A}\) and \(\mathcal{B}\). Traverse \(\pi\) between its ends until meeting the free end. Connect this path to \(\zeta\) just as in Case 1. Let \(\mathcal{U}\) be the closed circuit constructed in Section 7.3 associated to the fixed end. Follow \(\zeta\) backwards until meeting \(\mathcal{U}\). Then, follow \(\mathcal{U}\) (in either direction) until we can connect it to the first end of \(\pi\) with a concatenation of half dual and half primal edges.

**Case 3: Both ends are fixed.** We let \(\mathcal{U}_{i},\mathcal{U}_{i+1}\) be the associated closed circuits constructed in Section 7.3. For this case, we need two subcases.

**Subcase 3.1:** \(\mathcal{U}_{i}\) and \(\mathcal{U}_{i+1}\) are (both edge and vertex) disjoint. Traverse \(\pi\) between its fixed ends until reaching \(\mathcal{U}_{i+1}\), and connect the paths just as in Case 2. Follow \(\mathcal{U}_{i+1}\) in either direction until meeting \(\zeta\). Follow \(\zeta\) backwards until meeting \(\mathcal{U}_{i}\). Then, follow \(\mathcal{U}_{i}\) in either direction until meeting the first fixed end, connecting the paths in the usual way.

Figure 14. The case when one end is free and one end is fixed. Open edges are shown in blue/thin and closed edges are shown in red/thick. The fixed end crosses through a closed dual circuit with a closed primal edge. The path \(\zeta\) crosses one of the \(\mathcal{I}_{i}\) with an open dual edge. We use edges and half-edges to create a Jordan curve seen in the figure.

**Subcase 3.2:** \(\mathcal{U}_{i}\) and \(\mathcal{U}_{i+1}\) share at least one vertex. Traverse the path \(\pi\) between its fixed ends.
Connect to \(\mathcal{U}_{i+1}\) in the usual way and then follow this circuit in either direction until we first meet \(\mathcal{U}_{i}\). Follow \(\mathcal{U}_{i}\) in either direction until we can connect to the first fixed end with two half-edges as usual. It is possible that \(\mathcal{U}_{i}\) hits \(\mathcal{U}_{i+1}\) again, but it will not intersect the portion of \(\mathcal{U}_{i+1}\) that we used as part of the Jordan curve, since we started along \(\mathcal{U}_{i}\) at the _first_ place that the two circuits met. We emphasize that in the construction of the associated Jordan curves \(\mathcal{J}_{\pi}\), there were often several choices of path (traversing the circuits in either direction, etc.). For each portion of the geodesic, we modify by choosing \(\pi\) from the set of open paths between the ends with the smallest _possible_ area of such a Jordan curve (breaking ties deterministically), and we let \(\mathcal{J}_{\pi}\) be the associated smallest Jordan curve.

### Proof of consistency and existence of the closed arms (Proof of Items (viii) and (ix))

The consistency of the geodesics in Item (viii) follows because the construction of the paths \(\zeta\) and \(\gamma\) between any pair of successive open circuits depends only on the environment between those circuits. We turn to Item (ix). Let \(e\in\gamma\) be an open edge with \(e\notin\mathcal{I}_{1}\cup\cdots\cup\mathcal{I}_{L}\cup\mathcal{I}_{L+1}\cup\cdots\cup\mathcal{I}_{P}\). Let \(\pi\) be the associated portion of \(\gamma\) between successive closed edges or open circuits, as in Section 7.6, and let \(\mathcal{J}_{\pi}\) be the associated Jordan curve of smallest area. In each case of the construction, \(e\) is an edge along \(\mathcal{J}_{\pi}\). Let \(x_{1}^{\star},x_{2}^{\star}\) be the endpoints of the associated dual edge \(e^{\star}\). Since \(\mathcal{J}_{\pi}\) lives on the union of the primal and dual lattices, either at least one of these points lies on \(\mathcal{J}_{\pi}\) (in which case the desired path is trivial), or Lemma 6.3 implies that one of these points lies in \(\mathsf{int}(\mathcal{J}_{\pi})\) and the other lies in \(\mathsf{ext}(\mathcal{J}_{\pi})\). Without loss of generality, assume that \(x_{1}^{\star}\) lies in \(\mathsf{int}(\mathcal{J}_{\pi})\). We perform a modification of the environment as follows. Set each primal edge with at least one endpoint in \(\mathsf{ext}(\mathcal{J}_{\pi})\) to closed (along with the associated dual edges). Then, \(x_{1}^{\star}\) is connected to \(\zeta\) by a closed dual path in the original environment if and only if the closed dual cluster of \(x_{1}^{\star}\) in the modified environment is unbounded. If, by way of contradiction, this is not the case, then in the modified environment, there is an open circuit \(\mathcal{C}\) containing \(x_{1}^{\star}\) in its interior. The open circuit must lie entirely on or in the interior of \(\mathcal{J}_{\pi}\) because all edges outside were set to closed. In other words, \(\mathsf{int}(\mathcal{C})\subseteq\mathsf{int}(\mathcal{J}_{\pi})\). Hence, both \(\mathcal{J}_{\pi}\) and \(\mathcal{C}\) contain \(x_{1}^{\star}\) in their interior and \(x_{2}^{\star}\) in their exterior, so \(e\in\mathcal{J}_{\pi}\cap\mathcal{C}\). Let \(W\) be the connected component of \(\mathcal{J}_{\pi}\cap\mathcal{C}\) containing \(e\). Then, modifying \(\pi\) by replacing \(W\) with \(\mathcal{C}\setminus W\) results in a new path between the two ends (possibly changing the starting and/or end point in the case of free ends).
Let \(\pi^{\prime}\) be the modified path, and let \(\mathcal{J}_{\pi^{\prime}}\) be the Jordan curve obtained by replacing \(\pi\) with \(\pi^{\prime}\). It is straightforward to see that \(\mathcal{J}_{\pi^{\prime}}\) is a Jordan curve that can be obtained from \(\pi^{\prime}\) in the setup of Section 7.6. We show that the area of \(\mathcal{J}_{\pi^{\prime}}\) is smaller than the area of \(\mathcal{J}_{\pi}\), thereby obtaining a contradiction. The path \(\pi\) cannot be the same as \(\pi^{\prime}\): otherwise \(\mathcal{C}=W\). But \(W\) is a subset of the self-avoiding path \(\pi\) and cannot contain a circuit. By Lemma 6.8, it is sufficient to show that \(\mathsf{int}(\mathcal{J}_{\pi^{\prime}})\subseteq\mathsf{int}(\mathcal{J}_{\pi})\). To prove this, take \(x\in\mathsf{int}(\mathcal{J}_{\pi^{\prime}})\). We already established \(\mathsf{int}(\mathcal{C})\subseteq\mathsf{int}(\mathcal{J}_{\pi})\), so we may assume \(x\in\mathsf{ext}(\mathcal{C})\). Let \(\theta\) be any infinite path starting from \(x\). We show that \(\theta\) must cross \(\mathcal{J}_{\pi}\). Suppose not. The path \(\theta\) must cross \(\mathcal{J}_{\pi^{\prime}}\), so the crossing must be along \(\mathcal{C}\setminus\pi\). Since \(x\in\mathsf{ext}(\mathcal{C})\), Lemma 6.4 implies that when this crossing occurs, \(\theta\) crosses into \(\mathsf{int}(\mathcal{C})\). But since \(\mathsf{int}(\mathcal{C})\subseteq\mathsf{int}(\mathcal{J}_{\pi})\), \(\theta\) must cross \(\mathcal{J}_{\pi}\), completing the proof.

## 8. Acknowledgments

We thank Michael Damron for suggesting this problem, many helpful discussions, and invaluable input. Part of this work was conducted at the International Centre for Theoretical Sciences (ICTS), Bengaluru, India during the program "First-passage percolation and related models" in July 2022 (code: ICTS/fpp-2022/7). We thank ICTS for the privilege of attending this meeting and for its hospitality.
2309.04332
Graph Neural Networks Use Graphs When They Shouldn't
Predictions over graphs play a crucial role in various domains, including social networks and medicine. Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data. Although a graph-structure is provided as input to the GNN, in some cases the best solution can be obtained by ignoring it. While GNNs have the ability to ignore the graph-structure in such cases, it is not clear that they will. In this work, we show that GNNs actually tend to overfit the given graph-structure. Namely, they use it even when a better solution can be obtained by ignoring it. We analyze the implicit bias of gradient-descent learning of GNNs and prove that when the ground truth function does not use the graphs, GNNs are not guaranteed to learn a solution that ignores the graph, even with infinite data. We examine this phenomenon with respect to different graph distributions and find that regular graphs are more robust to this overfitting. We also prove that within the family of regular graphs, GNNs are guaranteed to extrapolate when learning with gradient descent. Finally, based on our empirical and theoretical findings, we demonstrate on real data how regular graphs can be leveraged to reduce graph overfitting and enhance performance.
Maya Bechler-Speicher, Ido Amos, Ran Gilad-Bachrach, Amir Globerson
2023-09-08T13:59:18Z
http://arxiv.org/abs/2309.04332v2
# Graph Neural Networks Use Graphs When They Shouldn't

###### Abstract

Predictions over graphs play a crucial role in various domains, including social networks, molecular biology, medicine, and more. Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data. Instances of graph labeling problems consist of the graph-structure (i.e., the adjacency matrix), along with node-specific feature vectors. In some cases, this graph-structure is non-informative for the predictive task. For instance, molecular properties such as molar mass depend solely on the constituent atoms (node features), and not on the molecular structure. While GNNs have the ability to ignore the graph-structure in such cases, it is not clear that they will. In this work, we show that GNNs actually tend to overfit the graph-structure in the sense that they use it even when a better solution can be obtained by ignoring it. We examine this phenomenon with respect to different graph distributions and find that regular graphs are more robust to this overfitting. We then provide a theoretical explanation for this phenomenon, by analyzing the implicit bias of gradient-descent-based learning of GNNs in this setting. Finally, based on our empirical and theoretical findings, we propose a graph-editing method to mitigate the tendency of GNNs to overfit graph-structures that should be ignored. We show that this method indeed improves the accuracy of GNNs across multiple benchmarks.

## 1 Introduction

Graph labeling problems arise in many domains, from social networks to molecular biology. In these settings, the goal is to label a graph or its nodes given information about the graph. The information for each graph instance is typically provided in the form of the graph-structure (i.e., its adjacency matrix) as well as the features of its nodes. Graph Neural Networks (GNNs) [16, 13, 14] have emerged as the leading approach for such tasks. The fundamental idea behind GNNs is to use neural networks that combine the node features with the graph-structure, in order to obtain useful graph representations. This combination is done in an iterative manner, which can capture complex properties of the graph and its node features. Although the graph-structures are provided as input to the GNN, in some cases the best solution can be obtained by ignoring them. This may be due to these graph-structures being non-informative for the predictive task at hand. For instance, some molecular properties such as the molar mass (i.e., weight) depend solely on the constituent atoms (node features), and not on the molecular structure. Also, in cases where the provided graph-structure contains valuable information for the task, the GNN might struggle to effectively exploit this information. In such cases, it is anticipated that better accuracy can be achieved by ignoring the graph-structure. Motivated by this observation, we ask a core question in GNN learning: will GNNs work well in cases where the graph-structure should be ignored, or will they overfit the graph, resulting in reduced accuracy? Answering this question has several far-reaching practical implications. To illustrate, if GNNs lack the ability to discern when to disregard the graph, then providing a graph can actually hurt the performance of GNNs, and thus one must carefully re-think which graphs to provide to a GNN.
On the other hand, if GNNs easily reject the structure when they fail to exploit it, then practitioners should add it even if their domain knowledge and expertise suggest that there is only a small chance that it is informative. We consider the common setting of over-parameterized GNNs, namely, when the number of parameters the GNN uses is larger than the size of the training data. This is a very common case in deep learning [15, 16, 17, 18], where the learned model can fit any training data. Previous studies showed that despite over-parameterization, models learned using gradient descent often generalize well. Hence, it was suggested that the learning algorithm uses an implicit bias (e.g., low parameter norm) to avoid spurious models that happen to fit the training data. Our focus is thus on the implicit bias of GNN learning and, specifically, whether GNNs are biased towards using or not using the graph-structure. If the implicit bias is towards "simple models" that do not use the graph-structure when possible, then one would expect GNNs to be oblivious to the graph-structure when it is not informative. Our first empirical finding is that this is actually not the case. Namely, GNNs tend to _not_ ignore the graph, and their performance is highly dependent on the provided graph structure. Specifically, there are graph structures that result in models with low accuracy. Next, we ask which properties of the learned graph distribution affect the GNN's ability to ignore the graph. We empirically show that graphs that are regular result in more resilient GNNs. We then provide a theoretical analysis that explains why GNNs are more robust when applied to non-informative regular graphs. Finally, based on our empirical and theoretical findings, we propose a method to mitigate the tendency of GNNs to overfit non-informative graph-structures. Our approach is to add synthetic edges to the input graphs, such that they resemble regular graphs. This turns out to improve accuracy on both synthetic and real problems.

**The main contributions of this work are:**

1. We show that overparameterized GNNs tend to overfit graph-structures when they should be ignored.
2. We evaluate the graph-structure overfitting phenomenon with respect to different graph distributions and find that the best performance is obtained for regular graphs.
3. We theoretically analyze the implicit bias of GNNs trained on regular graphs and show they converge to unique solutions that are more robust to graph-structure overfitting.
4. We establish an extrapolation result and assessment for GNNs trained on regular graphs, by incorporating insights from the implicit bias analysis.
5. Based on our empirical and theoretical findings, we propose a method to mitigate the tendency of GNNs to overfit the graph-structure, and show that it performs well in practice.

To the best of our knowledge, this is the first work to examine the implicit bias of learning GNNs. Indeed, understanding this bias is key to designing GNN learning methods that generalize well.

## 2 GNNs Overfit the Graph-Structure

In this section, we present an empirical evaluation showing that GNNs tend to overfit graph-structures that should be ignored, thus hurting their generalization accuracy.

### Preliminaries

A graph example is a tuple \(G=(A,X)\). \(A\) is an adjacency matrix representing the graph-structure.
Each node \(i\) is assigned a feature vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\), and all the feature vectors are stacked into a feature matrix \(X\in\mathbb{R}^{n\times d}\), where \(n\) is the number of nodes in \(G\). The set of neighbors of node \(i\) is denoted by \(N(i)\). We denote the number of samples in a dataset by \(m\). We focus on the common class of Message-Passing Neural Networks (Morris et al., 2021). In these networks, at each layer, each node updates its representation as follows: \[h_{i}^{(k)}=\sigma\left(W_{1}^{(k)}h_{i}^{(k-1)}+W_{2}^{(k)}\sum_{j\in N(i)}h_{j}^{(k-1)}+b^{(k)}\right) \tag{1}\] where \(W_{1}^{(k)},W_{2}^{(k)}\in\mathbb{R}^{d_{k}\times d_{k-1}}\). The initial representation of node \(i\) is its feature vector \(h_{i}^{(0)}=\mathbf{x}_{i}\). The final node representations \(\{h_{i}^{(L)}\}_{i=1}^{n}\) obtained in the last layer can then be used for downstream tasks such as node or graph labeling. We focus on graph labeling tasks, where a graph representation vector is obtained by summing all the node representations. This is then followed by a linear transformation matrix \(W_{3}\) that provides the final output for regression or classification (referred to as a _readout_). For the sake of presentation, we drop the superscript in the case of one-layer GNNs. We refer to \(W_{1}^{(k)}\) as the _root-weights_ of layer \(k\) and to \(W_{2}^{(k)}\) as the _topological-weights_ of layer \(k\). A natural way for GNNs to ignore the graph-structure is by zeroing the topological-weights, \(W_{2}^{(k)}=\vec{0}\).
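To fix ideas, here is a minimal NumPy sketch of the update rule in Equation 1 together with the sum-pooling readout; the function names and shape conventions are our own, and the actual experiments use a standard deep-learning implementation:

```python
import numpy as np

def gnn_layer(H, A, W1, W2, b):
    """One message-passing layer (Equation 1) with a ReLU nonlinearity.

    H: (n, d_in) node representations, A: (n, n) 0/1 adjacency matrix,
    W1, W2: (d_out, d_in) root and topological weights, b: (d_out,) bias.
    """
    neighbor_sums = A @ H                 # row i = sum of h_j over j in N(i)
    return np.maximum(H @ W1.T + neighbor_sums @ W2.T + b, 0.0)

def readout(H, W3):
    """Graph representation by sum-pooling, followed by a linear readout."""
    return W3 @ H.sum(axis=0)
```

In this form it is immediate that setting \(W_{2}^{(k)}=0\) makes the output independent of the adjacency matrix \(A\).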
### Evidence for Graph Overfitting

Our goal is to examine what happens when GNNs learn over graphs that should be ignored. To that end, we conducted experiments on three datasets.

**Node Sum.** This is a synthetic task where the label is independent of the graph-structure and relies only on the node features. Therefore, the graph-structures should be ignored. In the Node Sum task, the label is generated using a one-layer linear "teacher" model. This teacher simply sums the node features, and applies a linear readout to produce a scalar. The label is then the sign of the result. The teacher readout is sampled once from \(\mathcal{N}(0,1)\) and used for all the graphs. All graphs have \(n=20\) nodes, and each node is assigned a feature vector in \(\mathbb{R}^{128}\) sampled i.i.d from \(\mathcal{N}(0,1)\). The non-informative graph structures are drawn from the GNP graph distribution (Erdos and Renyi, 1959), where the edges are sampled i.i.d with probability \(p\) (we used \(p=0.5\)). As the teacher model only uses the node features to compute the label, the given graph-structures are non-informative for this task.

**PROTEINS and ENZYMES** (Morris et al., 2020). These are two tasks based on real-world molecular data, where the goal is to classify a molecule into one of two or six classes, respectively. We chose these datasets because Errica et al. (2022) reported, in a thorough comparison of GNNs, that the best accuracy on these datasets is achieved when the graph-structures are omitted, i.e., on empty graphs. In Errica et al. (2022), the model trained on empty graphs is not an instance of the other compared models. Therefore, one cannot immediately conclude that the other GNNs overfitted the graph-structures. For example, the superiority of the model trained on empty graphs may be due to its architecture rather than the graph information. Nevertheless, if a fixed architecture is used, better performance achieved when learning over empty graphs does indicate that it would have been better for the GNN to ignore the graph, but it did not. This is because, with a fixed architecture, the solution learned by the GNN trained on empty graphs is always realizable by the GNN trained on any non-empty graphs. In our experiments, we use a fixed architecture.

\begin{table} \begin{tabular}{l|c|c|c} \hline \hline & Node Sum & PROTEINS & ENZYMES \\ \hline \(GNN\) & 94.5 \(\pm\) 0.9 & 67.4 \(\pm\) 1.9 & 55.2 \(\pm\) 3.1 \\ \(GNN_{\emptyset}\) & 97.5 \(\pm\) 0.7 & 74.1 \(\pm\) 2.5 & 64.1 \(\pm\) 5.7 \\ \hline \hline \end{tabular} \end{table} Table 1: The accuracy of the same GNN architecture, trained on the given datasets (GNN) and on the same data where the graph-structure is omitted (\(GNN_{\emptyset}\)), i.e., on empty graphs. Because the solution of \(GNN_{\emptyset}\) is realizable by \(GNN\), and the only difference between the runs is the given graph-structures, this suggests that the decreased performance of \(GNN\) is due to graph-structure overfitting.

**GNN Model.** For the Node Sum task, following the teacher model, we used a 1-layer "student" GNN, using the update rule in Equation 1, with readout and ReLU activations. For the PROTEINS and ENZYMES tasks, we used 3 layers.

**Protocol and Results.** On each of the three datasets, we trained the same GNN twice: once on the given graph-structures in the data (\(GNN\)), and once when the graph-structure is replaced with an empty graph and only the node features are given for training (\(GNN_{\emptyset}\)). The hyperparameters are tuned on a separate validation set. We report test errors averaged over 10 runs with random seeds on a separate holdout test set. More information can be found in the appendix. The difference between these setups shows the effect of providing the graph-structure. Table 1 shows the results of the experiments. In the three tasks, \(GNN_{\emptyset}\) achieves higher accuracy than \(GNN\). This suggests that \(GNN\) made use of the graph in a way that led to lower test accuracy. Therefore, we conclude that \(GNN\) overfitted the graph-structure.

### How Graph Structure Affects Overfitting

The previous section showed that in the Node Sum task, where the graph-structures are non-informative and should be ignored, the GNN overfits them instead. Here we further study how this phenomenon is affected by the specific graph-structure used. Thus, we repeat the setup of the Node Sum task but with different graph distributions.

**Data.** We used the Node Sum task described in Section 2.2. We created 4 different datasets from this baseline, by sampling graph-structures from different graph distributions. The set of node feature vectors remains the same across all the datasets, thus the datasets differ only in their graph-structures. The graph distributions we used are: \(r\)-regular graphs (Regular), where all the nodes have the same degree \(r\); star-graphs (Star), where the only connections are between one specific node and all other nodes; the Erdos-Renyi graph distribution (GNP) [1], where the edges are sampled i.i.d with probability \(p\); and the preferential attachment model (BA) [1], where the graph is built by incrementally adding new nodes and connecting them to existing nodes with probability proportional to the degrees of the existing nodes.
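The four families can be sampled with standard networkx generators. The sketch below is our own wrapper; the parameter values shown (\(r=10\), \(p=0.6\), \(m=3\)) are the instances reported in the results below:

```python
import networkx as nx

n = 20  # nodes per graph, as in the Node Sum task

def sample_structure(kind, seed=0):
    """Sample one graph-structure from the four compared distributions."""
    if kind == "regular":                    # r-regular, r = 10
        return nx.random_regular_graph(10, n, seed=seed)
    if kind == "star":                       # one hub joined to all others
        return nx.star_graph(n - 1)
    if kind == "gnp":                        # Erdos-Renyi G(n, p), p = 0.6
        return nx.gnp_random_graph(n, 0.6, seed=seed)
    if kind == "ba":                         # preferential attachment, m = 3
        return nx.barabasi_albert_graph(n, 3, seed=seed)
    raise ValueError(f"unknown distribution: {kind}")
```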
**Protocol.** The GNN model was as in the Node Sum task in the previous section. On each dataset, we varied the training set size and evaluated test errors on 10 runs with random seeds. We note that the GNN has a total of \(\sim\)16,000 parameters, and thus it is overparameterized and can fit the training data with perfect accuracy. More information can be found in the appendix.

**Results.** For the sake of presentation, we present the results on one instance from each distribution: Regular with \(r=10\), GNP with \(p=0.6\), and BA with \(m=3\). Additional results with more parameters are given in the appendix, and show similar trends. Recall that the datasets differ only by the edges and share the same set of nodes and features. Therefore, had the GNN ignored the graph-structures, we would expect to see similar performance for all datasets. As shown in Figure 1(a), the performance largely differs between different graph distributions, which indicates the GNN overfits the graphs rather than ignores them. To further understand what the GNN learns in these cases, we evaluate the ratio between the norms of the topological and root weights. Results are shown in Figure 1(b). It can be seen that for all the graphs except the empty graphs, the ratio is larger than 1, indicating that there is more norm on the topological weights than on the root weights. Specifically, the graph-structure is not ignored. In the case of empty graphs, the topological weights are not trained, and the ratio is 0 due to initialization. We also present the norms of the root and topological weights separately in the appendix.

Figure 1: (a) The learning curves of the same GNN model trained on graphs that have the same node features and only differ in their graph-structure, which is sampled from different distributions. The label is computed from the node features without the use of any graph-structure. If GNNs were to ignore the non-informative graph-structure they were given, similar performance should have been observed for all graph distributions. Among the different distributions, regular graphs exhibit the best performance. (b) The norm ratio between the topological and the root weights along the same runs. Except for the empty graphs, the ratio is always greater than 1, which indicates that more norm is given to the topological weights. On the empty graphs, the topological weights are not trained and the ratio is 0 due to initialization.

Figure 1 suggests that some graph distributions are more robust to graph-structure overfitting. The GNN trained on regular graphs performs best across all training set sizes. The good performance on regular graphs would seem to suggest that it learns to use low topological weights. However, as Figure 1(b) shows, the opposite is actually true. This may seem counter-intuitive, but in the next section we show how this comes about.
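As an aside, the diagnostic in Figure 1(b) is simply a ratio of weight norms; for the one-layer model it amounts to the following (a sketch; the choice of Frobenius norms is our own assumption):

```python
import numpy as np

def topological_to_root_ratio(W1, W2):
    """Norm ratio ||W2|| / ||W1|| of the kind plotted in Figure 1(b)."""
    return np.linalg.norm(W2) / np.linalg.norm(W1)
```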
## 3 Analyzing Regular Distributions

In the previous section, we saw that although GNNs tend to overfit the graph-structure when it should be ignored, regular graphs create more resilient GNNs. In this section, we theoretically examine the solutions learned by linear GNNs trained on regular graphs. We begin by analyzing their implicit bias. We then prove they are guaranteed to extrapolate to any other regular graph distribution. For the sake of clarity, we state all theorems for a one-layer GNN with sum-pooling, no readout, and output dimension 1. For simplicity, we also assume no bias term in our analysis. All the proofs and extensions can be found in the appendix.

### Implicit bias of GNNs

To examine the solutions learned by GNNs trained on regular graphs, we utilize Theorem 4 from Gunasekar et al. (2018). This theorem states that homogeneous neural networks trained with GD on linearly separable data converge to the max-margin solution. Translated to our formulation of GNNs trained on \(r\)-regular graphs, we get that GD converges to the max-margin solution of the following quadratic problem: \[\min_{\mathbf{w}_{1},\mathbf{w}_{2}}\;\|\mathbf{w}_{1}\|_{2}^{2}+\|\mathbf{w}_{2}\|_{2}^{2}\quad\text{s.t.}\quad y^{(l)}\left[(\mathbf{w}_{1}+r\mathbf{w}_{2})\cdot\sum_{i=1}^{n}\mathbf{x}_{i}^{(l)}\right]\geq 1\quad\forall\,(G^{(l)},y^{(l)})\in S \tag{2}\] This can be viewed as a max-margin problem in \(\mathbb{R}^{d}\) where the input vector is \(\sum_{i=1}^{n}\mathbf{x}_{i}^{(l)}\). Specifically, although two weight vectors \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\) are learned, in the case of \(r\)-regular graphs the GNN only utilizes their weighted sum \(\tilde{\mathbf{w}}=\mathbf{w}_{1}+r\mathbf{w}_{2}\). In the appendix, we present the same view for GNNs trained on general graphs (i.e., not necessarily regular). The next theorem shows that when a GNN is trained using gradient descent on regular graphs, the learned root and topological weights are aligned.

**Theorem 3.1** (Weight Alignment).: _Let \(S\) be a set of linearly separable graph examples drawn i.i.d from an \(r\)-regular graph distribution with binary labels. A GNN trained with GD that fits \(S\) perfectly converges to a solution such that \(\mathbf{w}_{2}=r\mathbf{w}_{1}\). Specifically, the root weights \(\mathbf{w}_{1}\) and topological weights \(\mathbf{w}_{2}\) are aligned._

To prove Theorem 3.1, we analyze the KKT conditions for first-order stationary points of Equation 2.

### Extrapolation

We now use Theorem 3.1 to show that when learning on regular graphs from teachers that do not use the graph-structure, the GNN will extrapolate well to any other regular graph, including the empty graph. This result is in agreement with our empirical results from the previous section, where learning with regular graphs indeed succeeds for teachers that do not use the graph-structure.

**Theorem 3.2** (Extrapolation).: _Let \(S\) be a set of linearly separable graph examples drawn from an \(r\)-regular distribution, with binary labels. Assume that the ground truth function \(f^{*}\) is realizable by a GNN with \(\mathbf{w}_{2}^{*}=0\). Then a GNN that fits \(S\) perfectly will extrapolate to any \(r^{\prime}\)-regular distribution._

To prove Theorem 3.2, we substitute the topological weights in Equation 2 with the aligned weights guaranteed by Theorem 3.1. We get that the weight vector used by the GNN is \(\mathbf{w}_{1}+r^{2}\mathbf{w}_{1}\), which does not change its direction when the value of \(r\) is changed.
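Spelling this out (our own expansion of the argument): by Theorem 3.1, \(\mathbf{w}_{2}=r\mathbf{w}_{1}\), and on an \(r^{\prime}\)-regular graph every node has degree \(r^{\prime}\), so the learned GNN computes
\[(\mathbf{w}_{1}+r^{\prime}\mathbf{w}_{2})\cdot\sum_{i=1}^{n}\mathbf{x}_{i}=(\mathbf{w}_{1}+r^{\prime}r\,\mathbf{w}_{1})\cdot\sum_{i=1}^{n}\mathbf{x}_{i}=(1+rr^{\prime})\,\mathbf{w}_{1}\cdot\sum_{i=1}^{n}\mathbf{x}_{i}.\]
Since \(1+rr^{\prime}>0\), the sign of the output, and hence the predicted label, agrees with that of the graph-independent classifier \(\mathbf{w}_{1}\) for every degree \(r^{\prime}\), which is exactly the extrapolation claim.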
While Theorem 3.2 guarantees extrapolation within the family of regular distributions, we empirically observe that GNNs trained on regular graphs exhibit good extrapolation to other non-regular graph distributions as well, as shown in Table 2. The GNN trained on 5-regular graphs generalizes perfectly under GNP distribution shifts. It generalizes well to BA distribution shifts, and there is a decrease in performance when tested on star-graphs. These extrapolation performances are non-trivial, as it was previously shown in Yehudai et al. (2020) that when there is a certain discrepancy between the train and test distributions, GNNs may fail to extrapolate.

\begin{table} \begin{tabular}{l|c} \hline Test distribution & Accuracy \\ \hline Regular (r=10) & 100.0 \(\pm\) 0.0 \\ Regular (r=15) & 100.0 \(\pm\) 0.0 \\ \hline GNP (p=0.2) & 100.0 \(\pm\) 0.0 \\ GNP (p=0.5) & 100.0 \(\pm\) 0.0 \\ GNP (p=0.8) & 100.0 \(\pm\) 0.0 \\ \hline BA (m=3) & 98.0 \(\pm\) 1.7 \\ BA (m=15) & 94.2 \(\pm\) 0.9 \\ \hline Star Graph & 73.9 \(\pm\) 1.1 \\ \hline \end{tabular} \end{table} Table 2: Accuracy of a GNN trained on 5-regular graphs and tested on different distribution shifts. The GNN extrapolates perfectly to regular graph distributions, as guaranteed by Theorem 3.2. It also extrapolates well to other distributions, where the lowest performance is when tested on star graphs.

We next suggest an explanation for these results. The following lemma shows that applying a GNN trained on regular graphs to any given graph is equivalent to applying it to an \(r^{\prime}\)-regular graph plus applying it to another \(\Delta\)-graph that depends on \(r^{\prime}\), for some \(0\leq r^{\prime}\leq n-1\).

**Lemma 3.3**.: _Let \(f\) be a GNN that perfectly fits a training set of \(r\)-regular graphs. Then applying \(f\) to a graph \(G\) can be written as_ \[f(G)=\underbrace{W_{1}\sum_{i=1}^{n}\mathbf{x}_{i}+rW_{1}\sum_{i=1}^{n}r^{\prime}\mathbf{x}_{i}}_{Regular\ Component}+\underbrace{rW_{1}\sum_{i=1}^{n}\Delta_{r^{\prime}}(i)\,\mathbf{x}_{i}}_{\Delta\ Component}\] _such that \(\Delta_{r^{\prime}}(i)=\deg_{G}(i)-r^{\prime}\), for any \(0\leq r^{\prime}\leq n-1\)._

From Lemma 3.3 it follows that if there is \(0\leq r^{\prime}\leq n-1\) such that the \(\Delta\)-component is small with respect to the regular component, then good extrapolation follows from Theorem 3.2. In the appendix, we empirically show that, indeed, when the extrapolation is good, such an \(r^{\prime}\) exists. This suggests that applying the GNN to graphs that are "closer" to regular graphs exhibits better extrapolation. Due to space limitations, this is fully formulated and explained in the appendix.
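For intuition, the decomposition is a short computation; the display below is our own expansion. For a one-layer GNN with sum-pooling, each \(\mathbf{x}_{j}\) contributes once to the neighbor sum of every neighbor of \(j\), so \(\sum_{i=1}^{n}\sum_{j\in N(i)}\mathbf{x}_{j}=\sum_{i=1}^{n}\deg_{G}(i)\,\mathbf{x}_{i}\). Using the alignment of Theorem 3.1 (written here in the lemma's matrix notation as \(W_{2}=rW_{1}\)) and writing \(\deg_{G}(i)=r^{\prime}+\Delta_{r^{\prime}}(i)\),
\[f(G)=W_{1}\sum_{i=1}^{n}\mathbf{x}_{i}+rW_{1}\sum_{i=1}^{n}\deg_{G}(i)\,\mathbf{x}_{i}=W_{1}\sum_{i=1}^{n}\mathbf{x}_{i}+rW_{1}\sum_{i=1}^{n}r^{\prime}\mathbf{x}_{i}+rW_{1}\sum_{i=1}^{n}\Delta_{r^{\prime}}(i)\,\mathbf{x}_{i},\]
which is precisely the regular and \(\Delta\) components of Lemma 3.3.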
Inspired by these observations, in the next section we present a method to mitigate the tendency of GNNs to overfit the graph-structure when it should be ignored.

## 4 A Method for Reducing Graph Overfitting

In the previous sections, we observed that GNNs tend to overfit the graph-structure even if it should be ignored. One simple practical approach to mitigate this issue is to always try learning a model with an empty graph. This would be equivalent to using a DeepSet model [22]. However, there are cases where the given graph-structure still carries some degree of pertinent information, but the GNN fails to exploit it. In such cases, entirely discarding the graph-structure may result in improved performance, but this performance could potentially be improved further if the GNN managed to exploit the given graph information. Consequently, we suggest a graph-editing method that improves the ability of GNNs to exploit the given graph-structures in cases where they carry useful information, while reducing overfitting to irrelevant structural information. We show that our method consistently improves performance over the originally given graphs in both synthetic and real-world data.

**The R-COV Graph Editing Method.** The previous sections showed that regular graphs exhibit robustness to the tendency of GNNs to overfit graph-structures which should be ignored. Therefore, it may be beneficial to edit a given graph to be regular. We suggest a method that preserves the original graph-structures while improving the ability of the GNN to exploit them in case they are useful for the task, and to disregard them when they are not. Unfortunately, it is not clear how to turn a given graph into a regular graph without removing edges. On the other hand, it is possible to make the graphs regular by completing them into full graphs. However, this approach may be computationally expensive or even infeasible when learning on large graphs. Our method, Reduced COV (R-COV), makes the given graphs "more similar" to regular graphs by reducing their coefficient of variation (COV), i.e., the ratio between the standard deviation and mean of the node degrees. The COV of regular graphs is 0. Different techniques can be used to reduce the COV of a given graph. In our experiments, we reduce the COV below a certain threshold by adding edges sampled randomly between nodes of low degree. We iteratively add to each graph a sampled batch of non-existing edges that have at least one end-point in the 10 lowest-degree nodes, until the COV falls below the threshold. For efficiency, we used batches of size 3 for small graphs (up to 100 nodes) and batches of size 50 for large graphs (more than 100 nodes). Of course, we do not want to remove the information about the original graph. Thus, we add features to edges that specify whether they were part of the original graph or were added by R-COV.

**Implementation Details.** The GNN's update rule was revised to incorporate edge-features, as follows. Let \(e_{ij}\) denote the edge-features on edge \(ij\). Then the GNN update is given by: \[h_{i}^{(k)}=\sigma\left(W_{1}^{(k)}h_{i}^{(k-1)}+W_{2}^{(k)}\sum_{j\in N(i)}\left(h_{j}^{(k-1)}+\phi(e_{ij})\right)\right) \tag{3}\] Here \(\phi\) is an MLP with ReLU activations. We used edge feature values 1 and 0.5 for the original edges and the edges added by R-COV, respectively. For the R-COV method, we treated the threshold as a hyper-parameter, and we tested the values \(\{0.15,0.1,0.05\}\). We did not use lower values in order to avoid graphs that are too dense.
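As a concrete illustration of the editing loop, here is a hedged networkx/NumPy sketch; the helper names, tie-breaking among low-degree nodes, candidate sampling, and the safety cap are our own simplifying choices, and the authors' released code may differ:

```python
import networkx as nx
import numpy as np

def cov(G):
    """Coefficient of variation of the node degrees (0 for a regular graph)."""
    degs = np.array([d for _, d in G.degree()])
    return degs.std() / degs.mean()           # assumes the graph has edges

def r_cov(G, threshold=0.1, batch=3, rng=None, max_rounds=10_000):
    """Add random edges touching low-degree nodes until COV <= threshold.

    Original edges are marked with feature 1.0, added edges with 0.5.
    """
    rng = rng or np.random.default_rng(0)
    nx.set_edge_attributes(G, 1.0, "feat")    # mark the original edges
    for _ in range(max_rounds):
        if cov(G) <= threshold:
            break
        low = sorted(G.nodes, key=G.degree)[:10]
        candidates = [(u, v) for u in low for v in G.nodes
                      if u != v and not G.has_edge(u, v)]
        if not candidates:
            break                             # graph is already complete
        picked = rng.choice(len(candidates),
                            size=min(batch, len(candidates)), replace=False)
        for k in picked:
            u, v = candidates[k]
            G.add_edge(u, v, feat=0.5)        # synthetic R-COV edge
    return G
```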
### Experiments on Synthetic Data

We evaluated R-COV¹ on four synthetic tasks: one task where any graph-structure is non-informative (Node Sum), two tasks where the graph-structure is informative and the label relies fully on the graph-structure (Edges, Motifs), and a task where the label can be computed using either the graph-structure or the node features (Mixed Information), i.e., the graph-structure is informative but can also be ignored. In all these datasets, each node has a constant 1 feature and 16 random features drawn i.i.d from \(\mathcal{N}(0,1)\).

Footnote 1: Code can be found in github.com/mayabechlerspeicher/Graph_Neural_Networks_Overfit_Graphs

**Node Sum.** This is the same task described in Section 2.2. The label is only dependent on the node features. We used graphs over 20 nodes drawn randomly from a GNP(\(p=0.5\)) distribution.

**Edges.** In this task, the goal is to determine if the number of edges is above a certain threshold. We use the same graphs from the Node Sum task. The label of the graph is 1 if it has at least 190 edges (the average number of edges in the dataset) and 0 otherwise. In this task, the label relies fully on the graph-structure. Therefore, the graph-structures are informative in the sense that they hold valuable information for the predictive task.

**Motifs.** We used the synthetic motif dataset from Luo et al. (2020). In this dataset, each graph is a random Barabasi-Albert graph over 20 vertices. Half of the graphs are connected to a 5-node house-structured graph, and the rest are connected with a 5-node cycle graph. The task is a binary classification of the graph according to the type of structure connected to it. In this task, the graph-structure is informative and the label relies fully on the graph-structure.

**Mixed Information.** Here the task is again to determine if the number of edges is above a certain threshold, as in Edges. This data is designed to allow the GNN to compute the label either from the node features or the graph-structure. We used the same dataset from Edges, with additional node features that indicate for each node to which nodes in the graph it is connected (using a fixed node ordering). Therefore, the number of edges in the graph can be realized from the node features, and so can the graph label. We then created another dataset that shares the same extended node features, but the graph-structures are replaced with non-informative graphs that should be ignored. We then wish to see the performance of R-COV when applied to the dataset with non-informative graphs, with respect to the dataset with the informative graphs. More details about the node features and the non-informative graph generation can be found in the appendix.

**Protocol.** We evaluate the model in Equation 3 on empty graphs (Empty Graphs), on the original graphs in the dataset (Original Graphs), and on the original graphs with R-COV (Original Graphs + R-COV). For each task, we sample the training, validation, and test data from the same distribution. For the Node Sum and Edges tasks, we used a one-layer student GNN with readout, 64 hidden channels, and ReLU activations, following the teacher GNN. For the Motifs dataset, two GNN layers were used (because one layer did not fit the training data well). In the Mixed Information task, following our findings in Section 2.2, we expect the GNN to overfit the non-informative graphs. Therefore, we wish to see to what extent R-COV applied to the non-informative graphs is able to improve the performance, relative to the case where the informative structures, on which the label is computed, are given as the graphs. To allow a more refined analysis, we evaluated this task on varying training set sizes. All models were tuned on a validation set and tested 10 times with random seeds using the best configuration found on the validation set. More details about the training and the hyper-parameters can be found in the appendix.

**Results.** The results of Node Sum, Edges, and Motifs are presented in Table 3. Across all tasks, R-COV significantly improves over the original graphs. The results of the Node Sum task are particularly interesting. The label is only dependent on the node features, yet with R-COV the GNN manages to significantly improve performance over empty graphs. This suggests that the GNN exploits the graph structure although it is not informative. One way in which this can happen is that the graph is used for more efficiently pooling information across the nodes, or implementing other non-linearities that are invariant to node permutation. Notice that for the Edges and Motifs tasks, the task is not realizable when the graphs are empty, which explains the low performance in this case.

\begin{table} \begin{tabular}{l|c|c|c} \hline \hline & Node Sum & Edges & Motifs \\ \hline Empty Graphs & 97.5 \(\pm\) 0.7 & 50.7 \(\pm\) 1.1 & 50.1 \(\pm\) 0.9 \\ Original Graphs & 94.5 \(\pm\) 0.9 & 89.9 \(\pm\) 0.7 & 85.0 \(\pm\) 0.6 \\ \hline Original Graphs + R-COV & 98.9 \(\pm\) 0.9 & 100.0 \(\pm\) 0.0 & 95.0 \(\pm\) 0.9 \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy on the synthetic tasks.

Figure 2 shows the learning curves of each dataset in the Mixed Information task. The GNNs trained on the informative graph-structures and the empty graph-structures perform similarly.
As expected, due to GNNs overfitting the graph-structures, when non-informative graph-structures are given, the performance decrease, and does not recover even with \(10k\) samples. When R-COV is applied to the non-informative graph-structures, the performance significantly improves when at least 100 training examples are used. When \(3k\) examples are given, R-COV matches the performance to the case where the informative graphs are given. ### Experiments on Real-World Data Next, we further evaluate R-COV on the real-world datasets used in Errica et al. (2022). All these datasets are publicly available and are frequently used in the GNNs literature. **PROTEINS, ENZYMES, NCI & DD**(Shervashidze et al., 2011) are datasets of chemical compounds. In each dataset, the goal is to classify the compounds according to some property of interest. **IMDB-B, IMDB-M, COLLAB, REDDIT-B, REDDIT-5k**(Yanardag and Vishwanathan, 2015) are social network datasets. Following Errica et al. (2022), we added a feature of the node degrees for all the social network datasets. More information on the datasets can be found in the appendix. EvaluationWe used the protocol and implementation of Errica et al. (2022) who performed a thorough comparison of different GNNs, including GNNs trained on empty graphs. We evaluated the same model in Equation 3 twice: once on the original graphs in the datasets, and once with R-COV applied to the original graphs. The final reported result is the average of 30 runs (10-folds and 3 random seeds). We \begin{table} \begin{tabular}{c|c|c|c} \hline \hline & Nodes Sum & Edges & Motifs \\ \hline Empty Graphs & 97.5 \(\pm\) 0.7 & 50.7 \(\pm\) 1.1 & 50.1 \(\pm\) 0.9 \\ Original Graphs & 94.5 \(\pm\) 0.9 & 89.9 \(\pm\) 0.7 & 85.0 \(\pm\) 0.6 \\ \hline Original Graphs & \multirow{2}{*}{98.9 \(\pm\) 0.9} & \multirow{2}{*}{100.0 \(\pm\) 0.0} & \multirow{2}{*}{95.0 \(\pm\) 0.9} \\ \cline{1-1} \cline{5-5} also included the accuracy of the best model from Errica et al. (2022) among the 5 models they compared. Additionally, we included the accuracy reported on empty graphs from Errica et al. (2022). When the information is available, we also included the best accuracy reported in Alon and Yahav (2021), where the best models from Errica et al. (2022) were trained on the given graphs, with the last layer applied a full graph (FA), to allow long-distance information flow. Additional training details including the hyper-parameters grid are provided in the appendix. ResultsAcross all datasets, R-COV significantly improves over the original graphs. Particularly intriguing outcomes are obtained in the PROTEINS and IMDB-M datasets. Within these two datasets, superior performance is attained when learning over empty graphs in comparison to the provided graphs. Nonetheless, R-COV improves performance also with respect to the empty graphs. This observation suggests that the structural information inherent in the data is indeed informative, yet not fully optimal, as evidenced by the GNN's limited capacity to exploit it. ## 5 Discussion and Practical Implications In practice, the graph layout is typically determined based on a prior understanding of the task at hand, and it is common to asses multiple layouts. In some cases, a natural graph-structure inherently exists within the data, such as in social networks, where the network connections naturally define the graph layout. Nevertheless, it is usually not clear in advance if these graph layouts are informative for the task. 
Certain layouts could provide valuable information for the task while others might not, and this distinction is not always clear beforehand; this aspect drove our research. Indeed, we found that the definition of the graph-structure, typically determined by users, emerges as a pivotal factor in performance outcomes due to the tendency of GNNs to overfit the provided graphs. This revelation opens up a fascinating avenue for further research into the significance of topological information during the training of GNNs. Understanding how GNNs respond to different structural layouts and why certain graph-structures are more effective than others could potentially revolutionize the way we design and train these models. ## 6 Conclusion In this study, we showed that although GNNs possess the capability to disregard the provided graph-structures when needed, they don't. Instead, GNNs tend to overfit the graph-structures, which results in reduced performance. We found that among different graph distributions, regular graphs are more robust to this overfitting. We analyzed the implicit bias of gradient-descent learning of GNNs, as well as their extrapolation abilities, in this setting. Our study shows that in some cases, the graph structure hurts the performance of GNNs, and therefore graph selection is of great importance, as well as having a model that knows when to ignore the graph. Motivated by our empirical and theoretical findings, we suggested R-COV, a graph-editing method that reduces graph-structure overfitting. We demonstrated on synthetic and real datasets that R-COV consistently enhances performance. Taken together, our results demonstrate the dramatic effect of the input graph-structure on the performance of GNNs. In future work, it will be interesting to obtain a more detailed analysis of the inductive bias of GNNs, for example in cases where both the graph structure and the node features are informative. ## 7 Acknowledgements This work was supported by a grant from the Tel Aviv University Center for AI and Data Science (TAD) and by the Israeli Science Foundation research grant 1186/18. \begin{table} \begin{tabular}{l|c|c|c|c|c} \hline & IMDB-B & IMDB-M & COLLAB & REDDIT-B & REDDIT-5K \\ \hline Empty Graphs† & 60.7 ± 2.5 & 49.1 ± 3.5 & 70.2 ± 1.5 & 82.2 ± 3.0 & 52.2 ± 1.5 \\ Original Graphs† & 71.2 ± 3.9 & 48.5 ± 3.3 & **75.6 ± 2.3** & 89.9 ± 1.9 & **56.1 ± 1.7** \\ \hline Original Graphs§ & 68.2 ± 2.1 & 47.7 ± 0.9 & 73.5 ± 1.9 & 83.9 ± 1.5 & 50.0 ± 2.1 \\ Original Graphs§ + R-COV & **74.1 ± 3.3** & **50.1 ± 3.5** & 74.7 ± 1.9 & **90.2 ± 2.1** & 52.5 ± 2.0 \\ \hline \end{tabular} \end{table} Table 4: Accuracy on real-world tasks. On a fixed architecture using Equation 3, R-COV significantly improves performance across all datasets. § - Equation 3, † - Previously reported in Errica et al. (2022), ‡ - Previously reported in Alon and Yahav (2021).
2309.17428
CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets
Large language models (LLMs) are often augmented with tools to solve complex tasks. By generating code snippets and executing them through task-specific Application Programming Interfaces (APIs), they can offload certain functions to dedicated external modules, such as image encoding and performing calculations. However, most existing approaches to augment LLMs with tools are constrained by general-purpose APIs and lack the flexibility for tailoring them to specific tasks. In this work, we present CRAFT, a general tool creation and retrieval framework for LLMs. It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks. For each task, we collect specific code solutions by prompting GPT-4 to solve the training examples. Following a validation step ensuring the correctness, these solutions are abstracted into code snippets to enhance reusability, and deduplicated for higher quality. At inference time, the language model retrieves snippets from the toolsets and then executes them or generates the output conditioning on the retrieved snippets. Our method is designed to be flexible and offers a plug-and-play approach to adapt off-the-shelf LLMs to unseen domains and modalities, without any finetuning. Experiments on vision-language, tabular processing, and mathematical reasoning tasks show that our approach achieves substantial improvements compared to strong baselines. In addition, our in-depth analysis reveals that: (1) consistent performance improvement can be achieved by scaling up the number of tools and the capability of the backbone models; (2) each component of our approach contributes to the performance gains; (3) the created tools are well-structured and reliable with low complexity and atomicity. The code is available at https://github.com/lifan-yuan/CRAFT.
Lifan Yuan, Yangyi Chen, Xingyao Wang, Yi R. Fung, Hao Peng, Heng Ji
2023-09-29T17:40:26Z
http://arxiv.org/abs/2309.17428v2
# CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets ###### Abstract Large language models (LLMs) are often augmented with tools to solve complex tasks. By generating code snippets and executing them through task-specific Application Programming Interfaces (APIs), they can offload certain functions to dedicated external modules, such as image encoding and performing calculations. However, most existing approaches to augment LLMs with tools are constrained by general-purpose APIs and lack the flexibility for tailoring them to specific tasks. In this work, we present **CRAFT**, a general tool creation and retrieval framework for LLMs. It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks. For each task, we collect specific code solutions by prompting GPT-4 to solve the training examples. Following a validation step ensuring the correctness, these solutions are abstracted into code snippets to enhance reusability, and deduplicated for higher quality. At inference time, the language model retrieves snippets from the toolsets and then executes them or generates the output conditioning on the retrieved snippets. Our method is designed to be flexible and offers a plug-and-play approach to adapt off-the-shelf LLMs to unseen domains and modalities, without any finetuning. Experiments on vision-language, tabular processing, and mathematical reasoning tasks show that our approach achieves substantial improvements compared to strong baselines. In addition, our in-depth analysis reveals that: (1) consistent performance improvement can be achieved by scaling up the number of tools and the capability of the backbone models; (2) each component of our approach contributes to the performance gains; (3) the created tools are well-structured and reliable with low complexity and atomicity. 1 Footnote 1: The code is available at [https://github.com/lifan-yuan/CRAFT](https://github.com/lifan-yuan/CRAFT). ## 1 Introduction Large language models (LLMs) have emerged as transformative tools in AI, exhibiting capabilities in complex problem-solving, including reasoning, planning, and producing creative outputs (Brown et al., 2020; Touvron et al., 2023; Yuan et al., 2023). Recent evidence has shown that LLMs can dynamically interact with the environment through external tools, which grants them access to information beyond their pretrained parameters (Qin et al., 2023; Mialon et al., 2023; Schick et al., 2023). For example, these models can generate code snippets and call APIs provided by visual tools like image encoding models, to solve problems that involve images or videos (Wu et al., 2023; Shen et al., 2023). Success has been achieved by integrating LLMs with large-scale, general-purpose tool collections (Qin et al., 2023; Tang et al., 2023; Suris et al., 2023; Gao et al., 2023; Chen et al., 2022; Gao et al., 2023; Patil et al., 2023). However, adapting LLMs to many domains and evolving applications involves working with more specialized APIs tailored to address specific challenges, which are often inadequately represented in general-purpose toolsets. In response, this work proposes to integrate LLMs with highly customizable toolsets that are curated for specific problems of interest. Our approach, dubbed CRAFT, constructs a toolset customized for a given task (see Figure 1). 
In contrast to previous approaches that only incorporate one single type of tool (Cai et al., 2023) or create unverified and non-reusable tools (Qian et al., 2023), our toolset contains diverse, reusable, and correct APIs that can tackle various problems. This is achieved through an automated process, by instructing LLMs to generate specific code solutions to solve training problems of the task or related ones. The specific solutions are then abstracted into code snippets, which can later be instantiated to solve similar problems. Dedicated validation and deduplication steps ensure the correctness of the tools and reduce redundancy, thereby enhancing the quality of the toolset. At inference time, precisely identifying and retrieving relevant tools for the given problems is challenging, especially given the large constructed toolset. Existing solutions typically rely on pre-selected tools (Parisi et al., 2022), heuristic-based tool selection strategies (Shen et al., 2023), and simple similarity measures (Qin et al., 2023), which may be unsuitable or insufficient to pinpoint the related tools from a large toolset given the problems. CRAFT implements a retrieval component that takes into account the target problem, the names of the tools (a.k.a. APIs), and their docstrings through a multi-view matching function. The retrieved snippets are then added to the prompt of LLMs so that the retrieved tools can be invoked in the generated code solutions. The empirical effectiveness of CRAFT is validated through experiments on visual question answering, tabular processing, and mathematical reasoning tasks. CRAFT achieves an average relative improvement of 43.16% in F1 score over the best baselines in vision-language tasks, where the LLMs are required to interact with various visual tools to encode the images. Through our carefully designed analysis, we find that (1) the performance continually increases as the number of tools and the capability of the backbone models increase; (2) each component design incorporated in CRAFT contributes to the performance gains; (3) the created tools exhibit atomicity and possess low complexity, underscoring their robust structures and reliability. Figure 1: Previous approaches directly solve the given problem by generating code solutions, which may contain errors. CRAFT first creates a toolset that contains diverse, reusable, and correct tools that are executable code snippets. During inference, CRAFT employs a multi-view matching approach, incorporating information about the target problem, API names, and docstrings, to identify and utilize relevant tools, enhancing its problem-solving capabilities. The contribution of this work is two-fold. First, we introduce CRAFT, a broadly applicable framework to customize LLMs to various tasks and domains via tool creation and retrieval. Second, we release the created toolsets that include diverse, reusable, and correct tools, which are useful for various downstream tasks. We estimate that the toolset construction costs around $2,500 in total.
## 2 CRAFT We introduce CRAFT to address the challenges faced by prior research in the following two aspects: (1) **Tool Creation:** The establishment of an extensive toolset of diverse, reusable, and correct tools, in contrast to the reliance on limited examples (Cai et al., 2023; Qian et al., 2023); (2) **Tool Retrieval:** The effective retrieval of relevant tools from a large toolset, tailored to the specific question, thereby departing from the conventional approach of simplistic similarity matching (Qin et al., 2023; Patil et al., 2023). By instantiating the retrieved code and adding it to the prompt, LLMs can then use the tools by calling the functions to perform complex operations rather than implement every detail from scratch. ### Tool Creation Based on a source dataset, namely a general instruction dataset or a training dataset that contains problem-answer pairs, CRAFT constructs the toolset through four steps: **Generation**, **Abstraction**, **Validation**, and **Deduplication**, which are illustrated in Figure 2 and described as follows. **Generation.** To create a toolset containing diverse tools that can be adopted to address various problems, we apply an iterative approach to sample problem-answer pairs from the source dataset. At a high level, the generation step involves iteratively sampling problems from the source dataset, generating code solutions, and filtering out incorrect ones. We use \(Q\) to denote the set of sampled problems and \(R_{i}\) to denote the set of remaining problems after the \(i\)-th iteration. \(Q\) is initialized with \(n\) random samples from the entire source dataset and \(R_{i}\) is initialized as the rest. At each iteration, we use the highest similarity between each \(q_{r}\in R_{i}\) and any \(q_{s}\in Q\) as the similarity between \(q_{r}\) and the set \(Q\). To enhance the diversity of the toolset, \(Q\) is updated by adding the \(k\) problems that are least similar to \(Q\), where \(k\) represents the desired number of samples for each iteration. This min-max sampling strategy is: \(Q\gets Q\cup\arg\!\mathrm{TopK}_{\min}\left(\max_{q_{s}\in Q}\mathrm{sim}(q_{r},q_{s})\mid q_{r}\in R_{i}\right)\). The function \(\arg\!\mathrm{TopK}_{\min}\) returns the \(k\) elements with the smallest values from a set, where \(k\) is set to 100 in our implementation, and \(\mathrm{sim}\left(\cdot\right)\) denotes the cosine similarity of the representation vectors computed by SimCSE, a state-of-the-art sentence representation learning method based on contrastive learning (Gao et al., 2021). Figure 2: The toolset construction pipeline creates diverse, reusable, and correct tools that are executable code snippets, which can generalize LLMs to specialized domains and tasks. For each problem \(q\in Q\), we instruct GPT-4 (OpenAI, 2023) to generate a specific solution in Python that can be executed by an interpreter to get the answer. The prompts are shown in Appendix B. We keep those code solutions that are bug-free and can produce correct outputs, and discard everything else to ensure the correctness of the created tools.
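To illustrate the min-max sampling step, here is a minimal sketch in Python; the embedding array stands in for SimCSE sentence vectors, and the initialization and batch sizes are illustrative assumptions.

```python
import numpy as np

def minmax_sample(embeddings, n_init=100, k=100, n_iters=10, seed=0):
    """Iteratively grow Q with the k problems least similar to Q.

    `embeddings` are unit-normalized sentence vectors (e.g., from SimCSE),
    so a dot product equals cosine similarity. Sizes are illustrative.
    """
    rng = np.random.default_rng(seed)
    n = embeddings.shape[0]
    Q = list(rng.choice(n, size=n_init, replace=False))
    R = [i for i in range(n) if i not in set(Q)]
    for _ in range(n_iters):
        # max over q_s in Q of sim(q_r, q_s), for every remaining q_r
        sim_to_Q = embeddings[R] @ embeddings[Q].T   # shape (|R|, |Q|)
        score = sim_to_Q.max(axis=1)
        picked = np.argsort(score)[:k]               # argTopK-min
        Q += [R[i] for i in picked]
        R = [r for j, r in enumerate(R) if j not in set(picked)]
    return Q

# random vectors stand in for SimCSE embeddings of the source problems
vecs = np.random.randn(5000, 64)
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
selected = minmax_sample(vecs)
```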
**Abstraction.** The generated code solutions are tailored for the given problems, which keeps them from being useful for other problems. The abstraction step aims to promote the reusability of the toolset, ensuring that each tool can be adopted to tackle a broader range of similar problems. This abstraction step is achieved by instructing GPT-4 to replace all specific variable names with general ones (e.g., cat\(\rightarrow\)animal, desk\(\rightarrow\)object) and wrap textual inputs of internal function calls as arguments of the tool (e.g., date = df["date"]\(\rightarrow\)date = df[column_name], where the value of column_name is passed in by tool users) within the code piece, substituting them with more generic counterparts to adapt to similar problems (see Figure 2). In addition, we instruct GPT-4 to assign a suitable and general function name and compose a corresponding docstring to elucidate the functionality of the created tools. The prompt is described in Appendix B. **Validation.** The validation step ensures the correctness of the created tools. This is achieved by examining whether the abstract tool functions can solve the original problems. Specifically, we offer GPT-4 access to the abstract tool function, with the expectation that it will address the original problems by supplying appropriate arguments to the tool function. The tools that fail to derive the correct answers given the original problems are discarded. **Deduplication.** To reduce the redundancy in the toolset and improve its diversity, we perform a deduplication step to streamline the toolset and mitigate potential confusion stemming from redundant tools (e.g., same function names). We organize the created tools into groups based on function names and the corresponding number of input arguments. Each group contains tools that have the same function name and the same number of input arguments. For groups that contain more than one tool, we prompt GPT-4 to decide on the most comprehensive tool with extensive applications within the group, using the prompt shown in Appendix B. ### Tool Retrieval Retrieving relevant tools from the large constructed toolset is challenging. For better retrieval outcomes, we prompt the LLM to "describe what it needs". During inference, the evaluated LLM is asked to generate the function names \(f_{t}\) and the docstrings \(d_{t}\) based on the target problem \(q_{t}\). Then CRAFT adopts a similarity measuring strategy that takes into account three key aspects of each created tool \(t_{i}\): (1) the original problem \(q_{i}\) used for creating the tool; (2) the tool's function name \(f_{i}\); (3) the docstring of the function \(d_{i}\). For each tool \(t_{i}\), this results in a tuple \((q_{i},f_{i},d_{i})\). We conduct multi-view matching, searching tools via \(q_{t}\), \(f_{t}\), and \(d_{t}\) respectively in the toolset \(T\). Specifically, we have: \[T_{q_{t}}=\mathrm{argTopK}_{\mathrm{max}}\left(\mathrm{sim}(q_{i},q_{t})\mid t_{i}\in T\right) \tag{1}\] where \(\mathrm{argTopK}_{\mathrm{max}}\) is a function that returns the indices of the top \(k\) elements with the maximum values from a set, \(\mathrm{sim}\left(\cdot\right)\) measures the similarity between two sentences using SimCSE embeddings, and \(T_{q_{t}}\) is a list of \(k\) tools retrieved by matching problems. We then perform similar retrievals by matching function names and docstrings, obtaining \(T_{f_{t}}\) and \(T_{d_{t}}\) respectively. Next, the three lists of tools are aggregated and ranked by their frequency of occurrence. We then retrieve the three most frequent tools by majority vote. Finally, we filter out those that occur only once, if any.
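The following sketch illustrates the multi-view matching and majority vote described above; the similarity function stands in for SimCSE, and the tool record fields, function name, and value of \(k\) are our own illustrative choices.

```python
from collections import Counter

def retrieve(query_views, tools, sim, k=10):
    """Multi-view tool retrieval (sketch).

    query_views: dict with keys "problem", "name", "docstring" generated
    by the LLM for the target question; each tool is a dict with the same
    keys. `sim` is a sentence-similarity function (SimCSE in the paper).
    """
    votes = Counter()
    for view in ("problem", "name", "docstring"):
        scored = sorted(tools, key=lambda t: sim(t[view], query_views[view]),
                        reverse=True)
        for t in scored[:k]:                       # argTopK-max per view
            votes[t["name"]] += 1                  # names unique after dedup
    # keep the three most frequent tools, dropping those that occur once
    top3 = [name for name, c in votes.most_common(3) if c > 1]
    return top3  # empty list -> fall back to plain code generation
```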
In extreme cases, it is also possible that all retrieved tools appear only once, i.e., the retrieved tool set is empty; then LLMs directly perform code generation to solve the question without invoking task-specific tools. After retrieval, the code snippets of the tools are added to the prompt of LLMs for code generation to solve a given question. LLMs can invoke the tools (a.k.a. APIs) embedded in the code. Subsequently, the retrieved tool functions and LLM-generated code solutions are instantiated into executable code, and then they are executed to obtain the final predictions. **Summary and Discussion.** CRAFT creates a specialized toolset offline, and retrieves useful tools from the toolset at inference time. In toolset creation, we apply an iterative problem-sampling strategy based on similarity for diversity, followed by generating code solutions using GPT-4. To ensure the reusability of the created tools, we abstract the specific solutions into high-level tools that can tackle various kinds of problems by instructing GPT-4. To ensure the tools' correctness, we evaluate the tools on the original problems and discard those outputting incorrect answers. Finally, we deduplicate the tools to reduce redundancy, obtaining the final toolset. At inference time, we apply a multi-view matching algorithm that matches the target problem, function name, and docstring against those in the toolset to retrieve related tools. We highlight several advantages of CRAFT. At a high level, by leveraging the tool creation paradigm, we can effectively utilize the domain-specific data to customize the LLMs without extensive fine-tuning, rendering CRAFT a **training-free** and **plug-and-play** approach. Due to CRAFT's flexibility in accommodating various domains and tasks, it is **broadly applicable** across a spectrum of problem categories. In the concrete implementation, each tool is instantiated as an executable code snippet and is targeted at small atomic problems, such as identifying the color of an object. This ensures the **explainability** of the created tools. We can easily incorporate human efforts to examine the problematic tools and fix the errors. In addition, this allows for the decomposition of complex problems into multiple manageable steps, facilitating the **compositionality** of these created tools during inference. ## 3 Experiment ### Experimental Setting **Evaluation Tasks, Datasets, and Metrics.** To demonstrate the versatility of CRAFT, we select three distinct tasks for evaluation, spanning visual question answering (VQA), tabular processing, and mathematical reasoning: * **VQA**: The goal is to answer questions based on the information available in an associated image. We use three complex visual reasoning datasets, including GQA (Hudson and Manning, 2019), OK-VQA (Marino et al., 2019), and A-OKVQA (Schwenk et al., 2022). The GQA problems are more complex and require compositional reasoning to answer, while OK-VQA and A-OKVQA mainly use external real-world knowledge of objects or actions in the image. For evaluation, we formalize the VQA task as an open-ended generation problem and use the soft accuracy (SAcc) metric (Antol et al., 2015). In addition, we observe that LLM-generated functions often produce descriptive responses instead of concise phrases, which hurts the exact match between predictions and ground-truth answers. This can potentially cause an underestimation of the performance, so we also use the F1 score for evaluation, which is frequently employed in extractive question-answering tasks (Rajpurkar et al., 2016); a minimal sketch of this token-level F1 is given after the baselines list below.
* **Tabular Processing:** It evaluates an LLM's ability to process structured data in tables. We use TabMWP (Lu et al., 2023), a dataset with each sample containing one table and one corresponding problem in natural language. To handle the task, LLMs should understand the natural language descriptions of the problems, extract relevant information from the accompanying tables, and finally perform calculations based on the extracted information. We use the accuracy based on the exact match to measure model performance. * **Mathematical Reasoning:** LLMs are expected to solve mathematical problems written in natural language, leveraging both their understanding of textual inputs and complex reasoning capabilities. We use the algebra subset of MATH (Hendrycks et al., 2021), containing \(881\) challenging competition-level algebra problems. Evaluating CRAFT on all subsets goes beyond our budget constraint, but we believe CRAFT is equally applicable to other math problems. The models' performance is evaluated using accuracy. **Baselines.** We compare CRAFT with baseline methods of four categories: * **Basic Reasoning without Tools:** This line of methods solves downstream problems solely based on the intrinsic reasoning ability of LLMs without access to any external tool. We use chain-of-thought prompting (**CoT**) (Wei et al., 2022), which prompts LLMs to generate the rationales before answers _without_ using tools. However, it does not apply to the VQA task since LLMs cannot process visual information without external visual tools. * **Tool Learning:** We compare with approaches that directly leverage existing tools to assist the problem-solving process. In this case, LLMs only learn to use the human-provided tools without creating and retrieving tools. We compare to two approaches: (1) **Vanilla** stands for utilizing the most basic tools, such as the Python interpreter for all three tasks, and extra vision models to solve VQA problems. Specifically, the vanilla tool-using method for VQA is ViperGPT (Suris et al., 2023), and that for the other two tasks is Program-of-Thoughts reasoning (Chen et al., 2022b). (2) **External library:** We also explore the possibility of exploiting external tool functions from Python libraries to enhance the vanilla methods. For VQA, we use Numpy (Harris et al., 2020), SciPy (Virtanen et al., 2020), Scikit-Image (Van der Walt et al., 2014), and Mahotas (Coelho, 2012). For the remaining two tasks, we substitute Scikit-Image and Mahotas with Pandas (McKinney et al., 2011) and SymPy (Meurer et al., 2017). * **Different LLM-Created Tools:** We compare with previous tool creation approaches, including **LATM** (Cai et al., 2023) and **CREATOR** (Qian et al., 2023). Specifically, LATM samples 3 examples from the training set to create a tool for the task, which is further verified by 3 samples from the validation set. The created tool is then applied to all test cases. CREATOR creates one specific tool for each test case at inference time. For fair comparisons, we remove the format checking and rectifying process used in the original work and only measure the one-pass accuracy. * **Alternative Retrieval Methods:** We compare with previous tool retrieval approaches, which focus on the similarity measure between the problem and the API names. We include two prevalent measures, namely SimCSE and BM25 similarity, following Qin et al. (2023b) and Patil et al. (2023) respectively.
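As referenced in the VQA bullet above, here is a minimal sketch of the token-level F1 commonly used in extractive QA; the whitespace tokenization and lowercasing are simplifying assumptions (SQuAD-style scorers additionally strip articles and punctuation).

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted answer and a reference answer."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    common = Counter(pred) & Counter(ref)          # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(token_f1("a red double decker bus", "red bus"))  # ~0.57
```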
The baseline retrieval methods are also based on our created toolset for fair comparison. In this work, we implement CRAFT and all baselines based on the GPT-3.5-Turbo (ChatGPT) backbone because: (1) It is more cost-effective compared to alternatives like GPT-4, with affordable cost and strong performance; (2) The Turbo-0613 version is specially optimized for the tool-learning purpose. Conversely, alternative backbone models (e.g., CodeLlama (Roziere et al., 2023)) demonstrate near-random performance in our setting, which can be attributed to their suboptimal tool-using capabilities. The concrete implementation details are described in Appendix A. ### Experimental Results We present the results in Table 1. In particular, we find that directly leveraging tools from external Python libraries fails to improve the performance, and in certain cases, may have a detrimental impact (e.g., in mathematical reasoning). This suggests that the relevance of tools affects the performance of augmented LLMs, motivating us to construct a high-quality tool base that customizes LLMs to each task. We observe that LATM struggles with all datasets and brings negative effects; CREATOR yields a notable enhancement in mathematical reasoning task performance, while its impact on other datasets appears marginal. This result suggests the necessity of sufficient and diverse tools to tackle problems of various categories in downstream datasets. For the tool retrieval baselines, the performances vary across datasets. But in general, LLMs do not get substantial enhancement except on TabMWP, posing the need for better retrieval algorithms. Overall, CRAFT demonstrates superior performance on all datasets, especially on the challenging VQA tasks. Significantly, CRAFT demonstrates a notable enhancement over the vanilla baseline, namely ViperGPT, with absolute SAcc improvements of 10.4, 18.0, and 15.2 observed on the GQA, OK-VQA, and A-OKVQA datasets, respectively. In addition, based on the same created toolset, the retrieval approach incorporated in CRAFT demonstrates overall better performance compared to alternative ones, which exhibit a certain level of performance variance. \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline \multirow{2}{*}{GPT-3.5-Turbo} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{**GQA**} & \multicolumn{2}{c}{**OK-VQA**} & \multicolumn{2}{c}{**A-OKVQA**} & **TabMWP** & **MATH\({}_{\textbf{alg}}\)** \\ \cline{3-10} & & SAcc & F1 & SAcc & F1 & SAcc & F1 & Acc & Acc \\ \hline Basic Reasoning & CoT & - & - & - & - & - & - & 75.2 & 50.9 \\ \hline \multirow{2}{*}{Tool Learning} & Vanilla & 35.0 & 36.9 & 15.4 & 24.7 & 15.6 & 23.0 & 80.6 & 58.2 \\ & External & 34.2 & 37.8 & 16.8 & 25.3 & 14.45 & 22.9 & 83.1 & 41.1 \\ \hline \multirow{2}{*}{Different Tools} & LATM & 29.4 & 30.3 & 7.8 & 11.8 & 6.5 & 11.4 & 9.3 & 30.3 \\ & CREATOR & 34.3 & 38.4 & 16.7 & 27.3 & 17.3 & 25.8 & 81.0 & 65.0 \\ \hline \multirow{2}{*}{Alternative Retrieval} & SimCSE & 36.4 & 38.8 & 18.4 & 28.9 & 16.8 & 24.3 & 83.8 & 36.7 \\ & BM25 & 37.9 & 39.0 & 13.4 & 24.3 & 17.8 & 26.1 & **89.2** & 35.9 \\ \hline This Work & CRAFT & **45.4** & **48.8** & **33.4** & **43.0** & **30.8** & **40.6** & 88.4 & **68.1** \\ \hline \hline \end{tabular} \end{table} Table 1: The experimental results of CRAFT and four categories of baselines on three tasks. SAcc denotes soft accuracy, which is widely used for VQA.
F1 is supplemented to tackle the issue of underestimated performance caused by the descriptive responses of LLMs. Acc denotes the accuracy. One exception is the comparison with BM25 on TabMWP. This discrepancy can be attributed to the presence of relatively straightforward patterns within this dataset, which do not sufficiently showcase the advantages of our approach in tool retrieval. ## 4 Further Analysis In this section, we conduct an in-depth analysis of CRAFT on the VQA datasets. This task is particularly pertinent for assessing the impact of external tool augmentation, given that LLMs lack the capability to directly process images. Thus, it serves as a key testbed for measuring the influence of external tools. ### Does Abstraction Facilitate Tool Use? **Setup.** Abstraction is a crucial step in constructing the toolset, converting solutions for specific problems into general-purpose tools that are applicable to diverse problems with a common pattern. In this section, we explore its efficacy with an ablation study. To scrutinize this, we establish a control group, where the toolset is created without the abstraction step. To ensure compatibility, we prompt GPT-4 to assign a distinctive function name and docstring to each solution to facilitate the multi-view retrieval approach for fair comparison. **Results.** Table 2 shows a clear performance drop when the abstraction step is ablated, confirming its importance. Moreover, comparing abstraction-ablated CRAFT with ViperGPT, improvements are achieved across all three datasets, especially on OK-VQA and A-OKVQA. We identify two potential reasons that can elucidate the improvement. First, the created toolset is large and diverse enough, facilitating the adoption of specific tools without abstraction for addressing new problems. Second, as retrieved tools offer a correct approach to problem-solving, LLMs can efficiently adapt these strategies to address new problems. ### Is Every Matching in the Retrieval Triplet Equally Important? **Setup.** CRAFT retrieves tools based on multi-view matching. We demonstrate its effectiveness in Section 3.2. Next, we respectively ablate problems, function names, and docstrings from the matching process to investigate their influence on performance. **Results.** As demonstrated in Table 2, it is clear that the removal of any of the three similarity measures from our multi-view matching function adversely impacts performance, thereby validating the rationale behind our design strategy. Among them, the function names appear to be the most important, resulting in a drop of more than 6.6 absolute SAcc points when ablated.
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{GPT-3.5-Turbo} & \multicolumn{2}{c}{**GQA**} & \multicolumn{2}{c}{**OK-VQA**} & \multicolumn{2}{c}{**A-OKVQA**} \\ \cline{2-7} & SAcc & F1 & SAcc & F1 & SAcc & F1 \\ \hline ViperGPT & 35.0 & 36.9 & 15.4 & 24.7 & 15.6 & 23.0 \\ CRAFT & **45.4** & **48.8** & **33.4** & **43.0** & **30.8** & **40.6** \\ \cline{2-7} w/o Abstraction & 37.1 & 39.7 & 31.0 & 41.4 & 28.0 & 39.3 \\ w/o Problem & 42.4 & 45.8 & 32.7 & 42.3 & 29.8 & 38.7 \\ w/o Name & 36.4 & 38.3 & 26.8 & 35.7 & 21.7 & 30.6 \\ w/o Docstring & 37.3 & 39.1 & 29.8 & 38.8 & 25.0 & 34.0 \\ \hline \multirow{2}{*}{GPT-4} & \multicolumn{2}{c}{**GQA**} & \multicolumn{2}{c}{**OK-VQA**} & \multicolumn{2}{c}{**A-OKVQA**} \\ \cline{2-7} & SAcc & F1 & SAcc & F1 & SAcc & F1 \\ \hline ViperGPT & 51.4 & 53.7 & 36.7 & 47.2 & 32.8 & 42.4 \\ CRAFT & **55.6** & **58.8** & **39.0** & **49.1** & **35.3** & **44.8** \\ \hline \hline \end{tabular} \end{table} Table 2: Results of further analysis, encompassing the ablation study on abstraction and retrieval components, as well as the comparison between ViperGPT and CRAFT with different backbones. ### Does CRAFT still Work for More Powerful Backbone Models? **Setup.** In previous experiments, CRAFT is implemented using GPT-3.5-Turbo as the backbone. In this analysis, we evaluate CRAFT when using the more powerful GPT-4 as the backbone. Due to budget limits, we only compare CRAFT with the vanilla baseline ViperGPT without tool creation. **Results.** The results in Table 2 demonstrate that CRAFT achieves consistently better performance with GPT-4, confirming that CRAFT is helpful even with more capable backbone models. However, it is noteworthy that while the improvement of CRAFT on GPT-4 is pronounced, it is smaller than the impact on GPT-3.5-Turbo. We hypothesize that this result is in line with the conclusions of recent work, which finds that LLMs can benefit from the guidance of more capable models while gaining little improvement from their own guidance (Fu et al., 2023; Wang et al., 2023). The tools, created by GPT-4, may provide comparatively fewer insights for GPT-4 itself, thereby limiting the potential benefits of external tool augmentation. ### Can CRAFT Improve Performance as the Toolset Gets Larger? **Setup.** A feature of CRAFT distinctive from prior approaches is the extensibility of the toolsets. We examine the utility of extension by manipulating the toolset's size and tracking performance trends. To elaborate, the iterative problem-sampling strategy detailed in Section 2.1 is run for a total of 11 epochs. In this analysis, the sizes of the toolset are modified by including only the tools created up to distinct epochs. We choose tools from the initial epoch, an intermediate epoch, and the final epoch, resulting in toolset sizes of 0 (no created tools, for comparison), 261, 337, and 525, respectively. **Results.** The results in Figure 3 show a consistent increase in soft accuracy as the toolset expands across the 3 datasets, demonstrating the scalability and potential of CRAFT. The upward trend of soft accuracy continues, suggesting the potential for further improvement of CRAFT as the toolset keeps expanding. Significantly, the most substantial improvement is observed when transitioning from the absence of any created tools to the utilization of 261 tools. This validates the effectiveness of creating the specialized toolset to customize LLMs to various tasks and domains. ### What is Inside the Toolset?
We analyze the complexity and diversity of the code in the toolsets. For complexity, we use the widely adopted cyclomatic complexity (McCabe, 1994) to measure the number of linearly independent paths, with a higher value indicating that the code is more complicated and requires refactoring to make it more reliable. Good software should have a complexity of no more than 10, and a less complex toolset is desirable since it is less prone to trigger bugs. For diversity, we classify each tool into different groups. We use the number of distinct groups as the metric, with a larger number of tool groups indicating a wider range of problems that our toolset can address. We calculate the complexity using the Lizard Python library2, and present the average complexity of the tools for each task in Table 3. We observe that the created toolsets for the 3 tasks exhibit relatively low complexity, indicating that the tools are well-structured and reliable. We then adopt the Louvain community detection method (Blondel et al., 2008), a graph-based community dividing algorithm, to group different tools. Figure 3: The performance of CRAFT improves as the toolset scales up. \begin{table} \begin{tabular}{l|c c c} \hline \hline Task & VQA & \begin{tabular}{c} Tabular \\ Processing \\ \end{tabular} & \begin{tabular}{c} Mathematical \\ Reasoning \\ \end{tabular} \\ \hline Avg. Cyclomatic Complexity & 2.64 & 2.07 & 1.34 \\ \# Tools & 525 & 181 & 282 \\ \# Classes of Tools & 195 & 23 & 234 \\ \hline \hline \end{tabular} \end{table} Table 3: Analysis of cyclomatic complexity and diversity of the toolsets. As shown in Table 3, for VQA, tabular processing, and mathematical reasoning, there are 195, 23, and 234 distinct classes out of 525, 181, and 282 tools, respectively. This suggests that the MATH dataset has the most diverse patterns, followed by VQA, while problems in the TabMWP dataset are more homogeneous and can be well-solved using fewer created tools.
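As a concrete illustration, cyclomatic complexity can be computed per tool with Lizard roughly as follows; the tool source string here is an invented example in the style of the toolset, not one of the released tools.

```python
import lizard

# A toy tool in the style of the created toolset (invented for illustration).
tool_source = '''
def count_rows_above(df, column_name, threshold):
    """Count rows whose value in `column_name` exceeds `threshold`."""
    count = 0
    for value in df[column_name]:
        if value > threshold:
            count += 1
    return count
'''

# Lizard can analyze a source string directly without writing a file.
analysis = lizard.analyze_file.analyze_source_code("tool.py", tool_source)
for fn in analysis.function_list:
    print(fn.name, fn.cyclomatic_complexity)  # e.g., count_rows_above 3
```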
## 5 Related Work ### Tool Learning with LLMs LLMs, when integrated with real-world Application Programming Interfaces (APIs), gain the capability to actively interact with a range of external systems (a.k.a. tools) (Parisi et al., 2022; Schick et al., 2023; Tang et al., 2023; Patil et al., 2023; Song et al., 2023; Hao et al., 2023; Shen et al., 2023). The pioneering work connects GPT-3 (Brown et al., 2020) with a web browser to access the latest information, and hires human annotators to provide demonstrations of web searching (Nakano et al., 2021). Further research expands upon this concept by encompassing a broader range of tools, such as calculators, calendars, interpreters, physical simulators, and maps (Shuster et al., 2022; Paranjape et al., 2023; Liu et al., 2023; Chen et al., 2022; Gao et al., 2023; Drori et al., 2022; Pan et al., 2023; Liu et al., 2023b), and explores the application of weakly-supervised methods, such as bootstrapping (Parisi et al., 2022; Schick et al., 2023). More recently, progress has been achieved through distilling the tool-using ability of closed-source LLMs such as ChatGPT (ChatGPT Plugins) into open-source LLMs. The key idea revolves around allowing ChatGPT to produce synthetic data exemplifying the usage of specified APIs. Subsequently, this synthetic data is leveraged for the refinement of open-source LLMs (Qin et al., 2023; Tang et al., 2023). In this work, we extend our approach beyond mere dependence on existing tools. We adapt LLMs to diverse downstream tasks through the creation of customized tools and the retrieval of relevant tools during inference. ### Tool Creation & Retrieval While the exploration of tool creation and retrieval is relatively limited compared to tool learning with LLMs, we identify some preliminary efforts in this domain. For tool creation, Cai et al. (2023) propose an approach wherein tools are created through the utilization of three training samples, and their efficacy is subsequently assessed using three validation samples. Consequently, the resulting tool base is constrained in quantity. This approach hinges on the assumption that there exists a notable similarity between the distributions of the training and testing data; consequently, the produced tools can be readily incorporated. Similarly, Qian et al. (2023) adopt a strategy that involves generating tools exclusively based on the provided query. As a result, the created tools lack reusability, thereby undermining the fundamental purpose of tool creation. For tool retrieval, existing research primarily includes pre-selection of human-curated tools tailored to specific problems (Parisi et al., 2022; Tang et al., 2023; Schick et al., 2023; Zhuang et al., 2023), employing heuristic-based methods for tool selection (Shen et al., 2023; Liang et al., 2023), and adopting a straightforward similarity metric between user queries and API names (Qin et al., 2023; Patil et al., 2023; Xu et al., 2023). In this work, we aim to create a large tool base that can be effectively utilized on related downstream tasks and address the challenge of retrieving the relevant tools from the large tool base. ## 6 Conclusion In conclusion, this paper presents CRAFT, a general framework for tool creation and retrieval to generalize LLMs to diverse domains and tasks. The framework's effectiveness is demonstrated through improved performance on challenging tasks, alongside insights into component contributions, the constructed toolsets, and scalability. ## Limitations and Future Work We identify two limitations in this work that are worth future exploration. First, although the basic idea in CRAFT is widely applicable in principle, it is currently based on code generation for tool creation. This indicates that CRAFT is only suitable for tasks that can be solved via writing code solutions. We plan to expand this scope by exploring the use of pseudocode to generalize CRAFT to more tasks. Second, the effectiveness of CRAFT is greatly affected by the tool-using ability of the backbone models. In our pilot exploration, some open-source models achieve near-random performance in the challenging tool-manipulation setting. Future work includes eliciting the tool manipulation ability in open-source models, such as the pilot exploration in Qin et al. (2023b).
2309.09616
A low repetition rate optical frequency comb
Reducing the pulse repetition rate of an optical frequency comb increases the pulse energy for a given average power. This enhances the efficiency of nonlinear frequency conversion and it facilitates extending the accessible wavelength range, for example into the extreme ultraviolet (XUV). The resulting spectrally dense frequency comb can still be used for precision spectroscopy of narrow atomic or molecular transitions. In this article, we demonstrate a low-noise infrared frequency comb with a repetition rate as low as 40 kHz using a Yb:KYW mode-locked laser, pulse picking, and subsequent amplification. The frequency comb structure is confirmed by generating a beat note with a continuous wave reference laser. A comb mode is actively stabilized to the reference laser, and the integrated rms phase noise from 20 Hz to 20 kHz is measured to be 195 mrad.
Francesco Canella, Johannes Weitenberg, Muhammad Thariq, Fabian Schmid, Paras Dwivedi, Gianluca Galzerano, Theodor W. Haensch, Thomas Udem, Akira Ozawa
2023-09-18T09:39:42Z
http://arxiv.org/abs/2309.09616v1
# A low repetition rate optical frequency comb ###### Abstract Reducing the pulse repetition rate of an optical frequency comb increases the pulse energy for a given average power. This enhances the efficiency of nonlinear frequency conversion and it facilitates extending the accessible wavelength range, for example into the extreme ultraviolet (XUV). The resulting spectrally dense frequency comb can still be used for precision spectroscopy of narrow atomic or molecular transitions. In this article, we demonstrate a low-noise infrared frequency comb with a repetition rate as low as \(40\,\mathrm{kHz}\) using a Yb:KYW mode-locked laser, pulse picking, and subsequent amplification. The frequency comb structure is confirmed by generating a beat note with a continuous wave reference laser. A comb mode is actively stabilized to the reference laser, and the integrated rms phase noise from \(20\,\mathrm{Hz}\) to \(20\,\mathrm{kHz}\) is measured to be \(195\,\mathrm{mrad}\). ## 1 Introduction Optical frequency combs have revolutionized the field of optical frequency metrology [1, 2, 3] and are indispensable tools for high-precision laser spectroscopy to study fundamental physics [4, 5, 6] and for optical frequency standards [7, 8, 9, 10, 11]. The first application of frequency combs was to measure the frequency of a continuous wave laser that is then used for precision spectroscopy [3, 12, 13]. It is also possible to use the frequency comb itself to excite target transitions and perform spectroscopy with ultra-wide spectral coverage, fast detection times, and high sensitivities [14, 15, 16, 17, 18, 5]. An optical frequency comb consists of many spectral modes that are equally spaced by the repetition rate of the generating mode-locked pulse train, which is typically in the MHz to GHz range for conventional solid-state and fiber laser-based oscillator designs. While combs with mode spacings of several tens of GHz have found applications for the calibration of astronomical spectrographs [19, 20], a pulse train with a relatively low repetition rate below \(10\,\mathrm{MHz}\) can have correspondingly higher pulse energy and is therefore advantageous for several applications, including for example driving efficient nonlinear processes [21, 22, 23]. Nonlinear frequency conversion of optical frequency combs can enable precision spectroscopy in wavelength ranges where continuous-wave lasers are not available, such as the extreme ultraviolet (XUV) [24, 25, 26]. For example, our planned experiment of precision spectroscopy of the 1S-2S transition in He\({}^{+}\) ions requires an optical frequency comb at 60.8 nm [27, 28]. The Ramsey-type frequency comb, which consists of pairs of intense pulses, can be an alternative method to address transitions at XUV wavelengths [29]. Precision spectroscopy of the nuclear transition of \({}^{229}\mathrm{Th}\) at around \(149\,\mathrm{nm}\) may find application as a nuclear optical clock [30, 31, 32, 33]. Intra-cavity high-order harmonic generation allows the generation of high-power XUV frequency combs suitable for direct frequency comb spectroscopy [34, 35, 36, 37, 38]. In this scheme, the high-order harmonic generation process is performed inside an enhancement cavity using a gaseous or solid medium placed at the focus of the cavity.
Special care must be taken to avoid detrimental effects of the high average power and intensity, such as thermal lensing [39], plasma phase shift [40, 41, 42], misalignment due to elevated temperature [39], and damage to the optics [43]. In addition, enhancement cavities for ultrashort pulses have to be carefully designed to tailor the intra-cavity dispersion [44, 45]. Alternatively, by operating a comb at a lower repetition rate, a similarly high pulse energy could be achieved with modest average power even without an enhancement cavity. Lowering the repetition rate results in a frequency comb with a smaller mode spacing. For precision spectroscopy, the mode spacing should be at least several times larger than the linewidth of the transition under investigation in order to obtain a comb-mode resolved spectroscopy signal. For instance, a signal linewidth of about 1 kHz is expected for the 1S-2S transition in He\({}^{+}\) at 60.8 nm [27, 28], while the nuclear transition of \({}^{229}\)Th at about 149 nm has a natural linewidth of about 20 µHz [30, 31]. In principle, a comb with a few kHz mode spacing would be sufficient for these applications. High pulse energy ultrafast lasers at such low repetition rates are conventionally generated using master oscillator power amplifiers (MOPAs) employing the chirped pulse amplification (CPA) scheme; such systems have found various applications including attosecond physics [46, 47], laser particle acceleration [48, 49], and ultrafast transient spectroscopy [50]. By actively controlling the round-trip phase shift of the oscillator, each pulse can have an identical carrier-envelope phase (CEP), which is particularly important for the study of field-sensitive phenomena driven by few-cycle pulses [51]. Using the balanced optical cross-correlator method [52], the pulse-to-pulse timing jitter of a CPA laser system can be controlled precisely. For example, in Ref. [53] a sub-100 fs timing jitter is demonstrated at 1 kHz repetition rate. However, a pulse timing jitter of 10 fs still introduces a relative frequency uncertainty of \(10^{-10}\) at a repetition rate of 1 kHz. This corresponds to a broadening of the optical comb modes that is larger than the mode spacing and could wash out the comb structure in the frequency domain. Although such CEP-stabilized low-repetition-rate laser systems have been shown to be suitable for studying ultrafast phenomena, they do not guarantee a low-noise frequency comb structure. In this work, we report a low-noise optical frequency comb that operates at tunable repetition rates from 40 kHz to 40 MHz using a Yb:KYW mode-locked oscillator. Mode-locked lasers oscillating directly at sub-MHz repetition rates would require very long cavities, which may not be practical. Instead, conventional mode-locked lasers and pulse pickers are used to generate optical frequency combs at low repetition rates [54, 55]. The associated power loss is compensated by re-amplifying the pulse train such that a higher pulse energy is achieved. A comb mode is actively stabilized to an ultra-stable continuous wave (cw) reference laser. The phase noise of the stabilized mode is characterized with respect to the reference laser and is shown to result in a narrow linewidth suitable for exciting narrow transitions. ## 2 Modelling of the pulse-picking process In this section, we first model the pulse-picking process by treating the pulse picker as an ideal amplitude modulator.
For mathematical convenience, we model the output of the mode-locked laser as a train of Gaussian-shaped pulses with a repetition rate \(f_{\mathrm{rep}}\). The temporal pulse spacing is \(T=f_{\mathrm{rep}}^{-1}\). The electric field \(E(t)\) of the laser pulse train can be described as \[E(t)=A\sum_{k=-\infty}^{\infty}\exp\left[-\frac{\left(t-k\,T\right)^{2}}{\Delta t^{2}}\right]e^{i\left(\omega_{0}t+\varphi(t)\right)}, \tag{1}\] where \(A\) is the field envelope peak amplitude, \(\omega_{0}\) is the angular frequency of the carrier, and \(\Delta t\) is the pulse duration defined by the \(1/e\) half-width of the field amplitude. The residual phase fluctuation after comb stabilization is represented by \(\varphi(t)\). For simplicity, the CEO frequency of the comb is set to zero in Eq. (1) by assuming that the carrier frequency \(\omega_{0}\) is an integer multiple of the pulse repetition rate \(2\pi/T\). Including a finite CEO frequency in the calculation is straightforward and does not change the results. We introduce pulse picking by assuming that the laser pulses pass through an ideal amplitude modulator with rectangular-shaped gating which selects every \(m\)-th pulse. We call \(m\) the pulse picking factor. The pulse picker reduces the average power and the repetition rate by a factor \(m\) and hence the power in each of the modes by a factor \(m^{2}\). Under the approximation that the electric field amplitude at the rising and falling edges of the rectangular-shaped gating is negligible, the pulse-picked field \(E_{\mathrm{p}}(t)\) can be written as \[E_{\mathrm{p}}(t)=A\sum_{k=-\infty}^{\infty}\exp\left[-\frac{\left(t-m\,k\,T\right)^{2}}{\Delta t^{2}}\right]e^{i\left(\omega_{0}t+\varphi(t)\right)}\equiv E_{\mathrm{p,0}}(t)e^{i\varphi(t)}, \tag{2}\] where \(E_{\mathrm{p,0}}(t)\) is defined as the noiseless component of the pulse-picked field. The spectrum of the pulse-picked field \(E_{\mathrm{p}}(t)\) is given by \[\tilde{E}_{\mathrm{p}}(\omega)=\int_{-\infty}^{\infty}E_{\mathrm{p}}(t)e^{-i\omega t}\mathrm{d}t\approx\tilde{E}_{\mathrm{p,0}}(\omega)+\frac{i}{2\pi}\,\left(\tilde{E}_{\mathrm{p,0}}\ast\tilde{\varphi}\right)(\omega), \tag{3}\] where the approximation is valid for small rms phase noise with a vanishing mean, i.e. \(e^{i\varphi(t)}\approx 1+i\,\varphi\left(t\right)\). In Eq. (3), the convolution is defined as \(\left(\tilde{E}_{\mathrm{p,0}}\ast\tilde{\varphi}\right)(\omega)=\int_{-\infty}^{\infty}\tilde{E}_{\mathrm{p,0}}(\omega^{\prime})\tilde{\varphi}(\omega-\omega^{\prime})\mathrm{d}\omega^{\prime}\), where \(\tilde{E}_{\mathrm{p,0}}(\omega)\) and \(\tilde{\varphi}(\omega)\) are the Fourier transforms of \(E_{\mathrm{p,0}}(t)\) and \(\varphi(t)\), respectively. The main Fourier components of \(\tilde{E}_{\mathrm{p,0}}(\omega)\) and \(\tilde{\varphi}(\omega)\) are in the optical and the radio frequency domain, respectively. The spectrum of the noiseless component is given by: \[\tilde{E}_{\mathrm{p,0}}(\omega)=A\int_{-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\exp\left[-\frac{\left(t-m\,k\,T\right)^{2}}{\Delta t^{2}}\right]e^{i\left(\omega_{0}-\omega\right)t}\mathrm{d}t=\frac{2\pi^{3/2}\Delta t}{mT}\,A\exp\left[-\frac{1}{4}\Delta t^{2}\left(\omega-\omega_{0}\right)^{2}\right]\sum_{n=-\infty}^{\infty}\delta\left(\omega-\frac{2\pi n}{mT}\right). \tag{4}\]
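A quick numerical check of Eq. (4) can be done in normalized units; the array sizes, pulse width, and picking factor below are arbitrary choices for this sketch, and the optical carrier is omitted, so the modes appear at baseband multiples of \(f_{\mathrm{rep}}/m\).

```python
import numpy as np

# Normalized units: T = 1 (original pulse spacing), field envelope only.
T = 1.0
m = 4                      # pulse picking factor
dt = 0.02                  # 1/e half-width of the Gaussian field envelope
n_pulses = 64              # number of pulses before picking
samples_per_T = 512
t = np.arange(n_pulses * samples_per_T) / samples_per_T  # time axis

def pulse_train(keep_every):
    E = np.zeros_like(t)
    for k in range(0, n_pulses, keep_every):
        E += np.exp(-((t - k * T) ** 2) / dt ** 2)
    return E

E_full = pulse_train(1)
E_picked = pulse_train(m)

spec_full = np.abs(np.fft.rfft(E_full)) ** 2
spec_picked = np.abs(np.fft.rfft(E_picked)) ** 2
f = np.fft.rfftfreq(t.size, d=t[1] - t[0])   # in units of f_rep = 1/T

# Modes of the picked train appear at multiples of f_rep/m, and a mode
# shared with the original comb is weaker in power by a factor ~m**2.
idx_frep = np.argmin(np.abs(f - 1.0))        # mode at f = f_rep
print("mode spacing (picked):", 1.0 / m, "x f_rep")
print("power ratio at f_rep:", spec_full[idx_frep] / spec_picked[idx_frep])
```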
As expected, the spectrum after pulse picking consists of comb modes with a mode spacing of \(1/mT=f_{\mathrm{rep}}/m\), as expressed by the sum over the delta functions \(\delta(\omega-2\pi n/mT)\). The newly created comb modes can be considered as sidebands at subharmonics of the original repetition rate introduced by the amplitude modulation that selects every \(m\)-th pulse (see Supplement for a detailed derivation). These additional sidebands fill in the gaps between the modes of the original comb. The Gaussian spectral envelope is maintained (in the limit of \(\Delta t\ll\tau\)), also for the newly added modes, since the picked pulses are still Gaussian in the time domain. The spectral intensity can be calculated as \(\tilde{I}_{\mathrm{p,0}}(\omega)\propto|\tilde{E}_{\mathrm{p,0}}(\omega)|^{2}\), demonstrating that the comb-mode power scales with \(m^{-2}\) as explained above. Note that \(|\tilde{E}_{\mathrm{p,0}}(\omega)|^{2}\) includes the squares of the delta functions, i.e., an infinite power density at the frequencies of the modes, consistent with an infinite number of pulses. Using Eq. (4) in the last term of Eq. (3), we find that the phase noise is convolved into all comb modes, including the new modes created by pulse picking. The phase noise spectrum \(\tilde{\varphi}(\omega)\) for radio frequencies \(|\omega|>2\pi/mT\) is folded into \(|\omega|<2\pi/mT\). In the time domain, the effect can be considered as aliasing, where the phase noise at \(|\omega|>2\pi/mT\) is undersampled by the pulse train. If the phase noise spectrum \(\tilde{\varphi}(\omega)\) is flat and extends up to the original repetition frequency, reducing the repetition rate by a factor of \(m\) will result in an \(m\)-fold increase in the power spectral density (PSD) of the phase noise in the frequency range \(|\omega|<2\pi/mT\). On the other hand, the integrated phase noise does not change after pulse picking because the \(m\)-fold increase in the PSD is canceled by an inverse reduction of the maximum frequency, i.e. the Nyquist frequency reduces by a factor \(m\). This is consistent with Eqs. (1) and (2), which contain identical noise terms \(e^{i\varphi(t)}\), so the rms phase noise is expected to be identical before and after pulse picking. The model described here shows that the phase noise already present in the original pulse train affects the frequency comb structure equally before and after pulse picking, regardless of the repetition rate. A trivial requirement is that the linewidth of the comb modes prior to pulse picking should be narrower than the mode spacing after pulse picking to avoid washing out the comb structure. In addition, the actual implementation of the pulse picker and the subsequent amplification should be low-noise to preserve the comb structure at lower repetition rates. Our work described in Sections 3 and 4 aims to experimentally confirm that this is possible. Pulse picking can be implemented using an acousto-optic modulator (AOM) or a combination of an electro-optic modulator (EOM) and a polarizer. In Fig. 1, we show a conceptual scheme of AOM-based pulse picking. The AOM is driven at a carrier frequency \(f_{\text{AOM}}\) (typically tens or hundreds of MHz) which is amplitude modulated with gating pulses.
Here we assume a rectangular-shaped gate function with repetition rate \(f_{\text{gate}}\) (time spacing \(T_{\text{gate}}=f_{\text{gate}}^{-1}\)) and a width of \(\tau\).1 Footnote 1: In the Supplement, we discuss the effect of gating timing jitter. The RF drive signal for the AOM can be described as \[V_{\text{RF}}(t)=\sum_{n=-\infty}^{\infty}A_{\text{RF}}\left(t-nT_{\text{gate}}\right)e^{i\omega_{\text{AOM}}t}, \tag{5}\] where \(A_{\text{RF}}\) is the rectangular-shaped gate function, and \(\omega_{\text{AOM}}=2\pi f_{\text{AOM}}\). The modulator diffracts every \(m\)-th pulse to the 1st diffraction order, and the rest is sent to a beam dump. The Fourier transform of Eq. (5) shows that the spectrum of the RF signal consists of narrow lines spaced by \(f_{\text{gate}}\), similar to an optical frequency comb. The frequency of the RF modes is given by \[f_{\text{RF},n^{\prime}}=n^{\prime}\,f_{\text{gate}}+f_{\text{AOM}}, \tag{6}\] with an integer \(n^{\prime}\). The frequency of the optical comb modes \(f_{n,n^{\prime}}\) after pulse picking is given by the sum of the original comb mode frequencies and the AOM RF frequencies: \[f_{n,n^{\prime}}=n\,f_{\text{rep}}+f_{\text{CEO}}+f_{\text{RF},n^{\prime}}=n\,f_{\text{rep}}+n^{\prime}\,f_{\text{gate}}+f_{\text{CEO}}+f_{\text{AOM}}, \tag{7}\] where \(f_{\text{CEO}}\) is the CEO frequency of the original comb. To obtain an equidistant comb structure after pulse picking, the ratio \(f_{\text{rep}}/f_{\text{gate}}\) must be an integer. In the time domain, this condition translates to the requirement that \(T_{\text{gate}}=m\,T\), with \(m\) introduced in Eq. (2). The CEO frequency of the pulse-picked comb remains unchanged after pulse picking if \(f_{\text{AOM}}/f_{\text{gate}}=q\) is an integer. With an integer \(q\), we can define a new mode index \(\hat{n}\equiv n^{\prime}+q+n\,m\) to find a compact expression for the mode frequencies of the pulse-picked comb \[f_{\hat{n}}=\hat{n}\,\frac{f_{\text{rep}}}{m}+f_{\text{CEO}}. \tag{8}\] In the special case of zero CEO frequency (\(f_{\text{CEO}}=0\)), a pulse train with a constant CEP can be obtained, which finds interesting applications in attosecond physics [56, 57, 58]. ## 3 Experiment Figure 2 shows our setup for generating and testing a low repetition rate optical frequency comb. A home-built Yb:KYW oscillator is mode-locked by soft-aperture Kerr-lensing and generates a \(40\,\mathrm{MHz}\) pulse train. Figure 1: Low-repetition rate frequency comb generation using an AOM-based pulse picker. Every \(m\)-th pulse is diffracted by the AOM to the 1st diffraction order, while the others remain in the 0th order and are dumped. After the pulse picking, the pulse-to-pulse time interval increases by the factor \(m\), while the mode spacing in the frequency domain becomes \(m\) times smaller. The AOM is driven by an RF carrier which is amplitude modulated with a rectangular-shaped gate-pulse train. The output spectrum is centered at 1030 nm and has a FWHM bandwidth of 14 nm. The average output power is 26 mW. The FWHM temporal pulse duration is measured to be 89 fs using the intensity autocorrelation method (autocorrelation length 137 fs), assuming a sech\({}^{2}\) pulse shape. Two PZT-actuated mirrors are installed in the laser cavity to control the cavity length and are used to stabilize the frequency comb (PZT stands for lead zirconate titanate). One has a bandwidth of about 10 kHz, while the second is used to compensate for slow drifts.
The laser cavity is placed on a vibration-isolated, temperature-stabilized aluminum baseplate and installed inside an air-tight aluminum housing [59]. At the laser output, the repetition rate \(f_{\mathrm{rep}}\) is detected with a fast photodiode (Thorlabs DET01CFC, not shown in Fig. 2). The laser has an auxiliary output that is taken from the reflection of an intracavity optical element and has a compromised spectral phase compared with the main output. This second output has about 60 mW of power and is sent into a heterodyne beat detection setup with a continuous wave (cw) reference laser. The cw reference laser operates at 1033 nm and is stabilized to an ultra-stable reference cavity. The rms phase noise of the reference laser was measured to be 10.2 mrad integrated from 10 kHz to 10 MHz with respect to the reference cavity [60]. In the heterodyne beat detection setup, about 100 comb modes are filtered out around the frequency of the cw reference laser using an interference filter (Alluxa A4017) and an etalon (LightMachinery OP-6204-M, 7.3 GHz FWHM bandwidth). The beat signal between the frequency comb and the reference laser is detected using balanced photodetectors (Koheron PD100B), which suppress the contribution of classical amplitude noise [61]. The RF beat signal is filtered to isolate the beat note between the closest comb modes and the reference laser. Then the signal is phase-compared to a 10 MHz signal generated by a signal generator (Marconi Instruments 2022C). The resulting error signal controls the PZT actuators of the Yb:KYW laser via a home-built loop filter. This way, one of the comb modes is phase-stabilized to the cw reference laser. The frequency comb is amplified by a solid-state double-pass Yb:LuAG amplifier pumped by a multimode diode laser operating at around 935 nm. With an input seed power of 26 mW, we obtain 250 mW at the output when the pump power is set to 7.2 W. Since the gain bandwidth of Yb:LuAG is about 5 nm [62], a significant gain-narrowing effect reduces the bandwidth of the amplifier output to 2.7 nm. The peak gain at 1030 nm is approximately 16.6 dB. The amplifier output is sent to an AOM (AA Opto Electronic MT110) that is used to stabilize one of the comb modes in combination with the PZTs in the laser cavity. The first-order diffraction of the AOM downshifts the entire frequency comb of the laser by about 110 MHz. The diffraction efficiency of the AOM is about 70 %. The second beat signal between the frequency comb and the reference cw laser is obtained after the AOM. The beat signal is compared with the 10 MHz frequency reference and is sent to a loop filter (Vescent Photonics D2-125). The loop filter's output is sent to a voltage-controlled oscillator (VCO, Pasternack Enterprises Inc. PE1V31008) which generates the RF signal that drives the AOM. The control bandwidth is estimated to be \(>\)100 kHz, limited by the time required for the acoustic wave inside the AOM to reach the laser beam. The following pulse picker AOM (AA Opto Electronic MT200-A0.4-1064) is driven at a carrier frequency of \(f_{\mathrm{AOM}}=200\) MHz. It selects every \(m\)-th pulse by amplitude modulating the carrier with a rectangular-shaped envelope. A rectangular gate signal with a pulse width of \(\tau=32\) ns at a frequency of \(f_{\mathrm{gate}}=f_{\mathrm{rep}}/m\) is used.

Figure 2: Schematic of the experimental setup.
The 40 MHz optical frequency comb (dotted box) consists of a mode-locked Yb:KYW oscillator, a solid-state amplifier using Yb:LuAG as gain medium, and an AOM frequency shifter for fast control of the stabilized comb mode frequency. Slower control acts on PZT-actuated mirrors of the laser cavity (PZTs). The output of the Yb:KYW oscillator is centered at 1030 nm with a bandwidth of 14 nm. The error signal for phase stabilization of one of the comb modes is obtained from beat notes between that mode and an ultra-stable cw reference laser emitting at 1033 nm. The cw reference laser is amplified by a semiconductor optical amplifier (BOA1050P Thorlabs, not shown) before sending it to the second and third beat note detection units. An AOM-based pulse picker reduces the comb’s repetition rate to \(\left(40\,\mathrm{MHz}\right)/m\), where \(m\) is the pulse picking factor. After the pulse picker, the pulses are re-amplified by a second Yb:LuAG amplifier. The third feedback loop controls a PZT-actuated mirror in the beamline and reduces the phase noise due to fluctuations in the beam path length.

The gate pulse width is less than 2\(T\), as required to select individual pulses. An RF switch (Minicircuit ZASWA-2-50DR+) is used for the modulation. The diffraction efficiency of the AOM is measured to be \(>\)60 %, and the pulses remaining in the 0th order are sent to a beam dump. The gate signal is generated by a delay generator (Alphanov Tombak) using the repetition rate signal from the Yb:KYW oscillator as the timing source. The RF carrier signal that drives the AOM is derived from the 5th harmonic of the repetition rate, which is generated in the detection of the pulse train with a high-bandwidth photodiode. A band-pass filter with a 3 dB-bandwidth of 10 MHz is used to isolate the 5th harmonic. In this way, the gate signal, the repetition rate, and the AOM RF carrier are phase synchronized. The RF switch produces a modulation with 10-90% rise/fall times of 5 ns. Measuring the energy of the picked pulses for different gating delays with respect to the pulses reveals the time response of the AOM. The 10-90 % rise and fall time of the AOM was measured to be 7.5 ns. This is dominated by the time it takes for the acoustic wave to cross the focused laser beam at the point of interaction. From the speed of sound within the modulator material (TeO\({}_{2}\)) of 4200 m/s and the laser beam diameter of \(2w_{0}=39\) μm, the rise/fall time is estimated to be 6 ns. The picked pulses are sent to a second Yb:LuAG amplifier which is similar in design to the first amplifier. When the pump power is set to 14 W, the amplifier output power is 2.5 W, 374 mW, 43 mW, and 8 mW at repetition rates of 40 MHz, 4 MHz, 400 kHz, and 40 kHz, respectively. The output pulse train of the second amplifier is measured by a fast InGaAs photodiode (Thorlabs DET01CFC, 1.2 GHz bandwidth) and a 2.5 GHz oscilloscope (LeCroy WavePro 7Zi). The results are shown in Figure 3 at repetition rates of 40 MHz, 4 MHz, 400 kHz, and 40 kHz. We find that the pulse-picked beam still contains a tiny fraction of the pulse train at the original repetition rate of 40 MHz. This is not due to incomplete suppression of the RF carrier, but is caused by scattering within the AOM material. We could suppress the optical power of this component by 28 dB compared to the picked pulses. This was achieved by carefully adjusting the size and position of an iris surrounding the pulse-picked laser beam.
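The rise-time estimate above can be reproduced with a common rule of thumb for AOMs with a Gaussian beam, where the 10-90 % rise time is roughly 0.64 times the full acoustic transit time across the beam (the prefactor is quoted between about 0.64 and 0.66 in the literature, so this is an approximation rather than the exact calculation used in the text):

```python
v_TeO2 = 4200.0          # speed of sound in TeO2 [m/s]
d_beam = 39e-6           # beam diameter 2*w0 [m]

transit = d_beam / v_TeO2        # full transit time: ~9.3 ns
rise_10_90 = 0.64 * transit      # Gaussian-beam rule of thumb: ~5.9 ns
print(f"{transit*1e9:.1f} ns transit, {rise_10_90*1e9:.1f} ns 10-90% rise")
```

The resulting ~6 ns is consistent with the measured 7.5 ns once the 5 ns rise time of the RF switch is taken into account.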
A PZT-actuated mirror is introduced in the beamline after the second amplifier to compensate for possible low-frequency phase fluctuations due to free-space parts of the setup, e.g. mirrors or breadboard vibrations. In addition, the phase noise introduced by the pulse picker and amplifier is partially compensated. A portion of the beam after the PZT-actuated mirror is sent to the third beat detection setup. The beat signal is then phase-compared to a reference at 11.3 MHz from an electronic synthesizer (Marconi Instruments 2022C) and is used as an error signal to drive the PZT via a home-built loop filter. The in-loop error signal shows a peak at about 4 kHz when the feedback gain is too high, indicating a control bandwidth of approximately 4 kHz. This is fast enough to significantly suppress low-frequency phase noise caused by mechanical and acoustic vibrations.

Figure 3: Time domain traces of pulses after pulse picking and the second amplifier (“Output” in Fig. 2). The negative signal is due to ringing. a) At a repetition rate of 40 MHz without pulse picking (\(m\,=\,1\)). b) At a repetition rate of 4 MHz which corresponds to a pulse-to-pulse interval of 250 ns and a pulse picking factor of \(m\,=\,10\). The magnified inset shows a trace averaged over 1000 acquisitions where residual 40 MHz pulses are visible with about 28 dB of suppression. c) Pulses at a repetition rate of 400 kHz, corresponding to \(m=10^{2}\) (2.5 μs pulse-to-pulse interval). d) Pulses at 40 kHz repetition rate, corresponding to a pulse-to-pulse interval of 25 μs and \(m=10^{3}\). The oscilloscope sampling rate is 40 Gs/s for all traces.

## 4 Results and Discussion

The evaluated beat note spectra are obtained from the third beat note detection unit (Fig. 2) and shown in Fig. 4. The acquisitions were made at repetition rates of 40 MHz (no pulse picking), 4 MHz, 400 kHz, and 40 kHz with resolution bandwidths of 10 kHz, 1 kHz, 100 Hz, and 10 Hz, respectively. A clear comb structure is maintained even at the reduced repetition rate of 40 kHz, where a narrow beat note peak is still visible with a signal-to-noise ratio above 30 dB. For all repetition rates, the linewidth of the beat signal is limited by the resolution bandwidth of the spectrum analyzer (Agilent E4445A). Fig. 5 shows the RF power of the beat notes for different pulse picking factors \(m\). The electrical beat note power is proportional to the optical power contained in a single comb mode and scales with \(1/m^{2}\), as expected from the spectral intensity \(\tilde{I}_{\mathrm{p,0}}(\omega)\) of our model in Eq. (4). The power spectral density (PSD) of the phase noise is obtained by Fourier transforming the recorded time trace, assuming that \(\varphi(t)\) is small. Analyzing the heterodyne beat note allows us to bring optical phase noise to the RF domain, where it can be recorded and analyzed. In a sense, the convolution in Eq. (3) converts the RF noise of \(\tilde{\varphi}(\omega)\) into the optical domain, while heterodyning brings it back into the RF domain. The time traces contain \(2.05\times 10^{8}\) samples, and a Blackman window was applied before performing the Fourier transform. The sideband spectrum of the Fourier-transformed trace around the beat frequency gives the PSD of the phase noise after normalization to the peak amplitude of the beat note. Since the beat signal becomes weaker when lowering the repetition rate of the comb, the phase noise PSD experiences a relative increase in the noise floor.
To overcome this issue, the gated optical noise reduction (GATOR) technique described by Deschenes _et al._ in Ref. [63] was used to evaluate the data. In the time domain, the beat note of a frequency comb with a cw laser can be understood as the pulse train sampling the cw wave, i.e. the beat note signal is available only for the duration of the pulses. The GATOR technique strongly suppresses the noise from the cw reference laser and the detection setup by evaluating the beat signal only within narrow time windows around the comb pulses. In the frequency domain, the GATOR method effectively averages the spectrum of the beat notes between the cw laser and several different comb modes. This increases the signal-to-noise ratio compared to beat detection with a single comb mode. We used a software implementation of GATOR by introducing temporal gate windows around each of the pulses. The width of the gate windows was set to 40 ns for all repetition rates, limited by the bandwidth of the detector. The resulting phase noise PSDs are shown with solid lines in Fig. 6. To evaluate the noise floor, we repeated the same measurement without comb pulses and performed the same analysis. Since GATOR does not affect traces in which the pulse separation is shorter than the gate window width, we used the ungated data for the 40 MHz trace of Fig. 6 for simplicity. The PSD of the original frequency comb at 40 MHz repetition rate contains low-frequency phase noise up to about 1 kHz due to uncompensated environmental vibrations. Two broad noise peaks at about 20 kHz and 100 kHz are servo bumps from the feedback loops controlling the fast PZT actuator in the laser cavity and the AOM used for phase stabilization, respectively. The servo bump of the PZT for path length stabilization is around 400 Hz. The peak at 17.3 MHz in the 40 MHz trace is due to the neighboring beat note. Other spurious signals that we attribute to radio frequency pickup are visible at frequencies beyond 100 kHz. The noise floor is determined by the amplitude noise of the cw reference laser, the noise of the photodetectors, and the electronics used in the beat detection setup. It increases for lower repetition rates due to the reduced carrier power of the beat signal. The low-frequency noise below 1 kHz is the same for repetition rates of 40 MHz, 4 MHz, and 400 kHz. At a repetition rate of 40 kHz, some of the noise structure below 1 kHz appears 5 to 10 dB higher than at the other repetition rates, which contributes an increase of about 50 mrad to the rms integrated phase noise. This might be due to the different gain settings of the path length stabilization feedback. No other significant increase in phase noise was observed above the noise floor for the pulse-picked frequency combs. The FWHM linewidth of the frequency comb is limited by the resolution bandwidth of the measurement (10 Hz) for all repetition rates investigated. The rms phase noise integrated from 20 Hz to half the repetition rate \(f_{\mathrm{rep}}/2m\) was calculated from the PSD spectra. The resulting values are 100 mrad, 190 mrad, 156 mrad, and 195 mrad for repetition rates of 40 MHz, 4 MHz, 400 kHz, and 40 kHz, respectively. The contributions of the harmonic peaks of \(f_{\mathrm{rep}}\) and the beat notes do not represent the phase noise of the comb modes and are therefore excluded from the integration. For all repetition frequencies, the measurement noise floor contributes significantly to the rms integrated phase noise.
Therefore, our measurement gives an upper limit on the phase noise of the pulse-picked comb. The measurements presented here are based on in-loop signals. In our case, this is sufficient to determine the noise of the low repetition rate laser relative to the cw reference laser. The measured phase noise does not drop below the measurement noise floor at any frequency. Therefore, our in-loop phase noise measurement adequately reflects the upper limit of the phase noise of our low repetition rate laser system relative to the cw reference laser. Note that the path length stabilization used here should be implemented in any application of the low repetition rate frequency comb. It is interesting to consider what limits the lowest possible repetition rate. In the laser system discussed here, the high repetition rate frequency comb is stabilized before pulse picking using the method described in Sect. 3. The amplifier setup after pulse picking is prone to additional noise and more difficult to actively stabilize. This is because the error signal for path length stabilization is taken from low-repetition-rate pulses, and the available feedback bandwidth is limited to half the repetition rate. In the time domain, the servo system needs to receive at least two pulses before it can know whether the phase shift has changed. As a result, the phase noise at Fourier frequencies significantly lower than half the repetition rate can be efficiently suppressed, while noise at higher frequencies remains unsuppressed. Note that feedback systems based on Proportional-Integral-Derivative (PID) controllers require their control bandwidth to be much larger than the frequency of the noise in order to achieve a small phase delay, which allows high feedback gain at lower frequencies while suppressing oscillation at higher frequencies [64]. The remarkable accuracy of optical frequency combs comes from their steady-state operation, i.e. a regular pulse train that finds the same environment for each pulse. Steady-state operation is disturbed when the pulse-to-pulse interval is longer than the typical time scale of the phase noise introduced by the environment and the disturbance cannot be actively stabilized due to the limited feedback bandwidth. This problem becomes significant for repetition rates below 10 kHz, where acoustic vibrations of many standard optical components come into play. The lowest possible repetition rate depends on the type of amplifier and the design of the setup. Our amplifiers are pumped with a cw pump source, and therefore pulse-to-pulse phase variations are minimized. Our setup also does not rely on the chirped-pulse amplification (CPA) scheme, which can introduce wavelength-dependent phase fluctuations. Our demonstration shows that the compact and simple solid-state amplifiers used here introduce sufficiently low phase noise to maintain the comb structure at a repetition rate as low as 40 kHz. To increase the feedback bandwidth significantly beyond the repetition rate after pulse picking, a probe laser with a higher repetition rate or even a continuous wave laser could be superimposed on the beamline to monitor the phase variations. However, we expect limitations of this method because the probe beam would have a different peak intensity and spectrum. Intensity- and spectrum-dependent phase shifts expected in an optical amplifier, pulse compressor, and nonlinear frequency conversion cannot be easily accounted for in this way.
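To make the analysis procedure described above more concrete, the following sketch outlines a software GATOR evaluation (a simplified illustration of the technique of Ref. [63], not the reference implementation; the function and variable names are ours):

```python
import numpy as np

def gator_spectrum(beat, fs, pulse_times, gate_width):
    """Zero the beat-note trace outside temporal windows centered on the
    comb pulses, apply a Blackman window, and return the power spectrum."""
    t = np.arange(beat.size) / fs
    mask = np.zeros(beat.size, dtype=bool)
    for tp in pulse_times:                      # one gate per comb pulse
        mask |= np.abs(t - tp) < gate_width / 2
    gated = np.where(mask, beat, 0.0)
    spectrum = np.fft.rfft(gated * np.blackman(beat.size))
    freqs = np.fft.rfftfreq(beat.size, 1 / fs)
    return freqs, np.abs(spectrum) ** 2
```

The phase noise PSD is then read off from the sidebands around the beat frequency after normalizing to the beat-note peak, as described above.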
Figure 4: Beat note spectra acquired from the third beat note detection unit at different repetition rates of (a) 40 MHz, (b) 4 MHz, (c) 400 kHz, and (d) 40 kHz. A resolution bandwidth (RB) of 10 kHz, 1 kHz, 100 Hz, and 10 Hz was used, respectively. In all plots except for panel (d), two peaks which correspond to \(f_{\mathrm{rep}}/m\) and \(2f_{\mathrm{rep}}/m\) are visible in addition to two peaks that correspond to the beat frequencies. The repetition rate signal is strongly suppressed by balanced photodetection. The trace acquired for 40 kHz shows frequencies between the 17th and the 18th harmonics of \(f_{\mathrm{rep}}\) to avoid the elevated noise floor of the measurement setup at low frequencies. The RF spectrum at 40 MHz repetition rate without pulse picking shown in (a) contains several peaks between the repetition rate peaks and the beat notes. Most of these are due to the mixing of the strong repetition rate and beat note signals on the photodiode.

## 5 Conclusions and future prospects

In this paper, we have demonstrated a low repetition rate optical frequency comb based on a Yb:KYW solid-state mode-locked oscillator using an AOM pulse picker. The repetition rate is adjustable over three orders of magnitude from 40 MHz to 40 kHz. One of the modes of the frequency comb is tightly phase-locked to a cavity-stabilized ultra-low noise cw laser by measuring the heterodyne beat note between them and providing feedback to the cavity length of the oscillator, an external AOM, and a PZT-actuated mirror in the beamline. We have characterized the phase noise of the frequency comb at repetition rates of 40 MHz, 4 MHz, 400 kHz and 40 kHz with respect to the reference cw laser. The results confirm that a narrow linewidth comb structure is preserved even after pulse picking. Using the power spectral density of the phase noise obtained with the GATOR technique, the integrated rms phase noise was evaluated to be 195 mrad at 40 kHz repetition rate. For the first time to the best of our knowledge, we demonstrate optically-stabilized low-noise frequency combs at repetition rates as low as a few tens of kHz. The pulse energy and average power were 200 nJ and 8 mW at a repetition rate of 40 kHz. A solid-state amplifier similar to the one used in this work can be added to increase the pulse energy to a \(>\)10 μJ level. Frequency combs with such high pulse energy and moderate average power are expected to be useful for driving high harmonic generation processes and generating XUV frequency combs [65]. The low noise and narrow linewidth comb modes shown here indicate that low repetition rate frequency combs are a promising option for high-resolution spectroscopy at exotic wavelengths. Dual comb spectroscopy at XUV wavelengths can be an interesting application of the low repetition rate XUV frequency combs. When direct frequency comb spectroscopy is performed using frequency combs with a low repetition rate, it may be difficult to determine the comb mode number. If the pulse picking factor is large and the uncertainty of the line center determination is small enough, the comb mode number can be unambiguously determined by repeating the measurement for different pulse picking factors. The single-pass pulse picking scheme used in this study is inefficient for reducing the repetition rate because most of the original pulses are unused. In the future, we plan to perform pulse picking in a femtosecond buildup cavity to avoid the loss of average power [66, 67].
Intracavity pulse picking will also serve as a narrow spectral filter to efficiently suppress phase and amplitude noise at frequencies greater than half the resonance width.

## Funding

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 742247). T.W. Hansch acknowledges support from the Max-Planck Foundation.

## Data Availability

Data supporting the results presented in this paper are available from the authors upon reasonable request.

## Disclosures

The authors declare no conflicts of interest.

## Supplemental document

See Supplement for supporting content.
2309.16736
Cognizance of Post-COVID-19 Multi-Organ Dysfunction through Machine Learning Analysis
In the year 2022, a total of 466 patients from various cities across Iraq were included in this study. This research paper focuses on the application of machine learning techniques to analyse and predict multi-organ dysfunction in individuals experiencing Post-COVID-19 Syndrome, commonly known as Long COVID. Post-COVID-19 Syndrome presents a wide array of persistent symptoms affecting various organ systems, posing a significant challenge to healthcare. Leveraging the power of artificial intelligence, this study aims to enhance early detection and management of this complex condition. The paper outlines the importance of data collection and preprocessing, feature selection and engineering, model development and validation, and ethical considerations in conducting research in this field. By improving our understanding of Post-COVID-19 Syndrome through machine learning, healthcare providers can identify at-risk individuals and offer timely interventions, potentially improving patient outcomes and quality of life. Further research is essential to refine models, validate their clinical utility, and explore treatment options for Long COVID. Keywords: Post-COVID-19 Syndrome, Machine Learning, Multi-Organ Dysfunction, Healthcare, Artificial Intelligence.
Hector J. Castro, Maitham G. Yousif
2023-09-27T22:25:49Z
http://arxiv.org/abs/2309.16736v1
# Cognizance of Post-COVID-19 Multi-Organ Dysfunction through Machine Learning Analysis

###### Abstract

In the year 2022, a total of 466 patients from various cities across Iraq were included in this study. This research paper focuses on the application of machine learning techniques to analyse and predict multi-organ dysfunction in individuals experiencing Post-COVID-19 Syndrome, commonly known as Long COVID. Post-COVID-19 Syndrome presents a wide array of persistent symptoms affecting various organ systems, posing a significant challenge to healthcare. Leveraging the power of artificial intelligence, this study aims to enhance early detection and management of this complex condition. The paper outlines the importance of data collection and preprocessing, feature selection and engineering, model development and validation, and ethical considerations in conducting research in this field. By improving our understanding of Post-COVID-19 Syndrome through machine learning, healthcare providers can identify at-risk individuals and offer timely interventions, potentially improving patient outcomes and quality of life. Further research is essential to refine models, validate their clinical utility, and explore treatment options for Long COVID.

Keywords: Post-COVID-19 Syndrome, Machine Learning, Multi-Organ Dysfunction, Healthcare, Artificial Intelligence.

## Introduction:

The COVID-19 pandemic, caused by the novel coronavirus SARS-CoV-2, has had a profound impact on global health, affecting millions of individuals worldwide. While much attention has been directed towards understanding the acute phase of the disease and developing vaccines, the long-term health consequences of COVID-19 have garnered increasing attention. Among the post-recovery complications, a condition known as Post-COVID-19 Syndrome or Long COVID has emerged, characterized by a wide array of persistent symptoms that affect multiple organ systems (1-3). This research aims to delve into the multifaceted aspects of Post-COVID-19 Syndrome, focusing on the analysis and prediction of multi-organ dysfunction in individuals who have experienced this condition. To address this complex issue, machine learning techniques are employed, capitalizing on their capacity to process and derive insights from extensive datasets. The study draws upon a diverse range of data sources, including electronic health records, laboratory results, and patient surveys, to gain a comprehensive understanding of the disease (4-6). Recent studies have revealed that Post-COVID-19 Syndrome is not limited to respiratory issues but extends to affect various organs, including the cardiovascular system, nervous system, and hematological parameters (7-10). As highlighted by studies from Iraq, the impact of COVID-19 on health is multidimensional, leading to hematological changes, alterations in immune responses, and the potential for extended-spectrum beta-lactamase-producing bacterial infections (11-13).

Machine learning has demonstrated great potential in various medical applications, and its utilization in studying Post-COVID-19 Syndrome could significantly contribute to our understanding of the condition. By developing predictive models that take into account a wide range of factors, including patient demographics, comorbidities, and biomarkers, healthcare providers can identify those at higher risk of multi-organ dysfunction and implement timely interventions to mitigate the consequences (14-16). Furthermore, this research considers ethical considerations regarding data collection and privacy protection, ensuring that all practices adhere to ethical standards and regulations. By doing so, the study aims to provide a comprehensive overview of Post-COVID-19 Syndrome and its associated multi-organ dysfunction while upholding the rights and well-being of study participants (17-20). This paper will proceed to explore the methodology used in data collection and analysis, the development and validation of machine learning models, and the potential implications of this research on the management of Post-COVID-19 Syndrome. It is anticipated that the findings from this study will contribute to the growing body of knowledge surrounding COVID-19 and its long-term effects, thereby aiding in the development of more effective interventions and therapies (21-26).

## Methodology and Study Design:

**Study Population and Data Collection:** The study population will consist of 466 patients from various cities in Iraq who have experienced Post-COVID-19 Syndrome during the year 2022. Patients will be recruited from hospitals, clinics, and healthcare centers. Data on these patients will be collected through electronic health records, patient surveys, and laboratory results. The data will encompass a range of variables, including demographic information, comorbidities, COVID-19 severity, symptoms, and a wide array of clinical parameters.

**Data Preprocessing:** Raw data will undergo thorough preprocessing, including data cleaning, missing value imputation, and outlier detection. Any inconsistencies or discrepancies will be addressed to ensure data quality.

**Feature Selection and Engineering:** Feature selection techniques will be employed to identify the most relevant variables for the analysis. Additionally, new features may be engineered to capture specific aspects of the disease.

**Machine Learning Model Development:** A variety of machine learning algorithms will be considered, including but not limited to logistic regression, decision trees, random forests, support vector machines, and neural networks. The dataset will be split into training, validation, and test sets to develop and fine-tune the models. Cross-validation techniques will be used to assess model performance.

**Outcome Prediction:** The primary outcome will be the prediction of multi-organ dysfunction in Post-COVID-19 Syndrome patients. This will involve identifying patients at higher risk for complications affecting various organ systems, including the cardiovascular, respiratory, neurological, and hematological systems.

**Ethical Considerations:** The study will adhere to ethical guidelines and obtain necessary approvals from institutional review boards and ethics committees. Informed consent will be obtained from all participants, and their privacy and data security will be strictly maintained.

**Analysis and Interpretation:** Machine learning models will be evaluated using appropriate performance metrics such as accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC). The results will be interpreted to identify key factors contributing to multi-organ dysfunction in Post-COVID-19 Syndrome patients.
**Implications and Recommendations:** The study will provide insights into the prediction and understanding of multi-organ dysfunction in Post-COVID-19 Syndrome. Recommendations for clinical management, interventions, and further research will be discussed.

**Publication and Dissemination:** The findings will be disseminated through peer-reviewed publications and presentations at scientific conferences. This will contribute to the existing body of knowledge regarding the long-term effects of COVID-19 and inform healthcare practices.

**Limitations and Future Directions:** Any limitations encountered during the study will be acknowledged. Future research directions and potential improvements to the predictive models will be discussed.

By following this comprehensive methodology and study design, the research aims to advance our understanding of Post-COVID-19 Syndrome and its impact on multiple organ systems, ultimately contributing to more effective patient care and management strategies.

## Results

Figure 2 presents the severity classification of COVID-19 infection among the study cohort. The figure categorizes patients into four groups based on the severity of their initial COVID-19 infection: Mild, Moderate, Severe, and Critical. This information helps in understanding the spectrum of disease severity in patients who later developed Post-COVID-19 Syndrome.

Figure 3: Symptoms Reported by Post-COVID-19 Syndrome Patients

Figure 4 provides insight into the prevalence of multi-organ dysfunction among Post-COVID-19 Syndrome patients. It categorizes dysfunction by organ system, with cardiovascular dysfunction being the most common, followed by respiratory, neurological, hematological, and gastrointestinal dysfunction. These findings underscore the systemic impact of Post-COVID-19 Syndrome.

Figure 4: Prevalence of Multi-Organ Dysfunction in Post-COVID-19 Syndrome Patients

Figure 5 displays the performance metrics of different machine learning models used to predict multi-organ dysfunction in Post-COVID-19 Syndrome patients. The metrics include accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUC-ROC). These metrics assess the models' ability to correctly classify patients with and without multi-organ dysfunction, providing valuable insights for clinical decision-making.

Figure 5: Machine Learning Model Performance Metrics

These figures offer a comprehensive overview of the study's results, highlighting key demographic characteristics, COVID-19 severity, persistent symptoms, prevalence of multi-organ dysfunction, and the performance of machine learning models in predicting organ dysfunction in Post-COVID-19 Syndrome patients.

## Discussion:

The discussion section will examine and interpret the study's findings, considering the relevant literature cited throughout the research. This section will explore the implications of the results and their contribution to our understanding of multi-organ dysfunction in Post-COVID-19 Syndrome. Additionally, it will address the limitations of the study and suggest areas for future research. The study's cohort consisted of 466 patients from various cities in Iraq who experienced Post-COVID-19 Syndrome in 2022. The demographic characteristics revealed a relatively balanced gender distribution (52.9% male, 47.1% female), with a mean age of 45.2 years. Comorbidities such as hypertension (39.1%), diabetes (20.8%), and obesity (29.0%) were prevalent among the patients.
The prevalence of comorbidities is consistent with existing research (27-31), which highlights their role in the development of post-COVID-19 syndrome. The study population's age distribution aligns with the notion that older individuals are at a higher risk of experiencing persistent symptoms (32-35). Post-COVID-19 Syndrome patients reported a range of persistent symptoms, with fatigue (80.3%), shortness of breath (61.6%), chest pain (46.2%), cognitive impairment (40.6%), and muscle weakness (36.0%) being the most common. These symptoms can significantly impact patients' quality of life and are consistent with previous reports (36-40). The high prevalence of fatigue is particularly noteworthy and may warrant further investigation into its underlying mechanisms and management strategies (41-45). Additionally, the presence of cognitive impairment underscores the multi-systemic nature of Post-COVID-19 Syndrome, with potential neurological involvement (46-50). Multi-organ dysfunction is a hallmark of Post-COVID-19 Syndrome. In this study, cardiovascular dysfunction was the most prevalent (38.9%), followed by respiratory dysfunction (27.8%), neurological dysfunction (19.2%), hematological dysfunction (13.5%), and gastrointestinal dysfunction (9.6%). These findings emphasize the diverse and complex nature of the syndrome. The high prevalence of cardiovascular dysfunction aligns with research on COVID-19's impact on the cardiovascular system (51-56). Furthermore, the presence of neurological and hematological dysfunction underscores the need for multidisciplinary care and long-term monitoring of affected patients (57-60). The study utilised machine learning models to predict multi-organ dysfunction in Post-COVID-19 Syndrome patients. The models exhibited varying degrees of accuracy, precision, recall, F1-score, and AUC-ROC. Notably, the neural network model demonstrated the highest performance, with an accuracy of 84%. Machine learning models have shown promise in predicting disease outcomes (61-63). The use of such models in Post-COVID-19 Syndrome prediction can aid healthcare providers in early identification and intervention, improving patient care and outcomes. Two studies (64-66) explored the psycho-immunological status of recovered SARS-CoV-2 patients and the effect of hematological parameters on pregnancy outcomes among pregnant women with COVID-19. These investigations underline the importance of studying the long-term consequences of COVID-19 across various domains, including mental health and maternal-fetal health. Understanding the psycho-immunological aspects of recovery (67-68) can guide support and interventions for individuals experiencing psychological distress post-infection. Additionally, investigating the impact of COVID-19 on pregnancy outcomes (69-71) is critical for maternal and neonatal health. This study adds valuable insights into the understanding of Post-COVID-19 Syndrome, but several avenues for future research are evident. Longitudinal studies are needed to track the evolution of symptoms and organ dysfunction over time. Furthermore, investigations into potential treatments and interventions for specific symptoms and dysfunctions are warranted. The integration of machine learning into healthcare is a promising area, but further refinement and validation of predictive models are necessary. Collaborative efforts across healthcare institutions and countries can facilitate larger-scale studies and the development of more robust predictive tools (72-75).
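As a concrete illustration of the model development and evaluation protocol outlined in the Methodology (train/validation/test splitting, cross-validation, and reporting of accuracy, precision, recall, F1-score, and AUC-ROC), the following minimal scikit-learn sketch shows the workflow on synthetic stand-in data. The features, labels, and model choice here are illustrative assumptions only and are not the study's actual data or code:

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Synthetic stand-in for the clinical dataset (466 patients, random features)
rng = np.random.default_rng(0)
X = rng.normal(size=(466, 20))          # demographic/clinical features
y = rng.integers(0, 2, size=466)        # multi-organ dysfunction label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0, stratify=y)
model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, X_tr, y_tr, cv=5).mean())   # cross-validation
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
proba = model.predict_proba(X_te)[:, 1]
print(accuracy_score(y_te, pred), precision_score(y_te, pred),
      recall_score(y_te, pred), f1_score(y_te, pred),
      roc_auc_score(y_te, proba))
```

On real clinical data, the same metric set would be reported per model (logistic regression, decision trees, support vector machines, neural networks) to produce a comparison like that of Figure 5.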
2309.00140
Improving vision-inspired keyword spotting using dynamic module skipping in streaming conformer encoder
Using a vision-inspired keyword spotting framework, we propose an architecture with input-dependent dynamic depth capable of processing streaming audio. Specifically, we extend a conformer encoder with trainable binary gates that allow us to dynamically skip network modules according to the input audio. Our approach improves detection and localization accuracy on continuous speech using Librispeech top-1000 most frequent words while maintaining a small memory footprint. The inclusion of gates also reduces the average amount of processing without affecting the overall performance. These benefits are shown to be even more pronounced using the Google speech commands dataset placed over background noise where up to 97% of the processing is skipped on non-speech inputs, therefore making our method particularly interesting for an always-on keyword spotter.
Alexandre Bittar, Paul Dixon, Mohammad Samragh, Kumari Nishu, Devang Naik
2023-08-31T21:25:57Z
http://arxiv.org/abs/2309.00140v1
# Improving Vision-Inspired Keyword Spotting Using Dynamic Module Skipping in Streaming Conformer Encoder

###### Abstract

Using a vision-inspired keyword spotting framework, we propose an architecture with input-dependent dynamic depth capable of processing streaming audio. Specifically, we extend a conformer encoder with trainable binary gates that allow us to dynamically skip network modules according to the input audio. Our approach improves detection and localization accuracy on continuous speech using Librispeech top-1000 most frequent words while maintaining a small memory footprint. The inclusion of gates also reduces the average amount of processing without affecting the overall performance. These benefits are shown to be even more pronounced using the Google speech commands dataset placed over background noise where up to \(97\%\) of the processing is skipped on non-speech inputs, therefore making our method particularly interesting for an always-on keyword spotter.

Alexandre Bittar, Paul Dixon, Mohammad Samragh, Kumari Nishu, Devang Naik

Keywords: keyword spotting, streaming audio, conformer, input-dependent dynamic depth, speech commands

## 1 Introduction

Recent advances in deep learning have given rise to numerous end-to-end automatic speech recognition (ASR) models [1, 2, 3, 4, 5, 6, 7, 8] typically trained with encoder-decoder architectures on vast amounts of data. Although capable of approaching human-like ASR performance, such models usually involve substantial memory and power requirements to be stored or to process long audio streams of data directly on portable devices. More efficient approaches using smaller models with a limited vocabulary can alternatively be used to solve the simpler task of accurately spotting a set of selected keywords or key-phrases in streaming audio. Such detections can then trigger additional processing from larger ASR-like models, so that the heavy computations only happen on desired audio portions. This paper focuses on improving both the performance and the efficiency of keyword spotting (KWS) models by borrowing techniques from ASR and computer vision. Similarities between keyword spotting and object localization have already led to a variety of vision-inspired KWS approaches [9, 10, 11, 12]. Indeed, an audio segment can be treated as a 1D image, making computer vision methods applicable. The task of object localization is typically solved using bounding boxes [13, 14, 15]. Precisely localizing keywords can be crucial both for privacy and efficiency considerations, as the ASR model should only be triggered by the lighter KWS model when desired. The base framework for this paper is that of recent work by Samragh _et al._[12]. From the encoded speech representations of a fully convolutional BC-ResNet [16] backbone, their KWS model yields three types of keyword predictions, namely detection, classification and localization. Using the same general pipeline, we focus on improving the encoder by taking inspiration from ASR models while retaining streaming and low-memory constraints. The conformer [17] architecture, which combines convolution- and attention-based modules, constitutes a state-of-the-art choice of model for ASR. On top of its proven representational capabilities for speech, the conformer also uses residual connections in a way that allows the inclusion of gates to dynamically skip modules.
Recent work by Peng _et al._[18] has shown that binary gates can be added inside a Transformer-based ASR architecture to dynamically adjust the network's depth and reduce the average number of computations while retaining the same word error rate. We extend their input-dependent dynamic depth (I3D) method to the conformer by placing local gates to skip feedforward, convolution and attention modules based on characteristics of the input to the module itself. Skipping modules still requires the full model to be loaded and does not noticeably impact the user-perceived latency; nevertheless, it improves the efficiency and can lead to considerable power savings. In this paper, we replace the BC-ResNet encoder used in the vision-inspired KWS framework of [12] with an I3D-gated conformer that can process streaming audio. Considering the following goals, we aim to:

1. Improve keyword detection and localization while reducing the memory footprint using the conformer.
2. Minimize computations and lower the power consumption at inference using the I3D method.

We apply this method to (i) Librispeech [19] top-1000 most frequent words, and to (ii) the Google speech commands (GSC) [20] with added background noise. The former represents the task of detecting occurrences of specific words in continuous speech and is used to assess the encoder's detection and localization capabilities. The latter simulates an audio stream of background noise over which short speech commands are placed in isolation. This allows us to showcase the efficiency benefits of using I3D gates, as we expect the model to be able to skip even more processing on non-speech audio regions. Our experiments show that the proposed gating mechanism can skip on average 30%, 42%, and 97% of the encoder modules when processing continuous speech, isolated keywords and background noise respectively.

## 2 Methods

The overall KWS pipeline is illustrated in Figure 1. Compared to the BC-ResNet encoder used in [12], the conformer combines the power of self-attention to also learn global interactions along with convolutions which capture local correlations. The proposed method operates in streaming mode using windowing and handles variable command lengths using max-pooling.

### Input processing

The input audio is processed to generate 40-channel Mel filter banks from 25ms windows shifted by a stride of 10ms. To accommodate streaming conditions and limit the attention context, the inputs to the encoder are configured as 1.2 second windows (120 frames) with a shift of 240ms (24 frames). The encoder therefore receives inputs \(x\in\mathbb{R}^{B\times 120\times 40}\), where \(B\) is the batch size (set to one for inference) and 120 and 40 represent the number of frames and Mel features respectively. During training, these sliding windows are computed at once from the full input utterance and stacked over the batch dimension. At inference time, the model stores the last 960ms of the current window, waits for 240ms, then combines the two together to form the next 1.2 second window and repeats the procedure, which enables it to process streaming audio.

### Encoder

The encoder is defined as a standard conformer [17] with additional I3D gates [18] that allow its modules to be dynamically skipped. The input first goes through a subsampling convolutional layer which reduces its time dimension from 120 to \(T_{z}=29\). Its feature dimension also gets projected to the desired hidden size \(H\).
It then passes through a series of conformer blocks, where each block represents a sequence of (i) feedforward, (ii) multi-headed self-attention, (iii) convolution and (iv) feedforward modules. It is worth noting that none of these modules alter the shape of the transmitted tensors. A local binary gate can therefore be added to the residual connection of each of these four modules in all \(N_{z}\) conformer blocks, so that an input \(x\in\mathbb{R}^{B\times T_{z}\times H}\) gets mapped as, \[x\to x+g(\theta,x)\cdot\text{module}(x)\,, \tag{1}\] where \(g(\theta,x)\in\{0,1\}^{B}\) is the gating function parameterized by \(\theta\). In our experiments, we use a linear layer with weights \(W_{g}\in\mathbb{R}^{H\times 2}\) and bias \(b_{g}\in\mathbb{R}^{2}\) to implement each gate. The underlying probabilities of keeping or skipping the related module \(\mathbf{p_{g}}(x)=\big{(}p_{\text{keep}},p_{\text{skip}}\big{)}\in[0,1]^{B\times 2}\) are then simply computed as, \[\mathbf{p_{g}}(x)=\text{Softmax}\big{[}W_{g}\cdot\bar{x}+b_{g}\big{]}\,, \tag{2}\] where \(\bar{x}\) represents the mean of \(x\) taken over the time dimension. The softmax ensures that \(p_{\text{keep}}=1-p_{\text{skip}}\). During training, the Gumbel-Softmax trick [21, 22] is used to sample discrete zeros or ones from \(p_{\text{keep}}(x)\) in a differentiable way and obtain the desired binary values for \(g(\theta,x)\). At inference time, \(g(\theta,x)\) is alternatively computed as \(p_{\text{keep}}(x)>\beta\) using a fixed threshold \(\beta=0.5\). A regularizer \(\mathcal{L}_{\text{gate}}=\lambda\cdot f_{\text{open}}\) with hyperparameter \(\lambda=1\) is also defined during training to minimize the fraction of open gates \(f_{\text{open}}\) given by, \[f_{\text{open}}=\frac{1}{4\,N_{z}\,B}\sum_{b=1}^{B}\sum_{l=1}^{N_{z}}\sum_{m= 1}^{4}g(\theta_{l,m},x_{b})\,, \tag{3}\] where \(g(\theta_{l,m},x_{b})\in\{0,1\}\) corresponds to the gate output of the \(m\)-th module in the \(l\)-th conformer block, when applied to the \(b\)-th batch element of \(x\). We obtained better results by first pretraining the network without gates for a few epochs before enabling them. Our method is also applicable to fine-tune gates on top of a non-gated pretrained conformer.

### Output layers

**Detection.** After the encoder, a feedforward layer with a sigmoid activation maps the encodings \(z\in\mathbb{R}^{B\times T_{z}\times H}\) to keyword detection probabilities \(\hat{y}_{\text{det}}\in[0,1]^{B\times T_{z}\times C}\) as \[\hat{y}_{\text{det}}=\text{Sigmoid}\big{[}W_{\text{det}}\,z+b_{\text{det}} \big{]}\,, \tag{4}\] where \(W_{\text{det}}\in\mathbb{R}^{H\times C}\) and \(b_{\text{det}}\in\mathbb{R}^{C}\) for \(C\) keyword classes. **Classification.** For the classification probabilities, a feedforward layer is combined with a binary mask to discard all classes with detection probabilities below 0.5, and a softmax activation outputs the final classification probabilities \(\hat{y}_{\text{class}}\), \[\hat{y}_{\text{class}}=\text{Softmax}\Big{[}\big{(}\hat{y}_{\text{det}}\geq 0. 5\big{)}\cdot\big{(}W_{\text{class}}\,z+b_{\text{class}}\big{)}\Big{]}\,. \tag{5}\] Here an (unmasked) additional class with label \(C+1\) is used to account for situations where no known keyword is present, so \(W_{\text{class}}\in\mathbb{R}^{H\times(C+1)}\) and \(b_{\text{class}}\in\mathbb{R}^{C+1}\).
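Before turning to the localization head, the gating mechanism of Eqs. (1)-(3) can be summarized in a short PyTorch sketch (our own illustration of the described mechanism, not the authors' implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class I3DGate(nn.Module):
    """Per-module binary gate: a linear layer on the time-averaged input
    decides whether a module is executed or skipped, cf. Eqs. (1)-(3)."""

    def __init__(self, hidden_size: int, threshold: float = 0.5):
        super().__init__()
        self.proj = nn.Linear(hidden_size, 2)  # W_g and b_g of Eq. (2)
        self.threshold = threshold             # beta from the text

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, H); the decision is based on the time-averaged input
        logits = self.proj(x.mean(dim=1))      # (B, 2)
        if self.training:
            # Gumbel-Softmax trick: differentiable binary sample of p_keep
            return F.gumbel_softmax(logits, tau=1.0, hard=True)[:, 0]
        # Inference: hard threshold on p_keep
        return (logits.softmax(dim=-1)[:, 0] > self.threshold).float()

def gated_residual(x: torch.Tensor, module: nn.Module, gate: I3DGate):
    """Residual connection with module skipping, Eq. (1)."""
    g = gate(x)                                # (B,) in {0, 1}
    return x + g[:, None, None] * module(x)
```

At inference, an efficient implementation would of course branch on the gate value instead of multiplying, so that a skipped module is never actually executed.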
**Localization.** A feedforward layer with \(W_{\text{loc}}\in\mathbb{R}^{H\times 2C}\) and \(b_{\text{loc}}\in\mathbb{R}^{2C}\) simply predicts keyword widths and offsets as \[(\hat{y}_{\text{width}},\hat{y}_{\text{offset}})=W_{\text{loc}}\,z+b_{\text{loc}}\,. \tag{6}\] **Maxpool.** To account for the variability of keyword lengths, a max-pooling layer is applied over the \(T_{z}=29\) time steps of \(\hat{y}_{\text{class}}\) with a kernel size of 24 and a stride of 1, representing 1 second and 40ms respectively. The selected indices are then used to index the other outputs \(\hat{y}_{\text{det}}\), \(\hat{y}_{\text{width}}\) and \(\hat{y}_{\text{offset}}\), so that all return six time steps per 1.2 second window. In a similar fashion to [23], max-pooling allows the loss to only optimize the steps with the highest posterior probabilities.

### Ground truths

During training, after going through the model, the \(N_{w}\) sliding input windows are transferred from the batch axis back to the time dimension, resulting in \(T=6N_{w}\) output steps for the complete utterance. For the ground truths, we therefore consider a receptive field of \(R=1\) second with a stride of \(S=40\)ms so that it matches the model's predictions. We briefly explain the treatment of labels and losses here but refer the reader to [12] for more details. **Detection.** The event detection labels are computed with the intersection over ground truth (IOG) overlap metric. For a keyword \(c\) with begin and end timings \((b,e)\) it is given as \[\text{iog}_{t,c}=\frac{\text{overlap}\big{[}(tS,tS+R),(b,e)\big{]}}{e-b}\,, \tag{7}\] for \(t=1,\dots,T\). The detection labels are then computed as, \[y_{\text{det}}^{t,c}=\begin{cases}1,&\text{if iog}_{t,c}>0.95\\ 0,&\text{if iog}_{t,c}<0.5\\ \text{undefined},&\text{otherwise}\,.\end{cases} \tag{8}\] When the overlap is in between 0.5 and 0.95, it is not clear whether the keyword should be counted as present or absent. Such situations are therefore masked so that no gradient update takes place, resulting in a thresholded binary cross-entropy (BCE) loss. **Classification.** In order to make the model more robust to keyword collision and confusion, the softmax classifier is trained with a thresholded cross-entropy (CE) loss, where the labels \(y_{\text{class}}^{t}\) are defined as \[y_{\text{class}}^{t}=\begin{cases}k,&\text{if $\exists c\leq C$ with iog}_{t,c}>0.95\\ C+1,&\text{if iog}_{t,c}<0.05\quad\forall c\leq C\\ \text{undefined},&\text{otherwise}\,,\end{cases} \tag{9}\] and \(k=\operatorname*{argmax}_{c}\text{iog}_{t,c}\). **Localization.** For the keyword localization, the CenterNet approach from [15] is adopted. The receptive field center at timestep \(t\) is defined as \(c_{t}=t+\frac{R}{2S}\), so that the ground truth width and offset can be computed as \[y_{\text{width}}^{t,c}=\frac{e-b}{R}\,,\quad y_{\text{offset}}^{t,c}=\frac{b+e }{2S}-c_{t}\,. \tag{10}\] The localization loss is then simply the L1 distance between ground-truth and predicted values.

### Inference

At inference, the model outputs a sequence of six prediction steps every 240ms, which accounts almost entirely for the user-perceived latency. At step \(t\), a score is computed as \(\max\{\hat{y}_{\text{class}}^{t,c}:c\leq C\}\).
If the score is above a threshold \(\vartheta\), then an event is proposed as \(\big{(}\operatorname*{argmax}\{\hat{y}_{\text{class}}^{t,c}:c\leq C\},\hat{b} _{t},\hat{e}_{t}\big{)}\), where \(\hat{b}_{t}\) and \(\hat{e}_{t}\) are the estimated begin and end timings of the event computed from the width and offset predictions. Their relation is defined in Equation (10). Non-maximum suppression (NMS) [24] is then used to select the best non-overlapping proposals based on their scores and timings, which suppresses repetitive proposals.

## 3 Experiments

### Top1000 Librispeech

**Dataset.** The Librispeech dataset [19] is an English corpus obtained from audio books that have been read aloud and sampled at a frequency of 16 kHz. The training set contains 280k utterances, totaling 960 hours of speech. It is used for keyword spotting by defining a lexicon with the 1000 words that appear most frequently within the training set. Similarly to [9, 12], the start and end timings of the words are extracted using the Montreal Forced Aligner [25]. It represents a rather difficult KWS task as (i) many keywords often collide inside a single window due to the continuous read-speech nature of the utterances, and (ii) many confusable words such as (peace, piece), (night, knight), and (right, write) are treated as distinct classes. The results are reported on _test-clean_ and _test-other_, where the latter represents more challenging data. **Training.** Gated and non-gated conformer architectures with \(H=80\) and \(N_{z}=8\) are trained with the Adam optimizer [26] and a Cosine Annealing scheduler, which gradually reduces the learning rate from 0.001 to 0.0001 over 100 epochs. Data augmentation is applied by randomly cropping utterances before the start of the first keyword, which makes the model agnostic to shifts. Additionally, zero-padding both sides with 250ms ensures that no audio portion gets discarded during the windowing procedure. All models are trained on eight NVIDIA-V100 GPUs using Pytorch distributed data parallelism [27] and a batch size of eight utterances per GPU.

Figure 1: Keyword spotting pipeline in training mode.

**Evaluation.** To evaluate a trained model, we first obtain predicted events with an NMS score greater than \(\vartheta=0.95\) and count as true positives (TPs) the ones that overlap with a ground-truth event of the same class. Ground-truth events that are not predicted by the model are counted as false negatives (FNs), and predicted events that do not have a corresponding ground-truth of the same class are counted as false positives (FPs). This allows us to compute precision, recall, F1-score, actual accuracy and average IOU similarly to [9, 12]. Mean term weight values (MTWV) [28] are also reported using twenty selected keywords originally chosen in [29] and used in [9, 12], where \(\vartheta\) is tuned for each keyword. The false reject rate (FRR) is computed as \(\text{FNs}/(\text{FNs}+\text{TPs})\) and the false accept rate (FAR) as FPs per second. The portion of skipped multiply-and-accumulate operations (MACs) in the conformer over the complete test set is also reported to illustrate the efficiency benefits of using gates. **Results.** As presented in the top and middle panels of Table 1, while using 16\(\%\) fewer parameters, our approach improves upon the modified BC-ResNet baseline defined in [12] on almost all metrics, especially on _test-other_.
Adding input-dependent dynamic gates to the encoder (Ours-L-gated) results in skipping on average 30\(\%\) and 26\(\%\) of MACs on _test-clean_ and _test-other_ respectively, while maintaining the same performance (less than \(1\%\) difference on all metrics). It also outperforms a non-gated smaller model (Ours-S) with a comparable number of MACs, which demonstrates the benefit of the I3D method.

### Google speech commands

**Dataset.** We place the 35 Google speech commands v2 [20] in isolation over a stream of babble noises from the MS-SNSD dataset [30] with signal-to-noise ratios of 10-40dB. **Training.** Here tiny conformer architectures with \(H=40\) and \(N_{z}=3\) are trained as explained in Sect. 3.1. For the BC-ResNet baseline, we use the same architecture as on Librispeech with a reduced number of convolution channels. **Evaluation.** The evaluation is similar to that on Librispeech, except that here all labels are used to compute MTWVs, and skipped MACs are reported for speech and non-speech inputs separately. **Results.** Although our approach still improves upon the BC-ResNet baseline with 32\(\%\) fewer parameters, this simpler task mainly aims to demonstrate the benefits of gates in a command scenario. Here the encoder shows its ability to distinguish between speech and non-speech inputs and adapt its processing accordingly. We indeed measure that 42\(\%\) of MACs are skipped when processing regions containing speech, compared to 97\(\%\) for pure noise. As expected, the computational savings are even more significant in the absence of commands, making our method particularly interesting for improving the efficiency of always-on models.

## 4 Conclusion

This paper proposes a method for efficient and streaming KWS. We incorporate a streaming conformer encoder into a vision-inspired KWS pipeline and include trainable binary gates to control the network's dynamic depth. These gates can selectively skip modules based on input audio characteristics, resulting in reduced computations. Our method outperforms the baseline in both continuous speech and isolated command tasks, while using fewer parameters, thereby maintaining a small memory footprint. Furthermore, the gates allow us to considerably reduce the average number of computations during inference without affecting the overall performance. Their inclusion is observed to be even more advantageous in a scenario where speech commands appear sparsely over some background noise.
\begin{table} \begin{tabular}{l l l l l l l l l l l l} \hline \hline **Data** & **Model** & **Params** & **FRR** & **FAR** & **Precision** & **Recall** & **F1** & **Actual** & **IOU** & **MTWV** & **Skips** \([\%]\) \\ \hline Librispeech & BC-ResNet-L & 1.54M & 0.132 & 0.192 & 0.856 & 0.868 & 0.862 & 0.872 & 0.850 & 0.78 & 0 \\ _test-clean_ & Ours-L & 1.29M & **0.129** & **0.097** & **0.922** & **0.871** & **0.896** & 0.861 & 0.839 & **0.87** & 0 \\ & Ours-L-gated & 1.29M & 0.134 & **0.100** & **0.920** & 0.866 & **0.892** & 0.866 & 0.830 & **0.84** & 30 \\ & Ours-S & 966k & 0.168 & **0.107** & **0.911** & 0.832 & **0.870** & 0.821 & 0.839 & **0.84** & 0 \\ \hline Librispeech & BC-ResNet-L & 1.54M & 0.316 & 0.314 & 0.749 & 0.684 & 0.715 & 0.684 & 0.843 & 0.58 & 0 \\ _test-other_ & Ours-L & 1.29M & **0.295** & **0.146** & **0.869** & **0.705** & **0.779** & **0.697** & 0.830 & **0.76** & 0 \\ & Ours-L-gated & 1.29M & **0.306** & **0.151** & **0.863** & **0.694** & **0.769** & **0.688** & 0.820 & **0.70** & 26 \\ & Ours-S & 966k & 0.354 & **0.145** & **0.859** & 0.646 & **0.737** & 0.646 & 0.830 & **0.67** & 0 \\ \hline GSC & BC-ResNet-XS & 139k & 0.055 & 0.004 & 0.966 & 0.945 & 0.955 & 0.945 & 0.762 & 0.84 & 0 \\ & Ours-XS & 93k & **0.052** & **0.002** & **0.982** & **0.948** & **0.964** & **0.948** & **0.818** & **0.89** & 0 \\ & Ours-XS-gated & 94k & 0.056 & **0.003** & **0.976** & 0.944 & **0.960** & 0.944 & 0.757 & **0.87** & 42, 97 \\ \hline \hline \end{tabular} \end{table} Table 1: Metrics comparison between our models and a BC-ResNet baseline that was reproduced from [12]. Bold values indicate that a metric improves upon BC-ResNet. The last column shows the portion of skipped MACs when using gates.
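For concreteness, the greedy 1D NMS used to suppress repetitive proposals can be sketched as follows. The temporal-IoU threshold and the class-agnostic treatment are illustrative assumptions, since the paper does not spell these out.

```python
def nms_1d(proposals, iou_threshold=0.5):
    """Greedy 1D non-maximum suppression.

    `proposals` is a list of (score, begin, end) tuples; the highest-scoring
    proposal is kept, all proposals whose temporal IoU with it exceeds the
    threshold are suppressed, and the process repeats on the remainder.
    """
    proposals = sorted(proposals, key=lambda p: p[0], reverse=True)
    kept = []
    while proposals:
        best, *rest = proposals
        kept.append(best)
        proposals = []
        for p in rest:
            # Temporal intersection-over-union of the [begin, end] intervals.
            inter = max(0.0, min(best[2], p[2]) - max(best[1], p[1]))
            union = (best[2] - best[1]) + (p[2] - p[1]) - inter
            if inter / union < iou_threshold:
                proposals.append(p)
    return kept

# Two overlapping proposals for one occurrence plus one distinct proposal.
print(nms_1d([(0.97, 1.00, 1.40), (0.96, 1.05, 1.45), (0.99, 3.0, 3.3)]))
# -> [(0.99, 3.0, 3.3), (0.97, 1.0, 1.4)]
```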
2309.06531
ASPED: An Audio Dataset for Detecting Pedestrians
We introduce the new audio analysis task of pedestrian detection and present a new large-scale dataset for this task. While the preliminary results prove the viability of using audio approaches for pedestrian detection, they also show that this challenging task cannot be easily solved with standard approaches.
Pavan Seshadri, Chaeyeon Han, Bon-Woo Koo, Noah Posner, Subhrajit Guhathakurta, Alexander Lerch
2023-09-12T19:10:45Z
http://arxiv.org/abs/2309.06531v2
# ASPED: An Audio Dataset for Detecting Pedestrians

###### Abstract

We introduce the new audio analysis task of pedestrian detection and present a new large-scale dataset for this task. While the preliminary results prove the viability of using audio approaches for pedestrian detection, they also show that this challenging task cannot be easily solved with standard approaches.

Pavan Seshadri\({}^{b}\), Chaeyeon Han\({}^{a}\), Bon-Woo Koo\({}^{d}\), Noah Posner\({}^{c}\), Subhrajit Guhathakurta\({}^{a}\), Alexander Lerch\({}^{b}\) \({}^{a}\)CSPAV, \({}^{b}\)Music Informatics Group, and \({}^{c}\)IPaT, Georgia Institute of Technology; \({}^{d}\)Toronto Metropolitan University

pedestrian detection, audio classification, dataset

## 1 Introduction

The intelligent analysis of urban soundscapes plays an increasingly important role in the design of smart cities. Microphones can complement or even replace other forms of sensors because (i) they are affordable, (ii) have low power requirements, (iii) can cover large angles up to 360 degrees, and (iv) are not negatively impacted by light conditions, weather patterns such as fog, or obstacles blocking the angle of view.

In this paper, we propose a new challenging task in urban sound analysis: the detection of pedestrians from audio-only signals. The detection of pedestrians helps in alleviating bottlenecks and in triggering advance warnings about potential dislocations. Understanding the temporal and spatial variation in demand for pedestrian infrastructure can also lead to better resource use, more equitable service delivery, and greater sustainability and resilience.

Detecting pedestrians through audio poses some unique challenges. The audio signals pedestrians produce are often low in volume and can be as diverse as steps and speech. These signals have to be detected in a poly-timbral and time-varying mixture of multiple urban sound sources with overlapping frequency content.

To allow investigation of the viability of this novel task as well as to enable and encourage future research on the task of pedestrian detection through audio signals, we present a new, large-scale dataset containing audio and video data recorded in multiple separate recording sessions at different locations at the Georgia Tech campus, Atlanta.1 The number of pedestrians in proximity to the microphones is annotated through video analysis with four different proximity radii.

Footnote 1: urbanaudiosensing.github.io/ASPED, last access date Sep 6, 2023

The main contributions of this paper are (i) the introduction of a new task in urban sound analysis: pedestrian detection, (ii) the publication of a new large-scale audio dataset for this task called ASPED (Audio Sensing for PEdestrian Detection), and (iii) the presentation of baseline results for benchmarking and for viability analysis.

## 2 Related Work

Identifying the environmental context through Sound Event Detection (SED) has been an active area of research in the past decade [1, 2]. The challenge of SED in a typical outdoor environment is the detection of an event from multiple known and unknown sources of sound that are emitted simultaneously. Initial approaches to SED have used Mel-frequency cepstral coefficients (MFCC) or other time-frequency representations such as the Fourier transform and the wavelet transform [3, 4]. Other approaches included non-negative matrix factorization (NMF) and spectrogram analysis with image processing techniques [5, 6, 7].
Recent advances in feedforward neural networks (FNN) and multilabel recurrent neural networks (RNN) have been particularly promising for SED [8, 9, 3]. The advances in SED have led to a small but emerging field focusing on the detection and classification of urban sounds [1, 2]. This research has been instrumental in the automatic detection of crime indicators such as screams and gunshots and in monitoring urban noise pollution [10, 11, 12, 13]. A recent large-scale research effort in this domain has been an NSF-funded project called SONYC for detecting noise and tagging urban sound sources [14]. This project has provided a large dataset of audio recordings tagged by citizen science volunteers who annotated the presence of 23 fine-grained categories of events. Another such dataset is AudioSet, which was developed by the Machine Perception Research Organization at Google [15]. AudioSet is a large-scale collection of human-labeled \(10\,\mathrm{s}\) sound clips from over 2 million YouTube videos and contains 527 classes of annotated sounds. The same group at Google has also released the YouTube-100M data set labeled with one or more topic identifiers from a set of 30,871 labels [16]. These labels are assigned automatically based on the metadata and image content. A number of labeled data sets for SED have also been developed from contributions to freesound.org, including ESC-50 and FSD50K [17, 18]. In addition, VGGSound is another audio-visual dataset released in 2020 containing more than 310 audio classes [19]. However, previous research has not focused on sensing pedestrians using SED techniques.

## 3 Dataset

### Data acquisition

Two hardware setups were used for data acquisition. The audio collection setup consisted of multiple Tascam DR-05X audio recorders with power banks for extended duration recording, Saramonic SR-XM1 microphones, and 5L OverBoard Waterproof Dry Flat Bags for audio-permeable weatherproofing. The video setup consisted of GoPro HERO9 Black cameras with power banks (housed in Seahorse 56 OEM Micro Hard Cases) for extended duration recording. Multiple audio sensors and cameras were deployed for each data collection session.

For each session, the recorders were placed in their weatherproof bags once started, then secured to their recording locations using zip ties. Recorders were secured at approx. chest height, as it was determined that sub-meter variation in height did not affect audio quality. The cameras were set to time-lapse mode with a \(1\,\mathrm{s}\) interval. All Wi-Fi functionality was disabled to extend battery life. Multiple cameras were utilized to keep all recorders in view. The camera mounts were secured at approx. \(2.5\,\mathrm{m}\) using zip ties.

In order to time-sync the cameras, the time listed on www.time.gov was shown on a mobile device to each camera after starting the recording. A Fox 40 Pearl whistle was then blown and the precise time was recorded. This whistle was used to sync the audio recorders. In deployment locations over larger areas, multiple whistle blows were conducted.

Recorders were deployed at two on-campus locations: the Cadell Courtyard and the Tech Walkway. Both locations are near areas with restaurants and cafes but are off-limits to vehicular traffic. The battery life of the recording devices limited the length of each recording session to approx. 2 days. In total, we captured 1-fps video recordings that sum up to 3,406,229 frames and the corresponding audio recordings of nearly 2,600 hours.
All but one of the recording days were weekdays.

### Annotations

The number of pedestrians that actually passed the audio recorders was detected and annotated by applying the Masked-attention Mask Transformer (Mask2Former) [20], with a prediction threshold of 0.7, to the video recordings. This study used a Mask2Former implementation by OpenMMLab,2 trained on Microsoft COCO [21].

Footnote 2: openmmlab.com, last access date Sep 5, 2023

For each video frame, bounding boxes of the detected 'person' class were first extracted from the Mask2Former predictions. Next, circular buffers of different radii \(r\in\{1\,\mathrm{m},3\,\mathrm{m},6\,\mathrm{m},9\,\mathrm{m}\}\) were overlaid on the video frames around the poles to which audio recorders were attached. The buffers were angled to match the perspective of each video recording instance. Finally, the number of pedestrians with the bottom center of the bounding box intersecting with recorder buffers was counted and labeled in each frame. Each frame has four sets of annotation data for the four different recording radii (see Sect. 3.2).

Among the annotated videos, frames without any detected pedestrians were the most common (around 92.8 %). Frames with one pedestrian were next most frequent, followed by those with two, then three, four pedestrians, and so on. The labeled data shows many more pedestrians detected during the daytime. Pedestrian activity peaked around noon, especially during lunch time (11AM–2PM, see Fig. 2).

Figure 1: Research team installing audio recorders in the field.

Figure 2: Detected number of pedestrians at 6-meter radius by hour of day.

## 4 Experiments

### Experimental setup

We determine a baseline level of performance for this task with three different models, all targeting a binary classification (pedestrians present/not present) at different microphone radius settings and for different pedestrian count thresholds separating the present/not present classes.

#### 4.1.1 Model architectures

First, we investigate using the VGGish embeddings [16], pre-trained on AudioSet [15], as input to a transformer encoder to learn temporal relationships across each segment (referred to as _VGGISH_). Second, we use a convolutional encoder with a log-mel spectrogram input, followed by the aforementioned transformer encoder (referred to as _CONV_). Third, we explore using the Audio Spectrogram Transformer, which has been shown to deliver state-of-the-art performance for audio scene classification tasks [22] (referred to as _AST_). All models compute class output probabilities through an appended linear classification layer with a sigmoid activation function.

#### 4.1.2 Feature extraction

All network inputs are extracted in time frames of approx. \(1\,\mathrm{s}\) length. Both VGGISH and CONV follow the pre-processing procedure for the pre-trained VGGish network [16], resulting in a 128-dimensional VGGish embedding or a \(96\times 64\) dimensional (time \(\times\) freq) log-mel spectrogram, respectively. The AST input is a spectrogram with dimensionality \(100\times 128\) (time \(\times\) freq), following the original publication [22]. The input to the VGGISH and CONV models is a sequence of 10 consecutive features, corresponding to the \(1\,\mathrm{s}\) frames of a \(10\,\mathrm{s}\) audio segment. The input to the AST is a single feature per \(1\,\mathrm{s}\) frame. Each classification is done per frame, for every second of audio.
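The front-end code itself is not given in the paper; a sketch of a VGGish-style log-mel extraction (64 mel bands, 25 ms windows, 10 ms hop, i.e. roughly \(96\times 64\) values per \(\sim 1\,\mathrm{s}\) frame) using librosa could look as follows. The FFT size and the log offset are assumptions.

```python
import numpy as np
import librosa

def log_mel_frames(audio, sr=16000):
    """VGGish-style log-mel spectrogram: 64 mel bands, 25 ms / 10 ms STFT."""
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr,
        n_fft=int(0.025 * sr),       # 25 ms analysis window
        hop_length=int(0.010 * sr),  # 10 ms hop -> ~100 frames per second
        n_mels=64,
    )
    # Stabilized log, following the VGGish preprocessing convention.
    return np.log(mel + 1e-6).T      # shape: (time, 64)

feats = log_mel_frames(np.random.randn(16000))  # ~1 s of audio
print(feats.shape)                              # roughly (100, 64)
```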
#### 4.1.3 Training procedure

As our data contains pedestrian counts per frame, we create classification labels where values of 0 are counted as negative-activity, and any value above 0 is counted as positive-activity. The dataset was randomly split into train/test/validation subsets with 80/10/10 proportion, respectively. For testing and validation, any overlapping segments are removed so that labels are not re-used multiple times.

The loss function for all models is binary cross-entropy. As Fig. 2 shows, the label distribution is highly skewed towards no-activity; to promote the learning of pedestrian activity, we use the following augmentations for the underrepresented classes: (i) _weighted batch sampling_ -- in each mini batch, audio segments are sampled with replacement such that roughly half will contain at least one pedestrian activity event; (ii) _variable weighted loss_ -- each class's loss is weighted dynamically per batch relative to its density in the training samples, such that both positive and negative pedestrian-activity contribute roughly equally to the loss per batch (a minimal code sketch of this weighting is given at the end of the paper). The weighting function used is shown below:

\[\mathcal{L}=\lambda\mathcal{L}_{\mathrm{BCE+}}+(1-\lambda)\mathcal{L}_{\mathrm{BCE-}} \tag{1}\]

\[\lambda=\left\{\begin{array}{ll}\frac{1/num^{+}}{1/num^{+}+1/num^{-}},&\text{if }num^{+}\neq 0\\ 0,&\text{if }num^{+}=0\end{array}\right. \tag{2}\]

#### 4.1.4 Hyperparameters and implementation

For CONV and VGGISH, we use one transformer encoder with 4 attention heads and a hidden dimensionality of 128. CONV contains 6 convolutional blocks, each containing a conv2D, batchnorm, and leakyReLU layer. Both networks are trained with a learning rate of 0.0005. For the AST, we use the base configuration per the authors' implementation3 pre-trained on ImageNet [23] and AudioSet [15] with a hidden dimensionality of 768, and a learning rate of 5e-7. We train the CONV and VGGISH models for 20 epochs and the AST for 10 epochs, with the best performing model selected via performance on the validation set. Parameters are optimized using the Adam optimizer [24]. We use a batch size of 2048 for VGGISH, 512 for CONV, and 32 for AST.

Footnote 3: github.com/YuanGongND/ast, last access date Sep 5, 2023

#### 4.1.5 Experiments

We evaluate the baseline performance measured by class-level and macro-average recall with the following experiments:

**E1 -- Comparison of baseline architectures:** In order to capture task performance using general audio classification methods as well as to evaluate performance across architectures of varying complexity, the three models introduced above are compared. The complexity ranges from \(\sim 100\)K trainable parameters (VGGISH) to \(\sim 80\)M trainable parameters (AST).

**E2 -- Impact of recording radius on accuracy:** With this experiment, the impact of the recording radius on the performance is investigated. Spatial consideration for determining pedestrian activity affects both the count and diversity of pedestrian noises: smaller radii contain a lower number of pedestrians that should be easier to classify, while larger radii contain a higher number of pedestrians with harder-to-classify samples. As such, larger radii should provide a greater diversity of pedestrian signals to our models, with the downside that counted pedestrians are more difficult to detect.

Figure 3: Pedestrian detection video setup.
**E3 -- Impact of pedestrian count during training and testing on performance:** The threshold for binary classification can be set at arbitrary pedestrian counts and does not necessarily have to be identical for training and testing. Therefore, we determine the impact of different training thresholds on different inference thresholds and thus investigate the model's generalizability to different levels of pedestrian activity. This experiment utilizes a CONV model at radius \(r=6\,\mathrm{m}\) while thresholding labels with values \(p_{\mathrm{T}}\in\{1,2,3,4\}\) such that any value lower than \(p_{\mathrm{T}}\) is set to 0. We then test each trained model on the 4 resulting test sets.

### Results

**E1:** Figures 4(a) and 4(b) detail the results using the VGGISH, CONV, and AST models. We can make the following observations. First, the VGGISH model is in most cases outperformed by both the CONV and AST models. Second, in terms of macro accuracy, the performance of the VGGISH model is fairly constant across all radii, while the CONV and AST models achieve the highest performance on radii of \(3\,\mathrm{m}\) and \(6\,\mathrm{m}\). Third, the AST generally has the closest parity between performance on both classes. Lastly, the negative class recall seems to generally slightly outperform the recall for the positive (pedestrian) activity; however, the dramatic class imbalance observed in the data is not reflected in the results, showing the effectiveness of the sampling and loss weighting applied during training.

**E2:** When attempting to compare the performance across different radii in Fig. 4(a), it is important to note that the test sets are not identical; although all audio content is identical, the labels and, therefore, class proportions differ. The performance per class tends to be most balanced using radii of \(3\,\mathrm{m}\) and \(6\,\mathrm{m}\). The performance for radius \(1\,\mathrm{m}\) likely suffers due to pedestrian signals just outside the radius being labeled as no-activity, while radius \(9\,\mathrm{m}\) likely suffers from the opposite issue: low-volume pedestrian signals on the edge are labeled as pedestrians while potentially not detectable from audio.

**E3:** Figure 4(c) visualizes the macro accuracy for each combination of train threshold and test threshold for pedestrian count. We can make the following observations. First, as the threshold for the _test_ pedestrian count increases, a greater proportion of the samples are classified correctly. This is unsurprising, as frames with more pedestrians are presumably easier to detect. Second, performance generally decreases with increasing threshold for the _training_ pedestrian count, indicating that the classifier benefits from harder-to-classify training samples. Third, a training pedestrian count threshold of 4 underperforms in all tests, likely due to the low number of such occurrences in the dataset. In general, the performance seems to be best when trained with a low pedestrian count threshold and evaluated with a high pedestrian count threshold (upper right triangle).

## 5 Conclusion

We have introduced the new large-scale dataset ASPED for the challenging task of detecting pedestrians from audio data. The dataset includes high-quality audio recordings plus the video recordings used for labeling the data with pedestrian counts.
The baseline results indicate the feasibility of using audio sensors for pedestrian tracking, although the performance needs to be improved before systems become practically usable. Plans for future work include extending the dataset to locations with car traffic, investigating the accuracy of regression approaches to predict exact pedestrian counts, and the development of more sophisticated classification approaches for pedestrian detection.

Figure 4: Baseline results.
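As promised in Sect. 4.1.3, a minimal PyTorch sketch of the variable weighted loss of Equations (1) and (2) follows. Reading \(\mathcal{L}_{\mathrm{BCE}\pm}\) as the mean BCE over the positive/negative frames of the batch is our assumption; the paper does not state the normalization explicitly.

```python
import torch
import torch.nn.functional as F

def balanced_bce(logits, labels):
    """Per-batch class-balanced BCE following Eqs. (1)-(2) of Sect. 4.1.3.

    Positive and negative frames contribute roughly equally to the loss,
    regardless of how skewed the batch is towards no-activity.
    """
    num_pos = labels.sum()
    num_neg = labels.numel() - num_pos
    if num_pos == 0:
        lam = torch.tensor(0.0)  # Eq. (2): lambda = 0 when num+ = 0
    else:
        lam = (1 / num_pos) / (1 / num_pos + 1 / num_neg.clamp(min=1))
    bce = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    loss_pos = (bce * labels).sum() / num_pos.clamp(min=1)
    loss_neg = (bce * (1 - labels)).sum() / num_neg.clamp(min=1)
    return lam * loss_pos + (1 - lam) * loss_neg  # Eq. (1)

# Toy batch of 10-frame segments with ~7% positive frames.
logits = torch.randn(2048, 10)
labels = (torch.rand(2048, 10) < 0.07).float()
print(balanced_bce(logits, labels))
```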
2309.16722
Erratum to the paper: Asymptotic Invariants of Base Loci
This note points out a gap in the proof of one of the technical results in the paper "Asymptotic Invariants of Base Loci", that appeared in Ann. Inst. Fourier (Grenoble) 56 (2006), 1701-1734. We provide a correct proof of this result.
Lawrence Ein, Robert Lazarsfeld, Mircea Mustata, Michael Nakamaye, Mihnea Popa
2023-09-20T22:21:25Z
http://arxiv.org/abs/2309.16722v1
# Erratum to the paper: Asymptotic Invariants of Base Loci

###### Abstract.

This note points out a gap in the proof of one of the technical results in the paper _Asymptotic Invariants of Base Loci_, that appeared in _Ann. Inst. Fourier (Grenoble)_ 56 (2006), 1701-1734. We provide a correct proof of this result.

## 1. The setup

We work over an algebraically closed field \(k\) and let \(X\) be a variety over \(k\) (that is, a scheme of finite type over \(k\) that is irreducible and reduced). Let \(N\) be a finitely generated, free abelian group and \(S\subseteq N\) a finitely generated, saturated subsemigroup. We denote by \(C\) the cone generated by \(S\) in \(N_{\mathbf{R}}=N\otimes_{\mathbf{Z}}\mathbf{R}\), so \(C\) is a rational polyhedral convex cone and \(S=C\cap N\). For standard facts of convex geometry, we refer to [10] and [11].

An _\(S\)-graded system of ideals_ on \(X\) is a family \(\mathfrak{a}_{\bullet}=(\mathfrak{a}_{m})_{m\in S}\) of coherent ideals \(\mathfrak{a}_{m}\subseteq\mathcal{O}_{X}\) for \(m\in S\) such that \(\mathfrak{a}_{0}=\mathcal{O}_{X}\) and \(\mathfrak{a}_{m}\cdot\mathfrak{a}_{m^{\prime}}\subseteq\mathfrak{a}_{m+m^{\prime}}\) for every \(m,m^{\prime}\in S\). The _Rees algebra_ of \(\mathfrak{a}_{\bullet}\) is the quasi-coherent sheaf of \(S\)-graded \(\mathcal{O}_{X}\)-algebras \[R(\mathfrak{a}_{\bullet})=\bigoplus_{m\in S}\mathfrak{a}_{m}.\] We say that \(\mathfrak{a}_{\bullet}\) is _finitely generated_ if \(R(\mathfrak{a}_{\bullet})\) is a finitely generated \(\mathcal{O}_{X}\)-algebra. For a coherent ideal \(\mathfrak{b}\) on \(X\), we denote by \(\overline{\mathfrak{b}}\) the integral closure of \(\mathfrak{b}\) (see [13] for the definition and basic properties of integral closure of ideals). The following result is Proposition 4.7 in [1].

**Proposition 1.1**.: _If \(\mathfrak{a}_{\bullet}\) is a finitely generated \(S\)-graded system of ideals on the variety \(X\), then there is a smooth fan \(\Delta\) with support \(C\) such that for every smooth fan \(\Delta^{\prime}\) refining \(\Delta\), there is a positive integer \(d\) with the following property: if \(\sigma\) is a cone in \(\Delta^{\prime}\) and \(e_{1},\ldots,e_{s}\) are the generators of \(S_{\sigma}=\sigma\cap N\), then_ \[\overline{\mathfrak{a}_{d\sum_{i}p_{i}e_{i}}}=\overline{\prod_{i}\mathfrak{a}_{de_{i}}^{p_{i}}}\quad\text{for all}\quad p_{1},\ldots,p_{s}\in\mathbf{Z}_{\geq 0}. \tag{1}\]

The argument in [1] proceeds by induction on the dimension of \(C\). A key claim is that one can choose a smooth fan \(\Delta\) with support \(C\) such that the degrees corresponding to a finite system of generators of \(R(\mathfrak{a}_{\bullet})\) lie on the rays of \(\Delta\) and such that the equality (1) holds on each cone of \(\Delta\) of dimension \(\dim(C)-1\). However, it is not clear that this can be achieved when \(\dim(C)\geq 3\): given any fan \(\Delta\) with support \(C\), we can apply the inductive hypothesis to get suitable refinements for the cones in \(\Delta\) of dimension \(\dim(C)-1\), but we then need to further refine \(\Delta\), leading to new cones of dimension \(\dim(C)-1\). It is not clear that this process terminates.

## 2. The corrected proof

In what follows we provide a different proof of Proposition 1.1. The key ingredient is the following general lemma. While the statement is familiar to the experts in convex geometry, we provide a proof since we could not find a reference in the literature.
**Lemma 2.1**.: _Let \(N\) be a finitely generated, free abelian group and \(C\) the convex cone in \(N_{\mathbf{R}}\) generated by \(v_{1},\ldots,v_{r}\in N_{\mathbf{Q}}=N\otimes_{\mathbf{Z}}\mathbf{Q}\). Given \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\in\mathbf{R}_{\geq 0}^{r}\), we consider the function \(\varphi_{\alpha}\colon C\cap N_{\mathbf{Q}}\to\mathbf{R}_{\geq 0}\) given by_ \[\varphi_{\alpha}(v)=\inf\big{\{}\lambda_{1}\alpha_{1}+\ldots+\lambda_{r} \alpha_{r}\mid\lambda_{1},\ldots,\lambda_{r}\in\mathbf{Q}_{\geq 0},\lambda_{1}v_{1}+ \ldots+\lambda_{r}v_{r}=v\big{\}}. \tag{2}\] _For every \(\alpha\), the infimum in (2) is a minimum and \(\varphi_{\alpha}\) is a convex, piecewise linear function. Moreover, there is a fan \(\Delta\), with support \(C\), such that each \(\varphi_{\alpha}\) is linear on every cone of \(\Delta\)._ Before giving the proof of the lemma, we recall one well-known fact. For \(u=(u_{1},\ldots,u_{n}),v=(v_{1},\ldots,v_{n})\in\mathbf{R}^{n}\), we put \(\langle u,v\rangle=\sum_{i=1}^{n}u_{i}v_{i}\). We use the same notation for the corresponding pairing of vectors in \(\mathbf{R}^{r}\). _Remark 2.2_.: Recall that a (rational) _polyhedron_ in \(\mathbf{R}^{n}\) is a subset defined by finitely many affine linear inequalities (defined over \(\mathbf{Q}\)). A (rational) _polytope_ in \(\mathbf{R}^{n}\) is a bounded (rational) polyhedron, or equivalently, the convex hull of finitely many points (in \(\mathbf{Q}^{n}\)); see [29, Theorem 1.1]. Any polyhedron \(P\) in \(\mathbf{R}^{n}\) can be written as \(P_{0}+C\), where \(P_{0}\) is a polytope and \(C\) is a polyhedral convex cone (see [29, Theorem 1.2]); moreover, if \(P\) is rational, then \(P_{0}\) and \(C\) can be taken rational as well. Suppose now that \(\ell\) is a linear function on \(\mathbf{R}^{n}\) given by \(\ell(v)=\langle u,v\rangle\) for some \(u\in\mathbf{R}^{n}\). It is clear that \(\ell\) is bounded below on \(P\) if and only if \(\ell\geq 0\) on \(C\), in which case we have \[\inf_{v\in P}\ell(v)=\min_{v\in P}\ell(v)=\min_{v\in P_{0}}\ell(v)=\min\big{\{} \ell(w_{1}),\ldots,\ell(w_{s})\big{\}},\] where \(w_{1},\ldots,w_{s}\) are the vertices (that is, the \(0\)-dimensional faces) of \(P_{0}\). Note that if \(P_{0}\) is a rational polytope, then \(w_{i}\in\mathbf{Q}^{n}\) for all \(i\), hence if \(P\) is a rational polyhedron, we have \[\min_{v\in P}\ell(v)=\min_{v\in P\cap\mathbf{Q}^{n}}\ell(v).\] Proof of Lemma 2.1.: Let us choose an isomorphism \(N\simeq\mathbf{Z}^{n}\) that allows us to identify \(N_{\mathbf{Q}}\) and \(N_{\mathbf{R}}\) with \(\mathbf{Q}^{n}\) and \(\mathbf{R}^{n}\), respectively. We can thus write \(v_{i}=(v_{i,1},\ldots,v_{i,n})\) for \(1\leq i\leq r\), with \(v_{i,j}\in\mathbf{Q}\) for all \(i\) and \(j\). For \(\alpha\in\mathbf{R}_{\geq 0}^{r}\), let us denote by \(\widetilde{\varphi}_{\alpha}\) the map \(C\to\mathbf{R}\) given by \[\widetilde{\varphi}_{\alpha}(v)=\inf\big{\{}\langle\alpha,\lambda\rangle\ |\ \lambda=( \lambda_{1},\ldots,\lambda_{r})\in\mathbf{R}_{\geq 0}^{r},\lambda_{1}v_{1}+ \ldots+\lambda_{r}v_{r}=v\big{\}}. 
\tag{3}\] If \(v=(b_{1},\ldots,b_{n})\in\mathbf{R}^{n}\) and \(\lambda=(\lambda_{1},\ldots,\lambda_{r})\in\mathbf{R}^{r}\), the conditions \(\lambda_{1},\ldots,\lambda_{r}\geq 0\) and \(v=\sum_{i=1}^{r}\lambda_{i}v_{i}\) are equivalent to \(\lambda\in P(v)\), where \(P(v)\) is the polyhedron in \(\mathbf{R}^{r}\) given by \[\sum_{i=1}^{r}v_{i,j}\lambda_{i}=b_{j}\text{ for }1\leq j\leq n\quad\text{ and}\quad\lambda_{i}\geq 0\text{ for }1\leq i\leq r.\] For every \(\alpha=(\alpha_{1},\ldots,\alpha_{r})\in\mathbf{R}_{\geq 0}^{r}\), we have \(\langle\alpha,\lambda\rangle\geq 0\) for all \(\lambda\in P(v)\). We thus conclude using Remark 2.2 that \(\varphi_{\alpha}(v)=\widetilde{\varphi}_{\alpha}(v)\) for every \(v\in C\cap\mathbf{Q}^{n}\) and the infimum in the definition of \(\varphi_{\alpha}(v)\) is a minimum. The fact that each \(\widetilde{\varphi}_{\alpha}\) is a convex function follows easily from the definition. Indeed, since we clearly have \(\widetilde{\varphi}_{\alpha}(tv)=t\cdot\widetilde{\varphi}_{\alpha}(v)\) for all \(v\in C\) and \(t\geq 0\), convexity is equivalent to the fact that \[\widetilde{\varphi}_{\alpha}(v+v^{\prime})\leq\widetilde{\varphi}_{\alpha}(v)+\widetilde{\varphi}_{\alpha}(v^{\prime})\quad\text{for all}\quad v,v^{\prime}\in C.\] This follows from the fact that if \(v=\sum_{i=1}^{r}\lambda_{i}v_{i}\) and \(v^{\prime}=\sum_{i=1}^{r}\lambda_{i}^{\prime}v_{i}\), with \(\lambda_{i},\lambda_{i}^{\prime}\in\mathbf{R}_{\geq 0}\) for all \(i\), are such that \(\sum_{i=1}^{r}\lambda_{i}\alpha_{i}=\widetilde{\varphi}_{\alpha}(v)\) and \(\sum_{i=1}^{r}\lambda_{i}^{\prime}\alpha_{i}=\widetilde{\varphi}_{\alpha}(v^{\prime})\), then \(v+v^{\prime}=\sum_{i=1}^{r}(\lambda_{i}+\lambda_{i}^{\prime})v_{i}\), hence \[\widetilde{\varphi}_{\alpha}(v+v^{\prime})\leq\sum_{i=1}^{r}\lambda_{i}\alpha_{i}+\sum_{i=1}^{r}\lambda_{i}^{\prime}\alpha_{i}=\widetilde{\varphi}_{\alpha}(v)+\widetilde{\varphi}_{\alpha}(v^{\prime}).\] It is a consequence of the Duality Theorem in Linear Programming (see [1, Theorem IV.8.2]) that for every \(v=(b_{1},\ldots,b_{n})\in C\), we have \[\widetilde{\varphi}_{\alpha}(v)=\max_{\gamma\in Q(\alpha)}\langle v,\gamma\rangle,\] where \(Q(\alpha)\) is the polyhedron in \(\mathbf{R}^{n}\) consisting of those \(\gamma=(\gamma_{1},\ldots,\gamma_{n})\) with \(\sum_{j=1}^{n}v_{i,j}\gamma_{j}\leq\alpha_{i}\) for \(1\leq i\leq r\). In order to complete the proof of the lemma, it is enough to show that there is a fan \(\Delta\) (consisting of strongly convex, rational polyhedral convex cones) with support \(C\), such that \(\widetilde{\varphi}_{\alpha}\) is a linear function on every cone of \(\Delta\) for all \(\alpha\in\mathbf{R}_{\geq 0}^{r}\). Note that if \(v\in C\) is fixed and we write \(P(v)\) as \(P_{0}(v)+C_{0}(v)\), for a polytope \(P_{0}(v)\) and a polyhedral convex cone \(C_{0}(v)\), then it follows from Remark 2.2 that \[\widetilde{\varphi}_{\alpha}(v)=\min\big{\{}\langle\alpha,w_{1}\rangle,\ldots,\langle\alpha,w_{s}\rangle\big{\}},\] where \(w_{1},\ldots,w_{s}\) are the vertices of \(P_{0}(v)\). We thus conclude that the function \(\alpha\mapsto\widetilde{\varphi}_{\alpha}(v)\) is continuous on \(\mathbf{R}_{\geq 0}^{r}\). In particular, it is enough to find a fan \(\Delta\) as above such that \(\widetilde{\varphi}_{\alpha}\) is linear on the cones of \(\Delta\) for all \(\alpha\in\mathbf{R}_{>0}^{r}\). Note now that if \(\alpha\in\mathbf{R}_{>0}^{r}\), then \(0\) lies in the interior of \(Q(\alpha)\).
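(Aside, not part of the argument: since \(\widetilde{\varphi}_{\alpha}(v)\) is by definition the optimal value of a finite-dimensional linear program, individual instances of Lemma 2.1 can be checked numerically. A minimal sketch with hypothetical generators, assuming SciPy's `linprog`:)

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical generators v_1, v_2, v_3 of C (columns of V) and weights alpha.
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
alpha = np.array([1.0, 1.0, 1.5])

def phi(v):
    """LP value: min <alpha, lambda> s.t. lambda >= 0 and V @ lambda = v."""
    res = linprog(c=alpha, A_eq=V, b_eq=v, bounds=[(0, None)] * V.shape[1])
    return res.fun

# Writing (1,1) as v_1 + v_2 costs 1 + 1 = 2, while v_3 = (1,1) alone costs
# 1.5, so the minimum is attained at lambda = (0, 0, 1).
print(phi(np.array([1.0, 1.0])))  # 1.5
```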
We consider the normal fan1 \(\Delta(\alpha)\) to \(Q(\alpha)\) (see [1, Example 7.3]).

Footnote 1: We consider the version of the normal fan whose rays are the outer normals to the facets of the polyhedron.

Its cones are of the form \[\sigma_{F}=\big{\{}w\in\mathbf{R}^{n}\mid\langle w,u^{\prime}\rangle\geq \langle w,u\rangle\text{ for all }u\in Q(\alpha),u^{\prime}\in F\big{\}},\] where \(F\) runs over the faces of \(Q(\alpha)\). It is clear that \(\widetilde{\varphi}_{\alpha}\) is linear on each cone of \(\Delta(\alpha)\): on \(\sigma_{F}\) it is given by \(\langle-,u^{\prime}\rangle\) for every \(u^{\prime}\in F\). The support of \(\Delta(\alpha)\) consists precisely of those \(w\in\mathbf{R}^{n}\) such that the function \(\langle w,-\rangle\) is bounded above on \(Q(\alpha)\); equivalently, if we write \(Q(\alpha)=Q_{0}(\alpha)+T(\alpha)\), where \(Q_{0}(\alpha)\) is a polytope and \(T(\alpha)\) is a polyhedral convex cone, then \(-w\) lies in the dual \(T(\alpha)^{\vee}\) of \(T(\alpha)\). Note that by definition of \(Q(\alpha)\), the cone \(T(\alpha)\) is defined by \(\sum_{j=1}^{n}v_{i,j}\gamma_{j}\leq 0\) for \(1\leq i\leq r\) (see [1, Proposition 1.12]), hence it is the dual of \(-C\). We thus conclude that the support of \(\Delta(\alpha)\) is \(C\). Note also that every facet of \(Q(\alpha)\) is of the form \[Q(\alpha)\cap\big{\{}\gamma\mid\langle v_{i},\gamma\rangle=\alpha_{i}\big{\}}\] for some (nonzero) \(v_{i}\), hence the corresponding ray of \(\Delta(\alpha)\) is \(\mathbf{R}_{\geq 0}v_{i}\). We deduce that when we vary \(\alpha\), the rays of \(\Delta(\alpha)\) belong to a finite set, hence we have finitely many such fans. If we let \(\Delta\) be any common refinement of all such \(\Delta(\alpha)\), we conclude that the support of \(\Delta\) is \(C\) and \(\widetilde{\varphi}_{\alpha}\) is linear on the cones of \(\Delta\) for all \(\alpha\in\mathbf{R}_{>0}^{r}\). This completes the proof of the lemma.

_Remark 2.3_.: We now explain a more elementary argument for the existence of the fan \(\Delta\) in Lemma 2.1. This avoids the use of the Duality Theorem in Linear Programming and also makes the choice of fan \(\Delta\) more explicit. First, arguing as in the proof of Carathéodory's theorem (see [29, Proposition 1.15]), we show the following

**Claim**: in the definition of \(\widetilde{\varphi}_{\alpha}(v)\) it is enough to only consider those \(\lambda=(\lambda_{1},\dots,\lambda_{r})\in\mathbf{R}_{\geq 0}^{r}\) with the property that the \(v_{i}\) with \(i\in J(\lambda):=\{i\mid\lambda_{i}\neq 0\}\) are linearly independent.

In order to see this, it is enough to show that if the \(v_{i}\) with \(i\in J(\lambda)\) are linearly dependent, then we can find \(\lambda^{\prime}=(\lambda^{\prime}_{1},\dots,\lambda^{\prime}_{r})\in \mathbf{R}_{\geq 0}^{r}\) such that \(\sum_{i=1}^{r}\lambda_{i}v_{i}=\sum_{i=1}^{r}\lambda^{\prime}_{i}v_{i}\) and we have \(\sum_{i=1}^{r}\lambda^{\prime}_{i}\alpha_{i}\leq\sum_{i=1}^{r}\lambda_{i} \alpha_{i}\) and \(J(\lambda^{\prime})\subsetneq J(\lambda)\). Note that, by assumption, we have a relation \(\sum_{i\in J(\lambda)}b_{i}v_{i}=0\) such that \(J:=\{i\in J(\lambda)\mid b_{i}\neq 0\}\) is nonempty. After possibly multiplying this relation with \(-1\), we may and will assume that \(\sum_{i\in J}b_{i}\alpha_{i}\geq 0\) and \(b_{i}>0\) for some \(i\in J\) (we use here the fact that \(\alpha_{i}\geq 0\) for all \(i\)).
Let \(j\in J\) be such that \[\tfrac{\lambda_{j}}{b_{j}}=\min\left\{\tfrac{\lambda_{i}}{b_{i}}\mid i\in J,b_{i}>0\right\}.\] In this case, it is straightforward to see that if \(\lambda^{\prime}_{i}=\lambda_{i}-\tfrac{\lambda_{j}}{b_{j}}b_{i}\) for all \(i\in J\), \(\lambda^{\prime}_{i}=\lambda_{i}\) for \(i\in J(\lambda)\smallsetminus J\), and \(\lambda^{\prime}_{i}=0\) for \(i\not\in J(\lambda)\), then \(\lambda^{\prime}\in\mathbf{R}_{\geq 0}^{r}\) and we have \(J(\lambda^{\prime})\subseteq J(\lambda)\smallsetminus\{j\}\) and \[v=\sum_{i=1}^{r}\lambda^{\prime}_{i}v_{i}\quad\text{and}\quad\sum_{i=1}^{r} \lambda^{\prime}_{i}\alpha_{i}\leq\sum_{i=1}^{r}\lambda_{i}\alpha_{i}.\] This proves the claim.

Let \(\Lambda\) be the set of those \(J\subseteq\{1,\dots,r\}\) such that the \(v_{i}\) with \(i\in J\) are linearly independent. For every \(J\in\Lambda\), let \(\sigma_{J}\) be the convex cone in \(N_{\mathbf{R}}\) generated by the \(v_{i}\) with \(i\in J\). It is a consequence of Carathéodory's theorem that \(C=\bigcup_{J\in\Lambda}\sigma_{J}\). Consider a fan \(\Delta\) with support \(C\) such that every cone \(\sigma_{J}\), for \(J\in\Lambda\), is a union of cones in \(\Delta\). We now show that for every \(\alpha\in\mathbf{R}_{\geq 0}^{r}\) and every \(\tau\in\Delta\), the restriction \(\widetilde{\varphi}_{\alpha}|_{\tau}\) is a linear function. Note first that if \(J\in\Lambda\) and for some \(v\in\sigma_{J}\) we write \(v=\sum_{i\in J}\lambda_{i}v_{i}\), then each \(\lambda_{i}\) is given by a linear function of \(v\); therefore \(\sum_{i\in J}\lambda_{i}\alpha_{i}\) is given by a linear function \(\ell_{J}\) of \(v\). We next note that if \(w\) lies in the relative interior \(\operatorname{Relint}(\tau)\) of \(\tau\) and \(J\in\Lambda\), then \(w\in\sigma_{J}\) if and only if \(\tau\subseteq\sigma_{J}\). Indeed, by construction of \(\Delta\), we have \(\sigma_{1},\dots,\sigma_{d}\in\Delta\) such that \(\sigma_{J}=\bigcup_{j=1}^{d}\sigma_{j}\), hence \(\sigma_{J}\cap\tau=\bigcup_{j=1}^{d}(\sigma_{j}\cap\tau)\). Since \(\Delta\) is a fan, each \(\sigma_{j}\cap\tau\) is a face of \(\tau\), so the union contains a point in \(\operatorname{Relint}(\tau)\) if and only if \(\tau\subseteq\sigma_{j}\) for some \(j\), in which case \(\tau\subseteq\sigma_{J}\). Our claim thus implies that \[\widetilde{\varphi}_{\alpha}(v)=\min\left\{\ell_{J}(v)\mid\tau\subseteq\sigma_{J}\right\}\quad\text{for all}\quad v\in\operatorname{Relint}(\tau). \tag{4}\] It is well-known (and easy to see) that (4) implies that \(-\widetilde{\varphi}_{\alpha}\) is convex on \(\operatorname{Relint}(\tau)\). Since \(\widetilde{\varphi}_{\alpha}\) is a convex function on \(C\) (the easy argument for this was given in the proof of Lemma 2.1), it follows that we have a linear function \(\ell\) on \(N_{\mathbf{R}}\) such that \(\widetilde{\varphi}_{\alpha}=\ell\) on \(\operatorname{Relint}(\tau)\). Given any \(v\in\tau\), if \(v^{\prime}\in\operatorname{Relint}(\tau)\), then \(v+v^{\prime}\in\operatorname{Relint}(\tau)\), and the convexity of \(\widetilde{\varphi}_{\alpha}\) implies that \[\ell(v+v^{\prime})=\widetilde{\varphi}_{\alpha}(v+v^{\prime})\leq\widetilde{\varphi}_{\alpha}(v)+\widetilde{\varphi}_{\alpha}(v^{\prime})=\widetilde{\varphi}_{\alpha}(v)+\ell(v^{\prime}).\] Therefore we have \(\ell\leq\widetilde{\varphi}_{\alpha}\) on \(\tau\).
On the other hand, it is an immediate consequence of the definition of \(\widetilde{\varphi}_{\alpha}\) that if \((w_{m})_{m\geq 1}\) is a sequence of vectors in \(C\) with \(\lim_{m\to\infty}w_{m}=v\), then \(\widetilde{\varphi}_{\alpha}(v)\leq\liminf_{m\to\infty}\widetilde{\varphi}_{\alpha}(w_{m})\). By taking \(w_{m}\in\operatorname{Relint}(\tau)\), we see that \(\widetilde{\varphi}_{\alpha}\leq\ell\) on \(\tau\). We thus conclude that \(\widetilde{\varphi}_{\alpha}|_{\tau}\) is a linear function.

We can now give the proof of the result from [1].

Proof of Proposition 1.1.: Recall first that every fan admits a smooth refinement (which has the same support), see [10, Theorem 8.5]. Furthermore, it is clear that if \(\Delta\) is a smooth fan whose cones satisfy (1), then any smooth refinement of \(\Delta\) satisfies the same property. Let \[T=\{m\in S\mid\mathfrak{a}_{m}\neq 0\}\quad\text{and}\quad S_{+}=\{m\in S\mid\ell m\in T\text{ for some }\ell\in\mathbf{Z}_{>0}\}.\] Since \(R(\mathfrak{a}_{\bullet})\) is finitely generated, we can choose \(m_{1},\dots,m_{r}\in S\) such that over suitable subsets in a finite affine open cover of \(X\), \(R(\mathfrak{a}_{\bullet})\) is generated over \(\mathcal{O}_{X}\) by elements in degrees in \(\{m_{1},\dots,m_{r}\}\). We may and will assume that \(m_{i}\in T\) for all \(i\), so \(T\) is generated by \(m_{1},\dots,m_{r}\). Therefore the saturation \(S_{+}\) of \(T\) is finitely generated. Note that if \(\Delta_{0}\) satisfies the condition in the proposition for \((\mathfrak{a}_{m})_{m\in S_{+}}\), then we may take \(\Delta\) to be any smooth fan with support \(C\) with the property that every cone of \(\Delta_{0}\) is a union of cones of \(\Delta\). Indeed, the condition (1) holds trivially on the cones not contained in the support of \(\Delta_{0}\). We thus may and will assume that \(S=S_{+}\). Since \(R(\mathfrak{a}_{\bullet})\) is finitely generated, for every \(m\in S\), the \(\mathcal{O}_{X}\)-algebra \(\bigoplus_{\ell\geq 0}\mathfrak{a}_{\ell m}\) is finitely generated (see [1, Lemma 4.8]). In this case it follows from [1, Chap. III, Section 1, Proposition 3] that there is a positive integer \(d\) such that \(\mathfrak{a}_{d\ell m}=\mathfrak{a}_{dm}^{\ell}\) for all \(\ell\geq 1\). We denote the smallest such \(d\) by \(d_{m}\).

In what follows, it is convenient to use the formalism of asymptotic multiplicities, as in [1]. If \(v\) is a discrete valuation of the function field of \(X\), having center on \(X\), and if \(m\in S\), then we put \[v^{\mathfrak{a}\bullet}(m):=\inf_{\ell}\frac{v(\mathfrak{a}_{\ell m})}{\ell}=\lim_{\ell\to\infty}\frac{v(\mathfrak{a}_{\ell m})}{\ell},\] where both the infimum and the limit are over those \(\ell\) such that \(\mathfrak{a}_{\ell m}\neq 0\). Note that by definition of \(d_{m}\), we have \(v^{\mathfrak{a}\bullet}(m)=\frac{v(\mathfrak{a}_{\ell d_{m}m})}{\ell d_{m}}\) for every \(m\in S\) and every \(\ell\in\mathbf{Z}_{>0}\). Our choice of \(m_{1},\dots,m_{r}\) implies that for every \(m\in S\), we have \[\mathfrak{a}_{m}=\sum_{\ell_{1},\dots,\ell_{r}}\mathfrak{a}_{m_{1}}^{\ell_{1}}\cdots\mathfrak{a}_{m_{r}}^{\ell_{r}}, \tag{5}\] where the sum is over all \(\ell_{1},\dots,\ell_{r}\in\mathbf{Z}_{\geq 0}\) with \(m=\sum_{i=1}^{r}\ell_{i}m_{i}\). We now show that for every \(m\in S\), we have \[v^{\mathfrak{a}\bullet}(m)=\inf\left\{\sum_{i=1}^{r}\lambda_{i}\cdot v(\mathfrak{a}_{m_{i}})\mid\lambda_{1},\dots,\lambda_{r}\in\mathbf{Q}_{\geq 0},m=\sum_{i=1}^{r}\lambda_{i}m_{i}\right\}.
\tag{6}\] In order to prove "\(\leq\)", note that given \(\lambda_{1},\dots,\lambda_{r}\in\mathbf{Q}_{\geq 0}\) with \(m=\sum_{i=1}^{r}\lambda_{i}m_{i}\), we may choose \(\ell\in\mathbf{Z}_{>0}\) such that \(\ell\lambda_{i}\in\mathbf{Z}\) for all \(i\). In this case the inclusion \(\prod_{i}\mathfrak{a}_{m_{i}}^{\ell\lambda_{i}}\subseteq\mathfrak{a}_{\ell m}\) implies \[v(\mathfrak{a}_{\ell m})\leq\sum_{i=1}^{r}\ell\lambda_{i}\cdot v(\mathfrak{a}_{m_{i}})\] and thus \[v^{\mathfrak{a}\bullet}(m)\leq\frac{v(\mathfrak{a}_{\ell m})}{\ell}\leq\sum_{i=1}^{r}\lambda_{i}\cdot v(\mathfrak{a}_{m_{i}}).\] This gives the inequality "\(\leq\)" in (6). In order to prove the opposite inequality, note that if \(\ell\in\mathbf{Z}_{>0}\) is such that \(\mathfrak{a}_{\ell m}\neq 0\), then it follows from (5) that there are \(\ell_{1},\dots,\ell_{r}\in\mathbf{Z}_{\geq 0}\) such that \(\sum_{i}\ell_{i}m_{i}=\ell m\) and \[v(\mathfrak{a}_{\ell m})\geq\sum_{i=1}^{r}\ell_{i}\cdot v(\mathfrak{a}_{m_{i}}).\] Dividing by \(\ell\) and then letting \(\ell\) vary, we obtain the inequality "\(\geq\)" in (6).

It follows from (6) that we may apply Lemma 2.1 to obtain a fan \(\Delta\) (that we may assume to be smooth), with support \(C\), such that for every valuation \(v\) as above, we have \(v^{\mathfrak{a}_{\bullet}}(m+m^{\prime})=v^{\mathfrak{a}_{\bullet}}(m)+v^{\mathfrak{a}_{\bullet}}(m^{\prime})\) whenever \(m,m^{\prime}\in S\) lie in the same cone of \(\Delta\). Let \(d\) be the least common multiple of the \(d_{w}\), when \(w\) runs over the primitive ray generators of \(\Delta\). In this case, if \(\sigma\) is a cone in \(\Delta\) with primitive ray generators \(e_{1},\ldots,e_{s}\), then for every \(p_{1},\ldots,p_{s}\in\mathbf{Z}_{\geq 0}\), if \(m=\sum_{i=1}^{s}p_{i}e_{i}\), then \[v(\mathfrak{a}_{dm})\leq\sum_{i=1}^{s}p_{i}\cdot v(\mathfrak{a}_{de_{i}})=\sum_{i=1}^{s}p_{i}\cdot v^{\mathfrak{a}_{\bullet}}(de_{i})=v^{\mathfrak{a}_{\bullet}}(dm)\leq v(\mathfrak{a}_{dm}), \tag{7}\] where the first inequality follows from the inclusion \(\prod_{i}\mathfrak{a}_{de_{i}}^{p_{i}}\subseteq\mathfrak{a}_{dm}\). Therefore all inequalities in (7) are equalities. Since \[v(\mathfrak{a}_{dm})=v\big{(}\mathfrak{a}_{de_{1}}^{p_{1}}\cdots\mathfrak{a}_{de_{s}}^{p_{s}}\big{)}\] for every discrete valuation \(v\) of the function field of \(X\) that has center on \(X\), it follows from [13, Proposition 6.8.2] that \[\overline{\mathfrak{a}_{dm}}=\overline{\mathfrak{a}_{de_{1}}^{p_{1}}\cdots\mathfrak{a}_{de_{s}}^{p_{s}}}.\] This completes the proof.

### Acknowledgments

We would like to thank Vlad Lazić for motivating us to write this erratum and for several comments on preliminary versions. We are also indebted to Sasha Barvinok for his comments in connection with Lemma 2.1, especially for the suggestion to use the Duality Theorem in Linear Programming in the proof.
2309.12544
Stability and Statistical Inversion of Travel time Tomography
In this paper, we consider the travel time tomography problem for conformal metrics on a bounded domain, which seeks to determine the conformal factor of the metric from the lengths of geodesics joining boundary points. We establish forward and inverse stability estimates for simple conformal metrics under some a priori conditions. We then apply the stability estimates to show the consistency of a Bayesian statistical inversion technique for travel time tomography with discrete, noisy measurements.
Ashwin Tarikere, Hanming Zhou
2023-09-22T00:03:54Z
http://arxiv.org/abs/2309.12544v3
# Stability and statistical inversion of travel time tomography

###### Abstract.

In this paper, we consider the travel time tomography problem for conformal metrics on a bounded domain, which seeks to determine the conformal factor of the metric from the lengths of geodesics joining boundary points. We establish forward and inverse stability estimates for simple conformal metrics under some a priori conditions. We then apply the stability estimates to show the consistency of a Bayesian statistical inversion technique for travel time tomography with discrete, noisy measurements.

## 1. Introduction

Consider a smooth, bounded, and simply connected domain \(\Omega\subseteq\mathbb{R}^{m}\), with \(m\geq 2\). Given a Riemannian metric \(g\) on \(\overline{\Omega}\), we define the associated _boundary distance function_ \(\Gamma_{g}:\partial\Omega\times\partial\Omega\to[0,\infty)\) by \[\Gamma_{g}(\xi,\eta)=\inf\left\{\int_{\gamma}d|g|:=\int_{0}^{T}|\dot{\gamma}(t)|_{g}\,dt\ :\ \gamma\in C^{1}([0,T],\overline{\Omega}),\ \gamma(0)=\xi,\ \gamma(T)=\eta\right\}\] for all \(\xi,\eta\in\partial\Omega\). In other words, \(\Gamma_{g}(\xi,\eta)\) is the Riemannian distance (with respect to \(g\)) between the boundary points \(\xi\) and \(\eta\). We consider the following inverse problem: _Can we recover the metric \(g\) in the interior of the domain from the boundary distance function \(\Gamma_{g}\)?_

This inverse problem, called the _boundary rigidity problem_ in mathematics literature, arose in geophysics in an attempt to determine the inner structure of the earth, such as the sound speed or index of refraction, from measurements of travel times of seismic waves on the earth's surface. This is called the _inverse kinematic problem_ or the _travel time tomography problem_ in seismology [16, 44].

The boundary rigidity problem is not solvable in general. Consider, for example, a unit disk with a metric whose magnitude is large (and therefore, geodesic speed is low) near the center of the disk. In such cases, it is possible that all distance minimizing geodesics connecting boundary points avoid the large metric region, and therefore one cannot expect to recover the metric in this region from the boundary distance function. In view of this restriction, one needs to impose additional geometric conditions on the metric to be reconstructed. One such condition is _simplicity_. A metric \(g\) on \(\overline{\Omega}\) is said to be simple if the boundary \(\partial\Omega\) is strictly convex w.r.t. \(g\) and any two points on \(\overline{\Omega}\) can be joined by a unique distance minimizing geodesic. Michel conjectured that simple metrics are boundary distance rigid [20], and this has been proved in dimension two [33]. In dimensions \(\geq 3\), this is known for generic simple metrics [35]. When caustics appear, a completely new approach was established in [36, 37] for the boundary rigidity problem in dimensions \(\geq 3\), assuming a convex foliation condition. Boundary rigidity problems for more general dynamical systems can be found in [10, 2, 47, 31, 17, 45, 34]. We also refer to [9, 38] for summaries of recent developments on the boundary rigidity problem.

The boundary rigidity problem for general Riemannian metrics has a natural gauge: isometries of \((\overline{\Omega},g)\) that preserve \(\partial\Omega\) will also preserve the boundary distance function. In this paper, we restrict our attention to the problem of determining metrics from a fixed conformal class.
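Before turning to the precise setup, it may help to see the forward map \(n\mapsto\Gamma_{n}\) concretely. The following rough sketch (not from the paper) approximates boundary distances for \(g_{n}=n^{2}\bar{g}\), with \(\bar{g}\) Euclidean on the unit square, by shortest paths on a grid graph whose edge weights are \(n\) at the edge midpoint times the Euclidean edge length. This is only a crude first-order scheme; a fast-marching eikonal solver would be more accurate.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

# Discretize the unit square; a conformal factor n > 1 near the center
# makes paths through the middle longer (slower wave speed 1/n).
k = 60
xs = np.linspace(0.0, 1.0, k)
n = lambda x, y: 1.0 + 0.5 * np.exp(-20 * ((x - 0.5) ** 2 + (y - 0.5) ** 2))

idx = lambda i, j: i * k + j
W = lil_matrix((k * k, k * k))
h = xs[1] - xs[0]
for i in range(k):
    for j in range(k):
        for di, dj in [(1, 0), (0, 1), (1, 1), (1, -1)]:
            a, b = i + di, j + dj
            if 0 <= a < k and 0 <= b < k:
                length = h * np.hypot(di, dj)  # Euclidean edge length
                mx, my = (xs[i] + xs[a]) / 2, (xs[j] + xs[b]) / 2
                W[idx(i, j), idx(a, b)] = n(mx, my) * length

# Approximate Gamma_n between two boundary points via graph geodesics.
D = dijkstra(W.tocsr(), directed=False, indices=idx(0, k // 2))
print(D[idx(k - 1, k // 2)])  # distance across the slow central region
```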
Let \(\bar{g}\) be a fixed "background" metric on \(\overline{\Omega}\) which is simple and has \(C^{3}\) regularity. For any positive function \(n\in C^{3}(\overline{\Omega})\), define \[g_{n}:=n^{2}\bar{g},\] which is a new Riemannian metric on \(\overline{\Omega}\) that is conformal to \(\bar{g}\). Our goal is to recover the parameter \(n\) from the boundary distance function of \(g_{n}\). In this problem, the gauge of isometries does not appear, and one expects to be able to uniquely determine the conformal factor \(n\) from \(\Gamma_{g_{n}}\). It is known that simple metrics from the same conformal class are boundary rigid for all \(m\geq 2\) [25, 24, 27]. To be precise, if \(n_{1},n_{2}\in C^{3}(\overline{\Omega})\) are such that \(g_{n_{1}},g_{n_{2}}\) are both simple metrics on \(\overline{\Omega}\), then \(\Gamma_{g_{n_{1}}}=\Gamma_{g_{n_{2}}}\) if and only if \(n_{1}=n_{2}\). To simplify notation, we will henceforth denote \(\Gamma_{g_{n}}\) by simply \(\Gamma_{n}\).

### Stability estimates for the deterministic inverse problem

The uniqueness aspect of the boundary rigidity problem for conformal simple metrics has been quite well understood through the aforementioned studies. The first topic of this paper is the _stability_ of the boundary rigidity problem, i.e., quantitative lower bounds on the change in \(\Gamma_{n}\) corresponding to a change in the parameter \(n\). Stability is important in practice, as we hope the inversion method for travel time tomography will be stable under perturbations of the data, e.g., by noise. Conditional stability estimates for simple metrics can be found in [43, 35, 36], where the metrics are assumed _a priori_ to be close to a given one. When considering a fixed conformal class, various stability estimates without the closeness assumption have been established in [24, 26, 3]. In [24] the following stability result has been proved for the 2D boundary rigidity problem with the Euclidean background metric: \[\|n_{1}-n_{2}\|_{L^{2}(\Omega)}\leq\frac{1}{\sqrt{2\pi}}\|d_{\xi}(\Gamma_{n_{1}}-\Gamma_{n_{2}})(\xi,\eta)\|_{L^{2}(\partial\Omega\times\partial\Omega)}. \tag{1}\] Here, \(d_{\xi}\) is the exterior derivative operator with respect to \(\xi\) and the \(L^{2}\) norms are taken with respect to the standard Euclidean metric. Notice that since the boundary distance function is symmetric, this estimate essentially says that the \(L^{2}\)-norm of \(n_{1}-n_{2}\) can be controlled by the \(H^{1}\)-norm of \(\Gamma_{n_{1}}-\Gamma_{n_{2}}\). For dimensions \(\geq 3\), there are generalizations [3, 26] of (1) with more complicated expressions (see also Theorem 2.1). However, the estimates of [3, 26] are not in standard Sobolev or Hölder norms, which makes them inconvenient for applications.

In this paper, we establish stability estimates similar to (1) for all dimensions \(\geq 2\), without any _a priori_ closeness assumptions on \(n_{1},n_{2}\). Before giving the statement of our results, we need to define some function spaces for the conformal parameter \(n\).

**Definition 1.1**.: Let \(\Omega_{0}\) be a smooth, relatively compact subdomain of \(\Omega\), and let \(\lambda,\Lambda,\ell,L\) be real numbers such that \[0<\lambda<1<\Lambda,\qquad 0<\ell<L.\] We define \(\mathcal{N}_{\lambda,\Lambda,\ell,L}(\Omega_{0})\) to be the set of all functions \(n\in C^{3}(\overline{\Omega})\) that satisfy the following conditions: 1. The metric \(g_{n}=n^{2}\bar{g}\) is a simple metric on \(\overline{\Omega}\). 2.
\(\lambda<n(x)<\Lambda\) for all \(x\in\overline{\Omega}\) and \(n\equiv 1\) on \(\overline{\Omega}\setminus\Omega_{0}\). 3. Let \(\exp_{n}(x,v)\) denote the exponential map with respect to \(g_{n}\) based at \(x\in\overline{\Omega}\) and acting on \(v\in T_{x}\overline{\Omega}\). Then the derivative of \(\exp_{n}(x,\cdot)\) satisfies \[\ell|w|_{\bar{g}}<|D_{v}\exp_{n}(x,v)(w)|_{\bar{g}}<L|w|_{\bar{g}} \tag{2}\] for all \(x\in\overline{\Omega}\), \(v\in\operatorname{dom}(\exp_{n}(x,\cdot))\), and \(w\in T_{v}T_{x}\overline{\Omega}\cong T_{x}\overline{\Omega}\).

We also let \[\mathcal{N}_{\lambda,\ell}(\Omega_{0}):=\bigcup_{\Lambda>1,\,L>0}\mathcal{N}_{\lambda,\Lambda,\ell,L}(\Omega_{0}).\] The class of metrics associated with these function spaces includes any metric with non-positive sectional curvature that is conformal to \(\bar{g}\) and equal to \(\bar{g}\) in a neighborhood of \(\partial\Omega\). Indeed, suppose \(g_{n}=n^{2}\bar{g}\) is such a metric. Then \((\overline{\Omega},g_{n})\) is free of conjugate points by the curvature assumption, and \(\partial\Omega\) remains strictly convex with respect to \(g_{n}\) since \(g_{n}\equiv\bar{g}\) near \(\partial\Omega\). Therefore, \(g_{n}\) is a simple metric. Moreover, it follows from the Rauch Comparison Theorem that its exponential map \(\exp_{n}\) satisfies (2) for sufficiently large \(L\) and any \(\ell<1\) (see, e.g., [6, Corollary 1.35]).

_Remark 1.1_ (Notation).: Let \(T:W_{1}\to W_{2}\) be a linear map between normed vector spaces. Given real numbers \(m,M\), we will use the notation \[m\prec T\prec M\] as shorthand for \[m\|w\|_{W_{1}}<\|Tw\|_{W_{2}}<M\|w\|_{W_{1}}.\] Using this notation, (2) can be rewritten as \[\ell\prec D_{v}\exp_{n}(x,v)\prec L. \tag{3}\] We will also use \(\|T\|_{op}\) to denote the _operator norm_ of \(T\): \[\|T\|_{op}:=\sup\left\{\|Tw\|_{W_{2}}\ :\ w\in W_{1},\ \|w\|_{W_{1}}=1\right\}.\]

_Remark 1.2_.: Let \(\delta>0\) be the distance (w.r.t. \(\bar{g}\)) between \(\partial\Omega\) and \(\overline{\Omega}_{0}\), and let \(\xi,\eta\in\partial\Omega\) be any pair of boundary points such that \(\operatorname{dist}_{\bar{g}}(\xi,\eta)<\delta\). For any \(n\in\mathcal{N}_{\lambda,\ell}(\Omega_{0})\), \(g_{n}\) coincides with \(\bar{g}\) on \(\overline{\Omega}\setminus\Omega_{0}\), and consequently, we have \(\Gamma_{n}(\xi,\eta)=\operatorname{dist}_{\bar{g}}(\xi,\eta)\). In particular, \(\Gamma_{n_{1}}(\xi,\eta)=\Gamma_{n_{2}}(\xi,\eta)\) for all \(n_{1},n_{2}\in\mathcal{N}_{\lambda,\ell}(\Omega_{0})\).

We are now ready to state our result on stability estimates for the boundary rigidity problem.

**Theorem 1.2**.: _Let \(\Omega,\Omega_{0},\bar{g}\) be as before, and let \(\lambda,\ell\) be real numbers such that_ \[0<\lambda<1,\qquad 0<\ell.\] _Then there exists a constant \(C_{1}(\Omega,\Omega_{0},\bar{g},\ell)>0\) such that for all \(n_{1},n_{2}\in\mathcal{N}_{\lambda,\ell}(\Omega_{0})\),_ \[\|n_{1}-n_{2}\|_{L^{2}(\Omega)}\leq C_{1}\lambda^{2-m}\|d_{\xi}(\Gamma_{n_{1}}-\Gamma_{n_{2}})(\xi,\eta)\|_{L^{2}(\partial\Omega\times\partial\Omega)}.\] Here, the \(L^{2}\) norms are taken with respect to the background metric \(\bar{g}\), and \(d_{\xi}\) represents the exterior derivative operator with respect to \(\xi\). We will apply the above stability estimate to study a statistical inversion technique for travel time tomography. For this purpose, we also need the following continuity (or "forward stability") estimate of \(\Gamma_{n}\).
To the best of our knowledge, no such continuity estimate has been published before.

**Theorem 1.3**.: _Let \(\Omega,\Omega_{0},\bar{g}\) be as before, and let \(\lambda,\Lambda,\ell,L\) be real numbers such that_ \[0<\lambda<1<\Lambda,\qquad 0<\ell<L.\] _Then there exists a constant \(C_{2}(\Omega,\Omega_{0},\bar{g},\ell,L)>0\) such that for all \(n_{1},n_{2}\in\mathcal{N}_{\lambda,\Lambda,\ell,L}(\Omega_{0})\),_ \[\|\Gamma_{n_{1}}-\Gamma_{n_{2}}\|_{L^{2}(\partial\Omega\times\partial\Omega)}\leq C_{2}\frac{\Lambda^{m/2}}{\lambda}\|n_{1}-n_{2}\|_{L^{2}(\Omega)}.\]

### The statistical inverse problem

The boundary rigidity problem is nonlinear, and geodesics are curved in general, so it is hard to derive explicit inversion formulas. Some reconstruction algorithms and numerical implementations based on theoretical analyses can be found in [7, 8, 46]. Typically, inversion methods in travel time tomography take an optimization approach with appropriate regularization. This is a deterministic approach which seeks to minimize some mismatch functional that quantifies the difference between the observations and the forecasts (synthetic data). However, this approach generally does not work well for non-convex problems. Moreover, various approximations in numerical methods can introduce systematic (random) error to the reconstruction procedure.

In this paper, we apply the above stability estimates (Theorems 1.2 and 1.3) to study a Bayesian inversion technique for the travel time tomography problem. The Bayesian inversion technique provides a reasonable solution for ill-posed inverse problems when the number of available observations is limited, which is a common scenario in practice. Applications of Bayesian inversion to seismology can be found in [19, 40], which are based on the general paradigm of infinite dimensional Bayesian inverse problems developed by Stuart [39]. However, most studies in the literature are concerned with waveform inversion, which is more PDE-based. On the other hand, there are very few results on statistical guarantees for the Bayesian approach to seismic inverse problems. These motivate us to apply Stuart's Bayesian inversion framework to produce a rigorous statistical analysis of the problem of recovering the wave speed from the (noisy) travel time measurements.

For statistical inversion, it is convenient to rewrite the conformal factor \(n\) using an exponential parameter: For any \(\beta\geq 3\), let \(C_{0}^{\beta}(\Omega_{0})\) denote the closure in the Hölder space \(C^{\lfloor\beta\rfloor,\beta-\lfloor\beta\rfloor}(\overline{\Omega}_{0})\) of the subspace of all smooth functions compactly supported in \(\Omega_{0}\). Given any function \(c\in C_{0}^{3}(\Omega_{0})\), we define the corresponding conformal factor \(n_{c}\) by \[n_{c}(x)=\begin{cases}e^{c(x)}&\text{if }x\in\Omega_{0},\\ 1&\text{if }x\in\overline{\Omega}\setminus\Omega_{0}.\end{cases} \tag{4}\] It is easy to see that \(n_{c}\) is a positive \(C^{3}\) function on \(\overline{\Omega}\). To simplify notation, we will denote the corresponding boundary distance function \(\Gamma_{n_{c}}\) by simply \(\Gamma_{c}\). Our goal is to reconstruct the exponential parameter \(c\) from error-prone measurements of \(\Gamma_{c}\) on finitely many pairs of boundary points \((X_{i},Y_{i})\), \(i=1,\ldots,N\). Following the general paradigm of Bayesian inverse problems, we assume that \(c\) arises from a prior probability distribution \(\Pi\) on \(C_{0}^{3}(\Omega_{0})\). We will construct \(\Pi\) so that it is supported in a subset of \(C_{0}^{3}(\Omega_{0})\) of the following form:
We will construct \(\Pi\) so that it is supported in a subset of \(C_{0}^{3}(\Omega_{0})\) of the following form: **Definition 1.4**.: Let \(\ell,M>0\) and \(\beta\geq 3\). We define \(\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})\) as the set of all functions \(c\in C_{0}^{\beta}(\Omega_{0})\) that satisfy the following conditions: 1. The metric \(g_{n_{c}}=n_{c}^{2}\bar{g}\) is a simple metric on \(\overline{\Omega}\). 2. The derivative of \(\exp_{n_{c}}(x,\cdot)\) satisfies \[D_{w}\exp_{n_{c}}(x,w)\succ\ell\] for all \(x\in\overline{\Omega}\) and \(w\in\operatorname{dom}(\exp_{n_{c}}(x,\cdot))\). 3. \(\left\|c\right\|_{C^{\lfloor\beta\rfloor,\beta-\lfloor\beta\rfloor}(\overline {\Omega}_{0})}<M\). We will show in Section 2 that if \(c\in\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})\), the corresponding conformal parameter \(n_{c}\in\mathcal{N}_{\lambda,\Lambda,\ell,L}(\Omega_{0})\) for appropriate choices of \(\lambda,\Lambda\) and \(L\). The precise construction of \(\Pi\) is described in Section 3. _Remark 1.3_ (Notation).: Henceforth, we will denote \(C^{\lfloor\beta\rfloor,\beta-\lfloor\beta\rfloor}\) by simply \(C^{\beta}\). _Remark 1.4_.: It is known that small perturbations of simple metrics are again simple. Therefore, \(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\) is an open subset of \(C^{\beta}_{0}(\Omega_{0})\). The pairs of boundary points \((X_{i},Y_{i})\) between which the distance measurements are to be made are chosen according to the rule \[(X_{i},Y_{i})\stackrel{{\text{i.i.d.}}}{{\sim}}\mu,\] where \(\mu\) is the uniform probability measure on \(\partial\Omega\times\partial\Omega\) induced by the background metric \(\bar{g}\). The actual distance measurements between these points are assumed to be of the form \[\Gamma_{i}=e^{\epsilon_{i}}\Gamma_{c}(X_{i},Y_{i}),\] where \(\epsilon_{i}\) are i.i.d. \(N(0,\sigma^{2})\) normal random variables (\(\sigma>0\) is fixed) that are also independent of \((X_{j},Y_{j})_{j=1}^{N}\). We will assume for simplicity that \(\sigma=1\). Define \[Z_{c}=\log\Gamma_{c},\] and for \(i=1,\ldots,N\), \[Z_{i} =\log\Gamma_{i}\] \[=Z_{c}(X_{i},Y_{i})+\epsilon_{i}.\] All of our measurements can be summarized using the data vector \[\mathcal{D}_{N}=(X_{i},Y_{i},Z_{i})_{i=1}^{N}\in(\partial\Omega\times \partial\Omega\times\mathbb{R})^{N}. \tag{5}\] For convenience, let us define \(\mathcal{X}=\partial\Omega\times\partial\Omega\times\mathbb{R}\). Next, let \(P_{c}^{N}\) denote the probability law of \(\mathcal{D}_{N}|c\). It is easy to see that \(P_{c}^{N}=\times_{i=1}^{N}P_{c}^{(i)}\), where for each \(i\in\{1,\ldots,N\}\), \(P_{c}^{(i)}\) is equal to the probability law of \((X_{i},Y_{i},Z_{i})\). More explicitly, for each \(i\in\{1,\ldots,N\}\), \[dP_{c}^{(i)}(x,y,z)=p_{c}d\mu(x,y)dz,\] where \[p_{c}(x,y,z)=\frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{1}{2}\left(z-Z_{c}(x,y) \right)^{2}\right\}.\] We denote the posterior distribution of \(c|\mathcal{D}_{N}\) by \(\Pi(\cdot|\mathcal{D}_{N})\). By Corollary 2.7, the map \((c,(x,y,z))\mapsto p_{c}(x,y,z)\) is jointly Borel-measurable from \(C^{3}_{0}(\Omega_{0})\times\mathcal{X}\) to \(\mathbb{R}\). So it follows from standard arguments (see [14, p. 7] ) that the posterior distribution is well-defined and takes the form \[\Pi(A|\mathcal{D}_{N})=\frac{\int_{A}\prod_{i=1}^{N}p_{c}(X_{i},Y_{i},Z_{i})d \Pi(c)}{\int\prod_{i=1}^{N}p_{c}(X_{i},Y_{i},Z_{i})d\Pi(c)}\] for any Borel set \(A\subseteq C^{3}_{0}(\Omega_{0})\). 
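Since the posterior above is defined by a ratio of integrals against \(\Pi\), it may help to see the formula in action in a toy setting. The sketch below uses a hypothetical scalar surrogate for \(Z_{c}\) and a discrete prior; it is a minimal illustration of the likelihood \(\prod_{i}p_{c}(X_{i},Y_{i},Z_{i})\) and the resulting posterior weights, not an implementation of the travel time model. The weighted average in the last line anticipates the posterior-mean estimator introduced next.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate: pretend Z_c(x, y) = log(c * dist(x, y)) for a scalar
# parameter c, with boundary points parametrized by angles on the unit circle.
# This is only a stand-in for the true log travel time.
def Z(c, theta_x, theta_y):
    chord = 2.0 * np.abs(np.sin((theta_x - theta_y) / 2.0))  # Euclidean boundary distance
    return np.log(c * chord + 1e-12)

c_true, N = 1.3, 200
thx, thy = rng.uniform(0, 2*np.pi, N), rng.uniform(0, 2*np.pi, N)
Zi = Z(c_true, thx, thy) + rng.normal(size=N)      # Z_i = Z_c(X_i, Y_i) + eps_i

candidates = np.linspace(0.8, 1.8, 11)             # discrete prior on c
loglik = np.array([-0.5 * np.sum((Zi - Z(c, thx, thy))**2) for c in candidates])
w = np.exp(loglik - loglik.max()); w /= w.sum()    # posterior weights Pi(c | D_N)
print(dict(zip(np.round(candidates, 2), np.round(w, 3))))
print("posterior mean:", np.sum(w * candidates))   # the estimator used below
```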
Our posterior estimator for \(c\) will be the posterior mean \[\overline{c}_{N}=\mathbb{E}^{\Pi}[c|\mathcal{D}_{N}]. \tag{6}\]

**Theorem 1.5**.: _Suppose that the true parameter \(c_{0}\) is smooth and compactly supported in \(\Omega_{0}\), and is such that \(g_{n_{c_{0}}}\) is a simple metric on \(\overline{\Omega}\). Then there is a well-defined prior distribution \(\Pi\) on \(C^{3}_{0}(\Omega_{0})\) such that the posterior mean \(\overline{c}_{N}\) satisfies_ \[\|\overline{c}_{N}-c_{0}\|_{L^{2}(\Omega)}\to 0\] _in \(P_{c_{0}}^{N}\)-probability, as \(N\to\infty\)._

A more precise version of this result is stated in Theorem 3.1 in Section 3, which in fact requires significantly weaker regularity assumptions on \(c_{0}\). It also specifies an explicit \(N^{-\omega}\) rate of convergence, where \(\omega\) is a positive constant that can be made arbitrarily close to \(1/4\). To prove Theorem 1.5, we apply the analytic techniques developed in recent consistency studies of statistical inversion of the geodesic X-ray transform [21] and related non-linear problems arising in polarimetric neutron tomography [22, 23]. The forward and inverse stability estimates for the measurement operators (like the ones in Theorems 1.2 and 1.3) play a key role in the arguments of these references. The analysis of theoretical guarantees for statistical inverse problems is currently a very active topic. Recent progress on various linear and non-linear inverse problems includes [11, 12, 1, 21, 28, 22, 23, 30, 5, 4]. See also the recent lecture notes [29]. The paper is structured as follows. In Section 2, we establish the forward and inverse stability estimates for the boundary distance function. Section 3 is devoted to proving the statistical consistency of Bayesian inversion for the boundary rigidity problem.

**Acknowledgement:** HZ is partly supported by the NSF grant DMS-2109116.

## 2. Forward and Inverse continuity estimates

In order to prove the statistical consistency of the proposed Bayesian estimator, we need to establish quantitative upper and lower bounds on the magnitude of change in the boundary distance function \(\Gamma_{n}\) corresponding to a change in the conformal parameter \(n\) of the metric. This is the content of Theorems 1.2 and 1.3, which we will prove in this section. We will also use these estimates to establish similar bounds for the map \(c\mapsto Z_{c}=\log\Gamma_{c}\), when \(c\) belongs to the parameter space \(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\) defined in Definition 1.4.

### Stability estimates

We begin with the proof of Theorem 1.2. As we noted in the introduction, such an estimate has already been proved for dimension \(m=2\) by Mukhometov in [24]. For general \(m\geq 2\), we have the following result by Beylkin [3]. Also see [26, Lemma 4].

**Theorem 2.1** ([3]).: _Let \(n_{1},n_{2}\in C^{3}(\overline{\Omega})\) be such that \(g_{n_{1}},g_{n_{2}}\) are simple metrics on \(\overline{\Omega}\).
Then_ \[\begin{split}\int_{\Omega}(n_{1}-n_{2})&(n_{1}^{m-1 }-n_{2}^{m-1})\,\text{dVol}_{\bar{g}}\\ &\leq C_{m}\int_{\partial\Omega_{\xi}\times\partial\Omega_{ \eta}}\sum_{a+b=m-2}d_{\xi}(\Gamma_{n_{1}}-\Gamma_{n_{2}})\wedge d_{\eta}( \Gamma_{n_{1}}-\Gamma_{n_{2}})\wedge(d_{\xi}d_{\eta}\Gamma_{n_{1}})^{a}\wedge (d_{\xi}d_{\eta}\Gamma_{n_{2}})^{b}\,,\end{split} \tag{7}\] _where \(\text{dVol}_{\bar{g}}\) is the Riemannian volume form induced by \(\bar{g}\), and \(d_{\xi}\) and \(d_{\eta}\) represent the exterior derivative operators on \(\partial\Omega\) with respect to \(\xi\) and \(\eta\) respectively. Given local coordinates \((\xi^{1},\ldots,\xi^{m-1})\) for \(\xi\) and \((\eta^{1},\ldots,\eta^{m-1})\) for \(\eta\), we have \(d_{\xi}=d\xi^{i}\frac{\partial}{\partial\xi^{i}}\), \(d_{\eta}=d\eta^{j}\frac{\partial}{\partial\eta^{j}}\), and \(d_{\xi}d_{\eta}=d\xi^{i}\wedge d\eta^{j}\frac{\partial^{2}}{\partial\xi^{i} \partial\eta^{j}}\). The constant_ \[C_{m}=\frac{(-1)^{\frac{(m-1)(m-2)}{2}}\Gamma(m/2)}{2\pi^{m/2}(m-1)!}\] _depends only on the dimension \(m\)._ We will show that when \(n_{1},n_{2}\in\mathcal{N}_{\lambda,\ell}(\Omega_{0})\), the inequality (7) leads to the desired stability estimate. **Lemma 2.2**.: _Let \(n\in\mathcal{N}_{\lambda,\ell}(\Omega_{0})\). Then the corresponding boundary distance function \(\Gamma_{n}\) satisfies_ \[|d_{\xi}\Gamma_{n}(\xi,\eta)|_{\bar{g}}\leq 1,\quad|d_{\eta}\Gamma_{n}(\xi,\eta)|_{ \bar{g}}\leq 1,\] _and_ \[|\nabla^{\xi}\nabla^{\eta}\Gamma_{n}(\xi,\eta)|_{\bar{g}}\leq\frac{(1+\ell^{-1 })}{\lambda}\operatorname{dist}_{\bar{g}}(\xi,\eta)^{-1}\] _for all \(\xi,\eta\in\partial\Omega\) with \(\xi\neq\eta\). Here, \(\nabla^{\xi},\nabla^{\eta}\) denote the covariant derivative operators with respect to \(\xi\) and \(\eta\) respectively, and \(\operatorname{dist}_{\bar{g}}(\xi,\eta)\) is the distance from \(\xi\) to \(\eta\) with respect to the metric \(\bar{g}\)._ Proof.: Given \(\xi,\eta\in\partial\Omega\) with \(\xi\neq\eta\), let \(v(\xi,\eta)\) denote the unit vector (with respect to \(g_{n}\)) at \(\eta\) tangent to the geodesic from \(\xi\) to \(\eta\). It follows from the First Variation Formula that the gradient (with respect to \(g_{n}\)) of \(\Gamma_{n}(\xi,\cdot)\) is given by \[\operatorname{grad}_{\eta}\Gamma_{n}(\xi,\eta)=\Pi_{\eta}v(\xi,\eta), \tag{8}\] where \(\Pi_{\eta}:T_{\eta}\overline{\Omega}\to T_{\eta}\partial\Omega\) is the orthogonal projection map onto the tangent space of the boundary. Since \(g_{n}=\bar{g}\) on \(\partial\Omega\), it follows immediately that \[|d_{\eta}\Gamma_{n}(\xi,\eta)|_{\bar{g}}=|\operatorname{grad}_{\eta}\Gamma_{n }(\xi,\eta)|_{g_{n}}=|\Pi_{\eta}v(\xi,\eta)|_{g_{n}}\leq|v(\xi,\eta)|_{g_{n}}=1.\] Similar arguments show that \(|d_{\xi}\Gamma_{n}(\xi,\eta)|_{\bar{g}}\leq 1\) as well. Next, let \((\xi^{1},\ldots,\xi^{m-1})\) and \((\eta^{1},\ldots,\eta^{m-1})\) be local coordinates for \(\partial\Omega\) around \(\xi\) and \(\eta\) respectively. We can extend these coordinate charts to boundary normal coordinates \((\xi^{1},\ldots,\xi^{m})\) and \((\eta^{1},\ldots,\eta^{m})\) by taking \(\xi^{m}\) and \(\eta^{m}\) to be the corresponding distance functions from the boundary. With respect to these coordinates, we may rewrite (8) as \[\operatorname{grad}_{\eta}\Gamma_{n}(\xi,\eta)=\sum_{j=1}^{m-1}v^{j}(\xi,\eta )\frac{\partial}{\partial\eta^{j}}. \tag{9}\] We can extend both sides of this equality to \((1,0)\)-tensor fields on \(\partial\Omega_{\xi}\times\partial\Omega_{\eta}\), while maintaining the equality. 
Taking covariant derivatives of both sides with respect to \(\xi\), we get \[\nabla^{\xi}\operatorname{grad}_{\eta}\Gamma_{n}(\xi,\eta)=\sum_{i,j=1}^{m-1} \frac{\partial v^{j}}{\partial\xi^{i}}(\xi,\eta)\frac{\partial}{\partial\eta ^{j}}\otimes d\xi^{i}. \tag{10}\] Here, we have used the fact that the product connection on \(\partial\Omega_{\xi}\times\partial\Omega_{\eta}\) satisfies \(\nabla_{\partial_{\xi_{i}}}\partial_{\eta_{j}}=0\) for all \(i,j\). Recall that \(g_{n}\) is a simple metric, and its exponential map \(\exp_{n}(x,\cdot)\) at any \(x\in\overline{\Omega}\) is a diffeomorphism onto \(\overline{\Omega}\). Let \(w(x,\cdot):\overline{\Omega}\to T_{x}\overline{\Omega}\) denote its inverse map. Since \(D_{v}\exp_{n}(x,v)\succ\ell\) for all \(v\) in the domain of \(\exp_{n}(x,\cdot)\), we have \[\|D_{y}w(x,y)\|_{op}<\ell^{-1}\qquad\text{for all }y\in\overline{\Omega}. \tag{11}\] Now observe that we have the identity \[v(\xi,\eta)=-\frac{w(\eta,\xi)}{\Gamma_{n}(\xi,\eta)}.\] So by (9) and (10), \[\nabla^{\xi}\operatorname{grad}_{\eta}\Gamma_{n}(\xi,\eta) =-\sum_{i,j=1}^{m-1}\left\{\frac{1}{\Gamma_{n}(\xi,\eta)}\frac{ \partial w^{j}(\eta,\xi)}{\partial\xi^{i}}-\frac{w^{j}(\eta,\xi)}{\Gamma_{n}( \xi,\eta)^{2}}\frac{\partial\Gamma_{n}(\xi,\eta)}{\partial\xi^{i}}\right\}\frac {\partial}{\partial\eta^{j}}\otimes d\xi^{i}\] \[=-\frac{1}{\Gamma_{n}(\xi,\eta)}\left\{\sum_{i,j=1}^{m-1}\frac{ \partial w^{j}(\eta,\xi)}{\partial\xi^{i}}\frac{\partial}{\partial\eta^{j}} \otimes d\xi^{i}\right\}+\frac{1}{\Gamma_{n}(\xi,\eta)}v(\xi,\eta)\otimes d_{ \xi}\Gamma_{n}(\xi,\eta). \tag{12}\] Observe that \(\sum_{i,j=1}^{m-1}\frac{\partial w^{j}(\eta,\xi)}{\partial\xi^{i}}\frac{ \partial}{\partial\eta^{j}}\otimes d\xi^{i}\) is precisely the tensor form of the linear map \[\Pi_{\eta}\circ D_{y}w(\eta,y)\big{|}_{y=\xi}\circ\Pi_{\xi},\] where \(\Pi_{\xi}\) and \(\Pi_{\eta}\) are, as before, orthogonal projections from \(T_{\xi}\overline{\Omega}\to T_{\xi}\partial\Omega\) and \(T_{\eta}\overline{\Omega}\to T_{\eta}\partial\Omega\) respectively. Therefore, \[\left|\sum_{i,j=1}^{m-1}\frac{\partial w^{j}(\eta,\xi)}{\partial\xi^{i}}\frac {\partial}{\partial\eta^{j}}\otimes d\xi^{i}\right|_{\bar{g}}\leq\left\|D_{y} w(\eta,y)\big{|}_{y=\xi}\right\|_{op}<\ell^{-1}.\] Combining this with (12), we get \[|\nabla^{\xi}d_{\eta}\Gamma_{n}(\xi,\eta)|_{\bar{g}} =|\nabla^{\xi}\operatorname{grad}_{\eta}\Gamma_{n}(\xi,\eta)|_{ \bar{g}}\] \[\leq\frac{\ell^{-1}}{\Gamma_{n}(\xi,\eta)}+\frac{|v(\xi,\eta)|_{ \bar{g}}|d_{\xi}\Gamma_{n}(\xi,\eta)|_{\bar{g}}}{\Gamma_{n}(\xi,\eta)}\] \[\leq\frac{(1+\ell^{-1})}{\Gamma_{n}(\xi,\eta)}.\] Finally, applying the simple estimate \[\operatorname{dist}_{\bar{g}}(\xi,\eta)\leq\frac{1}{\lambda}\Gamma_{n}(\xi, \eta),\] we get \[|\nabla^{\xi}\nabla^{\eta}\Gamma_{n}(\xi,\eta)|_{\bar{g}}=|\nabla^{\xi}d_{ \eta}\Gamma_{n}(\xi,\eta)|_{\bar{g}}\leq\frac{(1+\ell^{-1})}{\lambda} \operatorname{dist}_{\bar{g}}(\xi,\eta)^{-1}.\] This completes the proof. With these estimates in hand, we're now ready to prove Theorem 1.2. Proof of Theorem 1.2.: Consider the inequality (7) from Theorem 2.1. For \(n_{1},n_{2}\in\mathcal{N}_{\lambda,\ell}(\Omega_{0})\), the left hand side becomes \[\int_{\Omega}(n_{1}-n_{2})^{2}(n_{1}^{m-2}+n_{1}^{m-3}n_{2}+\cdots+n_{2}^{m-2} )d\mathrm{Vol}_{\bar{g}}\geq(m-1)\lambda^{m-2}\|n_{1}-n_{2}\|_{L^{2}(\Omega)}^ {2}. \tag{13}\] Now consider the right hand side of (7). 
By Lemma 2.2, \[|d_{\xi}d_{\eta}\Gamma_{n}|_{\bar{g}}=\left|\operatorname{Alt}\left(\nabla^{ \xi}\nabla^{\eta}\Gamma_{n}\right)\right|_{\bar{g}}\leq\frac{(1+\ell^{-1})}{ \lambda}\operatorname{dist}_{\bar{g}}(\xi,\eta)^{-1}.\] Therefore, the right hand side of (7) is bounded above by \[|C_{m}|\int_{\partial\Omega\times\partial\Omega}|d_{\xi}(\Gamma_{n_{1 }}-\Gamma_{n_{2}})|_{\bar{g}}|d_{\eta}(\Gamma_{n_{1}}-\Gamma_{n_{2}})|_{\bar{g} }\sum_{a+b=m-2}|d_{\xi}d_{\eta}\Gamma_{n_{1}}|_{\bar{g}}^{a}|d_{\xi}d_{\eta} \Gamma_{n_{2}}|_{\bar{g}}^{b}\,d\sigma_{\bar{g}}\] \[\leq(m-1)|C_{m}|\frac{(1+\ell^{-1})^{m-2}}{\lambda^{m-2}}\int_{ \partial\Omega\times\partial\Omega}|d_{\xi}(\Gamma_{n_{1}}-\Gamma_{n_{2}})|_{ \bar{g}}|d_{\eta}(\Gamma_{n_{1}}-\Gamma_{n_{2}})|_{\bar{g}}|\operatorname{ dist}_{\bar{g}}(\xi,\eta)|^{2-m}\,d\sigma_{\bar{g}},\] where \(d\sigma_{\bar{g}}\) is the surface measure on \(\partial\Omega\times\partial\Omega\) induced by \(\bar{g}\). Observe that by Remark 1.2, we have \((\Gamma_{n_{1}}-\Gamma_{n_{2}})(\xi,\eta)=0\) for all \(\xi,\eta\in\partial\Omega\) with \(\operatorname{dist}_{\bar{g}}(\xi,\eta)<\delta\). Therefore, the above expression is further bounded above by \[(m-1)|C_{m}|\frac{(1+\ell^{-1})^{m-2}}{\lambda^{m-2}}\delta^{2-m} \int_{\partial\Omega\times\partial\Omega}|d_{\xi}(\Gamma_{n_{1}}-\Gamma_{n_{2} })|_{\bar{g}}|d_{\eta}(\Gamma_{n_{1}}-\Gamma_{n_{2}})|_{\bar{g}}|d\sigma_{\bar {g}}.\] \[\lesssim_{m,\delta,\ell}\lambda^{2-m}\left(\|d_{\xi}(\Gamma_{n_{ 1}}-\Gamma_{n_{2}})\|_{L^{2}(\partial\Omega\times\partial\Omega)}^{2}+\|d_{ \eta}(\Gamma_{n_{1}}-\Gamma_{n_{2}})\|_{L^{2}(\partial\Omega\times\partial \Omega)}^{2}\right)\] \[\lesssim_{m,\delta,\ell}\lambda^{2-m}\|d_{\xi}(\Gamma_{n_{1}}- \Gamma_{n_{2}})\|_{L^{2}(\partial\Omega\times\partial\Omega)}^{2}\] since \(\|d_{\xi}(\Gamma_{n_{1}}-\Gamma_{n_{2}})\|_{L^{2}}=\|d_{\eta}(\Gamma_{n_{1}}- \Gamma_{n_{2}})\|_{L^{2}}\) by symmetry. Combining this with (13), we get \[\|n_{1}-n_{2}\|_{L^{2}(\Omega)}^{2}\lesssim_{m,\delta,\ell}\lambda^{2(2-m)}\|d _{\xi}(\Gamma_{n_{1}}-\Gamma_{n_{2}})\|_{L^{2}(\partial\Omega\times\partial \Omega)}^{2}\] and the theorem follows. Recall that we parametrized the conformal parameter \(n\) of the metric \(g_{n}\) by a function \(c\) belonging to the parameter space \(\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})\), as defined in (4). We assumed that our input data consists of finitely many measurements of the function \(Z_{c}=\log\Gamma_{c}\). In the following corollary, we translate Theorem 1.2 into stability estimates for the map \(c\mapsto Z_{c}\) using simple Lipschitz estimates for the exponential function: For all \(x,y\in[M_{1},M_{2}]\), \[e^{M_{1}}|x-y|\leq|e^{x}-e^{y}|\leq e^{M_{2}}|x-y|. \tag{14}\] This immediately implies that for all \(c_{1},c_{2}\in\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})\), \[e^{-M}\|c_{1}-c_{2}\|_{L^{2}(\Omega_{0})}\leq\|n_{c_{1}}-n_{c_{2}}\|_{L^{2}( \Omega)}\leq e^{M}\|c_{1}-c_{2}\|_{L^{2}(\Omega_{0})}. \tag{15}\] **Corollary 2.3**.: _For any \(M>0\), there exists a constant \(C_{1}^{\prime}=C_{1}^{\prime}(\Omega,\Omega_{0},\bar{g},\ell,M)>0\) such that_ \[\|c_{1}-c_{2}\|_{L^{2}(\Omega_{0})}\leq C_{1}^{\prime}\|Z_{c_{1}}-Z_{c_{2}}\|_ {H^{1}(\partial\Omega\times\partial\Omega)}\] _for all \(c_{1},c_{2}\in\mathcal{C}_{\ell,M}^{3}(\Omega_{0})\)._ Proof.: Let \(c_{1},c_{2}\in\mathcal{C}_{\ell,M}^{3}(\Omega_{0})\). Then \(n_{c_{1}},n_{c_{2}}\in\mathcal{N}_{\lambda,\ell}(\Omega_{0})\) for \(\lambda=e^{-M}\). 
So it follows from Theorem 1.2 that \[\|n_{c_{1}}-n_{c_{2}}\|_{L^{2}(\Omega)}\leq C_{1}e^{(m-2)M}\|d_{\xi}(\Gamma_{c_ {1}}-\Gamma_{c_{2}})\|_{L^{2}(\partial\Omega\times\partial\Omega)}. \tag{16}\] By (15), the left hand side of the above equation is bounded below by \(e^{-M}\|c_{1}-c_{2}\|_{L^{2}(\Omega_{0})}\). Now, rewrite \(d_{\xi}(\Gamma_{c_{1}}-\Gamma_{c_{2}})\) as \[d_{\xi}(\Gamma_{c_{1}}-\Gamma_{c_{2}}) =d_{\xi}(e^{Z_{c_{1}}}-e^{Z_{c_{2}}})\] \[=e^{Z_{c_{1}}}d_{\xi}Z_{c_{1}}-e^{Z_{c_{2}}}d_{\xi}Z_{c_{2}}\] \[=e^{Z_{c_{1}}}d_{\xi}(Z_{c_{1}}-Z_{c_{2}})+(e^{Z_{c_{1}}}-e^{Z_{c_{2 }}})d_{\xi}Z_{c_{2}}.\] It follows from Remark 1.2 that if \((\xi,\eta)\in\operatorname{supp}(\Gamma_{c_{1}}-\Gamma_{c_{2}})\), we have \(\operatorname{dist}_{\bar{g}}(\xi,\eta)\geq\delta\), and consequently, \[e^{-M}\delta\leq\Gamma_{c_{j}}(\xi,\eta)\leq e^{M}\operatorname{diam}_{\bar{g}}( \Omega),\qquad j=1,2.\] Therefore, by applying (14) along with the fact that \(|d_{\xi}\Gamma_{c_{j}}|_{\bar{g}}\leq 1\) by Lemma 2.2, we get \[|d_{\xi}(\Gamma_{c_{1}}-\Gamma_{c_{2}})|_{\bar{g}} \leq|\Gamma_{c_{1}}||d_{\xi}(Z_{c_{1}}-Z_{c_{2}})|_{\bar{g}}+| \Gamma_{c_{1}}-\Gamma_{c_{2}}||d_{\xi}\Gamma_{c_{2}}|_{\bar{g}}/|\Gamma_{c_{2}}|\] \[\leq e^{M}\operatorname{diam}_{\bar{g}}(\Omega)|d_{\xi}(Z_{c_{1} }-Z_{c_{2}})|_{\bar{g}}+\frac{|e^{Z_{c_{1}}}-e^{Z_{c_{2}}}|}{e^{-M}\delta}\] \[\leq e^{M}\operatorname{diam}_{\bar{g}}(\Omega)|d_{\xi}(Z_{c_{1} }-Z_{c_{2}})|_{\bar{g}}+\frac{e^{M}\operatorname{diam}_{\bar{g}}(\Omega)}{e^{- M}\delta}|Z_{c_{1}}-Z_{c_{2}}|,\] where \(\operatorname{diam}_{\bar{g}}(\Omega)\) denotes the diameter of \(\Omega\) with respect to the metric \(\bar{g}\). This further implies \[\|d_{\xi}(\Gamma_{c_{1}}-\Gamma_{c_{2}})\|_{L^{2}(\partial\Omega\times \partial\Omega)}\lesssim_{\Omega,\bar{g},\delta,\ell,M}\|Z_{c_{1}}-Z_{c_{2}} \|_{H^{1}(\partial\Omega\times\partial\Omega)}.\] Combining this with (15) and (16), we get \[\|c_{1}-c_{2}\|_{L^{2}(\Omega_{0})}\lesssim_{\Omega,\bar{g},\delta,\ell,M}\|Z _{c_{1}}-Z_{c_{2}}\|_{H^{1}(\partial\Omega\times\partial\Omega)}.\] This completes the proof. ### Forward continuity estimates We now move on to the proof of Theorem 1.3. The key idea is to use _upper bounds_ on \(D_{v}\exp_{n_{j}}(x,v)\) to control \(\|\Gamma_{n_{1}}-\Gamma_{n_{2}}\|_{L^{2}}\) with respect to \(\|n_{1}-n_{2}\|_{L^{2}}\). We begin by introducing some notation. Let \(S\overline{\Omega}\) denote the unit sphere bundle on \(\overline{\Omega}\), that is, \[S\overline{\Omega}=\{(x,v)\in T\overline{\Omega}\ :\ |v|_{\bar{g}}=1\}.\] The boundary of \(S\overline{\Omega}\) consists of unit tangent vectors at \(\partial\Omega\). Specifically, \[\partial S\overline{\Omega}=\{(x,v)\in S\overline{\Omega}\ :\ x\in\partial\Omega\}.\] Let \(\nu\) denote the inward unit normal vector field along \(\partial\Omega\) with respect to the metric \(\bar{g}\). We define the bundles of _inward pointing_ and _outward pointing_ unit tangent vectors on \(\partial\Omega\) as follows: \[\partial_{+}S\overline{\Omega} :=\left\{(\xi,v)\in\partial S\overline{\Omega}\ :\ \langle v,\nu_{\xi}\rangle_{\bar{g}}\geq 0\right\},\quad \text{and}\] \[\partial_{-}S\overline{\Omega} :=\left\{(\xi,v)\in\partial S\overline{\Omega}\ :\ \ \langle v,\nu_{\xi}\rangle_{\bar{g}}\leq 0\right\}.\] We also set \[\partial_{0}S\overline{\Omega}:=\partial_{+}S\overline{\Omega}\cap\partial_{-} S\overline{\Omega}.\] This coincides with \(S\partial\Omega\), the unit sphere bundle on \(\partial\Omega\). Next, let \(n\in N_{\lambda,\ell}(\Omega_{0})\). 
For \((\xi,v)\in\partial_{+}S\overline{\Omega}\), we let \(\gamma_{n}(\xi,v,t)=\exp_{n}(\xi,tv)\) denote the unit speed geodesic (with respect to \(g_{n}\)) starting at \(\xi\) with initial direction \(v\) at time \(t=0\). We define \(\tau_{n}(\xi,v)\) to be the time at which \(\gamma_{n}(\xi,v,\cdot)\) exits \(\overline{\Omega}\). It is known (see [32]) that for simple manifolds, \(\tau_{n}\) is a \(C^{1}\) function of \(\partial_{+}S\overline{\Omega}\), and \(\tau_{n}(\xi,v)=0\) if and only if \(v\in S_{\xi}\partial\Omega\). We also define \(\eta_{n}(\xi,v)\) and \(u_{n}(\xi,v)\) as the point and direction at which \(\gamma_{n}(\xi,v,\cdot)\) exits \(\overline{\Omega}\). In other words, \[\eta_{n}(\xi,v) :=\gamma_{n}(\xi,v,\tau_{n}(\xi,v)),\quad\text{and}\] \[u_{n}(\xi,v) :=\dot{\gamma}_{n}(\xi,v,\tau_{n}(\xi,v)).\] **Lemma 2.4**.: _Let \(n\in\mathcal{N}_{\lambda,\Lambda,\ell,L}(\Omega_{0})\). Then for all \((\xi,v)\in\partial_{+}S\overline{\Omega}\),_ \[\|D_{v}\tau_{n}(\xi,v)\|_{op}\leq L\frac{\tau_{n}(\xi,v)}{\langle\nu,u\rangle_ {\bar{g}}}\leq\frac{L\Lambda\operatorname{diam}_{\bar{g}}(\Omega)}{\langle\nu, u\rangle_{\bar{g}}},\] _where \(\nu=\nu_{\eta_{n}(\xi,v)}\) and \(u=u_{n}(\xi,v)\)._ Proof.: Let \(\rho\in C^{1}(\overline{\Omega})\) be such that \(\rho^{-1}(0)=\partial\Omega\) and \(\rho(x)=\operatorname{dist}_{\bar{g}}(x,\partial\Omega)\) for \(x\) near \(\partial\Omega\). Consider the function \[f(t,v)=\rho(\exp_{n}(\xi,tv)).\] Observe that \[\frac{\partial f}{\partial t}\Big{|}_{t=\tau_{n}(\xi,v)}=\big{\langle}( \operatorname{grad}\rho)_{\eta_{n}(\xi,v)},u_{n}(\xi,v)\big{\rangle}_{\bar{g}} =\langle\nu,u\rangle_{\bar{g}}.\] On the other hand, \[D_{v}f(t,v) =D\rho_{\exp_{n}(\xi,tv)}\circ\big{(}tD_{w}\exp_{n}(\xi,w)\big{|} _{w=tv}\big{)}\] \[\Rightarrow D_{v}f\big{|}_{(\tau_{n}(\xi,v),v)} =\tau_{n}(\xi,v)\Pi^{\nu}\circ D_{w}\exp_{n}(\xi,w)\big{|}_{w= \tau_{n}(\xi,v)v},\] where \(\Pi^{\nu}\) is the linear map given by \[\Pi^{\nu}(w)=\langle\nu,w\rangle_{\bar{g}}\qquad\text{for all }w\in T_{\eta_{n}(\xi,v)} \overline{\Omega}.\] Now differentiating the identity \(f(\tau_{n}(\xi,v),v)=0\) with respect to \(v\), we get \[0 =\frac{\partial f}{\partial t}\Big{|}_{(\tau_{n}(\xi,v),v)}D_{v} \tau_{n}(\xi,v)+D_{v}f\big{|}_{(\tau_{n}(\xi,v),v)}\] \[=\langle\nu,u\rangle_{\bar{g}}D_{v}\tau_{n}(\xi,v)+\tau_{n}(\xi, v)\Pi^{\nu}\circ D_{w}\exp_{n}(\xi,w)\big{|}_{w=\tau_{n}(\xi,v)v}.\] Therefore, \[D_{v}\tau_{n}(\xi,v) =-\frac{\tau_{n}(\xi,v)}{\langle\nu,u\rangle_{\bar{g}}}\Pi^{\nu} \circ D_{w}\exp_{n}(\xi,w)\big{|}_{w=\tau_{n}(\xi,v)v}\] \[\Rightarrow\|D_{v}\tau_{n}(\xi,v)\|_{op} \leq\frac{\tau_{n}(\xi,v)}{\langle\nu,u\rangle_{\bar{g}}}\left\|D _{w}\exp_{n}(\xi,w)\right|_{w=\tau_{n}(\xi,v)v}\Big{\|}_{op}\] \[\leq L\left[\frac{\tau_{n}(\xi,v)}{\langle\nu,u\rangle_{\bar{g}}} \right],\] as required. Now the lemma follows by observing that \[\tau_{n}(\xi,v)\leq\operatorname{diam}_{g_{n}}(\Omega)\leq\Lambda\operatorname {diam}_{\bar{g}}(\Omega)\] for all \((\xi,v)\in\partial_{+}S\overline{\Omega}\). We are now ready to prove Theorem 1.3. Recall that the notation \(\int_{\gamma}fd|g|\) denotes the integral of a function \(f\) along the curve \(\gamma\) with respect to the arc-length metric induced by \(g\). 
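As an elementary sanity check of Lemma 2.4, consider the Euclidean unit disk, where \(n\equiv 1\) (so one may take \(\ell=L=\Lambda=1\)) and the exit time has the closed form \(\tau(\xi,v)=-2\langle\xi,v\rangle\). The sketch below compares a finite-difference derivative of \(\tau\) along the sphere of directions with the bound of the lemma; following the proof, the pairing \(\langle\nu,u\rangle\) is taken in absolute value. This is a numerical illustration only, under the stated Euclidean assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 1.0  # Euclidean metric: D_v exp is the identity

def tau(xi, v):               # exit time of the line t -> xi + t*v from the unit disk
    return -2.0 * (xi @ v)    # closed form on the unit circle

for _ in range(5):
    a = rng.uniform(0, 2*np.pi)
    xi = np.array([np.cos(a), np.sin(a)])
    # a strictly inward-pointing unit direction
    b = rng.uniform(a + np.pi/2 + 0.1, a + 3*np.pi/2 - 0.1)
    v = np.array([np.cos(b), np.sin(b)])
    vperp = np.array([-v[1], v[0]])           # unit tangent to the sphere of directions
    h = 1e-6
    dtau = (tau(xi, np.cos(h)*v + np.sin(h)*vperp) - tau(xi, v)) / h
    eta = xi + tau(xi, v) * v                 # exit point; |<nu, u>| = |<eta, v>| here
    bound = L * tau(xi, v) / abs(eta @ v)
    print(f"|D_v tau| = {abs(dtau):.4f}  <=  bound = {bound:.4f}")
```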
Proof of Theorem 1.3.: Fix \(\xi\in\partial\Omega\), and define the sets \[B_{1}(\xi) :=\{\eta\in\partial\Omega\ :\ \Gamma_{n_{1}}(\xi,\eta)\leq\Gamma_{n_{2 }}(\xi,\eta)\},\] \[B_{2}(\xi) :=\{\eta\in\partial\Omega\ :\ \Gamma_{n_{2}}(\xi,\eta)\leq\Gamma_{n_{1}}( \xi,\eta)\}.\] Suppose \(\eta\in B_{1}(\xi)\), and let \(\gamma_{1}(\xi,\eta)\) denote the unit speed geodesic with respect to \(g_{n_{1}}\) from \(\xi\) to \(\eta\). Clearly, \(\Gamma_{n_{1}}(\xi,\eta)=\int_{\gamma_{1}(\xi,\eta)}n_{1}d|\bar{g}|\), whereas \(\Gamma_{n_{2}}(\xi,\eta)\leq\int_{\gamma_{1}(\xi,\eta)}n_{2}d|\bar{g}|\). So we have \[(\Gamma_{n_{2}}-\Gamma_{n_{1}})(\xi,\eta)\leq\int_{\gamma_{1}(\xi,\eta)}(n_{2 }-n_{1})d|\bar{g}|=\int_{\gamma_{1}(\xi,\eta)}\frac{(n_{2}-n_{1})}{n_{1}}d|g_{n _{1}}|.\] This implies \[(\Gamma_{n_{2}}-\Gamma_{n_{1}})^{2}(\xi,\eta) \leq\Gamma_{n_{1}}(\xi,\eta)\int_{\gamma_{1}(\xi,\eta)}\frac{(n_{2} -n_{1})^{2}}{n_{1}^{2}}d|g_{n_{1}}|\quad\text{(by Cauchy-Schwarz)}\] \[=\Gamma_{n_{1}}(\xi,\eta)\int_{0}^{\Gamma_{n_{1}}(\xi,\eta)} \frac{(n_{2}-n_{1})^{2}}{n_{1}^{2}}(\gamma_{1}(\xi,\eta,t))dt\] \[\leq\frac{\Gamma_{n_{1}}(\xi,\eta)}{\lambda^{2}}\int_{0}^{\Gamma _{n_{1}}(\xi,\eta)}(n_{2}-n_{1})^{2}(\exp_{n_{1}}(\xi,tv_{n_{1}}(\xi,\eta)))dt,\] where \(v_{n_{1}}(\xi,\eta)=\dot{\gamma}_{n_{1}}(\xi,\eta,0)\), that is, the unit tangent vector at \(\xi\) that points towards \(\eta\). This implies \[\int_{B_{1}(\xi)}(\Gamma_{n_{2}}-\Gamma_{n_{1}})^{2}(\xi,\eta)d\eta \leq\frac{\Lambda\operatorname{diam}_{\bar{g}}(\Omega)}{\lambda^ {2}}\int_{\partial\Omega}\int_{0}^{\Gamma_{n_{1}}(\xi,\eta)}(n_{2}-n_{1})^{2} (\exp_{n_{1}}(\xi,tv_{n_{1}}(\xi,\eta)))dtd\eta \tag{17}\] \[=\frac{\Lambda\operatorname{diam}_{\bar{g}}(\Omega)}{\lambda^{2}} \int_{\partial_{+}S_{\xi}\overline{\Omega}}\int_{0}^{\tau_{n_{1}}(\xi,v)}(n_{2 }-n_{1})^{2}(\exp_{n_{1}}(\xi,tv))|\det[D_{v}\eta_{n_{1}}(\xi,v)]dtdv.\] by the change of variables formula. (Here, \(d\eta\) is the surface measure on \(\eta\in\partial\Omega\) with respect to \(\bar{g}\).) We now find an upper bound for \(|\det[D_{v}\eta_{n_{1}}]|\) on the support of the integrand. Recall that by definition, \[\eta_{n_{1}}(\xi,v)=\exp_{n_{1}}(\xi,\tau_{n_{1}}(\xi,v)v).\] With the canonical identification of \(T_{v}S_{\xi}\overline{\Omega}\) with a subspace of \(T_{\xi}\overline{\Omega}\), we get \[D_{v}\eta_{n_{1}}(\xi,v) =D_{w}\exp_{n_{1}}(\xi,w)\big{|}_{w=\tau_{n_{1}}(\xi,v)v}\circ D_ {v}(\tau_{n_{1}}(\xi,v)v)\] \[=D_{w}\exp_{n_{1}}(\xi,w)\big{|}_{w=\tau_{n_{1}}(\xi,v)v}\circ \big{(}\tau_{n_{1}}(\xi,v)\mathrm{Id}+v\otimes D_{v}\tau_{n_{1}}(\xi,v)\big{)}.\] Here, \(v\otimes D_{v}\tau_{n_{1}}(\xi,v)\) should be interpreted as the map \[w\in T_{v}S_{\xi}\overline{\Omega}\subseteq T_{\xi}\overline{\Omega}\qquad \mapsto\qquad[D_{v}\tau_{n_{1}}|_{(\xi,v)}(w)]v\in T_{\xi}\overline{\Omega}.\] So we have \[\|D_{v}\eta_{n_{1}}(\xi,v)\|_{op} \leq\Big{\|}D_{w}\exp_{n_{1}}(\xi,w)\big{|}_{w=\tau_{n_{1}}(\xi,v)v}\Big{\|}_{op}\left(\tau_{n_{1}}(\xi,v)+\|D_{v}\tau_{n_{1}}(\xi,v)\|_{op}\right)\] \[\leq L\left(\Lambda\operatorname{diam}_{\bar{g}}(\Omega)+\frac{L \Lambda\operatorname{diam}_{\bar{g}}(\Omega)}{\langle\nu(\eta_{n_{1}}(\xi,v)), u_{n_{1}}(\xi,v)\rangle_{\bar{g}}}\right)\] by Lemma 2.4. 
Now since \(\Omega_{0}\) is a relatively compact subset of \(\Omega\), there exists an \(\varepsilon\in(0,1)\) such that if \(\langle\nu(\eta_{n_{1}}(\xi,v)),u_{n_{1}}(\xi,v)\rangle_{\bar{g}}<\varepsilon\), the geodesic \(\gamma_{n_{1}}(\xi,v,\cdot)\) lies entirely within \(\overline{\Omega}\setminus\Omega_{0}\), and therefore, \[(n_{2}-n_{1})^{2}(\exp_{n_{1}}(\xi,tv))=0\qquad\text{for all }t\in[0,\tau_{n_{1}} (\xi,v)].\] Therefore, on the support of the integrand in the right hand side of (17), we have the bounds \[\|D_{v}\eta_{n_{1}}(\xi,v)\|_{op}\leq L\left(\Lambda\operatorname{diam}_{\bar {g}}(\Omega)+\frac{L\Lambda\operatorname{diam}_{\bar{g}}(\Omega)}{\varepsilon} \right)\lesssim_{\Omega,\Omega_{0},\bar{g},L}\Lambda,\] and consequently \[|\det[D_{v}(\eta_{n_{1}}(\xi,v))]|\lesssim_{\Omega,\Omega_{0},\bar{g},L} \Lambda^{m-1}.\] Applying this bound to the right hand side of (17), we get \[\int_{B_{1}(\xi)}(\Gamma_{n_{1}}-\Gamma_{n_{2}})^{2}(\xi,\eta)d\eta \lesssim\frac{\Lambda^{m}}{\lambda^{2}}\int_{\partial_{+}S_{\xi} \overline{\Omega}}\int_{0}^{\tau_{n_{1}}(\xi,v)}(n_{2}-n_{1})^{2}(\exp_{n_{1}} (\xi,tv))dtdv\] \[\sim\frac{\Lambda^{m}}{\lambda^{2}}\int_{\operatorname{dom}(\exp_ {n_{1}}(\xi,\cdot))}\frac{(n_{2}-n_{1})^{2}(\exp_{n_{1}}(\xi,w))}{|w|_{\bar{g}} ^{m-1}}dw\] Again by Remark 1.2, we have \((n_{2}-n_{1})^{2}(\exp_{n_{1}}(\xi,w))=0\) for all \(w\in\operatorname{dom}(\exp_{n_{1}}(\xi,\cdot))\) with \(|w|_{\bar{g}}\leq\delta\). Therefore, we get \[\int_{B_{1}(\xi)}(\Gamma_{n_{1}}-\Gamma_{n_{2}})^{2}(\xi,\eta)d\eta\lesssim \frac{\Lambda^{m}}{\lambda^{2}\delta^{m-1}}\int_{\operatorname{dom}(\exp_{n_ {1}}(\xi,\cdot))}(n_{2}-n_{1})^{2}(\exp_{n_{1}}(\xi,w))dw.\] We now make the change of variable \(x=\exp_{n_{1}}(\xi,w)\). The assumption that \(D_{w}\exp_{n_{1}}(\xi,w)\succ\ell\) implies that the inverse \(w_{n_{1}}(\xi,\cdot)\) of \(\exp_{n_{1}}(\xi,\cdot)\) satisfies \(\|D_{x}w_{n_{1}}(\xi,x)\|_{op}<\ell^{-1}\), and consequently, \[|\det(D_{x}w_{n_{1}}(\xi,x))|<\ell^{-m}.\] Therefore, \[\int_{B_{1}(\xi)}(\Gamma_{n_{1}}-\Gamma_{n_{2}})^{2}(\xi,\eta)d\eta \lesssim\frac{\Lambda^{m}}{\lambda^{2}}\int_{\Omega}(n_{2}-n_{1}) ^{2}(x)|\det(D_{x}w_{n_{1}}(\xi,x))|d\operatorname{Vol}_{\bar{g}}(x)\] \[\lesssim\frac{\Lambda^{m}}{\lambda^{2}\ell^{m}}\int_{\Omega}(n_{ 2}-n_{1})^{2}(x)d\operatorname{Vol}_{\bar{g}}(x).\] By analogous arguments, we also have \[\int_{B_{2}(\xi)}(\Gamma_{n_{1}}-\Gamma_{n_{2}})^{2}(\xi,\eta)d\eta\lesssim \frac{\Lambda^{m}}{\lambda^{2}\ell^{m}}\int_{\Omega}(n_{2}-n_{1})^{2}(x)d \operatorname{Vol}_{\bar{g}}(x).\] Adding the last two inequalities, we get \[\int_{\partial\Omega}(\Gamma_{n_{1}}-\Gamma_{n_{2}})^{2}(\xi,\eta )d\eta \lesssim\frac{\Lambda^{m}}{\lambda^{2}\ell^{m}}\|n_{1}-n_{2}\|_{L^ {2}(\Omega)}^{2}\] \[\Rightarrow\int_{\partial\Omega}\int_{\partial\Omega}(\Gamma_{n_ {1}}-\Gamma_{n_{2}})^{2}(\xi,\eta)d\eta d\xi \lesssim\frac{\Lambda^{m}}{\lambda^{2}\ell^{m}}\|n_{1}-n_{2}\|_{L^ {2}(\Omega)}^{2}\] \[\Rightarrow\|\Gamma_{n_{1}}-\Gamma_{n_{2}}\|_{L^{2}(\partial \Omega\times\partial\Omega)} \lesssim_{\Omega,\Omega_{0},\bar{g},\ell,L}\frac{\Lambda^{m/2}}{ \lambda}\|n_{1}-n_{2}\|_{L^{2}(\Omega)}.\] This completes the proof. Next, we derive the analogous continuity estimate for the map \(c\mapsto Z_{c}\). The key step is to show that for any \(M>0\), the operator norm of the derivative of \(\exp_{n_{c}}(x,v)\) is uniformly bounded for all \(c\in\mathcal{C}^{3}_{\ell,M}(\Omega_{0})\) and \((x,v)\in\operatorname{dom}(\exp_{n_{c}})\). We begin with a simple lemma. 
**Lemma 2.5**.: _Let \((\mathcal{M},g)\) be a Riemannian manifold whose curvature tensor \(R\) satisfies_ \[\|R\|=\sup\left\{|R(u,v)w|_{g}:u,v,w\in S\mathcal{M}\right\}<\infty.\] _Then any Jacobi field \(J\) along a unit speed geodesic \(\gamma:[0,T]\to\mathcal{M}\) satisfies the norm bounds_ \[|J(t)|_{g}^{2}+|\dot{J}(t)|_{g}^{2}\leq e^{(1+\|R\|)t}\left(|J(0)|_{g}^{2}+|\dot{J}(0)|_{g}^{2}\right)\qquad\text{for all }t\in[0,T].\]

Proof.: Set \(f(t)=|J(t)|_{g}^{2}+|\dot{J}(t)|_{g}^{2}\). Since \(J\) is a Jacobi field, it satisfies the equation \[\ddot{J}(t)+R(J(t),\dot{\gamma}(t))\dot{\gamma}(t)=0.\] Therefore, \[f^{\prime}(t) =2\langle J(t),\dot{J}(t)\rangle_{g}+2\langle\dot{J}(t),\ddot{J}(t)\rangle_{g}\] \[=2\langle J,\dot{J}\rangle_{g}+2\langle\dot{J},-R(J,\dot{\gamma})\dot{\gamma}\rangle_{g}\] \[\leq 2|J|_{g}|\dot{J}|_{g}+2|\dot{J}|_{g}\|R\||J|_{g}|\dot{\gamma}|_{g}^{2}\] \[\leq(1+\|R\|)f(t).\] So it follows that \[f(t)\leq e^{(1+\|R\|)t}f(0)\qquad\text{for all }t\in[0,T].\]

Next, let us recall the definition of the canonical metric on the tangent bundle of a Riemannian manifold, also called the Sasaki metric. Let \((\mathcal{M},g)\) be a Riemannian manifold, \((x,w)\in T\mathcal{M}\), and \(V_{1},V_{2}\in T_{(x,w)}T\mathcal{M}\). Then we may choose curves \(\alpha_{j}(s)=(\sigma_{j}(s),v_{j}(s))\) in \(T\mathcal{M}\), defined on \((-\varepsilon,\varepsilon)\), such that \[\alpha_{j}(0)=(x,w),\qquad\dot{\alpha}_{j}(0)=V_{j},\qquad\text{for }j=1,2.\] The inner product of \(V_{1},V_{2}\) with respect to the Sasaki metric is defined to be \[\langle V_{1},V_{2}\rangle_{g}:=\langle v_{1}(0),v_{2}(0)\rangle_{g}+\langle\dot{v}_{1}(0),\dot{v}_{2}(0)\rangle_{g},\] where \(\dot{v}_{1}(s),\dot{v}_{2}(s)\) are the covariant derivatives of \(v_{1}(s),v_{2}(s)\) along the curves \(\sigma_{1}(s),\sigma_{2}(s)\) respectively. Note that we are using the same notation for the Sasaki metric as for the original metric \(g\). Now, for any \(C^{1}\) map \(F:T\mathcal{M}\to\mathcal{M}\), the operator norm of the total derivative of \(F\) at \((x,w)\in T\mathcal{M}\) is given by \[\|DF(x,w)\|_{op}:=\sup\{|DF(x,w)(V)|_{g}\ :\ V\in T_{(x,w)}T\mathcal{M},\,|V|_{g}=1\}.\] We will show that if \(c\in\mathcal{C}^{3}_{\ell,M}(\Omega_{0})\), the total derivative of \(\exp_{n_{c}}\) is bounded above in the operator norm.

**Proposition 2.6**.: _For any \(M>0\), there exists \(L=L(M)>0\) such that for all \(c\in\mathcal{C}^{3}_{\ell,M}(\Omega_{0})\), the total derivative of the exponential map of \(g_{n_{c}}\) satisfies_ \[\|D\exp_{n_{c}}(x,w)\|_{op}<L\] _for all \(x\in\overline{\Omega}\) and \(w\in\operatorname{dom}(\exp_{n_{c}}(x,\cdot))\). In particular, \(n_{c}\in\mathcal{N}_{\lambda,\Lambda,\ell,L}(\Omega_{0})\)._

Proof.: Suppose \(c\in\mathcal{C}^{3}_{\ell,M}(\Omega_{0})\). Fix \((x,w)\in\operatorname{dom}(\exp_{n_{c}})\), and let \(V\in T_{(x,w)}T\overline{\Omega}\). It suffices to show that \[|D\exp_{n_{c}}(x,w)(V)|_{\bar{g}}<L|V|_{\bar{g}}.\] Choose a curve \(\alpha(s)=(\sigma(s),v(s))\) in \(T\overline{\Omega}\), defined on \((-\varepsilon,\varepsilon)\), such that \(\alpha(0)=(x,w)\) and \(\dot{\alpha}(0)=V\). Consider the family of geodesics \(\Phi:(-\varepsilon,\varepsilon)\times[0,1]\to\overline{\Omega}\) defined by \[\Phi(s,t)=\exp_{n_{c}}(\sigma(s),tv(s)).\] The variation field of this family of geodesics is \[J(t):=\partial_{s}\exp_{n_{c}}(\sigma(s),tv(s))\big{|}_{s=0},\] which is a Jacobi field along \(\gamma(t):=\Phi(0,t)\).
Observe that \[J(1)=\partial_{s}\exp_{n_{c}}(\sigma(s),v(s))\big{|}_{s=0}=D\exp_{n_{c}}(x,w)(V),\] which is precisely the quantity whose norm we want to estimate. Let \(R\) be the Riemann curvature tensor of \((\overline{\Omega},g_{n_{c}})\), and let \(R^{i}_{jkl}\) denote its tensor coefficients with respect to a fixed global coordinate chart on \(\overline{\Omega}\). Then we have \[R^{i}_{jkl}=\partial_{k}\Gamma^{i}_{lj}-\partial_{l}\Gamma^{i}_{kj}+\Gamma^{i}_{km}\Gamma^{m}_{lj}-\Gamma^{i}_{lm}\Gamma^{m}_{kj},\] where \[\Gamma^{l}_{jk}=\frac{1}{2}n_{c}^{-2}\bar{g}^{lm}\left(\partial_{j}(n_{c}^{2}\bar{g}_{km})+\partial_{k}(n_{c}^{2}\bar{g}_{jm})-\partial_{m}(n_{c}^{2}\bar{g}_{jk})\right).\] This implies that for any \(x\in\overline{\Omega}\), \[\max_{ijkl}|R^{i}_{jkl}(x)|\lesssim_{\bar{g}}1+n_{c}(x)^{-2}\|n_{c}\|_{C^{2}}^{2}\lesssim e^{4M}(1+M)^{4}.\] Therefore, for any \(x\in\overline{\Omega}\) and unit tangent vectors \(u,v,w\in S_{x}\Omega\), \[|R(u,v)w|_{g_{n_{c}}}\lesssim n_{c}(x)\left(\max_{ijkl}|R^{i}_{jkl}(x)u^{j}v^{k}w^{l}|\right)\lesssim e^{5M}(1+M)^{4}\] \[\Rightarrow\|R\|\leq Ce^{5M}(1+M)^{4}\] for some \(C>0\). Taking \(L^{2}>\exp(1+Ce^{5M}(1+M)^{4})\) and applying Lemma 2.5, we get \[|D\exp_{n_{c}}(x,w)(V)|^{2}_{g_{n_{c}}}=|J(1)|^{2}_{g_{n_{c}}} <L^{2}\left(|J(0)|^{2}_{g_{n_{c}}}+|\dot{J}(0)|^{2}_{g_{n_{c}}}\right)\] \[=L^{2}\left(|\dot{\sigma}(0)|^{2}+|\dot{v}(0)|^{2}\right)=L^{2}|V|^{2}_{\bar{g}}.\] This completes the proof.

**Corollary 2.7**.: _There exists a constant \(C^{\prime}_{2}=C^{\prime}_{2}(\Omega,\Omega_{0},\bar{g},\ell,M)>0\) such that for all \(c_{1},c_{2}\in\mathcal{C}^{3}_{\ell,M}(\Omega_{0})\),_ \[\|Z_{c_{1}}-Z_{c_{2}}\|_{L^{2}(\partial\Omega\times\partial\Omega)}\leq C^{\prime}_{2}\|c_{1}-c_{2}\|_{L^{2}(\Omega_{0})}.\]

Proof.: We know from Theorem 1.3, Proposition 2.6, and equation (15) that \[\|\Gamma_{c_{1}}-\Gamma_{c_{2}}\|_{L^{2}(\partial\Omega\times\partial\Omega)}\lesssim_{\Omega,\Omega_{0},\bar{g},\ell,M}\|c_{1}-c_{2}\|_{L^{2}(\Omega_{0})}.\] Now consider \[\|\Gamma_{c_{1}}-\Gamma_{c_{2}}\|_{L^{2}(\partial\Omega\times\partial\Omega)}^{2}=\int_{\partial\Omega\times\partial\Omega}\left|e^{Z_{c_{1}}}-e^{Z_{c_{2}}}\right|^{2}\ d\xi d\eta.\] Recall that there exists \(\delta>0\) such that \(Z_{c_{1}}(\xi,\eta)=Z_{c_{2}}(\xi,\eta)\) whenever \(\operatorname{dist}_{\bar{g}}(\xi,\eta)<\delta\). On the set \(\{\operatorname{dist}_{\bar{g}}(\xi,\eta)\geq\delta\}\), \[e^{-M}\delta\leq\Gamma_{c_{j}}(\xi,\eta)\leq e^{M}\operatorname{diam}_{\bar{g}}(\Omega) \tag{18}\] \[\Rightarrow-M+\log\delta\leq Z_{c_{j}}(\xi,\eta)\leq M+\log\operatorname{diam}_{\bar{g}}(\Omega).\] So by (14), \[|e^{Z_{c_{1}}(\xi,\eta)}-e^{Z_{c_{2}}(\xi,\eta)}|\geq e^{-M}\delta|Z_{c_{1}}(\xi,\eta)-Z_{c_{2}}(\xi,\eta)|\] for all \((\xi,\eta)\in\partial\Omega\times\partial\Omega\). Consequently, \[\|\Gamma_{c_{1}}-\Gamma_{c_{2}}\|_{L^{2}(\partial\Omega\times\partial\Omega)}^{2}=\int\left|e^{Z_{c_{1}}}-e^{Z_{c_{2}}}\right|^{2}\ d\xi d\eta\geq e^{-2M}\delta^{2}\int|Z_{c_{1}}-Z_{c_{2}}|^{2}\ d\xi d\eta.\] So we conclude that \[\|Z_{c_{1}}-Z_{c_{2}}\|_{L^{2}}\lesssim\|\Gamma_{c_{1}}-\Gamma_{c_{2}}\|_{L^{2}}\lesssim\|c_{1}-c_{2}\|_{L^{2}}.\] We conclude this section with a technical result that will be necessary for the proof of Theorem 3.7 in Section 3.
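Before stating it, we note that the Grönwall-type bound of Lemma 2.5, which drove the proof of Proposition 2.6, is easy to test numerically in the constant-curvature model, where the Jacobi equation reduces to the scalar ODE \(\ddot{J}+KJ=0\) and \(\|R\|=K\). The curvature \(K\) and initial data in the following sketch are illustrative choices only.

```python
import numpy as np

K = 2.0                      # constant sectional curvature, so ||R|| = K
J0, dJ0 = 0.3, 1.0           # initial Jacobi data J(0), J'(0)
t = np.linspace(0, 3, 301)
s = np.sqrt(K)

# Closed-form solution of J'' + K J = 0 and its derivative
J  = J0*np.cos(s*t) + dJ0*np.sin(s*t)/s
dJ = -J0*s*np.sin(s*t) + dJ0*np.cos(s*t)

f = J**2 + dJ**2             # the energy f(t) from the proof of Lemma 2.5
assert np.all(f <= np.exp((1+K)*t) * f[0] + 1e-12), "Gronwall bound violated"
print("max of f(t) / (e^{(1+K)t} f(0)):", np.max(f / (np.exp((1+K)*t) * f[0])))
```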
**Theorem 2.8**.: _Given \(M>0\), there exists a constant \(C_{3}^{\prime}=C_{3}^{\prime}(\Omega,\Omega_{0},\bar{g},\ell,M)>0\) such that for all \(c_{1},c_{2}\in\mathcal{C}_{\ell,M}^{3}(\Omega_{0})\),_ \[\|Z_{c_{1}}-Z_{c_{2}}\|_{H^{2}(\partial\Omega\times\partial\Omega)}\leq C_{3}^{\prime}.\]

Proof.: We know from Corollary 2.7 that \[\|Z_{c_{1}}-Z_{c_{2}}\|_{L^{2}}\lesssim\|c_{1}-c_{2}\|_{L^{2}}\lesssim 2M.\] Next, let \(\xi,\eta\in\partial\Omega\). It follows from Remark 1.2 that if \(\operatorname{dist}_{\bar{g}}(\xi,\eta)<\delta\), then \(Z_{c_{1}}-Z_{c_{2}}\) and all its derivatives are identically \(0\) in a neighborhood of \((\xi,\eta)\). On the other hand, if \(\operatorname{dist}_{\bar{g}}(\xi,\eta)>\delta\), Lemma 2.2 implies \[|d_{\xi}(Z_{c_{1}}-Z_{c_{2}})(\xi,\eta)|_{\bar{g}}\leq\frac{|d_{\xi}\Gamma_{c_{1}}(\xi,\eta)|_{\bar{g}}}{\Gamma_{c_{1}}(\xi,\eta)}+\frac{|d_{\xi}\Gamma_{c_{2}}(\xi,\eta)|_{\bar{g}}}{\Gamma_{c_{2}}(\xi,\eta)}\lesssim\frac{e^{M}}{\delta}.\] This shows that \(\|d_{\xi}(Z_{c_{1}}-Z_{c_{2}})\|_{L^{2}}\) is uniformly bounded for \(c_{1},c_{2}\in\mathcal{C}_{\ell,M}^{3}(\Omega_{0})\). By symmetry, \(\|d_{\eta}(Z_{c_{1}}-Z_{c_{2}})\|_{L^{2}}\) is also uniformly bounded. So it only remains to consider the Hessian tensor of \(Z_{c_{1}}-Z_{c_{2}}\). Let \(\nabla\) denote the Levi-Civita connection on \(\partial\Omega_{\xi}\times\partial\Omega_{\eta}\), and let \(\pi^{\xi}:\partial\Omega_{\xi}\times\partial\Omega_{\eta}\to\partial\Omega_{\xi}\) and \(\pi^{\eta}:\partial\Omega_{\xi}\times\partial\Omega_{\eta}\to\partial\Omega_{\eta}\) denote the canonical projection maps. We may decompose \(\nabla\) as \(\nabla^{\xi}+\nabla^{\eta}\), where \(\nabla^{\xi}\) and \(\nabla^{\eta}\) are the covariant derivative operators with respect to \(\xi\) and \(\eta\) respectively. More precisely, given any tensor field \(F\) on \(\partial\Omega_{\xi}\times\partial\Omega_{\eta}\), and any tangent vector \(v\in T(\partial\Omega_{\xi}\times\partial\Omega_{\eta})\), we have \[\nabla_{v}^{\xi}F=\nabla_{(\pi^{\xi})_{*}v_{\xi}}F,\qquad\nabla_{v}^{\eta}F=\nabla_{(\pi^{\eta})_{*}v_{\eta}}F,\] where \((v_{\xi},v_{\eta})\) is the image of \(v\) under the canonical isomorphism from \(T(\partial\Omega_{\xi}\times\partial\Omega_{\eta})\) to \((T\partial\Omega_{\xi})\times(T\partial\Omega_{\eta})\). Correspondingly, the Hessian operator on \(\partial\Omega_{\xi}\times\partial\Omega_{\eta}\) can be decomposed as \[\operatorname{Hess}=\nabla^{2}=(\nabla^{\xi}+\nabla^{\eta})(\nabla^{\xi}+\nabla^{\eta})\] \[=\nabla^{\xi}\nabla^{\xi}+\nabla^{\xi}\nabla^{\eta}+\nabla^{\eta}\nabla^{\xi}+\nabla^{\eta}\nabla^{\eta}\] \[=\operatorname{Hess}_{\xi}+\nabla^{\xi}\nabla^{\eta}+\nabla^{\eta}\nabla^{\xi}+\operatorname{Hess}_{\eta},\] where \(\operatorname{Hess}_{\xi}\) and \(\operatorname{Hess}_{\eta}\) are the Hessian operators with respect to \(\xi\) and \(\eta\) respectively. Now let \(\xi,\eta\in\partial\Omega\) be such that \(\operatorname{dist}_{\bar{g}}(\xi,\eta)>\delta\).
Then for \(j=1,2\), \[\nabla^{\xi}\nabla^{\eta}Z_{c_{j}}(\xi,\eta) =\nabla^{\xi}\nabla^{\eta}\log\Gamma_{c_{j}}(\xi,\eta)\] \[=\left(\frac{\nabla^{\xi}\nabla^{\eta}\Gamma_{c_{j}}}{\Gamma_{c_{j}}}-\frac{d_{\xi}\Gamma_{c_{j}}\otimes d_{\eta}\Gamma_{c_{j}}}{\Gamma_{c_{j}}^{2}}\right)(\xi,\eta).\] By Lemma 2.2, this implies \[|\nabla^{\xi}\nabla^{\eta}Z_{c_{j}}(\xi,\eta)|_{\bar{g}} \leq\frac{|\nabla^{\xi}\nabla^{\eta}\Gamma_{c_{j}}(\xi,\eta)|_{\bar{g}}}{\Gamma_{c_{j}}(\xi,\eta)}+\frac{|d_{\xi}\Gamma_{c_{j}}(\xi,\eta)|_{\bar{g}}|d_{\eta}\Gamma_{c_{j}}(\xi,\eta)|_{\bar{g}}}{\Gamma_{c_{j}}^{2}(\xi,\eta)}\] \[\lesssim\frac{1+\ell^{-1}}{\lambda\delta^{2}}+\frac{1}{\delta^{2}}.\] This implies that \(\|\nabla^{\xi}\nabla^{\eta}(Z_{c_{1}}-Z_{c_{2}})\|_{L^{2}}\) is uniformly bounded as well. Finally, consider the fact [42] that \[\operatorname{Hess}_{\xi}\Gamma_{c_{j}}(\xi,\eta)=(D_{w}\exp_{n_{c_{j}}}(\xi,w(\xi,\eta)))^{-1}(D_{\xi}\exp_{n_{c_{j}}}(\xi,w(\xi,\eta))),\] where \(w(\xi,\cdot)\) is the inverse of \(\exp_{n_{c_{j}}}(\xi,\cdot)\) as in Lemma 2.2. Therefore, by Proposition 2.6, \[|\operatorname{Hess}_{\xi}\Gamma_{c_{j}}(\xi,\eta)|_{\bar{g}}\lesssim\ell^{-1}L(M).\] Writing \(Z_{c_{j}}=\log\Gamma_{c_{j}}\), we get \[\operatorname{Hess}_{\xi}Z_{c_{j}}(\xi,\eta) =\operatorname{Hess}_{\xi}\log\Gamma_{c_{j}}(\xi,\eta)\] \[=\left(\frac{\operatorname{Hess}_{\xi}\Gamma_{c_{j}}}{\Gamma_{c_{j}}}-\frac{d_{\xi}\Gamma_{c_{j}}\otimes d_{\xi}\Gamma_{c_{j}}}{\Gamma_{c_{j}}^{2}}\right)(\xi,\eta),\] which implies \[|\operatorname{Hess}_{\xi}Z_{c_{j}}(\xi,\eta)|_{\bar{g}} \leq\frac{|\operatorname{Hess}_{\xi}\Gamma_{c_{j}}(\xi,\eta)|_{\bar{g}}}{\Gamma_{c_{j}}(\xi,\eta)}+\frac{|d_{\xi}\Gamma_{c_{j}}(\xi,\eta)|_{\bar{g}}^{2}}{\Gamma_{c_{j}}^{2}(\xi,\eta)}\] \[\lesssim\frac{\ell^{-1}L}{\lambda\delta^{2}}+\frac{1}{\delta^{2}}.\] So we conclude that \(\|\operatorname{Hess}_{\xi}(Z_{c_{1}}-Z_{c_{2}})\|_{L^{2}}\), and by similar arguments, \(\|\operatorname{Hess}_{\eta}(Z_{c_{1}}-Z_{c_{2}})\|_{L^{2}}\), are both uniformly bounded on \(\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})\) as well. This proves the result.

## 3. Statistical Inversion through the Bayesian framework

As discussed in the Introduction, we will be using the posterior mean of \(c\) given finitely many measurements \(\mathcal{D}_{N}=(X_{i},Y_{i},Z_{i})_{i=1}^{N}\), as an estimator for the true metric parameter \(c_{0}\). Let us begin by describing the prior distribution \(\Pi\) for \(c\in C_{0}^{3}(\Omega_{0})\). We will assume that \(\Pi\) arises from a centered Gaussian probability distribution \(\widetilde{\Pi}\) on the Banach space \(C(\overline{\Omega}_{0})\) that satisfies the following conditions.

_Condition 3.1_.: Let \(\beta\geq 3\) and \(\alpha>\beta+\frac{m}{2}\). We assume that \(\widetilde{\Pi}\) is a centered Gaussian Borel probability measure on \(C(\overline{\Omega}_{0})\) that is supported in a separable subspace of \(C_{0}^{\beta}(\Omega_{0})\). Moreover, its _Reproducing Kernel Hilbert space (RKHS)_ \((\mathcal{H},\|\cdot\|_{\mathcal{H}})\) must be continuously embedded in the Sobolev space \(H^{\alpha}(\Omega_{0})\). We refer the reader to [14, Chapter 11] or [15, Sections 2.1 and 2.6] for basic facts about Gaussian probability measures and their Reproducing Kernel Hilbert Spaces.
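Condition 3.1 is satisfied by standard random-series constructions. The following one-dimensional sketch is schematic only: the sine basis, decay exponent, truncation level, and cutoff are hypothetical choices that mimic the cutoff construction described in Remark 3.1 below, where the coefficient decay governs the Sobolev regularity of the RKHS.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, Jmax = 4.0, 200        # regularity and truncation (illustrative)
x = np.linspace(-1, 1, 400)   # one-dimensional stand-in for Omega_0

# Random series with coefficients decaying like j^{-alpha - 1/2}: a standard
# construction whose law is a centered Gaussian measure whose RKHS consists
# of functions of Sobolev regularity governed by alpha.
j = np.arange(1, Jmax + 1)
coef = rng.normal(size=Jmax) * j**(-(alpha + 0.5))
f = np.sin(np.outer(x, j) * np.pi) @ coef

# Multiply by a smooth cutoff vanishing to all orders at the boundary of
# Omega_0, so the samples are compactly supported.
phi = np.exp(-1.0 / (1 - x**2 + 1e-15)) * (np.abs(x) < 1)
sample = phi * f
print("sample sup-norm:", np.max(np.abs(sample)))
```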
We now define the prior \(\Pi\) to be the restriction of \(\widetilde{\Pi}\) to \(\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})\) in the sense that \[\Pi(A)=\frac{\widetilde{\Pi}\left(A\cap\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})\right)}{\widetilde{\Pi}(\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0}))} \tag{19}\] for all Borel sets \(A\subseteq C_{0}^{3}(\Omega_{0})\). We will see in Lemma 3.5 that \(C^{\beta}\)-balls have positive \(\widetilde{\Pi}\)-measure. This together with the fact that \(\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})\) is an open subset of \(C_{0}^{\beta}(\Omega_{0})\) (cf. Remark 1.4) implies that \(\widetilde{\Pi}(\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0}))>0\). Therefore, (19) yields a well-defined probability distribution on \(C_{0}^{3}(\Omega_{0})\).

**Theorem 3.1**.: _Let \(\Pi\) be a prior distribution on \(C_{0}^{3}(\Omega_{0})\) defined by (19). Assume that the true parameter \(c_{0}\in\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})\cap\mathcal{H}\), and let \(\overline{c}_{N}\) be the mean (6) of the posterior distribution \(\Pi(\cdot|\mathcal{D}_{N})\) arising from observations (5). Then there exists \(\omega\in(0,1/4)\) such that_ \[P_{c_{0}}^{N}\left(\|\overline{c}_{N}-c_{0}\|_{L^{2}(\Omega_{0})}>N^{-\omega}\right)\to 0\qquad\text{as }N\to\infty.\] _Moreover, \(\omega\) can be made arbitrarily close to \(1/4\) for \(\beta\) large enough._

_Remark 3.1_.: The assumption that \(c_{0}\in\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\cap\mathcal{H}\) is weaker than in Theorem 1.5, where we assumed that \(c_{0}\) is smooth, compactly supported in \(\Omega_{0}\), and that \(g_{n_{c_{0}}}\) is simple. Indeed, if \(g_{n_{c_{0}}}\) is a smooth simple metric, \(c_{0}\) necessarily belongs to \(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\) for appropriate values of \(\ell,M\), and any \(\beta\). Moreover, given any \(c_{0}\in H^{\alpha}_{0}(\Omega_{0})\), it is possible to choose \(\widetilde{\Pi}\) so that its RKHS \(\mathcal{H}\) contains \(c_{0}\). Indeed, let \((f(x):x\in\Omega_{0})\) be the so-called _Matérn–Whittle process of regularity \(\alpha\)_ (see [14, Example 11.8]), whose corresponding RKHS is \(H^{\alpha}(\Omega_{0})\). It follows from Lemma I.4 in [14] that the sample paths of this process belong almost surely to \(C^{\beta}(\overline{\Omega}_{0})\). Now choose a cut-off function \(\varphi\in C^{\infty}(\overline{\Omega}_{0})\) such that \(\varphi>0\) on \(\Omega_{0}\), \(\varphi\) and all its partial derivatives vanish on \(\partial\Omega_{0}\), and \(\varphi^{-1}c_{0}\in H^{\alpha}(\Omega_{0})\). Define \(\widetilde{\Pi}\) to be the probability law of \((\varphi(x)f(x):x\in\Omega_{0})\). Then \(\mathcal{H}=\{\varphi f:f\in H^{\alpha}(\Omega_{0})\}\), which contains \(c_{0}\). Therefore, Theorem 3.1 is a more general and precise version of Theorem 1.5.

### A General Contraction Theorem

Our proof of Theorem 3.1 will follow the same general strategy as in [22], with some modifications necessitated by the fact that our prior \(\Pi\) is not in itself a Gaussian probability measure, but rather the restriction of such a measure to \(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\). We begin with a general posterior contraction result (Theorem 3.2). This is a simplified version of [22, Theorem 5.13], which suffices for us since our prior \(\Pi\) is independent of \(N\). Before stating the result, we need to introduce some notation.
Recall that for \(c\in\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\), we defined \(p_{c}\) as the probability density function \[p_{c}(x,y,z)=\frac{1}{\sqrt{2\pi}}\exp\left\{-\frac{1}{2}(z-Z_{c}(x,y))^{2} \right\}\qquad\text{for all }(x,y,z)\in\mathcal{X},\] where \(\mathcal{X}=\partial\Omega\times\partial\Omega\times\mathbb{R}\). Given \(c_{1},c_{2}\in\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\), let \[h(c_{1},c_{2}):=\left(\int_{\mathcal{X}}(\sqrt{p_{c_{1}}}-\sqrt{p_{c_{2}}})^{ 2}d\mu(x,y)\,dz\right)^{1/2}\] denote the Hellinger distance between \(p_{c_{1}}\) and \(p_{c_{2}}\), \[K(c_{1},c_{2}):=\mathbb{E}_{c_{1}}\left[\log\left(\frac{p_{c_{1}}}{p_{c_{2}}} \right)\right]=\int_{\mathcal{X}}\log\left(\frac{p_{c_{1}}}{p_{c_{2}}}\right) p_{c_{1}}d\mu(x,y)\,dz\] the Kullback-Leibler divergence, and \[V(c_{1},c_{2}):=\mathbb{E}_{c_{1}}\left[\log\left(\frac{p_{c_{1}}}{p_{c_{2}}} \right)\right]^{2}.\] Also, for any \(F\subseteq\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\) and \(\delta>0\), we let \(\mathcal{N}(F,h,\delta)\) denote the minimum number of \(h\)-balls of radius \(\delta\) needed to cover \(F\). **Theorem 3.2**.: _Let \(\widehat{\Pi}\) be a Borel probability measure on \(C^{3}_{0}(\Omega_{0})\) supported on \(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\). Let \(c_{0}\in\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\) be fixed, and let \(\delta_{N}\) be a sequence of positive numbers such that \(\delta_{N}\to 0\) and \(\sqrt{N}\delta_{N}\to\infty\) as \(N\to\infty\). Assume that the following two conditions hold:_ 1. _There exists_ \(C>0\) _such that for all_ \(N\in\mathbb{N}\)_,_ (20) \[\widehat{\Pi}\left(\left\{c\in\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0}):K(c,c_{0 })\leq\delta_{N}^{2},V(c,c_{0})\leq\delta_{N}^{2}\right\}\right)\geq e^{-CN \delta_{N}^{2}}.\] 2. _There exists_ \(\widetilde{C}>0\) _such that_ (21) \[\log\mathcal{N}(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0}),h,\delta_{N})\leq \widetilde{C}N\delta_{N}^{2}.\] _Now suppose that we make i.i.d. observations \(\mathcal{D}_{N}=(X_{i},Y_{i},Z_{i})_{i=1}^{N}\sim P_{c_{0}}^{N}\). Then for some \(k>0\) large enough, we have_ \[P_{c_{0}}^{N}\left(\widehat{\Pi}\left(\left\{c\in\mathcal{C}_{\ell,M}^{\beta}( \Omega_{0}):h(c,c_{0})\leq k\delta_{N}\right\}|\mathcal{D}_{N}\right)\leq 1-e^{- (C+3)N\delta_{N}^{2}}\right)\to 0 \tag{22}\] _as \(N\to\infty\)._ Proof.: Define \[B_{N}=\left\{c\in\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0}):K(c,c_{0})\leq \delta_{N}^{2},V(c,c_{0})\leq\delta_{N}^{2}\right\},\qquad N\in\mathbb{N}. \tag{23}\] By condition (1) and [15, Lemma 7.3.2], we have that for any \(\zeta>0\) and any probability measure \(\widetilde{m}\) on \(B_{N}\), \[P_{c_{0}}^{N}\left(\int_{B_{N}}\prod_{i=1}^{N}\frac{p_{c}}{p_{c_{0}}}(X_{i},Y _{i},Z_{i})d\widetilde{m}(c)\leq e^{-(1+\zeta)N\delta_{N}^{2}}\right)\leq\frac {1}{\zeta^{2}N\delta_{N}^{2}}.\] In particular, choosing \(\zeta=1\) and taking \(\widetilde{m}\) to be the restriction of \(\widehat{\Pi}\) to \(B_{N}\) followed by normalization, we get that \[P_{c_{0}}^{N}\left(\int_{B_{N}}\prod_{i=1}^{N}\frac{p_{c}}{p_{c_{0}}}(X_{i},Y_ {i},Z_{i})d\widehat{\Pi}(c)\leq\widehat{\Pi}(B_{N})e^{-2N\delta_{N}^{2}} \right)\leq\frac{1}{N\delta_{N}^{2}}\xrightarrow{N\to\infty}0.\] Set \[A_{N}=\left\{\int_{B_{N}}\prod_{i=1}^{N}\frac{p_{c}}{p_{c_{0}}}(X_{i},Y_{i},Z_ {i})d\widehat{\Pi}(c)\geq e^{-(2+C)N\delta_{N}^{2}}\right\},\] where \(C\) is as in condition (1). 
It is clear that \(A_{N}\supseteq\left\{\int_{B_{N}}\prod_{i=1}^{N}\frac{p_{c}}{p_{c_{0}}}d\widehat {\Pi}(c)\geq\widehat{\Pi}(B_{N})e^{-2N\delta_{N}^{2}}\right\}\), and therefore, \(P_{c_{0}}^{N}(A_{N})\to 1\) as \(N\to\infty\). Next, we consider condition (2). Let \(k>k^{\prime}>0\) be numbers to be determined later. Fix \(N\) and define the function \(N(\varepsilon)=e^{\widetilde{C}N\delta_{N}^{2}}\) for all \(\varepsilon>\varepsilon_{0}=k^{\prime}\delta_{N}\). It follows from condition (2) that for any \(\varepsilon>\varepsilon_{0}\), \[\mathcal{N}(\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0}),h,\varepsilon/4)\leq \mathcal{N}(\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0}),h,k^{\prime}\delta_{N}/4) \leq e^{\widetilde{C}N\delta_{N}^{2}}=N(\varepsilon).\] Therefore, by [15, Theorem 7.1.4], there exist test functions \(\Psi_{N}=\Psi_{N}(\mathcal{D}_{N})\) such that for some \(K>0\), \[P_{c_{0}}^{N}[\Psi_{N}=1]\leq\frac{N(\varepsilon)}{K}e^{-KN\varepsilon^{2}} \quad;\quad\sup_{c:h(c,c_{0})>\varepsilon}\mathbb{E}_{c}^{N}[1-\Psi_{N}]\leq e ^{-KN\varepsilon^{2}}.\] Now let \(l>\widetilde{C}\) be arbitrary. Setting \(k=\sqrt{l/K}\) and \(\varepsilon=k\delta_{N}\), we can see that this implies \[P_{c_{0}}^{N}[\Psi_{N}=1]\to 0\,\,\,\text{as}\,\,N\to\infty\quad;\quad \sup_{c:h(c,c_{0})>k\delta_{N}}\mathbb{E}_{c}^{N}[1-\Psi_{N}]\leq e^{-lN\delta _{N}^{2}}. \tag{24}\] Now define \[F_{N}=\left\{c\in\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0}):h(c,c_{0})\leq k \delta_{N}\right\}\] which is the event whose probability we want to bound. Then by (24), \[P_{c_{0}}^{N}\left(\widehat{\Pi}(F_{N}^{c}|\mathcal{D}_{N})\geq e^{ -(C+3)N\delta_{N}^{2}}\right)\] \[= P_{c_{0}}^{N}\left(\frac{\int_{F_{N}^{c}}\prod_{i=1}^{N}\frac{p_{c }}{p_{c_{0}}}(X_{i},Y_{i},Z_{i})d\widehat{\Pi}(c)}{\int\prod_{i=1}^{N}\frac{p_{ c}}{p_{c_{0}}}(X_{i},Y_{i},Z_{i})d\widehat{\Pi}(c)}\geq e^{-(C+3)N\delta_{N}^{2}},\ \Psi_{N}=0,\ A_{N}\right)+o(1)\] \[\leq P_{c_{0}}^{N}\left((1-\Psi_{N})\int_{F_{N}^{c}}\prod_{i=1}^{N} \frac{p_{c}}{p_{c_{0}}}(X_{i},Y_{i},Z_{i})d\widehat{\Pi}(c)\geq e^{-(2C+5)N \delta_{N}^{2}}\right)+o(1).\] Now by Markov's inequality, this is further bounded above by \[\mathbb{E}_{c_{0}}^{N}\left[(1-\Psi_{N})\int_{F_{N}^{c}}\prod_{i=1 }^{N}\frac{p_{c}}{p_{c_{0}}}(X_{i},Y_{i},Z_{i})d\widehat{\Pi}(c)\right]e^{(2C+ 5)N\delta_{N}^{2}}+o(1)\] \[= \left[\int_{F_{N}^{c}}\mathbb{E}_{c_{0}}^{N}\left[(1-\Psi_{N}) \prod_{i=1}^{N}\frac{p_{c}}{p_{c_{0}}}(X_{i},Y_{i},Z_{i})\right]d\widehat{\Pi}( c)\right]e^{(2C+5)N\delta_{N}^{2}}+o(1)\quad\text{(by Fubini's Theorem)}\] \[= \left[\int_{c:h(c,c_{0})>k\delta_{N}}\mathbb{E}_{c}^{N}[(1-\Psi_{ N})]d\widehat{\Pi}(c)\right]e^{(2C+5)N\delta_{N}^{2}}+o(1)\] \[\leq e^{(2C+5-l)N\delta_{N}^{2}}+o(1).\] Now choosing \(l>2C+5\), the Theorem follows. ### Properties of the Prior In this section, we will verify the assumptions of Theorem 3.2 when \(\widehat{\Pi}=\Pi\). The key ingredient in the arguments is the forward continuity estimate from Corollary 2.7. We begin by observing that the Hellinger distance between \(c_{1},c_{2}\in\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})\) is equivalent to the \(L^{2}(\partial\Omega\times\partial\Omega)\) distance between \(Z_{c_{1}}\) and \(Z_{c_{2}}\). 
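The equivalence rests on a Gaussian identity: for unit-variance normal densities with means \(\mu_{1},\mu_{2}\), the Hellinger affinity is \(\exp(-(\mu_{1}-\mu_{2})^{2}/8)\), so that \(h^{2}=2(1-\rho)\). Before the precise statement, here is a quick numerical confirmation of this identity, which is exactly the computation carried out in the proof of Lemma 3.3 below.

```python
import numpy as np

z = np.linspace(-30.0, 30.0, 600001)
dz = z[1] - z[0]

def density(mu):  # unit-variance normal density with mean mu
    return np.exp(-0.5 * (z - mu)**2) / np.sqrt(2.0 * np.pi)

for mu1, mu2 in [(0.0, 0.5), (0.0, 2.0), (-1.0, 3.0)]:
    rho = np.sum(np.sqrt(density(mu1) * density(mu2))) * dz  # Hellinger affinity
    print(f"rho = {rho:.6f}   exp(-(mu1-mu2)^2/8) = {np.exp(-(mu1-mu2)**2/8):.6f}")
```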
**Lemma 3.3**.: _There exists \(\kappa=\kappa(\Omega,\bar{g},M)>0\) such that for all \(c_{1},c_{2}\in\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})\),_ \[\kappa\|Z_{c_{1}}-Z_{c_{2}}\|_{L^{2}}^{2}\leq h^{2}(c_{1},c_{2})\leq\frac{1}{4 \operatorname{Vol}_{\bar{g}}(\partial\Omega)^{2}}\|Z_{c_{1}}-Z_{c_{2}}\|_{L^{2 }}^{2}.\] Proof.: Consider the "Hellinger affinity" function \[\rho(c_{1},c_{2})=\int_{\mathcal{X}}\sqrt{p_{c_{1}}p_{c_{2}}}d\mu=1-\frac{1}{2 }h^{2}(c_{1},c_{2}).\] We have \[\rho(c_{1},c_{2}) = \frac{1}{\sqrt{2\pi}}\int_{\mathcal{X}}\exp\left\{-\frac{1}{4}((z -Z_{c_{1}}(x,y))^{2}+(z-Z_{c_{2}}(x,y))^{2})\right\}d\mu(x,y)\,dz\] \[= \frac{1}{\operatorname{Vol}_{\bar{g}}(\partial\Omega\times\partial \Omega)}\int_{\partial\Omega\times\partial\Omega}\exp\left\{-\frac{1}{4}(Z_{c_ {1}}(x,y)^{2}+Z_{c_{2}}(x,y)^{2})\right\}\] \[\times\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\exp \left\{-\frac{1}{2}\left(z-\frac{Z_{c_{1}}+Z_{c_{2}}}{2}\right)^{2}\right\}dz \right]\exp\left\{\frac{1}{8}(Z_{c_{1}}+Z_{c_{2}})^{2}\right\}dx\,dy\] \[= \frac{1}{\operatorname{Vol}_{\bar{g}}(\partial\Omega)^{2}}\int_{ \partial\Omega\times\partial\Omega}\exp\left\{-\frac{1}{8}(Z_{c_{1}}(x,y)-Z_{c _{2}}(x,y))^{2}\right\}dx\,dy.\] Now applying the simple estimate \(e^{-t}\geq 1-t\) for all \(t\geq 0\), we get \[\rho(c_{1},c_{2}) \geq\frac{1}{\operatorname{Vol}_{\bar{g}}(\partial\Omega)^{2}}\int_ {\partial\Omega\times\partial\Omega}\left[1-\frac{1}{8}(Z_{c_{1}}-Z_{c_{2}})^{2 }\right]dx\,dy\] \[=1-\frac{1}{8\operatorname{Vol}_{\bar{g}}(\partial\Omega)^{2}} \|Z_{c_{1}}-Z_{c_{2}}\|_{L^{2}}^{2}.\] Consequently, \[h^{2}(c_{1},c_{2})=2(1-\rho(c_{1},c_{2}))\leq\frac{1}{4\operatorname{Vol}_{ \bar{g}}(\partial\Omega)^{2}}\|Z_{c_{1}}-Z_{c_{2}}\|_{L^{2}}^{2}.\] Next, we use the fact \(Z_{c_{1}},Z_{c_{2}}\) satisfy the uniform bounds (18) on the support of \(Z_{c_{1}}-Z_{c_{2}}\). Consequently, for all \(x,y\in\partial\Omega\), we have \[|Z_{c_{1}}(x,y)-Z_{c_{2}}(x,y)|\leq\Delta, \tag{26}\] where \(\Delta=2M+\log\operatorname{diam}_{\bar{g}}(\Omega)-\log\delta\). Set \(T=\Delta^{2}/8\) and observe that for all \(t\in[0,T]\), \[e^{-t}\leq 1-\left(\frac{1-e^{-T}}{T}\right)t\] by the convexity of \(t\mapsto e^{-t}\). Therefore, for \(\kappa=\frac{1-e^{-T}}{4T}\), we have \[\exp\left\{-\frac{1}{8}(Z_{c_{1}}(x,y)-Z_{c_{2}}(x,y))^{2}\right\}\leq 1- \frac{\kappa}{2}|Z_{c_{1}}(x,y)-Z_{c_{2}}(x,y)|^{2}\] for all \((x,y)\in\partial\Omega\times\partial\Omega\). Integrating both sides of this inequality with respect to \(d\mu(x,y)\) and applying (25), we get \[\rho(c_{1},c_{2}) \leq 1-\frac{\kappa}{2}\|Z_{c_{1}}-Z_{c_{2}}\|_{L^{2}}^{2}\] \[\Rightarrow\ h^{2}(c_{1},c_{2}) \geq\kappa\|Z_{c_{1}}-Z_{c_{2}}\|_{L^{2}}^{2}.\] This completes the proof. Now let us verify Condition (1) of Theorem 3.2 for \(\Pi\). **Lemma 3.4**.: _For \(c_{0}\in\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})\) and \(t>0\), define_ \[\mathcal{B}_{N}(t)=\{c\in\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0}):\|c-c_{0}\| _{C^{\beta}}\leq\delta_{N}/t\},\] _and let \(B_{N},\Pi,\) and \(\delta_{N}\) be as in Theorem 3.2. Then for some \(t>0\) large enough, \(\mathcal{B}_{N}(t)\subset B_{N}\) for all \(N\in\mathbb{N}\). In particular,_ \[\Pi(B_{N})\geq\Pi(\mathcal{B}_{N}(t)).\] Proof.: We need to verify that if \(t\) is large enough, then for any \(c\in\mathcal{B}_{N}(t)\), we have \(K(c,c_{0})\leq\delta_{N}^{2}\) and \(V(c,c_{0})\leq\delta_{N}^{2}\). 
Consider a random observation \((X,Y,Z)\), where \((X,Y)\) is a pair of boundary points chosen with respect to the uniform probability measure \(\mu\), and \(Z=Z_{c_{0}}(X,Y)+\epsilon\), with \(\epsilon\sim N(0,1)\) independent of \((X,Y)\). Observe that for any \(c\in\mathcal{B}_{N}(t)\), \[\log\frac{p_{c_{0}}}{p_{c}}(X,Y,Z) =-\frac{1}{2}[(Z-Z_{c_{0}}(X,Y))^{2}-(Z-Z_{c}(X,Y))^{2}] \tag{27}\] \[=\frac{1}{2}(Z_{c}(X,Y)-Z_{c_{0}}(X,Y))^{2}-\epsilon(Z_{c}(X,Y)-Z _{c_{0}}(X,Y)).\] Since \(\mathbb{E}[\epsilon|X,Y]=0\), we have \[\begin{split} K(c,c_{0})&=\mathbb{E}_{c_{0}}\left[ \log\frac{p_{c_{0}}}{p_{c}}(X,Y,Z)\right]\\ &=\mathbb{E}^{\mu}\left[\frac{1}{2}(Z_{c}(X,Y)-Z_{c_{0}}(X,Y))^{ 2}\right]\\ &=\frac{1}{2\operatorname{Vol}_{\bar{g}}(\partial\Omega\times \partial\Omega)}\int_{\partial\Omega\times\partial\Omega}(Z_{c}(x,y)-Z_{c_{0} }(x,y))^{2}\,dx\,dy\\ &=\frac{1}{2\operatorname{Vol}_{\bar{g}}(\partial\Omega)^{2}}\|Z _{c}-Z_{c_{0}}\|_{L^{2}}^{2}\\ &\lesssim\|c-c_{0}\|_{L^{2}}^{2}\qquad\text{(by Corollary 2.7).}\end{split}\tag{28}\] Similarly, by (27) and the independence of \(\epsilon\) and \((X,Y)\), \[V(c,c_{0})=\operatorname{Var}_{c_{0}}\left[\log\frac{p_{c_{0}}}{p_{c}}(X,Y,Z)\right]\lesssim\mathbb{E}^{\mu}\left[(Z_{c}-Z_{c_{0}})^{2}\right]\lesssim\|c-c_{0}\|_{L^{2}}^{2},\] where the implicit constant uses the uniform bound (26) on \(Z_{c}-Z_{c_{0}}\) and Corollary 2.7 again. Since \(\|c-c_{0}\|_{L^{2}}\lesssim\|c-c_{0}\|_{C^{\beta}}\leq\delta_{N}/t\) for \(c\in\mathcal{B}_{N}(t)\), both quantities are bounded by \(\delta_{N}^{2}\) once \(t\) is large enough, which proves the lemma.

To lower bound \(\Pi(\mathcal{B}_{N}(t))\), we will use small ball estimates in Hölder-Zygmund spaces. For \(s>0\), recall that the Hölder-Zygmund space \(C^{s}_{*}(\Omega_{0})\) coincides with the classical Hölder space \(C^{s}(\Omega_{0})\) when \(s\) is not an integer, while for integer \(s\) the seminorm on the highest-order derivatives is defined through second-order differences. In either case, it is easy to see that \(\|f\|_{C^{s}_{*}}\leq\|f\|_{C^{s}}\) for all \(f\in C^{s}(\overline{\Omega}_{0})\). It turns out that \(C^{s}_{*}(\Omega_{0})\) coincides with the Besov space \(B^{s}_{\infty,\infty}(\Omega_{0})\), which allows us to use various embedding and approximation results from Besov space theory. Before proceeding, let us fix \(\nu>0\) such that \[\nu>\max\left\{\frac{2m}{2(\alpha-\beta)-m},\frac{m}{\beta}\right\},\qquad \text{and define}\qquad\delta_{N}=N^{-1/(2+\nu)}. \tag{30}\] It is easy to verify that \(\delta_{N}\to 0\) and \(\sqrt{N}\delta_{N}=N^{\frac{\nu}{2(2+\nu)}}\to\infty\) as \(N\to\infty\).

**Lemma 3.5**.: _Let \(c_{0}\in\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\cap\mathcal{H}\), and define \(\delta_{N}\) as in (30). Then for \(t>0\) large enough, there exists \(C^{\prime}=C^{\prime}(\Omega,\Omega_{0},\bar{g},\alpha,\beta,\ell,M,c_{0},t)>0\) such that for all \(N\in\mathbb{N}\),_ \[\Pi(\mathcal{B}_{N}(t))\geq\exp\{-C^{\prime}N\delta_{N}^{2}\}.\] _In particular, there exists \(C=C(\Omega,\Omega_{0},\bar{g},\alpha,\beta,\ell,M,c_{0})>0\) such that for all \(N\in\mathbb{N}\),_ \[\Pi(B_{N})\geq\exp\{-CN\delta_{N}^{2}\}.\] Proof.: The sets \(\{b\in C_{0}^{3}(\Omega_{0}):\|b\|_{C^{\beta}}\leq\delta\}\) for \(\delta>0\) are convex and symmetric. Hence by [15, Corollary 2.6.18], \[\widetilde{\Pi}(\|c-c_{0}\|_{C^{\beta}}\leq\delta_{N}/t)\geq e^{-\|c_{0}\|_{ \mathcal{H}}^{2}/2}\widetilde{\Pi}(\|c\|_{C^{\beta}}\leq\delta_{N}/t).\] Moreover, since \(c_{0}\in\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\), which is open with respect to the \(C^{\beta}\) metric, we have for all sufficiently large \(t>0\), \[\Pi(\mathcal{B}_{N}(t))=\Pi(\|c-c_{0}\|_{C^{\beta}}\leq\delta_{N}/t)=\frac{ \widetilde{\Pi}(\|c-c_{0}\|_{C^{\beta}}\leq\delta_{N}/t)}{\widetilde{\Pi}( \mathcal{C}^{\beta}_{\ell,M}(\Omega_{0}))},\] and therefore, \[\Pi(\mathcal{B}_{N}(t))\geq e^{-\|c_{0}\|_{\mathcal{H}}^{2}/2}\frac{ \widetilde{\Pi}(\|c\|_{C^{\beta}}\leq\delta_{N}/t)}{\widetilde{\Pi}(\mathcal{ C}^{\beta}_{\ell,M}(\Omega_{0}))}. \tag{31}\] Next, choose a real number \(\gamma\) such that \[\beta<\gamma<\alpha-\frac{m}{2},\qquad\nu>\frac{2m}{2(\alpha-\gamma)-m}. \tag{32}\] Alternatively, if \(\beta\) is not an integer, we can simply set \(\gamma=\beta\). In either case, we have \(\|f\|_{C^{\beta}}\leq\|f\|_{C^{\gamma}_{*}}\) for all \(f\in C^{\gamma}_{*}(\Omega_{0})\).
Now recall our assumption that the RKHS \(\mathcal{H}\) of \(\widetilde{\Pi}\) is continuously embedded into \(H^{\alpha}(\Omega_{0})\). We know from [13, Theorem 3.1.2] that the unit ball \(U\) of \(H^{\alpha}(\Omega_{0})\) (and hence, up to a constant, that of \(\mathcal{H}\)) satisfies \[\log\mathcal{N}(U,\|\cdot\|_{C^{\gamma}_{*}},\varepsilon)\leq\left(\frac{A}{ \varepsilon}\right)^{\frac{m}{(\alpha-\gamma)}}\] for some fixed \(A>0\) and all \(\varepsilon>0\) small enough. Therefore, by [18, Theorem 1.2], there exists \(D>0\) such that for all \(\varepsilon>0\) small enough, \[\widetilde{\Pi}(\|c\|_{C^{\beta}}\leq\varepsilon)\geq\widetilde{\Pi}(\|c\|_{C ^{\gamma}_{*}}\leq\varepsilon)\geq\exp\left\{-D\varepsilon^{-\frac{2m}{2( \alpha-\gamma)-m}}\right\}.\] Consequently, (31) implies that for \(t>0\) large enough, \[\Pi(\mathcal{B}_{N}(t)) \geq\frac{1}{\widetilde{\Pi}(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{ 0}))}\exp\left\{-\frac{\|c_{0}\|_{\mathcal{H}}^{2}}{2}-Dt^{\frac{2m}{2(\alpha- \gamma)-m}}\delta_{N}^{-\frac{2m}{2(\alpha-\gamma)-m}}\right\}\] \[>\frac{1}{\widetilde{\Pi}(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{ 0}))}\exp\left\{-\frac{\|c_{0}\|_{\mathcal{H}}^{2}}{2}-Dt^{\frac{2m}{2(\alpha- \gamma)-m}}\delta_{N}^{-\nu}\right\}\qquad\text{(by (32))}\] \[=\frac{1}{\widetilde{\Pi}(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{ 0}))}\exp\left\{-\frac{\|c_{0}\|_{\mathcal{H}}^{2}}{2}-Dt^{\frac{2m}{2(\alpha- \gamma)-m}}N\delta_{N}^{2}\right\}\qquad\text{(by (30))}\] \[\geq\exp\{-C^{\prime}N\delta_{N}^{2}\}\] for \(C^{\prime}=\log\left(1/\widetilde{\Pi}(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0}) )\right)+\frac{\|c_{0}\|_{\mathcal{H}}^{2}}{2}+Dt^{\frac{2m}{2(\alpha-\gamma)- m}}\), recalling that \(N\delta_{N}^{2}\geq 1\). It now follows from Lemma 3.4 that for \(t>0\) sufficiently large, there exists \(C>0\) such that \(\Pi(B_{N})\geq\exp\{-CN\delta_{N}^{2}\}\). This completes the proof.

Thus, we have verified Condition (1) of Theorem 3.2. The next Lemma verifies Condition (2).

**Lemma 3.6**.: _There exists \(\widetilde{C}=\widetilde{C}(\Omega,\Omega_{0},\bar{g},\beta,\ell,M)>0\) such that_ \[\log\mathcal{N}(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0}),h,\delta_{N})\leq \widetilde{C}N\delta_{N}^{2}.\] Proof.: In order to construct a covering of \(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\), it suffices to construct such a covering of the \(C^{\beta}_{*}(\Omega_{0})\)-ball of radius \(M\) centered at \(0\). Therefore, if \(U_{\beta}\) denotes the unit ball of \(C^{\beta}_{*}(\Omega_{0})\), \[\log\mathcal{N}(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0}),\|\cdot\|_{L^{2}}, \delta_{N})\leq\log\mathcal{N}(MU_{\beta},\|\cdot\|_{L^{2}},\delta_{N}).\] Now applying [13, Theorem 3.1.2] to the inclusion \(C^{\beta}_{*}(\Omega_{0})\hookrightarrow L^{2}(\Omega_{0})\), we have \[\log\mathcal{N}(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0}),\|\cdot\|_{L^{2}}, \delta_{N})\leq\left(\frac{A^{\prime}}{\delta_{N}}\right)^{\frac{m}{\beta}}\] for some \(A^{\prime}>0\). Since \(\nu>m/\beta\), we get \[\log\mathcal{N}(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0}),\|\cdot\|_{L^{2}}, \delta_{N})\leq b\delta_{N}^{-\nu}=bN\delta_{N}^{2},\] where \(b>0\). Now, Lemma 3.3 and Corollary 2.7 imply that an \(L^{2}\) ball of radius \(\delta_{N}\) centered at any \(c\in\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0})\) is contained in the Hellinger ball of radius \(\frac{C^{\prime}_{2}}{2\operatorname{Vol}_{\bar{g}}(\partial\Omega)}\delta_{N}\) centered at the same point.
Therefore, by suitably rescaling the constant \(b\) to \(\widetilde{C}(\Omega,\Omega_{0},\bar{g},\beta,\ell,M)>0\), we get the desired complexity bound \[\log\mathcal{N}(\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0}),h,\delta_{N})\leq \widetilde{C}N\delta_{N}^{2}.\]

### Posterior Convergence

In this section, we will combine the results of Sections 3.1 and 3.2 to prove Theorem 3.1.

**Theorem 3.7**.: _Let \(\Pi,\alpha,\beta,M,c_{0}\) be as in Theorem 3.1, \(\nu,\delta_{N}\) as in (30), and \(C>0\) as in Lemma 3.5. Then for \(k^{\prime}>0\) large enough, we have_ \[P_{c_{0}}^{N}\left(\Pi(\{c\in\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0}):\|Z_{c}-Z_ {c_{0}}\|_{L^{2}}\leq k^{\prime}\delta_{N}\}|\mathcal{D}_{N})\geq 1-e^{-(C+3)N \delta_{N}^{2}}\right)\to 1 \tag{33}\] _as \(N\to\infty\). Moreover, for all \(k^{\prime\prime}>0\) large enough,_ \[P_{c_{0}}^{N}\left(\Pi(\{c\in\mathcal{C}^{\beta}_{\ell,M}(\Omega_{0}):\|c-c_{0} \|_{L^{2}}\geq k^{\prime\prime}\delta_{N}^{1/2}\}|\mathcal{D}_{N})\geq e^{-(C+3 )N\delta_{N}^{2}}\right)\to 0 \tag{34}\] _as \(N\to\infty\)._

Proof.: Combining Lemmas 3.5 and 3.6 with Theorem 3.2, we get (33) for all sufficiently large \(k^{\prime}>0\). To get (34), consider the event \[E_{N}=\{c\in\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0}):\|Z_{c}-Z_{c_{0}}\|_{L^{2} }\leq k^{\prime}\delta_{N}\}.\] By Corollary 2.3, for any \(c\in E_{N}\), \[\|c-c_{0}\|_{L^{2}} \leq C_{1}^{\prime}\|Z_{c}-Z_{c_{0}}\|_{H^{1}}\leq C_{1}^{\prime}\|Z_{c}-Z_{c_{0}}\|_{L^{2}}^{1/2}\|Z_{c}-Z_{c_{0}} \|_{H^{2}}^{1/2}\] by the standard interpolation result for Sobolev spaces. Therefore, by Theorem 2.8, \[\|c-c_{0}\|_{L^{2}}\leq C_{1}^{\prime}(C_{3}^{\prime})^{1/2}(k^{\prime}\delta_ {N})^{1/2}.\] Taking \(k^{\prime\prime}>C_{1}^{\prime}(k^{\prime}C_{3}^{\prime})^{1/2}\), we conclude that \[\|c-c_{0}\|_{L^{2}}\leq k^{\prime\prime}\delta_{N}^{1/2}.\] Combining this with (33) gives us (34).

The final step in the proof of Theorem 3.1 is to prove that the posterior contraction rate in the above Theorem carries over to the posterior mean \(\overline{c}_{N}=\mathbb{E}^{\Pi}[c|\mathcal{D}_{N}]\) as well. Let \[0<\omega<\frac{1}{2(2+\nu)}.\] We note that \(\omega\) can be made arbitrarily close to \(1/4\) by choosing \(\alpha,\beta\) appropriately. Indeed, if \(\alpha\) and \(\beta\) are sufficiently large, (30) allows \(\nu\) to be arbitrarily close to \(0\). Correspondingly, \(\omega\) can be made arbitrarily close to \(1/4\). Next, define \[\omega_{N}:=k^{\prime\prime}\delta_{N}^{1/2}=k^{\prime\prime}N^{-\frac{1}{2( 2+\nu)}}=o(N^{-\omega})\] where \(k^{\prime\prime}>0\) is as in Theorem 3.7.

Proof of Theorem 3.1.: Observe that \[\|\overline{c}_{N}-c_{0}\|_{L^{2}} = \left\|\mathbb{E}^{\Pi}[c|\mathcal{D}_{N}]-c_{0}\right\|_{L^{2}}\] \[\leq \mathbb{E}^{\Pi}\left[\|c-c_{0}\|_{L^{2}}|\mathcal{D}_{N}\right] \quad\text{(by Jensen's inequality)}\] \[\leq \omega_{N}+\mathbb{E}^{\Pi}\left[\|c-c_{0}\|_{L^{2}}\mathds{1}_{ \{\|c-c_{0}\|_{L^{2}}\geq\omega_{N}\}}\big|\mathcal{D}_{N}\right]\] \[\leq \omega_{N}+\mathbb{E}^{\Pi}\left[\|c-c_{0}\|_{L^{2}}^{2}|\mathcal{ D}_{N}\right]^{1/2}\left[\Pi(\|c-c_{0}\|_{L^{2}}\geq\omega_{N}|\mathcal{D}_{N}) \right]^{1/2}\] by the Cauchy-Schwarz inequality. Now it suffices to show that the second summand on the right hand side is stochastically \(O(\omega_{N})\) as \(N\to\infty\).
Arguing as in the proof of Theorem 3.2 and applying Lemma 3.5, we get that the events \[A_{N}^{\prime}=\left\{\int_{\mathcal{C}_{\ell,M}^{\beta}(\Omega_{0})}\prod_{i =1}^{N}\frac{p_{c}}{p_{c_{0}}}(X_{i},Y_{i},Z_{i})d\Pi(c)\geq e^{-(2+C)N\delta _{N}^{2}}\right\}\] satisfy \(P_{c_{0}}^{N}(A_{N}^{\prime})\to 1\) as \(N\to\infty\). Here, \(C\) is as in Lemma 3.5. Now, Theorem 3.7 implies \[P_{c_{0}}^{N}\left(\mathbb{E}^{\Pi}\left[\|c-c_{0}\|_{L^{2}}^{2}| \mathcal{D}_{N}\right]\times\Pi(\|c-c_{0}\|_{L^{2}}\geq\omega_{N}|\mathcal{D}_ {N})>\omega_{N}^{2}\right)\] \[\leq P_{c_{0}}^{N}\left(\mathbb{E}^{\Pi}\left[\|c-c_{0}\|_{L^{2}}^ {2}|\mathcal{D}_{N}\right]e^{-(C+3)N\delta_{N}^{2}}>\omega_{N}^{2}\right)+o(1),\] which is bounded above by \[P_{c_{0}}^{N}\left(e^{-(C+3)N\delta_{N}^{2}}\mathbb{E}^{\Pi}\left[ \|c-c_{0}\|_{L^{2}}^{2}|\mathcal{D}_{N}\right]>\omega_{N}^{2},A_{N}^{\prime} \right)+o(1)\] \[=P_{c_{0}}^{N}\left(e^{-(C+3)N\delta_{N}^{2}}\frac{\int\|c-c_{0} \|_{L^{2}}^{2}\prod_{i=1}^{N}\frac{p_{c}}{p_{c_{0}}}(X_{i},Y_{i},Z_{i})d\Pi(c) }{\int\prod_{i=1}^{N}\frac{p_{c}}{p_{c_{0}}}(X_{i},Y_{i},Z_{i})d\Pi(c)}>\omega_ {N}^{2},A_{N}^{\prime}\right)+o(1) \tag{35}\] \[\leq P_{c_{0}}^{N}\left(\int\|c-c_{0}\|_{L^{2}}^{2}\prod_{i=1}^{N }\frac{p_{c}}{p_{c_{0}}}(X_{i},Y_{i},Z_{i})d\Pi(c)>\omega_{N}^{2}e^{N\delta_{N }^{2}}\right)+o(1)\] using the fact that \(\int\prod_{i=1}^{N}\frac{p_{c}}{p_{c_{0}}}(X_{i},Y_{i},Z_{i})d\Pi(c)\geq e^{-( C+2)N\delta_{N}^{2}}\) on \(A_{N}^{\prime}\). Next, using Markov's inequality, (35) can be further bounded above by \[\leq e^{-N\delta_{N}^{2}}\omega_{N}^{-2}\mathbb{E}_{c_{0}}^{N} \left[\int\|c-c_{0}\|_{L^{2}}^{2}\prod_{i=1}^{N}\frac{p_{c}}{p_{c_{0}}}(X_{i}, Y_{i},Z_{i})d\Pi(c)\right]+o(1)\] \[=e^{-N\delta_{N}^{2}}\omega_{N}^{-2}\int\|c-c_{0}\|_{L^{2}}^{2} \mathbb{E}_{c_{0}}^{N}\left[\prod_{i=1}^{N}\frac{p_{c}}{p_{c_{0}}}(X_{i},Y_{i},Z _{i})\right]d\Pi(c)+o(1)\quad\text{(by Fubini's Theorem)}\] \[\leq e^{-N\delta_{N}^{2}}\omega_{N}^{-2}\int\|c-c_{0}\|_{L^{2}}^{2 }d\Pi(c)+o(1)\quad\text{(since $\mathbb{E}_{c_{0}}^{N}\left[\prod_{i=1}^{N}\frac{p_{c}}{p_{c_{0}}} \right]=1$)}\] \[\lesssim e^{-N\delta_{N}^{2}}\omega_{N}^{-2}+o(1)=O\!\left(e^{-N \delta_{N}^{2}}N^{\frac{1}{2+\nu}}\right)+o(1)\to 0\text{ as }N\to\infty,\] since \(e^{-N\delta_{N}^{2}}=e^{-N^{\nu/(2+\nu)}}\) decays faster than any power of \(N\). This completes the proof.
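The mechanism of this last step (transferring a contraction rate from the posterior to its mean via Jensen and Cauchy-Schwarz) is visible already in the simplest conjugate setting. The following sketch uses a toy model of our choosing, \(\theta\sim N(0,1)\) and \(X_{i}\,|\,\theta\sim N(\theta,1)\), whose posterior mean is \(\sum_{i}X_{i}/(N+1)\); it exhibits the parametric rate \(N^{-1/2}\) and is only an analogy for, not a verification of, the nonparametric rate \(\delta_{N}^{1/2}\) above.

```python
# Toy conjugate model: theta ~ N(0,1), X_i | theta ~ N(theta, 1).  The posterior
# mean is sum(X_i)/(N+1); its error at the truth decays like N^{-1/2}.
# Illustration only; theta0 and the seed are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
theta0 = 0.7
for N in [10**2, 10**4, 10**6]:
    X = rng.normal(theta0, 1.0, size=N)
    post_mean = X.sum() / (N + 1)
    print(f"N={N:>7d}: |post_mean - theta0| = {abs(post_mean - theta0):.5f},"
          f"  N^(-1/2) = {N ** -0.5:.5f}")
```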
2309.14135
Structure of hyperbolic polynomial automorphisms of C^2 with disconnected Julia sets
For a hyperbolic polynomial automorphism of C^2 with a disconnected Julia set, and under a mild dissipativity condition, we give a topological description of the components of the Julia set. Namely, there are finitely many "quasi-solenoids" that govern the asymptotic behavior of the orbits of all non-trivial components. This can be viewed as a refined Spectral Decomposition for a hyperbolic map, as well as a two-dimensional version of the (generalized) Branner-Hubbard theory in one-dimensional polynomial dynamics. An important geometric ingredient of the theory is a John-like property of the Julia set in the unstable leaves.
Romain Dujardin, Mikhail Lyubich
2023-09-25T13:45:23Z
http://arxiv.org/abs/2309.14135v1
# Structure of hyperbolic polynomial automorphisms of \(\mathbb{C}^{2}\) with disconnected Julia sets

###### Abstract.

For a hyperbolic polynomial automorphism of \(\mathbb{C}^{2}\) with a disconnected Julia set, and under a mild dissipativity condition, we give a topological description of the components of the Julia set. Namely, there are finitely many "quasi-solenoids" that govern the asymptotic behavior of the orbits of all non-trivial components. This can be viewed as a refined Spectral Decomposition for a hyperbolic map, as well as a two-dimensional version of the (generalized) Branner-Hubbard theory in one-dimensional polynomial dynamics. An important geometric ingredient of the theory is a John-like property of the Julia set in the unstable leaves.

###### Contents

* 1 Introduction
* 2 Preliminaries and notation
* 3 External rays
* 4 Stable total disconnectedness
* 5 Classification of semi-local components of \(K^{+}\) and \(J^{+}\)
* 6 Components of \(J\) and \(K\)
* 7 Complements
* 8 Non-divergence of holonomy and applications
* A The core of a quasi-solenoid
* B Continuity of affine structure

## 1. Introduction

### Preamble on hyperbolic dynamics

The classical _Spectral Decomposition_ of a hyperbolic (Axiom A) real diffeomorphism \(f\) of a compact manifold (developed by Smale, Anosov, Sinai, Bowen, and others) provides us with a rather complete topological picture of its dynamics. Namely, the non-wandering set \(\Omega(f)\) is decomposed into finitely many _basic sets_, each modeled on an irreducible Markov chain. Among these basic sets there are several _attractors_ that govern the asymptotic behavior of generic points of the manifold. This picture has become a prototype for numerous other settings, including one-dimensional, non-invertible, holomorphic, partially or non-uniformly hyperbolic dynamical systems.

In the context of complex polynomial automorphisms of \(\mathbb{C}^{2}\), hyperbolic maps arise naturally as perturbations of one-dimensional hyperbolic polynomials. They were first studied in the late 1980s by Hubbard and Oberste-Vorth [24, 25] who showed that their topological structure can be fully described in terms of the original one-dimensional maps, whose Julia set and attracting cycles get perturbed to the basic sets of \(f\) (see also Fornaess-Sibony [19]). Computer experiments indicate that, though hyperbolicity is not a prevalent phenomenon in dimension two, there should still exist plenty of non-perturbative examples. The first such candidate (a quadratic Henon map with two co-existing attracting cycles) was proposed by Hubbard; it was further investigated by Oliva in his thesis [38]. However, it is a challenging problem, which requires computer assistance, to prove the hyperbolicity of a particular example, and this one still remains unconfirmed. Some time later, Ishii justified the hyperbolicity of several other non-perturbative Henon maps: see [26, 27, 28] (of course, along with each such example comes an open set of hyperbolic parameters). A systematic theory of hyperbolic polynomial automorphisms of \(\mathbb{C}^{2}\) was launched by Bedford and Smillie in the early 1990s, relying notably on methods from Pluripotential Theory. In particular, they showed in [3] that any such map has only one non-trivial basic set, its Julia set \(J(f)\), while all others are just attracting cycles. Further combinatorial study of hyperbolic Henon maps was carried out by Ishii and Smillie [29].
In this paper we will reveal a finer structure of the Julia set, related to its connected components, that leads to a finer "spectral decomposition". Namely, under mild dissipativity assumptions, we will show that there are finitely many _quasi-solenoids_ that govern the asymptotic behavior of all non-trivial components. Some of these quasi-solenoids are _tame_ (i.e. lie on the boundary of the basins of some attracting cycles), while others might be _queer_ (we do not know whether they actually exist). Let us conclude this preamble by suggesting a potentially important role that hyperbolic maps may play in the Henon story. They are not only interesting simple models for the general non-uniformly hyperbolic situation, but they may also be seen as "germs" for a Renormalization Theory which would lead to self-similarity features of the parameter spaces. In this respect, renormalizing hyperbolic Henon maps around quasi-solenoids would be the beginning of this story.

### One-dimensional prototype

Understanding the topological structure of the Julia set is one of the most basic problems in holomorphic dynamics. For polynomials in one variable, Fatou and Julia proved that the connectivity properties of the Julia set are dictated by the dynamical behavior of critical points. When the critical points do not escape, the Julia set \(J\) is connected; on the contrary, if all critical points do escape, \(J\) is a Cantor set. If \(J\) is connected and locally connected, the theory of external rays of Douady and Hubbard [13] and the theory of geodesic laminations of Thurston [44] give a topological model for the Julia set as the quotient of the circle by an equivalence relation which records the landing pattern of external rays. When the Julia set of a polynomial is disconnected, it admits uncountably many components, and one challenge is to characterize when a component is non-trivial (i.e. not a point) in terms of the induced dynamics on the set of components. It turns out that this happens if and only if this component is preperiodic to a component containing a critical point: this is due to Branner and Hubbard [9] for cubic polynomials, and Qiu and Yin [41] in the general case (based upon the Kahn-Lyubich machinery [30, 31]). Then one may describe non-trivial periodic components by realizing them as Julia sets of connected polynomial-like maps and using the Douady and Hubbard Straightening Theorem [14]. In the hyperbolic case, the above theory is much easier and has long belonged to the folklore of the field:

**Theorem 1.1**.: _Let \(p\) be a hyperbolic polynomial in \(\mathbb{C}\), with a disconnected Julia set. Then the filled Julia set \(K\) has uncountably many components, and only countably many of them are non-trivial. Any non-trivial component is preperiodic, and there are finitely many periodic components, each of which contains an attracting periodic point._

Note that this is really a statement about polynomials: there are examples of hyperbolic rational maps on \(\mathbb{P}^{1}\) whose Julia sets are Cantor sets of circles [37].

### Main result

In this article we address similar issues in the setting of polynomial automorphisms of \(\mathbb{C}^{2}\). Let \(f\) be a polynomial automorphism of \(\mathbb{C}^{2}\) with non-trivial dynamics: by this we mean for instance that the algebraic degree of the iterates \(f^{n}\) tends to infinity (see below §2.1 for more details on this).
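For a single quadratic Henon-type map this degree growth is easy to check symbolically; the following is a minimal sketch with arbitrary rational parameters of our choosing, not a verified example from the literature.

```python
# Symbolic check that deg(f^n) = d^n for f(z, w) = (z^2 + c + a*w, a*z), d = 2.
# The parameters c, a are arbitrary illustrative rationals.
import sympy as sp

z, w = sp.symbols('z w')
c, a = sp.Rational(-1), sp.Rational(1, 4)

zz, ww = z, w
for n in range(1, 5):
    zz, ww = zz**2 + c + a * ww, a * zz        # one more iterate of f
    deg = sp.total_degree(sp.expand(zz))       # degree of the first coordinate
    print(f"deg(f^{n}) = {deg},  d^n = {2 ** n}")
```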
Its Julia set \(J=J_{f}\) is the set of points at which neither \((f^{n})_{n\geq 0}\) nor \((f^{-n})_{n\geq 0}\) is locally normal. We also classically denote by \(K^{+}\) (resp. \(K^{-}\)), the set of points with bounded forward (resp. backward) orbits, \(K=K^{+}\cap K^{-}\) and \(J^{\pm}=\partial K^{\pm}\), so that \(J=J^{+}\cap J^{-}\). The complex Jacobian \(\operatorname{Jac}f\) is a non-zero constant. Thus, replacing \(f\) by \(f^{-1}\) if necessary, without loss of generality we assume from now on that \(|\operatorname{Jac}f|\leq 1\). In this context, the connected vs. disconnected dichotomy for the Julia set was studied by Bedford and Smillie [6], who proved that the connectedness of \(J\), or equivalently of \(K\), is equivalent to the non-existence of "unstable critical points", which are defined as tangencies between certain dynamically defined foliations. (Recall that \(f\) has no critical point in the usual sense, but these unstable critical points play the same role as escaping critical points in dimension one.) Bedford and Smillie also showed that when \(J\) is connected, there is a well-defined family of external rays along unstable manifolds, parameterized by a "solenoid at infinity", which is the inverse limit of the dynamical system defined by \(z\mapsto z^{d}\) on the unit circle. To proceed further and try to extend the Douady-Hubbard description of the Julia set in terms of the combinatorics of external rays, given our current state of knowledge, we need to assume that \(f\) is uniformly hyperbolic. Recall from [3] that \(f\) is said to be _hyperbolic_ if \(J\) is a hyperbolic set, which must then be of saddle type. In this case, \(f\) satisfies Smale's Axiom A in \(\mathbb{C}^{2}\), and the Fatou set is the union of finitely many basins of attraction. (See [27] for an introductory account of this topic, which also discusses some combinatorial/topological models for Julia sets.) By using the convergence of unstable external rays, it was shown in [7] that if \(f\) is hyperbolic and \(J\) is connected, then \(J\) can be described as a finite quotient of the solenoid at infinity. A non-trivial consequence of the results of [5], [6] and [7] is that in this case \(f\) cannot be conservative, that is, \(|\mathrm{Jac}\,f|<1\) (see [7, Cor. A.3]; recall that we assume \(|\mathrm{Jac}\,f|\leq 1\) here). An alternate argument for this fact was given by the first-named author in [15], where it is shown that a hyperbolic automorphism \(f\) with connected Julia set must possess an attracting periodic point, so in particular \(|\mathrm{Jac}\,f|<1\). Surprisingly enough, the existence of an attracting point does not seem to follow easily from the description of \(J\) as a quotient of the solenoid. In this article we focus on the disconnected case. A motivating question is the following conjecture from [15].

**Conjecture 1.2**.: _Let \(f\) be a dissipative and hyperbolic automorphism of \(\mathbb{C}^{2}\), without attracting points. Then \(J\) is a Cantor set._

Our main result is an essentially complete generalization of Theorem 1.1 in two dimensions, under a mild dissipativity assumption.

**Main Theorem**.: _Let \(f\) be a hyperbolic polynomial automorphism of \(\mathbb{C}^{2}\), with a disconnected Julia set, and such that \(|\mathrm{Jac}\,f|<1/\deg f\). Then there are uncountably many components of \(J\), which can be of three (mutually exclusive) types:_ 1. _point;_ 2. _leafwise bounded;_ or 3. _quasi-solenoid._
_Quasi-solenoidal components are periodic and there are only finitely many of them. Any component of type (2) is wandering and converges to a quasi-solenoidal one under forward iteration. The components of \(K\) are classified accordingly._

_Under an additional assumption (NDH) on the behavior of stable holonomy between components, any quasi-solenoidal component of \(K\) contains an attracting periodic point._

Here \(\deg f\) refers to the dynamical degree of \(f\), which is the growth rate of the algebraic degree under iteration (see §2.1). By definition, a component of \(J\) is _leafwise bounded_ if it is a relatively bounded subset of some unstable manifold; this implies that its topology is that of a full plane continuum, properly embedded in \(\mathbb{C}^{2}\). A _quasi-solenoid_ is a connected component with local product structure, which is totally disconnected in the stable direction and locally connected and leafwise unbounded in the unstable direction (see Definition 6.2). Components of type (2) are analogous to strictly preperiodic components in dimension \(1\); note however that by the local product structure of \(J\) there are uncountably many of them. Countability is restored by saturating with semi-local stable manifolds (see Theorem 5.20). The meaning of the (NDH) assumption will be explained below.

### Outline

Let us discuss some of the main ideas of the proof, which occupies most of the paper. First, the assumption on the Jacobian is used to guarantee that _the slices of \(J\) (resp. \(K\)) by stable manifolds are totally disconnected_. It is reminiscent of the stronger _substantial dissipativity_ assumption \(|\mathrm{Jac}\,f|<1/(\deg f)^{2}\) used in [17, 34, 35]. We could indeed use substantial dissipativity and Wiman's Theorem in the style of these papers to achieve stable total disconnectivity. However, hyperbolicity allows for a Hausdorff dimension calculation which gives a better bound on the Jacobian (see Section 4). The key step towards the finiteness property in the main theorem is an analysis of the geometry of the unstable slices of \(J\) and \(K\). Using external rays, we first show in Section 3 that the complement of \(K\) along unstable manifolds satisfies a weak version of the _John property_. This property implies that the components of \(K\cap W^{u}\) are locally connected, and that locally there are only finitely many components of diameter bounded from below. This finiteness is used to get a classification of _semi-local components_ of \(J^{+}\) and \(K^{+}\). By this we mean that we fix a large bidisk \(\mathbb{B}\) (in adapted coordinates) in which \(J^{+}\) and \(K^{+}\) are vertical-like objects, and we look at components of \(J^{+}\cap\mathbb{B}\) (resp. \(K^{+}\cap\mathbb{B}\)). We prove that _these semi-local components behave like components of \(J\) (resp. \(K\)) for one-dimensional polynomials_: only countably many of them are non-trivial, that is, not reduced to vertical submanifolds, and any non-trivial such component is preperiodic. Besides the finiteness induced by the John-like property, this relies on a key _homogeneity property_ of such a semi-local component: either all its unstable slices are "thin", or all of them are "thick". To prove this _thin-thick dichotomy_ we show that if a semi-local component admits a thin unstable slice, then by a careful choice of \(\mathbb{B}\) we can arrange that the stable foliation of this semi-local component is transverse to \(\partial\mathbb{B}\).
It follows that this component has a global product structure in \(\mathbb{B}\) (see Section 5 for details). If \(C\) is a non-trivial component of \(J\), it is easy to see that the \(\omega\)-limit set of \(C\) must be contained in one of the finitely many thick semi-local components of \(J^{+}\). We show that it must have local product structure, hence be a quasi-solenoidal component of \(J\). The main step is the following: for large \(m\neq n\), by the expansion in the unstable direction, the unstable slices of \(f^{n}(C)\) and \(f^{m}(C)\) have a diameter bounded from below, so if \(x_{n}\in f^{n}(C)\) is close to \(x_{m}\in f^{m}(C)\), by the finiteness given by the John-like property, \(f^{n}(C)\) and \(f^{m}(C)\) must correspond to each other under local stable holonomy. Furthermore, such a quasi-solenoidal component must coincide with the limit set of its semi-local component in \(J^{+}\), and the finiteness of the number of attractors follows (see Section 6). To get a complete generalization of the one-dimensional situation, it remains to show that such a quasi-solenoidal component must "enclose" some attracting periodic point. Unfortunately, all our attempts towards this result stumbled over the following issue: if \(x,y\in J\) are such that \(y\in W^{s}(x)\), the stable holonomy induces a local homeomorphism \(J\cap W^{u}_{\rm loc}(x)\to J\cap W^{u}_{\rm loc}(y)\). The point is that it might not be the case in general that this local homeomorphism can be continued along paths in \(J\cap W^{u}(x)\), even when \(J\cap W^{u}(x)\) is a relatively bounded subset of \(W^{u}(x)\). (Compare with the Reeb phenomenon for foliations, illustrated in Figure 1.) This is a well-known difficulty in hyperbolic dynamics, which was encountered for instance in the classification of Anosov diffeomorphisms (see §8.1 for a short discussion). If this continuation property holds (this is the Non-Divergence of Holonomy (NDH) property referred to in the main theorem), then we can indeed conclude that non-trivial periodic components of \(K\) contain attracting orbits (see Section 8, in particular Theorem 8.4). This yields in particular a conditional proof of Conjecture 1.2. Let us also note that a simple instance where the NDH property holds is when the stable lamination of \(J^{+}\) is transverse to \(\partial\mathbb{B}\) (for some choice of \(\mathbb{B}\)), a property which can be checked in practice on specific examples. In the course of the paper, we also establish a number of complementary facts, which do not enter into the proof of the main theorem: the existence of an external ray landing at every point of \(J\) (see Theorem 3.4); the structure of attracting basins (see §7.2); a simple topological model for the dynamics on Julia components (see §7.3); the topological transitivity of quasi-solenoids (see Theorem 8.7). In Appendix A we sketch the construction of the _core_ of a quasi-solenoidal component, which aims at describing its topological structure.

### Notes and acknowledgments

Some of these results were already announced at the conference "Analytic Low-Dimensional Dynamics" in Toronto in June 2019. We are grateful to Pierre Berger for pointing out Proposition 4.3 to us. The second-named author was partially supported by an NSF grant, Hagler and Clay Fellowships. Part of this work was carried out during his visits to the Hagler Institute for Advanced Study at Texas A&M, the Center of Theoretical Studies at ETH Zurich, and MSRI at Berkeley.
We thank these institutions for their generous support.

## 2. Preliminaries and notation

### Vocabulary of complex Henon maps

If \(\mathbb{B}=D\times D\) is a bidisk, we denote by \(\partial^{v}\mathbb{B}=\partial D\times D\) (resp. \(\partial^{h}\mathbb{B}=D\times\partial D\)) the vertical (resp. horizontal) boundary. An object in \(\mathbb{B}\) is horizontal if it intersects \(\partial\mathbb{B}\) only in \(\partial^{v}\mathbb{B}\), and likewise for vertical objects. A closed horizontal submanifold is a branched cover of finite degree under the first projection. Let us collect some standard facts and notation (see [21, 3, 2, 19]). If \(f\) is a polynomial diffeomorphism of \(\mathbb{C}^{2}\) with non-trivial dynamics, then by making a polynomial change of coordinates we may assume that \(f\) is a composition of complex Henon mappings \((z,w)\mapsto(p_{i}(z)+a_{i}w,a_{i}z)\). In particular \(\deg(f^{n})=(\deg f)^{n}\) for every \(n\geq 0\). We fix such coordinates from now on. As it is customary in this area of research, we will often abuse terminology and simply refer to \(f\) as a _complex Henon map_. The degree of \(f\) is \(d=\prod\deg(p_{i})\geq 2\) and the relation \(\deg(f^{n})=d^{n}\) holds so that \(d\) coincides with the so-called _dynamical degree_ of \(f\). In these adapted coordinates, there exists \(R>0\) such that for the bidisk \(\mathbb{B}:=D(0,R)^{2}\), we have that \(f(\mathbb{B})\cap\mathbb{B}\) (resp. \(f^{-1}(\mathbb{B})\cap\mathbb{B}\)) is horizontally (resp. vertically) contained in \(\mathbb{B}\) and the points of \(\partial^{v}(\mathbb{B})\) (resp. \(\partial^{h}(\mathbb{B})\)) escape under forward (resp. backward) iteration.

* \(K^{\pm}\) is the set of points with bounded forward orbits under \(f^{\pm 1}\) and \(K=K^{+}\cap K^{-}\). Note that \(K^{+}\) is vertical in \(\mathbb{B}\) and \(f(\mathbb{B}\cap K^{+})\subset K^{+}\). Similarly, \(K^{-}\) is horizontal and \(f^{-1}(\mathbb{B}\cap K^{-})\subset K^{-}\).
* \(J^{\pm}=\partial K^{\pm}\) are the forward and backward Julia sets. If \(f\) is dissipative then \(K^{-}=J^{-}\).
* \(J=J^{+}\cap J^{-}\) is the Julia set.

Following [6], we say that \(f\) is _unstably disconnected_ if for some (and hence any) saddle periodic point \(p\), \(W^{u}(p)\cap K^{+}\) admits a compact component (relative to the topology induced by the biholomorphism \(W^{u}(p)\simeq\mathbb{C}\)), and unstably connected otherwise. If \(f\) is unstably disconnected, then it admits an _unstable transversal_ \(\Delta^{u}\), that is a relatively compact domain in \(W^{u}(p)\) which is a horizontal submanifold in \(\mathbb{B}\): indeed pick a bounded Jordan domain \(U\subset W^{u}(p)\) containing a compact component of \(W^{u}(p)\cap K^{+}\) such that \(\partial U\cap K^{+}=\emptyset\) and iterate it forward.

### Hyperbolicity and local product structure

Throughout the paper we assume that \(f\) is hyperbolic on \(J\) (hence Axiom A on \(\mathbb{C}^{2}\) by [3]), with hyperbolic splitting \(T\mathbb{C}^{2}|_{J}=E^{u}\oplus E^{s}\). Then there exists a continuous Riemannian metric \(|\cdot|\) on \(J\) and constants \(s<1<u\) such that for any \(x\in J\) and any \(v\in E^{u}(x)\backslash\left\{0\right\}\), \(|Df_{x}\cdot v|\geq u\left|v\right|\) (resp. for any \(v\in E^{s}(x)\), \(|Df_{x}\cdot v|\leq s\left|v\right|\)). By [16], it is enough to assume that \(f\) is hyperbolic on \(J^{\star}\), where \(J^{\star}\) is the closure of saddle periodic points (and a posteriori one deduces that \(J=J^{\star}\)).
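At a saddle fixed point, the constants \(u\) and \(s\) are just the moduli of the eigenvalues of \(Df\), and for a single Henon map they can be computed directly. A minimal numerical sketch follows, with arbitrary illustrative parameters; we make no claim that this particular map is hyperbolic.

```python
# Fixed points and multipliers of f(z, w) = (z^2 + c + a*w, a*z).  At a fixed
# point z = z^2 + c + a^2*z, and Df = [[2z, a], [a, 0]], so |det Df| = |a|^2.
# The parameters below are arbitrary illustrations, not a verified example.
import numpy as np

c, a = -1.05 + 0.1j, 0.15
roots = np.roots([1.0, a ** 2 - 1.0, c])       # z-coordinates of fixed points
for z in roots:
    Df = np.array([[2 * z, a], [a, 0.0]])
    moduli = sorted(float(m) for m in np.abs(np.linalg.eigvals(Df)))
    print(f"fixed point z = {complex(z):.4f}: |eigenvalues| = "
          f"({moduli[0]:.4f}, {moduli[1]:.4f}), product = "
          f"{moduli[0] * moduli[1]:.4f}  (|Jac f| = {a ** 2})")
```

A fixed point is of saddle type precisely when the two moduli straddle \(1\), in which case they play the roles of \(s\) and \(u\) above.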
In this situation the local stable and unstable manifolds of points of \(J\) have local uniform geometry: there exists a uniform \(r>0\) such that for every \(x\in J\), \(W^{u}(x)\) (resp. \(W^{s}(x)\)) _is of size \(r\) at \(x\)_, in the sense that it contains a graph of slope at most \(1\) over a disk of radius \(r\) in \(E^{u}(x)\) (resp. \(E^{s}(x)\)). The reader is referred to [8, 1] for a detailed study of this notion. We denote by \(W^{s/u}_{\delta}(x)\) the local stable/unstable manifold of radius \(\delta\) at \(x\), which is by definition the component of \(x\) in \(W^{s/u}(x)\cap B(x,\delta)\). When the precise size does not matter, we simply denote them by \(W^{s/u}_{\mathrm{loc}}\). Slightly reducing the expansion constant \(u\) if necessary, given two points \(z,z^{\prime}\) in some local unstable manifold \(W^{u}_{\delta}(x)\), there is a uniform constant \(C\) such that \(d(f^{-n}(z),f^{-n}(z^{\prime}))\leq Cu^{-n}\), for all \(n\geq 0\). There exists \(\delta>0\) and a neighborhood \(\mathcal{N}\) of \(J\) such that the restriction to \(\mathcal{N}\) of the family of local stable/unstable manifolds of radius \(\delta\) is a lamination, denoted by \(\mathcal{W}^{u/s}\). The Julia set has local product structure so there is a covering by topological bidisks \(Q\) (flow boxes) such that the laminations \(\mathcal{W}^{u/s}\) are trivial in \(Q\) and \[J\cap Q\simeq(W^{s}_{Q}(x)\cap J)\times(W^{u}_{Q}(x)\cap J)=(W^{s}_{Q}(x) \cap J^{-})\times(W^{u}_{Q}(x)\cap J^{+}).\] It is shown in [3] that the family of global stable and unstable manifolds of points of \(J\) also has a lamination structure, which will be denoted by \(\mathcal{W}^{s/u}\). More precisely, in the dissipative case, \(\mathcal{W}^{s}\) is a lamination of \(J^{+}\) by stable manifolds; on the other hand, \(\mathcal{W}^{u}\) is a lamination of \(J^{-}\backslash\left\{a_{1},\ldots,a_{N}\right\}\), where \(\left\{a_{1},\ldots,a_{N}\right\}\) is the finite set of attracting periodic points of \(f\). No unstable leaf extends across an attracting point, even as a singular analytic set: indeed an unstable leaf is biholomorphic to \(\mathbb{C}\), therefore such an extension would yield a compact (possibly singular) curve in \(\mathbb{C}^{2}\), namely a copy of \(\mathbb{P}^{1}\), which is impossible. Under additional dissipativity assumptions, it was shown in [35] that the stable lamination \(\mathcal{W}^{s}\) in \(\mathbb{B}\) can be extended to a \(C^{1}\) foliation in some neighborhood of \(J^{+}\): see Lemma 5.7 below. Let us conclude this paragraph with a useful elementary result.

**Lemma 2.1**.: _If \(f\) is hyperbolic, every holomorphic disk contained in \(K^{+}\) is either contained in the Fatou set or in the stable manifold of a point of \(J\)._

Proof.: Indeed, if \(\Delta\) is a disk contained in \(K^{+}\) then \(\Delta\) is a Fatou disk, i.e. \((f^{n}|_{\Delta})_{n\geq 0}\) is a normal family. Now there are two possibilities: either \(\Delta\) is contained in \(\mathrm{Int}(K^{+})\) hence in the Fatou set, or it intersects \(J^{+}\). In the latter case, either \(\Delta\) is contained in a stable leaf or by [2, Lem. 6.4], \(\Delta\) must have a transversal intersection with some unstable manifold, so by the Inclination Lemma it is not a Fatou disk, which is a contradiction.

### Affine structure

Global stable and unstable manifolds are uniformized by \(\mathbb{C}\), so they admit a natural affine structure. Since any automorphism of \(\mathbb{C}\) is affine, \(f\) acts affinely on leaves.
In particular there is a well defined notion of a round disk, which is \(f\)-invariant. Likewise, the Euclidean distance is well-defined in the leaves, up to a multiplicative constant. For any \(x\in J\) we choose a uniformization \(\psi_{x}^{u}:\mathbb{C}\stackrel{{\sim}}{{\longrightarrow}}W^{u }(x)\) such that \(\psi_{x}^{u}(0)=x\) and \(|(\psi_{x}^{u})^{\prime}(0)|=1\).

**Lemma 2.2**.: _The family of uniformizations \((\psi_{x}^{u})_{x\in J}\) is continuous up to rotations, that is, if \(x_{n}\to x\) then \((\psi_{x_{n}}^{u})\) is a normal family and its cluster values are of the form \(\psi_{x}^{u}(e^{i\theta}\cdot)\)._

Proof.: The result follows from the continuity of the affine structure on the unstable leaves (see Theorem B.1).

It is unclear whether the assignment \(J\ni x\mapsto\psi_{x}^{u}\) can be chosen to be continuous, that is, if a consistent choice of rotation factor \(e^{i\theta}\) can be made. This can be done locally but there might be topological obstructions to extend the continuity to \(J\). Notice that the \((\psi_{x}^{u})\) provide a normalization for the leafwise Euclidean distance. The normalized Euclidean distance on \(W^{u}(x)\) will be denoted by \(d_{x}^{u}\). If \(C\subset W^{u}(x)\), its diameter with respect to \(d_{x}^{u}\) will be denoted by \(\operatorname{Diam}_{x}\). By Lemma 2.2, \(d_{x}^{u}\) varies continuously with \(x\). For \(R>0\) we let \(D^{u}(x,R):=\psi_{x}^{u}(D(0,R))\). By construction, \(f\) is a uniformly expanding linear map in these affine coordinates, that is \(f\circ\psi_{x}^{u}=\psi_{f(x)}^{u}(\lambda_{x}^{u}\,\cdot\,)\), with \(|\lambda_{x}^{u}|=\left\|Df|_{E_{x}^{u}}\right\|\). By hyperbolicity there is a positive constant \(C\) such that for every \(x\in J\), \[\left|\prod_{i=0}^{n-1}\lambda_{f^{i}(x)}^{u}\right|\geqslant Cu^{n}, \tag{1}\] where \(u>1\) was defined in §2.2. By the Koebe Distortion Theorem there exists a uniform \(r>0\) such that the \(D^{u}(x,r)\) are contained in the flow boxes (see e.g. [8, Lemma 3.7]). By the local bounded geometry of the leaves, the distance induced by the affine structure on the \(D^{u}(x,r)\) is equivalent to that induced by the ambient Hermitian structure. Then, iterating finitely many times we can promote this result to the \(D^{u}(x,R)\) for every given \(R>0\). All the above discussion holds for stable manifolds, with superscripts \(u\) replaced by \(s\).

### Connected and semi-local components

For every \(x\in J\) (or more generally \(x\in K^{+}\cap\mathbb{B}\)) we denote by \(K_{\mathbb{B}}^{+}(x)\) the connected component of \(x\) in \(K^{+}\cap\mathbb{B}\), which is a vertical subset of \(\mathbb{B}\). It follows from the Henon-like property that \(f(K_{\mathbb{B}}^{+}(x))\subset K_{\mathbb{B}}^{+}(f(x))\), thus \(f\) induces a (non-invertible) dynamical system on the set of connected components of \(K^{+}\cap\mathbb{B}\). The same discussion applies to components of \(J^{+}\cap\mathbb{B}\). More generally, for any closed connected subset \(C\subset J\) (resp. \(C\subset K\)), we define \(J_{\mathbb{B}}^{+}(C)\) (resp. \(K_{\mathbb{B}}^{+}(C)\)) to be the connected component of \(C\) in \(J^{+}\cap\mathbb{B}\) (resp. \(K^{+}\cap\mathbb{B}\)). Of course for \(x\in C\), \(J_{\mathbb{B}}^{+}(x)=J_{\mathbb{B}}^{+}(C)\) holds. A related concept is \(W_{\mathbb{B}}^{s}(x)\), the component of \(\mathbb{B}\cap W^{s}(x)\) containing \(x\).
If we set \(W_{\mathbb{B}}^{s}(C)=\bigcup_{x\in C}W_{\mathbb{B}}^{s}(x)\) then \(W_{\mathbb{B}}^{s}(C)\) is contained in \(K_{\mathbb{B}}^{+}(C)\) but this inclusion may be strict. This phenomenon may happen when for some \(x\in C\), \(W^{s}_{\mathbb{B}}(x)\) is tangent to \(\partial\mathbb{B}\) (see Figure 1).

Figure 1. Discontinued holonomy. The green components belong to \(K^{+}_{\mathbb{B}}(C)\) but not to \(W^{s}_{\mathbb{B}}(C)\) (in blue). The red part of \(C\) cannot be followed under stable holonomy to \(C^{\prime}\) due to a Reeb-like phenomenon.

For \(x\in K\), we denote by \(K^{s}(x)\) (resp. \(K^{u}(x)\)) the connected component of \(K\cap W^{s}(x)=K^{-}\cap W^{s}(x)\) (resp. \(K\cap W^{u}(x)=K^{+}\cap W^{u}(x)\)) containing \(x\), and also \(K(x)\) its connected component in \(K\). For \(x\in J\), we define \(J^{s}(x)\), \(J^{u}(x)\) and \(J(x)\) similarly. More generally, if needed, we use the notation \(\operatorname{Comp}_{E}(x)\) for the connected component of \(x\) in a set \(E\). We use the subscript 'i' to denote topological operations (interior, closure, etc.) relative to the intrinsic topology in stable/unstable manifolds.

**Lemma 2.3**.: _Assume that \(f\) is hyperbolic. Then every connected component of \(K^{+}\cap\mathbb{B}\) has a connected boundary, which is a component of \(J^{+}\cap\mathbb{B}\)._

Proof.: Observe that if \(p\) is an interior point of \(K^{+}\cap L\), where \(L\) is a horizontal line, then it belongs to a Fatou disk. Since \(L\) is not contained in \(J^{+}\), by Lemma 2.1, we get that \(p\in\operatorname{Int}(K^{+})\). This implies that for every \(x\in K^{+}\cap\mathbb{B}\), \(\partial K^{+}_{\mathbb{B}}(x)\subset\bigcup_{t\in\mathbb{D}}\partial_{L_{t}} (K^{+}_{\mathbb{B}}(x)\cap L_{t})\), where \(L_{t}=\mathbb{D}\times\{t\}\) and \(\partial_{L_{t}}\) refers to the boundary in \(L_{t}\). The converse inclusion is obvious, so \(\partial K^{+}_{\mathbb{B}}(x)\cap\mathbb{B}=\bigcup_{t\in\mathbb{D}}\partial _{L_{t}}(K^{+}_{\mathbb{B}}(x)\cap L_{t})\). Since \(K^{+}_{\mathbb{B}}(x)\cap L_{t}\) is compact and polynomially convex, and obviously \(K_{\mathbb{B}}^{+}(x)=\bigcup_{t\in\mathbb{D}}K_{\mathbb{B}}^{+}(x)\cap L_{t}\), this means that \(K_{\mathbb{B}}^{+}(x)\) is obtained from \(\partial K_{\mathbb{B}}^{+}(x)\cap\mathbb{B}\) by filling the holes of all components of \(\partial_{L}(K_{\mathbb{B}}^{+}(x)\cap L)\) in every horizontal line. Now assume \(\partial K_{\mathbb{B}}^{+}(x)\cap\mathbb{B}\) is disconnected, so we can write it as \(B_{1}\cup B_{2}\), where each \(B_{i}\) is relatively open and \(B_{1}\cap B_{2}=\emptyset\). In every horizontal slice \(L\), \(B_{i}\cap L\) must be a union of components of \(\partial_{L}(K_{\mathbb{B}}^{+}(x)\cap L)\). For \(i=1,2\), let \(\widehat{B}_{i}\) be the set obtained by filling the holes of \(B_{i}\) in each horizontal line in \(\mathbb{B}\). The previous discussion shows that \(K_{\mathbb{B}}^{+}(x)=\widehat{B}_{1}\cup\widehat{B}_{2}\), where the \(\widehat{B}_{i}\) are relatively open in \(K_{\mathbb{B}}^{+}(x)\) and disjoint. This is a contradiction, therefore \(\partial K_{\mathbb{B}}^{+}(x)\cap\mathbb{B}\) is connected. For the second statement, simply observe that if \(D\subset J^{+}\cap\mathbb{B}\) is a connected set such that \(\partial K_{\mathbb{B}}^{+}(x)\cap\mathbb{B}\subset D\), then \(D\) is contained in \(K_{\mathbb{B}}^{+}(x)\) and also in \(\partial K^{+}\) so \(D\subset\partial K_{\mathbb{B}}^{+}(x)\cap\mathbb{B}\) and we are done.
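Lemma 2.3 concerns connected components of \(K^{+}\) in slices, and Theorem 3.6 below will bound the number of components of large diameter. A crude numerical illustration of such slice components: classify a grid on a horizontal line by bounded versus escaping orbits, then label the connected components of the resulting mask. This is only a picture (discretization can merge or split components), computed for the same arbitrary parameters as in the sketch above.

```python
# Escape-time mask of K^+ on the horizontal line {w = 0.3} for the hypothetical
# map f(z, w) = (z^2 + c + a*w, a*z), followed by connected-component labeling.
import numpy as np
from scipy import ndimage

c, a, R, max_iter = -1.05 + 0.1j, 0.15, 10.0, 200
x = np.linspace(-2.0, 2.0, 400)
z = x[None, :] + 1j * x[:, None]
w = np.full_like(z, 0.3 + 0.0j)
escaped = np.zeros(z.shape, dtype=bool)
for _ in range(max_iter):
    z, w = z * z + c + a * w, a * z
    escaped |= np.maximum(np.abs(z), np.abs(w)) > R
    z[escaped] = 0.0                 # freeze escaped points to avoid overflow
    w[escaped] = 0.0
labels, n = ndimage.label(~escaped)  # components of the (approximate) slice
pixel = x[1] - x[0]
big = sum(1 for s in ndimage.find_objects(labels)
          if pixel * max(s[0].stop - s[0].start, s[1].stop - s[1].start) > 0.2)
print(f"{n} components in the discretized slice, {big} of diameter > 0.2")
```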
### Basic properties of leafwise components

Here we assume that \(f\) is a hyperbolic and dissipative complex Henon map. The following result is well-known.

**Lemma 2.4**.: _For every \(x\in K\) we have \(\operatorname{Int}_{\mathrm{i}}(K^{u}(x))\subset\operatorname{Int}(K^{+})\) and \(\partial_{\mathrm{i}}(K^{u}(x))\subset J\). In particular if \(\operatorname{Int}_{\mathrm{i}}(K^{u}(x))\) is non-empty, each of its components is contained in an attracting basin. Likewise \(\operatorname{Int}_{\mathrm{i}}K^{s}(x)=\emptyset\) and \(J^{s}(x)=K^{s}(x)\)._

Proof.: Indeed, since stable and unstable manifolds cannot coincide along some open set, if \(\Delta\) is a disk contained in \(K^{u}(x)\), it follows from Lemma 2.1 that \(\Delta\subset\operatorname{Int}(K^{+})\), and the remaining conclusions follow.

For \(x\) in \(J\), \(K^{u}(x)\) may be bounded or unbounded for the intrinsic (leafwise) topology. By the maximum principle, \(K^{u}(x)\) is polynomially convex, so if \(K^{u}(x)\) (or equivalently \(J^{u}(x)\)) is leafwise bounded, then \(K^{u}(x)\) is simply the polynomially convex hull of \(J^{u}(x)\) (i.e. is obtained by filling in the leafwise bounded components of the complement).

**Lemma 2.5**.: _Given \(x\in K\), in the following properties we have \((iv)\Leftrightarrow(iii)\Rightarrow(ii)\Leftrightarrow(i)\):_

1. \(K^{u}(x)\) _is leafwise bounded;_
2. \(J^{u}(x)\) _is leafwise bounded;_
3. \(W_{\mathbb{B}}^{u}(x)\) _is leafwise bounded;_
4. \(W_{\mathbb{B}}^{u}(x)\) _is a closed horizontal submanifold of_ \(\mathbb{B}\)_._

_Furthermore if (ii) holds, then (iii) holds for \(f^{n}(x)\) for sufficiently large \(n\)._

Proof.: The implication \((i)\Rightarrow(ii)\) follows directly from the fact that \(J^{u}(x)=\partial_{\mathrm{i}}K^{u}(x)\). Now assume that \(J^{u}(x)\) is leafwise bounded. Working in \(W^{u}(x)\simeq\mathbb{C}\), we have that \(K^{u}(x)\) is a closed connected polynomially convex set and \(J^{u}(x)\) is a bounded connected component of \(\partial_{\mathrm{i}}K^{u}(x)\). Since every point of \(J^{u}(x)\) lies on the boundary of \(W^{u}(x)\backslash K^{+}\) (for the intrinsic topology), the compact set obtained by filling the holes of \(J^{u}(x)\) must be \(K^{u}(x)\), so the converse implication holds. Since \(K^{u}(x)\subset W_{\mathbb{B}}^{u}(x)\), obviously (_iii_) implies (_i_). Conversely, \(K^{u}(x)\) is the decreasing intersection of the sequence of components of \(x\) in \(W^{u}(x)\cap f^{-n}(\mathbb{B})\). Hence, if \(K^{u}(x)\) is leafwise bounded it follows that \(\operatorname{Comp}_{W^{u}(x)\cap f^{-n}(\mathbb{B})}(x)\) is leafwise bounded for large enough \(n\), and hence so is \(W^{u}(f^{n}(x))\cap\mathbb{B}\). Recall that for every \(x\), \(W^{u}(x)\) is an injectively immersed copy of \(\mathbb{C}\), whose image is a leaf of the lamination of \(J^{-}\backslash\left\{a_{1},\ldots,a_{N}\right\}\). Here the \(a_{i}\) are the attracting points, and a leaf never extends to a submanifold in the neighborhood of \(a_{i}\) (1). In particular, \(J^{-}\) is laminated near \(\partial\mathbb{B}\). If \(W^{u}_{\mathbb{B}}(x)\) is leafwise bounded, then it is of the form \(\psi^{u}_{x}(\Omega)\), where \(\Omega\) is some bounded open set in \(\mathbb{C}\). Since \(\psi^{u}_{x}\) extends to a neighborhood of \(\overline{\Omega}\), \(W^{u}_{\mathbb{B}}(x)\) is a properly embedded submanifold of \(\mathbb{B}\), which extends to a neighborhood of \(\overline{\mathbb{B}}\). So (_iii_) implies (_iv_).
Finally, if (_iv_) holds, since \(J^{-}\) is a lamination near \(\partial\mathbb{B}\), we see that \(W^{u}_{\mathbb{B}}(x)\) extends to a submanifold \(S\) in a neighborhood of \(\overline{\mathbb{B}}\). Then \(W^{u}_{\mathbb{B}}(x)\) is relatively compact in \(S\subset W^{u}(x)\) so if \(\Omega\) is such that \(\psi^{u}_{x}(\Omega)=W^{u}_{\mathbb{B}}(x)\) then \(\Omega\) is relatively compact in \(\mathbb{C}\), and (_iii_) follows.

Footnote 1: Indeed otherwise this would induce a compactification of unstable manifolds, yielding an embedding of \(\mathbb{P}^{1}\) into \(\mathbb{C}^{2}\).

## 3. External rays

In this section we study external rays along the unstable lamination (i.e. along \(J^{-}\)) for a hyperbolic complex Henon map. The existence and convergence properties of external rays were studied in the unstably connected case in [6, 7]. Recall that when \(|\mathrm{Jac}(f)|<1\), unstable connectedness is equivalent to the connectedness of \(J\). The results that we prove here do not rely on any unstable connectivity or dissipativity assumption; nevertheless, what we have in mind is the case of a dissipative unstably disconnected map.

### Escaping from \(K^{+}\) along an external ray

By definition, an _unstable external ray_ (simply called "external rays" in the following) is a piecewise smooth continuous path contained in a leaf \(W^{u}(x)\) of the unstable lamination, which is a union of gradient lines of \(G^{+}|_{W^{u}(x)}\) outside the (leafwise locally finite) set of critical points of \(G^{+}|_{W^{u}(x)}\). As usual we assume that \(G^{+}\) is strictly monotone along external rays (which will be considered as ascending or descending depending on the context). We do not prescribe rules for the behavior of rays hitting critical points, so in particular there is no attempt at defining a notion of "external map". In the next proposition the length of curves is relative to the ambient metric in \(\mathbb{C}^{2}\). We show that external rays ascend fairly quickly.

**Proposition 3.1**.: _Let \(f\) be a hyperbolic polynomial automorphism of \(\mathbb{C}^{2}\) of dynamical degree \(d>1\). For every \(r_{1}<r_{2}\) there exists \(\ell(r_{1},r_{2})\) such that for every \(x\in J^{-}\backslash K^{+}\) with \(G^{+}(x)=r_{1}\), any external ray through \(x\) reaches \(\left\{G^{+}=r_{2}\right\}\) along a path whose length is bounded by \(\ell(r_{1},r_{2})\). In addition \(\ell(r_{1},r_{2})\) is bounded by a function \(\overline{\ell}(r_{2})\) depending only on \(r_{2}\). Furthermore \(\ell(r_{1},r_{2})\to 0\) when \(r_{1}\to r_{2}\) and \(\overline{\ell}(r_{2})=O(r_{2}^{\alpha})\) when \(r_{2}\to 0\), for some \(\alpha>0\)._

_Remark 3.2_.: Notice that no dissipativity is assumed here so the result holds along stable leaves as well.

Proof.: Start with \(r_{1}=1\) and \(r_{2}=d\). In \(J^{-}\cap\left\{1\leq G^{+}\leq d\right\}\) the leaves of \(\mathcal{W}^{u}\) have uniform geometry and no leaf of \(\mathcal{W}^{u}\) is contained in an equipotential hypersurface of the form \(\left\{G^{+}=C\right\}\), in particular unstable critical points have uniform order. Thus by compactness and continuity of \(G^{+}\), we infer the existence of uniform \(\delta_{0}\) and \(\ell_{0}\) such that for every \(x\in J^{-}\cap\{1\leq G^{+}\leq d\}\), any external ray through \(x\) of length \(\ell_{0}\) reaches \(\{G^{+}=r\}\) with \(r\geq G^{+}(x)+\delta_{0}\).
By concatenating such pieces of rays, we deduce the conclusion of the proposition for \(r_{1}=1\) and \(r_{2}=d\) (and \(\ell(1,d)\leq(d-1)\ell_{0}/\delta_{0}\)). Pulling back finitely many times and concatenating again, we get a similar conclusion for \(\{r_{0}\leq G^{+}\leq d\}\) for any fixed \(r_{0}\). Let us now fix \(r_{0}\) such that \(\{0<G^{+}\leq dr_{0}\}\cap J^{-}\) is contained in \(W^{u}_{\rm loc}(J)\). Any piece of external ray between the levels \(\{G^{+}=r_{0}/d^{n}\}\) and \(\{G^{+}=r_{0}/d^{n-1}\}\) is the pull-back of a piece of external ray in \(\{r_{0}\leq G^{+}\leq dr_{0}\}\). Thus by concatenation it follows that any external ray starting from \(\{G^{+}=r_{0}/d^{n}\}\) reaches \(\{G^{+}=r_{0}\}\) along a path of length bounded by \(C\ell(r_{0},dr_{0})\sum_{k=1}^{n}u^{-k}\), where \(u\) is the expansion constant introduced in §2.2. This proves the existence of the functions \(\ell(r_{1},r_{2})\) and \(\overline{\ell}(r_{2})\). The same ideas imply immediately that \(\ell(r_{1},r_{2})\to 0\) when \(r_{1}\to r_{2}\). For the last statement simply note that for every \(r_{1}<r_{2}\leq r_{0}\), \[\ell(r_{1},r_{2})\leq C\sum_{k=k_{0}}^{\infty}u^{-k}=O(u^{-k_{0}})\] where \(k_{0}\) is the greatest integer such that \(r_{0}d^{-k_{0}}\geq r_{2}\), therefore \(\ell(r_{1},r_{2})=O(r_{2}^{\alpha})\), with \(\alpha=\frac{\log u}{\log d}\).

It is easy to deduce from these ideas that all (descending) external rays land. However, since there is no well defined external map, the characterization of the set of landing points does not seem to follow directly from this landing property.

**Corollary 3.3** (John-Holder property).: _There exists a constant \(\alpha>0\) such that for any sufficiently small \(\eta>0\), for any \(x\in J^{-}\backslash K^{+}\) sufficiently close to \(K^{+}\), there exists a path of length at most \(O(\eta^{\alpha})\) in \(W^{u}(x)\backslash K^{+}\) joining \(x\) to a point \(\eta\)-far from \(K^{+}\)._

Proof.: By the previous proposition, there exists a path of length \(O(r^{\alpha_{1}})\) joining \(x\) to a point \(y\) such that \(G^{+}(y)=r\). Now the Green function is Holder continuous (see [19]) and \(K^{+}=\{G^{+}=0\}\), so \(d(y,K^{+})\geq Cr^{\alpha_{2}}\). Choosing \(r\) so that \(Cr^{\alpha_{2}}=\eta\), the result follows.

This John-Holder property has deep consequences for the topology of \(K^{+}\cap W^{u}(x)\), which will play an important role in the paper. Intuitively it means that there cannot exist long "channels" between local components of \(K^{+}\). This property is strongly reminiscent of the so-called John condition for plane domains, which have been much studied in one-dimensional dynamics, in relation with non-uniform hyperbolicity (see e.g. [12, 23]). In the Henon context, it was shown in [7] that for unstably connected hyperbolic maps, the components of \(W^{u}(x)\backslash K^{+}\) satisfy the John property. It is very likely that using the continuity of affine structure along unstable leaves, their arguments can be adapted to the disconnected case as well: this would upgrade Corollary 3.3 to the actual John condition. One advantage of this weaker property is that it makes no reference to the affine structure of the leaves, so it is more flexible and may be adapted to semi-local situations (e.g. Henon-like maps).

### Accesses and landing

**Theorem 3.4**.: _Let \(f\) be a hyperbolic polynomial automorphism of \(\mathbb{C}^{2}\) with dynamical degree \(d>1\)._ 1.
_For every_ \(x\in J\)_,_ \(D^{u}(x,1)\backslash K^{+}\) _admits finitely many connected components, and at least one of these components contains_ \(x\) _in its closure._ 2. _For any component_ \(\Omega\) _of_ \(D^{u}(x,1)\backslash K^{+}\) _such that_ \(\overline{\Omega}\ni x\) _there is an external ray landing at_ \(x\) _through_ \(\Omega\)_._

For the proof, it is convenient to work in the affine coordinates given by the unstable parameterizations. We work in the disks \(D^{u}(x,1)\) and measure path length relative to the normalized affine metric, which is equivalent to the ambient one.

Proof.: The first observation is that \(D^{u}(x,1)\backslash K^{+}\) contains \(x\) in its closure: otherwise \(x\) would lie in the leafwise interior of \(K^{+}\), thus contradicting Lemma 2.4. Furthermore, by the maximum principle, if \(y\in D^{u}(x,1)\backslash K^{+}\) is arbitrary, the component of \(y\) in \(D^{u}(x,1)\backslash K^{+}\) reaches the boundary of \(D^{u}(x,1)\). We claim that there exists \(\eta_{1}>0\) such that for any \(x\in J\) and any component \(\Omega\) of \(D^{u}(x,1)\backslash K^{+}\), if \(\Omega\cap D^{u}\left(x,1/4\right)\neq\emptyset\), then \[\sup G^{+}|_{D^{u}(x,1/2)\cap\Omega}\geq\eta_{1}.\] This follows directly from Proposition 3.1: indeed there exists \(\eta_{1}>0\) such that any point of \(J^{-}\backslash K^{+}\) with \(G^{+}<\eta_{1}\) reaches \(\{G^{+}=\eta_{1}\}\) along a path of length at most \(1/4\). By the Holder continuity of \(G^{+}\), we infer that any such component \(\Omega\) contains a disk of radius \(C\eta_{1}^{\alpha}\), so there are finitely many of them. In particular if \((x_{n})\) is a sequence in \(D^{u}(x,1)\backslash K^{+}\) converging to \(x\), infinitely many of them must belong to the same component \(\Omega\) of \(D^{u}(x,1)\backslash K^{+}\), which shows that \(\overline{\Omega}\) contains \(x\). This proves assertion (1) of the theorem. Fix now a component \(\Omega\) of \(D^{u}(x,1)\backslash K^{+}\) such that \(\overline{\Omega}\ni x\). Let \(\eta_{1}\) be as above and fix \(\varepsilon\) such that \(\varepsilon<\eta_{1}/d\) and \(\ell(\varepsilon,d\varepsilon)<\min\left(1/2,(u-1)/2\right)\) where \(\ell(\cdot)\) is as in Proposition 3.1 and the constant \(u\) was defined in §2.2. We do the following construction: for every point \(y\in\left\{G^{+}=\varepsilon\right\}\cap\overline{D}^{u}\left(x,1/2\right)\), we consider all ascending external rays emanating from \(y\) until they reach \(\{G^{+}=d\varepsilon\}\). The lengths of the corresponding rays are not larger than \(\ell(\varepsilon,d\varepsilon)\). These are the rays of \(0^{\text{th}}\) generation and we denote by \(E_{0}\) the set of their endpoints (2), which by the assumption on \(\ell(\varepsilon,d\varepsilon)\) is contained in \(\{G^{+}=d\varepsilon\}\cap D^{u}(x,1)\). We note that \(E_{0}\) is a closed set because it is the set of endpoints of a compact family of external rays. Since \(\varepsilon<\eta_{1}/d\), \(E_{0}\) has non-empty intersection with \(\Omega\).
Footnote 2: Recall that since we do not prescribe the behavior of external rays at critical points of \(G^{+}\), there is no reason that external rays fill up the whole unstable lamination, so \(E_{0}\) could be smaller than \(\left\{G^{+}=d\varepsilon\right\}\).

Performing the same construction in \(D^{u}(f(x),1)\) we obtain a set of rays of 0th generation in that disk, which connect \(\{G^{+}=\varepsilon\}\cap\overline{D}^{u}\left(f(x),1/2\right)\) to \(\{G^{+}=d\varepsilon\}\), and their endpoints lie in
\[\left\{G^{+}=d\varepsilon\right\}\cap\overline{D}^{u}\left(f(x),\frac{1}{2}+\ell(\varepsilon,d\varepsilon)\right).\]
The pull-backs of these rays by \(f\) have their endpoints in
\[\left\{G^{+}=\varepsilon\right\}\cap\overline{D}^{u}\left(x,\frac{1}{u}\left(\frac{1}{2}+\ell(\varepsilon,d\varepsilon)\right)\right)\subset\left\{G^{+}=\varepsilon\right\}\cap D^{u}\left(x,\frac{1}{2}\right),\]
by the assumption on \(\ell(\varepsilon,d\varepsilon)\). These are the rays of 1st generation in \(D^{u}(x,1)\). We define \(E_{1}\subset E_{0}\) to be the closed set of points for which we can concatenate a ray of 0th generation with a ray of 1st generation to descend all the way to \(\left\{G^{+}=\varepsilon/d\right\}\). Notice that \(f(\Omega)\cap D^{u}(f(x),1)\) is not necessarily connected, so it is a union of components of \(D^{u}(f(x),1)\backslash K^{+}\), and since \(\overline{f(\Omega)}\ni f(x)\), at least one of these components reaches \(D^{u}(f(x),1/2)\), so it contains rays of 0th generation. This shows that \(E_{1}\) has non-empty intersection with \(\Omega\).

Continuing this construction inductively, we obtain a decreasing sequence \((E_{n})\) of closed subsets in \(\left\{G^{+}=d\varepsilon\right\}\cap D^{u}(x,1)\), each of which intersects \(\Omega\). If \(e\in\bigcap_{n}E_{n}\cap\Omega\), then there is a ray through \(e\) (hence in \(\Omega\)) converging to \(K^{+}\), whose part in \(\left\{\varepsilon d^{-n-1}\leqslant G^{+}\leqslant\varepsilon d^{-n}\right\}\) is the pull-back under \(f^{n}\) of a piece of external ray in \(D^{u}(f^{n}(x),1)\). Therefore this ray lands at \(x\), and the proof of assertion (2) is complete.

_Remark 3.5_.: The existence of a convergent external ray along any access to a saddle periodic point can be obtained exactly as in the 1-dimensional case (see [18]), without assuming uniform hyperbolicity. In that case the Denjoy-Carleman-Ahlfors Theorem is used instead of the John-Hölder property to guarantee the finiteness of the number of local components.

### Topology of \(K^{+}\cap W^{u}\)

In this section we review the consequences of Corollary 3.3 for the topology of unstable components of \(K^{+}\).

**Theorem 3.6**.: _Let \(f\) be a hyperbolic Hénon map. Then for every \(x\in J\):_

1. _every component of_ \(K^{+}\cap W^{u}(x)\) _(resp._ \(J^{+}\cap W^{u}(x)\)_) is locally connected;_
2. _for any smoothly bounded domain_ \(\Omega\subset W^{u}(x)\)_, for every_ \(\delta>0\)_,_ \(K^{+}\cap\Omega\) _(resp._ \(J^{+}\cap\Omega\)_) admits at most finitely many components of diameter larger than_ \(\delta\)_._

As before this follows from [7] when \(f\) is unstably connected (see Theorems 3.5 and 5.6 there), so we focus on the unstably disconnected case. In this case it is known that \(K^{+}\cap W^{u}(x)\) has uncountably many point components (see [7, Thm 3.1]). Using (_ii_) we can be more precise:

**Corollary 3.7**.: _Let \(f\) be hyperbolic and unstably disconnected.
Then for every \(x\in J\), all but at most countably many components of \(K^{+}\cap W^{u}(x)\) are points._

Let us stress that the conclusions of the theorem follow solely from Corollary 3.3 together with some elementary topological considerations. Remark also that the assumption that \(\Omega\) has smooth boundary in (_ii_) is necessary: indeed otherwise it could cut a component of \(K^{+}\) into infinitely many parts of large diameter (think e.g. of the closed unit square cut by some comb-like domain). Part or all of Theorem 3.6 is presumably known to specialists; however, for completeness we provide some details. Let us first define a notion of "fast escaping from a compact set".

**Definition 3.8**.: Let \(\Omega\) be a smoothly bounded domain in \(\mathbb{C}\) and \(K\) be a closed subset of \(\Omega\). We say that \(K\) satisfies the _fast escaping property_ in \(\Omega\) if there exists an increasing continuous function \(\ell\) with \(\ell(0)=0\) such that for any sufficiently small \(\eta>0\) and any \(x\notin K\), there exists a path \(\gamma:[0,1]\to\Omega\backslash K\) of length at most \(\ell(\eta)\) such that \(\gamma(0)=x\) and \(d(\gamma(1),K)\geq\eta\).

Corollary 3.3 asserts that if \(f\) is hyperbolic, then for every \(x\in J\) and any leafwise bounded domain \(\Omega\subset W^{u}(x)\), \(K^{+}\cap W^{u}(x)\) satisfies the fast escaping property in \(\Omega\) with \(\ell(\eta)=c\eta^{\alpha}\). Note that both properties (_i_) and (_ii_) in Theorem 3.6 are local in \(W^{u}(x)\), so the choice of ambient or leafwise topology or metric is harmless. The following lemma takes care of item (_ii_) of the theorem.

**Lemma 3.9**.: _Let \(K\) be a closed subset of a smoothly bounded domain \(\Omega\subset\mathbb{C}\), satisfying the fast escaping property. Then for every \(\delta>0\), there are at most finitely many components of \(K\) (resp. of \(\operatorname{Int}(K)\), of \(\partial K\)) of diameter greater than \(\delta\)._

Proof.: We first prove the result for \(K\) and \(\operatorname{Int}(K)\) and then explain how to modify the proof to deal with \(\partial K\). Let us first assume that \(\Omega\) is the unit square \(Q\), and denote by \(\pi_{1}\) and \(\pi_{2}\) the coordinate projections of \(Q\). Assume by contradiction that there are infinitely many components \((C_{i})_{i\geq 0}\) of \(K\) with diameter \(\geq\delta\). Then there exists \(\pi\in\{\pi_{1},\pi_{2}\}\) such that infinitely many \(C_{i}\) satisfy \(\operatorname{Diam}(\pi(C_{i}))\geq\delta/2\). Therefore there is an interval \(I\) of length \(\delta/4\) such that for infinitely many \(i\), \(C_{i}\) disconnects the strip \(\pi^{-1}(I)\), and we conclude that \(\pi^{-1}(I)\backslash\bigcup C_{i}\) has infinitely many connected components \(U_{j}\) going all the way across the strip. (Notice that the \(U_{j}\) may contain other points of \(K\).) Let \(c\) be the center point of \(I\). Since the \(C_{i}\) are distinct components of \(K\), for each \(j\) there exists a point \(x_{j}\) in \(U_{j}\cap\pi^{-1}(c)\) which does not belong to \(K\). If \(\eta\) is chosen such that \(\ell(\eta)=\delta/20\), we infer from the fast escaping property that for every \(j\), \(U_{j}\) contains a disk of radius \(\eta\), which is the desired contradiction.
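To spell out the final counting step (a routine area estimate; \(\eta\) is the radius fixed by \(\ell(\eta)=\delta/20\) above): the disks lie in the pairwise disjoint sets \(U_{j}\), so they are pairwise disjoint and contained in \(Q\). Hence if \(N\) of them occur,
\[N\cdot\operatorname{Area}\left(D(0,\eta)\right)\leq\operatorname{Area}(Q)=1,\]
so \(N\) is bounded, contradicting the infinitude of the \(U_{j}\).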
For \(\operatorname{Int}(K)\) the argument is identical, except that instead of \(c\) we take a small open interval \(I^{\prime}\) about \(c\) and argue that if the \(C_{i}\) are distinct components of \(\operatorname{Int}(K)\), there exists \(x_{j}\in U_{j}\cap\pi^{-1}(I^{\prime})\) which does not belong to \(K\).

In the general case, take a square \(Q\) such that \(\Omega\Subset Q\) and replace \(K\) by \(K^{\prime}=\overline{K\cap\Omega}\). Let us check that \(K^{\prime}\) satisfies the fast escaping property in \(Q\). Indeed, if \(x\in Q\backslash K^{\prime}\) we have either \(x\in\Omega\), \(x\in\partial\Omega\) or \(x\in Q\backslash\overline{\Omega}\). In the first case we take the path \(\gamma\) given by the fast escaping property of \(K\) in \(\Omega\). In the second case, any small ball \(B\) about \(x\) intersects \(\Omega\backslash K\), and we simply take a path starting from some \(x^{\prime}\in B\cap(\Omega\backslash K)\). Finally in the last case we use the fact that \(\overline{\Omega}\) has the fast escaping property in \(Q\). By the first part of the proof we conclude that \(K^{\prime}\) has finitely many components of diameter \(\geq\delta\). Since any component of \(K\) (resp. \(\operatorname{Int}(K)\)) is contained in a component of \(K^{\prime}\) (resp. \(\operatorname{Int}(K^{\prime})\)), we are done.

The proof that \(\partial K\) admits only finitely many components of diameter greater than \(\delta\) goes exactly along the same lines. We assume that there are infinitely many components \(C_{i}\) of \(\partial K\) disconnecting the strip \(\pi^{-1}(I)\), so that \(\pi^{-1}(I)\backslash\bigcup C_{i}\) also has infinitely many components \(U_{j}\). The difference with the previous case is that some of these components may be completely included in \(K\). We modify the argument as follows. Denote by \(U_{j}^{\prime}\) the components completely included in \(K\) and by \(U_{j}^{\prime\prime}\) the remaining ones. We claim that there are infinitely many \(U_{j}^{\prime\prime}\)'s. Indeed since the \(C_{i}\) are components of \(\partial K\), two components of the form \(U_{j}^{\prime}\) must be separated by a component of the form \(U_{j}^{\prime\prime}\). So there are infinitely many such components. Then we take a small open interval \(I^{\prime}\subset I\) containing \(c\) and we repeat this argument, to obtain that there are infinitely many \(j\)'s such that \(U_{j}^{\prime\prime}\cap\pi^{-1}(I^{\prime})\) contains a point \(x_{j}\) that does not belong to \(K\). Then we proceed with the proof as in the previous case, by constructing infinitely many disjoint disks of radius \(\eta\) in \(Q\) to get a contradiction.

Proof of (i) in Theorem 3.6.: Since \(J^{+}\cap W^{u}(x)=\partial_{\mathrm{i}}(K^{+}\cap W^{u}(x))\), general topology implies that local connectivity of \(J^{+}\cap W^{u}(x)\) implies that of \(K^{+}\cap W^{u}(x)\) (see [33, §49.III]), so it is enough to focus on \(J^{+}\). For convenience we bring in some dynamical information. Since \(f\) is unstably disconnected, it admits an unstable transversal \(\Delta^{u}\), that is, a horizontal disk of finite degree in \(\mathbb{B}\) contained in some unstable manifold (of a periodic saddle point, say). For every \(x\in J\), \(W^{s}(x)\) intersects \(\Delta^{u}\): this easily follows from the density of \(W^{s}(x)\) in \(J^{+}\) and the local product structure. Fix \(y\in W^{s}(x)\cap\Delta^{u}\).
By using the local holonomy along the stable lamination \(W^{u}_{\mathrm{loc}}(x)\to W^{u}_{\mathrm{loc}}(y)\), we see that \(J^{+}\cap W^{u}(x)\) is locally connected at \(x\) if and only if \(J^{+}\cap W^{u}(y)\) is locally connected at \(y\). Therefore it is enough to show that \(J^{+}\cap\Delta^{u}\) is locally connected. Since \(K^{+}\cap\Delta^{u}\) is polynomially convex and compactly contained in \(\Delta^{u}\), it follows that \(\Omega:=\Delta^{u}\backslash K^{+}\) is connected and \(J^{+}\cap\Delta^{u}=\partial\Omega\). Likewise every component of \(\partial\Omega\) is of the form \(\partial A\), where \(A\) is a component of \(\Delta^{u}\cap K^{+}\). For such a component, by Carathéodory's Theorem local connectivity of \(\partial A\) is equivalent to that of \(A\), which is of course equivalent to local connectivity of \(A\) at every point of its boundary. Let us fix \(x_{0}\in\partial A\): to complete the proof we have to show that \(A\) is locally connected at \(x_{0}\).

Assume by contradiction that \(A\) is not locally connected at \(x_{0}\). Then there exists a small \(\varepsilon>0\) such that, if \(C\) denotes the component of \(x_{0}\) in \(A\cap\overline{B}(x_{0},\varepsilon)\), then \(x_{0}=\lim x_{n}\), where \(x_{n}\) belongs to \(A\backslash C\). Without loss of generality we can assume that \(x_{n}\in B(x_{0},\varepsilon/2)\). Let \(C_{n}=\mathrm{Comp}_{A\cap\overline{B}(x_{0},\varepsilon)}(x_{n})\), which by definition is disjoint from \(C\). Passing to a subsequence if necessary, we may assume that the \(C_{n}\) are disjoint (the construction here is similar to that of convergence continua in [33, §49.VI]). Since \(C\) and the \(C_{n}\) intersect \(\partial B(x_{0},\varepsilon)\), their diameters are bounded from below by some \(\delta>0\). From this point the proof is similar to that of Lemma 3.9: we can find an orthogonal projection \(\pi\) such that \(C\) and the \(C_{n}\) cross the strip \(\pi^{-1}(I)\) horizontally and \(\pi^{-1}(I)\backslash(C\cup\bigcup C_{n})\) admits infinitely many connected components \(U_{j}\) going all the way across the strip. If \(\pi^{-1}(c)\) denotes the center line of the strip, for every \(j\), \(\pi^{-1}(c)\cap U_{j}\) has non-trivial intersection with \(\Omega\), and the fast escaping property of \(\Omega\) gives a contradiction as before.

### Complement: John-Hölder property in basins

We illustrate the comments from §3.1 on the versatility of the John-Hölder property by sketching a proof of the following result.

**Theorem 3.10**.: _Let \(f\) be a hyperbolic polynomial automorphism of \(\mathbb{C}^{2}\), and \(\mathcal{B}\) be an attracting basin. Then the John-Hölder property holds in \(\mathcal{B}\), i.e. for any component \(\Omega\) of \(\mathcal{B}\cap W^{u}(x)\) there exists \(\eta_{0}\) depending only on \(\Omega\) such that for any \(0<\eta\leq\eta_{0}\) and any \(y\in\Omega\) sufficiently close to \(J\), there exists a path of length \(O(\eta^{\alpha})\) in \(W^{u}(x)\) joining \(y\) to a point \(\eta\)-far from \(J\)._

_Remark 3.11_.: A difference between this result and Corollary 3.3 is that in Corollary 3.3 the constant \(\eta_{0}\) is independent of the component of \(W^{u}(x)\backslash K^{+}\), because \(G^{+}\) reaches arbitrarily large values in each component. Here the situation is different because \(\mathcal{B}\cap W^{u}(x)\) typically has (infinitely) many small components, so how far we can get from the boundary really depends on the component.

Proof.: For convenience we present a proof which is purposely close to that of Proposition 3.1 and Corollary 3.3.
Replace \(f\) by some iterate so that \(\mathcal{B}\) is the basin of attraction of a fixed point \(a\) with multipliers \(\lambda_{1},\lambda_{2}\), with \(\left|\lambda_{2}\right|\leq\left|\lambda_{1}\right|\). There exists a biholomorphism \(\phi:\mathcal{B}\to\mathbb{C}^{2}\) which conjugates the dynamics to that of the triangular map \((z_{1},z_{2})\mapsto(\lambda_{1}z_{1}+r(z_{2}),\lambda_{2}z_{2})\), where \(r\) is a polynomial which is non-zero only when there is a resonance \(\lambda_{2}=\lambda_{1}^{j}\) between the eigenvalues (see [43]). Introduce the function
\[\tilde{H}(z_{1},z_{2})=\left|z_{1}-r(z_{2}/\lambda_{2})\right|^{2}+\left|z_{2}\right|^{2\alpha},\text{ where }\alpha=\frac{\log\lambda_{1}}{\log\lambda_{2}}\geq 1,\]
and put \(H=\tilde{H}\circ\phi\). This is a smooth strictly psh function on \(\mathcal{B}\) which satisfies \(H\circ f=\left|\lambda_{1}\right|^{2}H\). To get a better analogy with the previous case we may consider \(H^{-1}\), which satisfies \(H^{-1}\circ f=\left|\lambda_{1}\right|^{-2}H^{-1}\) and tends to zero when approaching \(J\). The restriction of this function to any local unstable disk in \(\mathcal{B}\backslash\left\{a\right\}\) is non-constant and one easily checks that its set of critical points is discrete.

Arguing as in Proposition 3.1, we define a family of rays in \(\mathcal{B}\) by considering gradient lines of \(H\) (or equivalently \(H^{-1}\)) along \(\mathcal{W}^{u}\), first in the fundamental domain \(\left\{\left|\lambda_{1}\right|^{2}\leq H^{-1}\leq 1\right\}\) and then in \(\left\{0<H^{-1}\leq 1\right\}\) by pulling back. It follows that for every component \(\Omega\) of \(\mathcal{B}\cap W^{u}(x)\), for every \(0<r_{1}<r_{2}<\max_{\Omega}\left|H^{-1}\right|\), and any \(y\in\Omega\) such that \(H^{-1}(y)=r_{1}\), there exists a ray of length \(\ell(r_{1},r_{2})=O(r_{2}^{\alpha})\) joining \(y\) to a point of \(\left\{H^{-1}=r_{2}\right\}\).

To conclude the argument we need to adapt the proof of Corollary 3.3, which relies on the Hölder continuity of the Green function. Instead we use an argument based on uniform hyperbolicity. Indeed, let \(x\in J\) and \(y\in W^{u}_{\text{loc}}(x)\) be such that \(d^{u}(x,y)=\varepsilon\). We want to show that \(H^{-1}(y)\precsim\varepsilon^{\alpha}\) for some \(\alpha\). By the (uniformly bounded) expansion along unstable manifolds and the local uniform geometry, any \(N\) such that \(f^{N}(y)\) belongs to a given compact subset of \(\mathcal{B}\) satisfies \(N\geq c\left|\log\varepsilon\right|\). Hence, for such an \(N\),
\[H^{-1}(y)=\left|\lambda_{1}\right|^{2N}H^{-1}(f^{N}(y))\leq C\left|\lambda_{1}\right|^{2N}\leq C\left|\lambda_{1}\right|^{2c\left|\log\varepsilon\right|}=C\varepsilon^{-2c\log\left|\lambda_{1}\right|},\]
where the exponent \(-2c\log\left|\lambda_{1}\right|\) is positive because \(\left|\lambda_{1}\right|<1\), and we are done.

## 4. Stable total disconnectedness

We say that \(f\) (or \(J\)) is _stably totally disconnected_ if for every \(x\in J\), \(W^{s}(x)\cap J^{-}\) is totally disconnected. Note that since \(J\) has local product structure with respect to the stable and unstable laminations, \(W^{s}(x)\cap J=W^{s}(x)\cap J^{-}\).

**Proposition 4.1**.: _Let \(f\) be a hyperbolic Hénon map. The following assertions are equivalent._

1. _Every leaf of the stable lamination in_ \(\mathbb{B}\) _is a vertical submanifold of finite degree._
2. _The leaves of the stable lamination in_ \(\mathbb{B}\) _are vertical submanifolds of uniformly bounded degree._
3. _For every_ \(x\) _in_ \(J\)_,_ \(J^{s}(x)=K^{s}(x)=\{x\}\)_, that is,_ \(f\) _is stably totally disconnected._

Note that dissipativity is not required here, so this result holds in the unstable direction as well.

Proof.: The implication \((ii)\Rightarrow(i)\) is obvious and its converse \((i)\Rightarrow(ii)\) follows from the semi-continuity properties of the degree and is identical to [35, Lemma 5.1]. To prove that \((iii)\Rightarrow(i)\) we use Lemma 2.5 for the stable lamination: indeed if \(J^{s}(x)\) is a point for every \(x\), then all \(4\) conditions of Lemma 2.5 are equivalent, and the equivalence of properties _(ii)_ and _(iii)_ there yields the result. Finally, \((ii)\Rightarrow(iii)\) does not require hyperbolicity and was established in [15, Prop. 2.14]. For convenience, let us recall the argument: for every vertical disk \(D\) of degree \(\leq k\), and every component \(D^{\prime}\) of \(D\cap f(\mathbb{B})\), the modulus of the annulus \(D\backslash D^{\prime}\) is bounded below by \(m=m(k)>0\), and for every \(x\in J\) there is an infinite nest of such annuli surrounding the component of \(x\) in \(W^{s}(x)\cap J\). So \(W^{s}(x)\cap J\) is totally disconnected and we are done.

A way to ensure the boundedness of the degrees of semi-local stable manifolds originates in [17] and relies on Wiman's theorem for entire functions. The following result is contained in [35].

**Proposition 4.2**.: _Let \(f\) be a hyperbolic Hénon map such that \(|\mathrm{Jac}\,f|\leq d^{-2}\). Then \(f\) is stably totally disconnected._

Proof (sketch).: Fix \(x\in J\) and \(v\in E^{s}(x)\). Uniform hyperbolicity together with the assumption on the Jacobian implies that \(\|df^{n}_{x}(v)\|\leq Cs^{n}\), where \(s<d^{-2}\). Denote as before by \(\psi^{s}_{\bullet}\) the normalized stable parameterization. It follows that \(f^{n}\circ\psi^{s}_{x}(\cdot)=\psi^{s}_{f^{n}(x)}(\lambda_{n}\cdot)\), where \(|\lambda_{n}|\leq Cs^{n}\). Then from the relation
\[G^{-}\circ\psi^{s}_{x}(\lambda_{n}^{-1}\zeta)=d^{n}G^{-}\circ\psi^{s}_{f^{n}(x)}(\zeta)\]
we deduce that \(G^{-}\circ\psi^{s}_{x}\) is a subharmonic function of order smaller than \(1/2\), and Wiman's theorem implies that \(\mathrm{Comp}_{(\psi^{s}_{x})^{-1}(\mathbb{B})}(x)\) is a bounded domain in \(\mathbb{C}\); thus \(W^{s}_{\mathbb{B}}(x)\) has bounded vertical degree and we are done.

Another idea, which was communicated to us by Pierre Berger, is to use a Hausdorff dimension argument to prove directly that stable slices of \(J\) are totally disconnected. Indeed the Hausdorff dimension of stable slices of \(J^{-}\) can be estimated using the thermodynamic formalism for hyperbolic maps. This turns out to give a better bound on the Jacobian.

**Proposition 4.3**.: _Let \(f\) be a hyperbolic Hénon map such that \(|\mathrm{Jac}\,f|<d^{-1}\). Then \(f\) is stably totally disconnected._

Proof.: Since \(J\) is a locally maximal hyperbolic set and the dynamics along stable manifolds is conformal, there is an exact formula for the Hausdorff dimension of the stable slices: for any \(x\in J\),
\[\delta^{s}:=\dim_{H}\left(J\cap W^{s}_{\mathrm{loc}}(x)\right)=\frac{h_{\kappa^{s}}(f)}{-\int\log|df|_{E^{s}(x)}|\,d\kappa^{s}(x)} \tag{2}\]
(see Pesin's book [39, Thm 22.1]; this goes back to the work of Manning and McCluskey [36]), where \(\kappa^{s}\) is a certain invariant measure (the unique equilibrium state associated to \(\delta^{s}\log|df|_{E^{s}}|\)) and \(h_{\kappa^{s}}(f)\) is its measure theoretic entropy.
By the variational principle we have that \(h_{\kappa^{s}}(f)\leq\log d\). On the other hand the Lyapunov exponent in the denominator of the right hand side of (2) is bounded below by \(|\log|\mathrm{Jac}\,f||>\log d\). Therefore \(\delta^{s}\leq\log d/|\log|\mathrm{Jac}\,f||<1\), and since any connected set with more than one point has Hausdorff dimension at least \(1\), it follows that \(J\cap W^{s}_{\mathrm{loc}}(x)\) is totally disconnected.

**Question 4.4**.: _Is a dissipative hyperbolic Hénon map always stably totally disconnected?_

## 5. Classification of semi-local components of \(K^{+}\) and \(J^{+}\)

_Throughout this section, \(f\) is a dissipative and hyperbolic complex Hénon map of degree \(d\) with a disconnected Julia set (or equivalently, \(f\) is unstably disconnected)._ We assume moreover that \(f\) _is stably totally disconnected_. The results of §4 imply that this holds whenever \(|\mathrm{Jac}\,f|<1/d\). We fix a large bidisk \(\mathbb{B}\) as before, and our purpose is to classify the connected components of \(J^{+}\cap\mathbb{B}\) and study the induced dynamics on this set of components.

### Geometric preparations

We start with some general lemmas about vertical submanifolds in a bidisk. We define the angle \(\angle(v,w)\) between two complex directions \(v\) and \(w\) at \(x\in\mathbb{C}^{2}\) to be their distance in \(\mathbb{P}(T_{x}\mathbb{C}^{2})\simeq\mathbb{P}^{1}\) relative to the Fubini-Study metric induced by the standard Hermitian structure of \(T_{x}\mathbb{C}^{2}\simeq\mathbb{C}^{2}\).

**Lemma 5.1**.: _Let \(M\) be a vertical submanifold in \(\mathbb{D}\times\mathbb{D}\), and let \(a\in\mathbb{D}\) and \(r>0\) be such that \(M\) has no horizontal tangency in \(\mathbb{D}\times D(a,2r)\). Then there exists a universal constant \(C_{0}\) such that for any \(x\in M\cap(\mathbb{D}\times D(a,r))\), the angle between \(T_{x}M\) and the horizontal direction is bounded from below by \(C_{0}r\)._

Proof.: If \(M\) has no horizontal tangency in \(\mathbb{D}\times D(a,2r)\), then \(M\cap(\mathbb{D}\times D(a,2r))\) is the union of \(\deg(M)\) vertical graphs. Let \(\Gamma\) be one of these graphs. Then \(\varphi:=\pi_{1}\circ(\pi_{2}|_{\Gamma})^{-1}\) maps \(D(a,2r)\) into \(2\mathbb{D}\) and \(\Gamma=\{(\varphi(w),w),\ w\in D(a,2r)\}\). By the Cauchy estimate, we get that \(|\varphi^{\prime}|\leq 2/r\) on \(D(a,r)\) and the result follows.

A typical use of this result is by taking the contrapositive: if a vertical submanifold \(M\) in \(\mathbb{D}\times\mathbb{D}\) has a near horizontal tangency in \(\mathbb{D}\times D(a,r)\), then it has an actual horizontal tangency in \(\mathbb{D}\times D(a,2r)\). Let us denote by \([e_{1}]\in\mathbb{P}(T\mathbb{C}^{2})\) the horizontal direction.

**Corollary 5.2**.: _Let \(M\) be a vertical submanifold in \(\mathbb{D}\times\mathbb{D}\) which extends as a vertical submanifold to \(\mathbb{D}\times(3/2)\mathbb{D}\). There exists a universal constant \(C_{1}\) such that if for some \(a\in\mathbb{D}\), there exists \(x\in M\cap(\mathbb{D}\times\{a\})\) such that \(\angle(T_{x}M,[e_{1}])<\theta\), then there exists \(a^{\prime}\in(3/2)\mathbb{D}\) such that \(|a-a^{\prime}|<C_{1}\theta\) and \(M\) is tangent to \(\mathbb{D}\times\{a^{\prime}\}\)._

For the sake of completeness let us also state a slightly stronger result:

**Corollary 5.3**.: _Let \(M\) be a vertical submanifold in \(\mathbb{D}\times\mathbb{D}\) of degree at most \(k\) which extends as a vertical submanifold to \(\mathbb{D}\times r_{0}\mathbb{D}\) for some \(r_{0}>1\) (say \(r_{0}=3/2\)).
There exists a function \(h=h_{k}\) such that \(h(\theta)\to 0\) as \(\theta\to 0\) with the following property: if \(x\in M\) is such that the angle between \(T_{x}M\) and the horizontal direction is bounded by \(\theta\ll 1\), then there exists \(x^{\prime}\in M\) with \(d(x,x^{\prime})\leqslant h(\theta)\) such that \(M\) has a horizontal tangency at \(x^{\prime}\)._

Proof.: Indeed, letting \(a=\pi_{2}(x)\) and applying Corollary 5.2, we see that the connected component of \(M\) containing \(x\) in \(\mathbb{D}\times D(a,C_{1}\theta)\) cannot be a vertical graph, so it admits a horizontal tangency. Furthermore, an easy compactness argument shows that the diameter of a connected component of \(M\cap(\mathbb{D}\times D(a,r))\) is bounded by \(h_{k}(r)\) with \(h_{k}(r)\to 0\) as \(r\to 0\). The result follows.

_Remark 5.4_.: It is likely that \(h_{k}(r)=O\left(r^{1/k}\right)\), but the precise argument remains to be found.

The following result is a precise version of the Reeb stability theorem (see [11]), specialized to our setting.

**Lemma 5.5**.: _Let \(x_{0}\in J\) be such that \(W^{s}_{\mathbb{B}}(x_{0})\) is transverse to \(\partial\mathbb{B}\). Then there exists \(\delta\) depending only on \(\min_{y\in W^{s}_{\mathbb{B}}(x_{0})\cap\partial\mathbb{B}}\angle\left(T_{y}W^{s}_{\mathbb{B}}(x_{0}),[e_{1}]\right)\) such that if \(\tau\subset J^{u}(x_{0})\) is a connected compact set containing \(x_{0}\), of diameter less than \(\delta\), then for every \(x\in\tau\), \(W^{s}(x)\) is transverse to \(\partial\mathbb{B}\), \(\deg W^{s}_{\mathbb{B}}(x)=\deg W^{s}_{\mathbb{B}}(x_{0})\) and \(\bigcup_{x\in\tau}W^{s}_{\mathbb{B}}(x)\) is homeomorphic to \(\tau\times W^{s}_{\mathbb{B}}(x_{0})\)._

Note that it is slightly abusive to say that \(W^{s}_{\mathbb{B}}(x)\) is transverse to \(\partial\mathbb{B}\), since \(W^{s}_{\mathbb{B}}(x)\) precisely stops at \(\partial\mathbb{B}\). Of course \(W^{s}_{\mathbb{B}}(x)\) extends to a neighborhood of \(\overline{\mathbb{B}}\) and what we mean is transversality for this extension.

_Remark 5.6_.: Later on we will use this lemma with \(r\mathbb{B}\) instead of \(\mathbb{B}\) for \(1\leqslant r\leqslant 2\) (see Proposition 5.12). It will be important there that the constant \(\delta\) is uniform in \(r\in[1,2]\), which easily follows from the proof.

Proof.: Set \(\theta=\min_{y\in W^{s}_{\mathbb{B}}(x_{0})\cap\partial\mathbb{B}}\angle\left(T_{y}W^{s}_{\overline{\mathbb{B}}}(x_{0}),[e_{1}]\right)\). The stable lamination in a neighborhood of \(\overline{\mathbb{B}}\) is covered by finitely many flow boxes. So there exists \(r>1\) depending only on \(\theta\) such that \(W^{s}_{r\mathbb{B}}(x_{0})\) is transverse to \(\partial(r\mathbb{B})\). Since the stable leaves in \(\mathbb{B}\) are simply connected, we can apply a local version of the Reeb stability theorem (see [11, Prop. 11.4.8]) which asserts that when \(\tau\subset J\cap W^{u}(x_{0})\) is sufficiently small, for \(x\in\tau\), by local triviality of the stable lamination, the domain \(W^{s}_{r\mathbb{B}}(x_{0})\subset W^{s}(x_{0})\) can be lifted to a domain \(D_{x}\subset W^{s}(x)\), and the collection \(\{D_{x},\ x\in\tau\}\) is topologically a product.
Since \(W^{s}_{\mathbb{B}}(x_{0})\) is transverse to \(\partial\mathbb{B}\), \(W^{s}_{\mathbb{B}}(x_{0})\subset W^{s}_{r\mathbb{B}}(x_{0})\) is a smoothly bounded domain and, reducing \(\tau\) if necessary, the transversality persists, \(\operatorname{Comp}_{D_{x}\cap\mathbb{B}}(x)\) varies continuously and \(\bigcup_{x\in\tau}W^{s}_{\mathbb{B}}(x)\) is a product. Finally, fix any horizontal line, say close to \(\partial\mathbb{B}\); by transversality and continuity, its number of intersection points with \(W^{s}_{\mathbb{B}}(x)\) is constant on \(\tau\), hence the statement on the degree.

What remains to be seen is why the size of the allowed transversal \(\tau\) depends only on the minimal angle \(\theta\). This follows from the mechanism of Reeb stability. What we need to know is how far we can push \(x\) in \(\tau\) so as to keep the transversality between \(W^{s}_{\mathbb{B}}(x)\) and \(\partial\mathbb{B}\). Pick \(y\in W^{s}_{\overline{\mathbb{B}}}(x_{0})\cap\partial\mathbb{B}\). Understanding how a neighborhood of \(y\) in \(W^{s}(x_{0})\) evolves when the base point \(x\in\tau\) changes depends on the choice of a path \(\gamma\) joining \(x_{0}\) to \(y\) in \(W^{s}(x_{0})\) and of a covering of \(\gamma\) by a chain of overlapping plaques. (Recall that by definition a _plaque_ is the intersection between a leaf and a flow box.) Notice first that there is a uniform control of the length of such a path \(\gamma\): for instance we can take an external ray and apply Proposition 3.1 (see Remark 3.2). So the length of a minimal chain of plaques joining \(x_{0}\) to \(y\) is uniformly bounded, and there exists \(\delta=\delta(\theta)\) such that if \(\operatorname{Diam}_{x_{0}}(\tau)<\delta\), then the continuation of the plaque containing \(y\) remains transverse to \(\partial\mathbb{B}\). Finally, the number of plaques required to cover \(\partial W^{s}_{\overline{\mathbb{B}}}(x_{0})\) depends basically on the volume of \(W^{s}_{r\mathbb{B}}(x_{0})\) for some \(r>1\), which in turn depends only on the degree of \(W^{s}_{r^{\prime}\mathbb{B}}(x_{0})\) for some \(r^{\prime}>r\). By Proposition 4.1 this degree is uniformly bounded. So the number of plaques is uniformly bounded and we are done.

We will also need the following extension lemma.

**Lemma 5.7** ([35, Prop. 5.8]).: _There exists a neighborhood \(\mathcal{N}\) of \(J^{+}\cap\mathbb{B}\) such that the stable lamination \(\mathcal{W}^{s}\) extends to a \(C^{1}\) foliation of \(\mathcal{N}\)._

Observe that in [35] it is assumed that \(|\operatorname{Jac}f|<d^{-2}\), but what is really needed for extending the stable lamination is the boundedness of the vertical degree, which holds in our setting (cf. Proposition 4.1). The \(C^{1}\) regularity of the holonomy will not be used in the paper. Using this extension lemma, we can extend Lemma 5.5 to a statement about an open neighborhood of \(W^{s}_{\mathbb{B}}(x_{0})\) with exactly the same proof.

**Lemma 5.8**.: _Let \(x_{0}\in J\) be such that \(W^{s}_{\mathbb{B}}(x_{0})\) is transverse to \(\partial\mathbb{B}\).
Then there exists \(\delta\) depending only on \(\min_{y\in W^{s}_{\mathbb{B}}(x_{0})\cap\partial\mathbb{B}}\angle\left(T_{y}W^{s}_{\overline{\mathbb{B}}}(x_{0}),[e_{1}]\right)\) such that for every \(x\in D^{u}(x_{0},\delta)\), \(\mathcal{W}^{s}(x)\) is transverse to \(\partial\mathbb{B}\), \(\deg\mathcal{W}^{s}_{\mathbb{B}}(x)=\deg W^{s}_{\mathbb{B}}(x_{0})\) and \(\bigcup_{x\in D^{u}(x_{0},\delta)}\mathcal{W}^{s}_{\mathbb{B}}(x)\) is homeomorphic to \(D^{u}(x_{0},\delta)\times W^{s}_{\mathbb{B}}(x_{0})\)._

### Thin and thick components

In this section we study the geometry of the components of \(J^{+}\cap\mathbb{B}\). The arguments rely mostly on the geometry of the stable lamination, not on the dynamics of \(f\). One main result is that thin components of \(K^{+}\cap\mathbb{B}\) have a simple leaf structure (Proposition 5.12). It follows that for a given component of \(J^{+}\cap\mathbb{B}\), either all its unstable slices are small, or all of them are large (Proposition 5.13). Together with the results of §3.3 this leads to a description and some regularity properties of components of \(J^{+}\cap\mathbb{B}\) and \(K^{+}\cap\mathbb{B}\). We start with a simple case.

**Proposition 5.9**.: _If \(x\in J\) is such that \(K^{u}(x)=J^{u}(x)=\{x\}\), then \(K^{+}_{\mathbb{B}}(x)=J^{+}_{\mathbb{B}}(x)=W^{s}_{\mathbb{B}}(x)\)._

Proof.: As observed above, the inclusion \(W^{s}_{\mathbb{B}}(x)\subset K^{+}_{\mathbb{B}}(x)\) is obvious. For the converse inclusion, observe that for every \(n\in\mathbb{Z}\), \(K^{u}(f^{n}(x))=\{f^{n}(x)\}\). For \(n\geq 1\), consider a small loop \(\gamma_{n}\subset W^{u}(f^{n}(x))\) around \(f^{n}(x)\) that is disjoint from \(K^{+}\). By the local product structure we can extend it to a germ of \(3\)-manifold \(\tilde{\gamma}_{n}\) transverse to \(W^{u}(f^{n}(x))\), disjoint from \(K^{+}\), and of size uniformly bounded from below in the stable direction. Since \(W^{s}_{2\mathbb{B}}(x)\) has finite vertical degree, it admits finitely many horizontal tangencies, so we can fix \(1\leqslant r\leqslant 2\) such that \(W^{s}_{r\mathbb{B}}(x)\) is transverse to \(\partial(r\mathbb{B})\). Then by the Inclination Lemma, for large \(n\), \(f^{-n}\left(\widetilde{\gamma}_{n}\right)\) contains a small "tube" around \(W^{s}_{r\mathbb{B}}(x)\) whose boundary is disjoint from \(K^{+}\). It follows that \(K^{+}_{r\mathbb{B}}(x)=W^{s}_{r\mathbb{B}}(x)\), hence \(K^{+}_{\mathbb{B}}(x)\subset W^{s}_{r\mathbb{B}}(x)\cap\mathbb{B}\). Finally \(W^{s}_{r\mathbb{B}}(x)\cap\mathbb{B}\) has finitely many components, and one of them is \(W^{s}_{\mathbb{B}}(x)\), so \(K^{+}_{\mathbb{B}}(x)=W^{s}_{\mathbb{B}}(x)\).

Here is a first interesting consequence.

**Corollary 5.10**.: _All but countably many components of \(K^{+}\cap\mathbb{B}\) are vertical submanifolds._

Proof.: Fix a global unstable transversal \(\Delta^{u}\) in \(\mathbb{B}\). Then every component of \(K^{+}\cap\mathbb{B}\) intersects \(\Delta^{u}\). Indeed, for any such component \(C\), \(\partial C\) is contained in \(J^{+}\) so it contains stable manifolds. Stable manifolds in \(\mathbb{B}\) are vertical and of finite degree, so they have non-trivial (transverse) intersection with \(\Delta^{u}\). Now if \(C\) is non-trivial, that is, not reduced to a vertical submanifold, then by Proposition 5.9, any component of \(C\cap\Delta^{u}\) is non-trivial, and the result follows from Corollary 3.7.
Another case where \(J^{+}_{\mathbb{B}}(x)\) is easily understood is when stable leaves are transverse to \(\partial\mathbb{B}\).

**Proposition 5.11**.: _Assume that \(J^{u}(x)\) is a leafwise bounded component such that for every \(y\in J^{u}(x)\), \(W^{s}_{\mathbb{B}}(y)\) is transverse to \(\partial\mathbb{B}\). Then_
\[J^{+}_{\mathbb{B}}(x)=\bigcup_{y\in J^{u}(x)}W^{s}_{\mathbb{B}}(y). \tag{3}\]

Note that this result is not true if the transversality assumption is omitted (see Figure 1 for a visual explanation).

Proof.: Let \(C\) be defined by the right hand side of (3). Since the \(W^{s}_{\mathbb{B}}(y)\), \(y\in J^{u}(x)\), are transverse to \(\partial\mathbb{B}\), they vary continuously with \(y\). It follows that \(C\) is a closed connected set. To show that \(C=J^{+}_{\mathbb{B}}(x)\), it is convenient to use the extension of the stable lamination to a neighborhood of \(J^{+}\cap\mathbb{B}\) (given in Lemma 5.7). Let \((U_{n})\) be a basis of open neighborhoods of \(J^{u}(x)\) in \(W^{u}(x)\) such that for every \(n\), \(\partial U_{n}\cap J=\emptyset\). For every \(\delta>0\), \(U_{n}\) is contained in the \(\delta\)-neighborhood of \(J^{u}(x)\) for large \(n\). Thus, by Lemma 5.8 the leaves issued from \(U_{n}\) are transverse to \(\partial\mathbb{B}\) and stay close to \(C\). Let \(\widetilde{U}_{n}\) be the saturation of \(U_{n}\) in the extended foliation. Then \((\widetilde{U}_{n})\) is a basis of neighborhoods of \(C\) in \(\mathbb{B}\) such that \(\partial\widetilde{U}_{n}\) is disjoint from \(J^{+}\). We conclude that \(C=J^{+}_{\mathbb{B}}(x)\).

The structure of \(J^{+}_{\mathbb{B}}(x)\) is not so easy to describe without this transversality assumption. Still, the argument can (almost) be salvaged if \(J^{u}(x)\) is small enough. This will be a key property in the following.

**Proposition 5.12**.: _There exists \(\delta_{1}>0\) such that if \(x\in J\) satisfies \(\operatorname{Diam}_{x}(J^{u}(x))\leqslant\delta_{1}\), then there exists \(1\leqslant r\leqslant 2\) such that for every \(y\in J^{u}(x)\), \(W^{s}_{r\mathbb{B}}(y)\) is transverse to \(\partial(r\mathbb{B})\) and \(J^{u}(x)\) can be followed under holonomy along \(W^{s}_{r\mathbb{B}}(x)\). In particular \(J^{+}_{r\mathbb{B}}(x)\) is homeomorphic to \(J^{u}(x)\times W^{s}_{r\mathbb{B}}(x)\) and_
\[J^{+}_{\mathbb{B}}(x)\subset J^{+}_{r\mathbb{B}}(x)=W^{s}_{r\mathbb{B}}(J^{u}(x))\subset W^{s}_{2\mathbb{B}}(J^{u}(x))=\bigcup_{y\in J^{u}(x)}W^{s}_{2\mathbb{B}}(y). \tag{4}\]

Recall that \(\operatorname{Diam}_{x}\) denotes the diameter relative to the normalized leafwise metric \(d^{u}_{x}\) induced by the affine structure. By polynomial convexity, if \(K^{u}(x)\) is leafwise bounded, then \(J^{u}(x)=\partial_{\mathrm{i}}K^{u}(x)\), so \(\operatorname{Diam}_{x}(K^{u}(x))=\operatorname{Diam}_{x}(J^{u}(x))\). Recall from §2.3 that by the Koebe Distortion Theorem, the ambient distance \(d\) and the leafwise Euclidean distance \(d^{u}_{x}\) are equivalent in a small neighborhood of \(x\), with universal bounds, i.e. in some neighborhood of \(x\) in \(W^{u}(x)\) we have \(d/2\leq d^{u}_{x}\leq 2d\). In particular if \(\operatorname{Diam}_{x}(J^{u}(x))\) is small enough then \(\operatorname{Diam}(J^{u}(x))\) and \(\operatorname{Diam}(K^{u}(x))\) are comparable to \(\operatorname{Diam}_{x}(J^{u}(x))\) (where \(\operatorname{Diam}\) denotes the ambient diameter).
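For the reader's convenience, let us recall the classical distortion bounds behind this comparison. We state them, as a sketch, for a univalent map \(\psi:\mathbb{D}\to\mathbb{C}\) normalized by \(\psi(0)=0\) and \(\psi^{\prime}(0)=1\) (we assume here that the unstable parameterizations of §2.3 are normalized in this way):
\[\frac{1-|\zeta|}{(1+|\zeta|)^{3}}\leq|\psi^{\prime}(\zeta)|\leq\frac{1+|\zeta|}{(1-|\zeta|)^{3}},\qquad\zeta\in\mathbb{D}.\]
On a disk \(\{|\zeta|\leq\rho\}\) with \(\rho\) universal (e.g. \(\rho=1/10\)), both bounds lie in \([1/2,2]\); integrating the upper bound, and combining the lower bound with the Koebe one-quarter theorem, then gives the two-sided comparison \(d/2\leq d^{u}_{x}\leq 2d\) stated above.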
Proof of Proposition 5.12.: Recall that every leaf of the stable lamination in \(3\mathbb{B}\) is a vertical disk of degree bounded by \(D\), so by the Riemann-Hurwitz formula it admits at most \(D-1\) horizontal tangencies. For \(k=0,\ldots,D\), let \(r_{k}=1+\frac{k}{D}\), and fix \(\theta<\frac{C_{0}}{8D}\), where \(C_{0}\) is as in Lemma 5.1. Let \(x\in J\) be arbitrary. By the pigeonhole principle, there exists \(k\in\{0,\ldots,D-1\}\) such that \(W^{s}_{2\mathbb{B}}(x)\) has no horizontal tangency in \(r_{k+1}\mathbb{B}\backslash r_{k}\mathbb{B}\). So by Lemma 5.1 (scaled to \(2\mathbb{B}\) and applied to any \(a\) such that \(|a|=R(r_{k}+r_{k+1})/2\), where \(R\) is the radius of \(\mathbb{B}\)) we infer that
\[\min_{y\in W^{s}_{2\mathbb{B}}(x)\cap\partial(r^{\prime}_{k}\mathbb{B})}\angle(T_{y}W^{s}_{2\mathbb{B}}(x),[e_{1}])\geq\theta,\text{ where }r^{\prime}_{k}=\frac{r_{k}+r_{k+1}}{2}.\]
Therefore, by Lemma 5.5 and Remark 5.6 there exists \(\delta_{1}\) depending only on \(\theta\), hence ultimately only on \(D\), hence on \(f\), such that if \(\operatorname{Diam}_{x}(J^{u}(x))\leq\delta_{1}\), then for every \(y\in J^{u}(x)\), \(W^{s}_{r^{\prime}_{k}\mathbb{B}}(y)\) is transverse to \(\partial(r^{\prime}_{k}\mathbb{B})\) and \(W^{s}_{r^{\prime}_{k}\mathbb{B}}(J^{u}(x))\) is topologically a product. This completes the proof of the first part of the proposition. From this point, the description in (4) directly follows from Proposition 5.11.

It follows from this analysis that if \(C\) is a semi-local component of \(J^{+}\), then either all its unstable slices are large or all of them are small.

**Proposition 5.13**.: _There exist \(0<\delta_{1}\leq\delta_{2}\) such that for every component \(C\) of \(J^{+}\cap\mathbb{B}\) the following alternative holds:_

1. _either for every_ \(x\in C\cap J\)_,_ \(\operatorname{Diam}_{x}J^{u}(x)\leq\delta_{2}\)_;_
2. _or for every_ \(x\in C\cap J\)_,_ \(\operatorname{Diam}_{x}J^{u}(x)>\delta_{1}\)_._

_In addition if (i) holds then \(C\) satisfies the conclusions of Proposition 5.12._

Referring to this dichotomy in the following, we will say that a component is _thin_ (resp. _thick_) if it satisfies (_i_) (resp. (_ii_)). We stress that the Proposition asserts that a component is thick as soon as _one_ of its unstable slices has intrinsic diameter larger than \(\delta_{2}\). As seen before (see e.g. Corollary 5.10), if \(\Delta^{u}\) is an unstable transversal, every semi-local component of \(J^{+}\) intersects \(\Delta^{u}\), so from Theorem 3.6 we immediately deduce:

**Corollary 5.14**.: _There are only finitely many thick components of \(J^{+}\cap\mathbb{B}\)._

Proposition 5.13 is a direct consequence of the following lemma.

**Lemma 5.15**.: _Let \(\delta_{1}\) be as in Proposition 5.12. There exists \(\delta_{2}\geq\delta_{1}\) such that if \(x\) is such that \(\operatorname{Diam}_{x}(J^{u}(x))\leq\delta_{1}\), then for every \(y\in J^{+}_{\mathbb{B}}(x)\cap J\), \(\operatorname{Diam}_{y}(J^{u}(y))\leq\delta_{2}\)._

Proof.: Indeed by Proposition 5.12, if \(\operatorname{Diam}_{x}(J^{u}(x))\leq\delta_{1}\), then any point in \(J^{+}_{\mathbb{B}}(x)\) can be joined to some \(y\in J^{u}(x)\) by a path \(\gamma\) contained in \(W^{s}_{2\mathbb{B}}(y)\). Furthermore, as explained in the proof of Lemma 5.5, the plaque-length of such a path \(\gamma\) is uniformly bounded. The bound on \(\operatorname{Diam}_{y}(J^{u}(y))\) then follows from the uniform continuity of holonomy along bounded paths in the stable lamination.
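Schematically, the last step can be quantified as follows (a heuristic sketch; \(N_{0}\) and \(L\) are hypothetical uniform constants, bounding the plaque-length of \(\gamma\) and the Lipschitz norm of a single plaque holonomy respectively, and are not introduced in the text):
\[\operatorname{Diam}_{y}(J^{u}(y))\leq L^{N_{0}}\operatorname{Diam}_{x}(J^{u}(x))\leq L^{N_{0}}\delta_{1}=:\delta_{2}.\]
The only point that matters is that \(\delta_{2}\) depends on \(f\) and \(\mathbb{B}\) but not on \(x\) and \(y\).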
_Remark 5.16_.: The argument of Propositions 5.12 and 5.13 makes no use of the fact that \(J^{u}(x)\) is a component of \(J\cap W^{u}(x)\). Thus the same statements hold for the saturation by semi-local stable leaves of any (say closed) subset \(X\) of an unstable manifold: if the diameter of \(X\) is small enough then, changing the bidisk \(\mathbb{B}\) if necessary, the saturation \(\hat{X}\) of \(X\) by semi-local stable manifolds is a product and all the stable slices of \(\hat{X}\) have a small diameter.

**Proposition 5.17**.: _Let \(\Delta^{u}\) be an unstable transversal in \(\mathbb{B}\). For every connected component \(C\) of \(J^{+}\cap\mathbb{B}\) (resp. \(K^{+}\cap\mathbb{B}\)), \(C\cap\Delta^{u}\) admits finitely many connected components._

Proof.: Let us first discuss the case of components of \(J^{+}\cap\mathbb{B}\). For thick components, the result follows immediately from Corollary 5.14, so we may assume that \(C\) is thin. As already seen, \(C\) intersects \(\Delta^{u}\). Pick \(x\in C\cap\Delta^{u}\); in particular \(x\in J\). Since \(C\) is thin, for some \(1\leq r\leq 2\), \(W^{s}_{r\mathbb{B}}(x)\) is transverse to \(\partial(r\mathbb{B})\) and by Proposition 5.12, \(J^{u}(x)\) can be followed under holonomy along \(W^{s}_{r\mathbb{B}}(x)\). Since \(W^{s}_{r\mathbb{B}}(x)\) and \(\Delta^{u}\) have finitely many intersection points, we infer that \(J^{+}_{r\mathbb{B}}(x)\cap\Delta^{u}\) has finitely many connected components. Finally, \(J^{+}_{\mathbb{B}}(x)=C\) coincides with the component of \(J^{+}_{r\mathbb{B}}(x)\cap\mathbb{B}\) containing \(x\), so \(C\cap\Delta^{u}\) is a union of connected components of \(J^{+}_{r\mathbb{B}}(x)\cap\Delta^{u}\) and we conclude that there are finitely many of them.

We now discuss components of \(K^{+}\cap\mathbb{B}\). Recall from Lemma 2.3 that for such a component \(C\), \(\partial C\) is a component of \(J^{+}\cap\mathbb{B}\). Assume first that such a component \(A\) is thin, and pick \(x\in A\cap\Delta^{u}\). Then \(J^{u}(x)\) can be followed under holonomy along \(W^{s}_{r\mathbb{B}}(x)\) for some \(1\leq r\leq 2\). If the polynomial hull of \(J^{u}(x)\) is non-empty, then it has a small diameter and it can be followed by holonomy in \(r\mathbb{B}\) along the extended foliation, just as in Proposition 5.12, and it is topologically a product. It follows that \(C\cap\Delta^{u}\) is the polynomial hull of \(J^{+}_{\mathbb{B}}(x)\cap\Delta^{u}\) and it has finitely many components. On the other hand, if every component of \(\partial C\) is thick, then \(\partial C\cap\Delta^{u}\) is contained in the finitely many components of \(K^{+}\cap\Delta^{u}\) of diameter greater than some \(\delta\), and so is \(C\cap\Delta^{u}\). This concludes the proof.

We conclude this subsection by giving a general description of components of \(J^{+}\cap\mathbb{B}\). Fix an unstable transversal \(\Delta^{u}\). Let \(x\in J\cap\Delta^{u}\) and consider \(W^{s}_{\mathbb{B}}(J^{u}(x))=\bigcup_{y\in J^{u}(x)}W^{s}_{\mathbb{B}}(y)\). If every \(W^{s}_{\mathbb{B}}(y)\) is transverse to \(\partial\mathbb{B}\) then by Proposition 5.11, \(W^{s}_{\mathbb{B}}(J^{u}(x))=J^{+}_{\mathbb{B}}(x)\). In the general case we define a relation between components of \(J^{+}\cap\Delta^{u}\) by declaring that \(C_{1}\leftrightarrow C_{2}\) if and only if there exists \(x\in C_{1}\) such that \(W^{s}_{\mathbb{B}}(x)\cap C_{2}\neq\emptyset\) (or equivalently there exists \((x_{1},x_{2})\in C_{1}\times C_{2}\) such that \(W^{s}_{\mathbb{B}}(x_{1})=W^{s}_{\mathbb{B}}(x_{2})\)).
Then extend this relation to an equivalence relation (still denoted by \(\leftrightarrow\)) by allowing finite chains \(C_{1},\ldots,C_{n}\). Finally we define
\[\widehat{W}^{s}_{\mathbb{B}}(J^{u}(x)):=\bigcup_{C\leftrightarrow J^{u}(x)}\bigcup_{y\in C}W^{s}_{\mathbb{B}}(y).\]

**Proposition 5.18**.: _For any \(x\in J\), \(J^{+}_{\mathbb{B}}(x)\) coincides with \(\widehat{W}^{s}_{\mathbb{B}}(J^{u}(x))\)._

Proof.: By Proposition 5.17, \(J^{+}_{\mathbb{B}}(x)\cap\Delta^{u}\) admits finitely many connected components \((C_{i})_{i\in I}\). Every point \(z\in J^{+}_{\mathbb{B}}(x)\) belongs to some \(W^{s}_{\mathbb{B}}(y)\), \(y\in\Delta^{u}\), and necessarily \(y\) belongs to some \(C_{i}\), say \(C_{i_{0}}\). Furthermore, if \(z^{\prime}\in J^{+}_{\mathbb{B}}(x)\) is close to \(z\), by the continuity of stable manifolds, there exists \(y^{\prime}\in\Delta^{u}\) close to \(y\) such that \(z^{\prime}\in W^{s}_{\mathbb{B}}(y^{\prime})\). Since the \(C_{i}\) are at positive distance from each other, it follows that \(y^{\prime}\) belongs to \(C_{i_{0}}\). In other words, \(W^{s}_{\mathbb{B}}(C_{i_{0}})\) is relatively open in \(J^{+}_{\mathbb{B}}(x)\). Clearly \(W^{s}_{\mathbb{B}}(C_{i_{0}})\) is connected, and even arcwise connected, since by Theorem 3.6, \(C_{i_{0}}\) is locally connected. Thus the \(W^{s}_{\mathbb{B}}(C_{i})\), \(i\in I\), form a finite family of connected, relatively open sets, each of which is either contained in or disjoint from \(J^{+}_{\mathbb{B}}(x)\), and those meeting \(J^{+}_{\mathbb{B}}(x)\) cover it. Define a non-oriented graph on \(I\) by joining \(i\) and \(j\) whenever \(W^{s}_{\mathbb{B}}(C_{i})\cap W^{s}_{\mathbb{B}}(C_{j})\neq\emptyset\). If we fix \(i_{0}\) such that \(W^{s}_{\mathbb{B}}(C_{i_{0}})\subset J^{+}_{\mathbb{B}}(x)\), it follows that \(J^{+}_{\mathbb{B}}(x)=\bigcup_{i\in I_{0}}W^{s}_{\mathbb{B}}(C_{i})\), where \(I_{0}\) is the component of \(i_{0}\) in the graph. This is exactly the announced description.

Let us point out the following interesting consequence of the proof:

**Corollary 5.19**.: _Every connected component of \(J^{+}\cap\mathbb{B}\) (resp. \(K^{+}\cap\mathbb{B}\)) is locally connected._

Proof.: Given a component \(J^{+}_{\mathbb{B}}(x)\) of \(J^{+}\cap\mathbb{B}\), with notation as in the previous proof, \((W^{s}_{\mathbb{B}}(C_{i}))_{i\in I}\) is a finite cover of \(J^{+}_{\mathbb{B}}(x)\) by locally connected and relatively open sets: local connectedness follows. If now \(C\) is a component of \(K^{+}\cap\mathbb{B}\), we saw in the proof of Proposition 5.17 that \(\partial C\) is a finite union of components of \(J^{+}\cap\mathbb{B}\), therefore \(\partial C\) is locally connected. General topology then implies that \(C\) is locally connected and we are done.

### Induced dynamics on the set of components of \(J^{+}\)

We still consider a uniformly hyperbolic dissipative Hénon map, with a disconnected and stably totally disconnected Julia set, and fix a large bidisk \(\mathbb{B}\) as before. Since \(f\) maps \(K^{+}\cap\mathbb{B}\) (resp. \(J^{+}\cap\mathbb{B}\)) into itself, it induces a dynamical system on the set of its connected components. Recall that a component is said to be _non-trivial_ if it is not reduced to a vertical submanifold.

**Theorem 5.20**.: _Let \(f\) be dissipative and hyperbolic with a disconnected and stably totally disconnected Julia set and \(\mathbb{B}\subset\mathbb{C}^{2}\) be a large bidisk. Then \(K^{+}\cap\mathbb{B}\) (resp. \(J^{+}\cap\mathbb{B}\)) admits uncountably many components, at most countably many of which are non-trivial.
Any non-trivial connected component of \(K^{+}\cap\mathbb{B}\) (resp. \(J^{+}\cap\mathbb{B}\)) is preperiodic, and there are finitely many non-trivial periodic components._

_Remark 5.21_.: Notice that a periodic component of \(K^{+}\cap\mathbb{B}\) can be trivial, that is, a vertical submanifold. Since it is mapped into itself by some \(f^{N}\), in this case we conclude that it is of the form \(W^{s}_{\mathbb{B}}(x)\) for some saddle periodic point \(x\).

**Lemma 5.22**.: _The function \(y\mapsto\operatorname{Diam}_{y}(J^{u}(y))\) (resp. \(y\mapsto\operatorname{Diam}_{y}(K^{u}(y))\)) is upper semi-continuous on \(J\). In particular if \(y_{n}\to y_{\infty}\) and \((\operatorname{Diam}_{y_{n}}(K^{u}(y_{n})))\) is unbounded, then \(K^{u}(y_{\infty})\) is leafwise unbounded, and likewise for \(J^{u}\)._

Proof.: Recall that \(\operatorname{Diam}_{y}(J^{u}(y))=\operatorname{Diam}_{y}(K^{u}(y))\) for every \(y\in J\) (including the case where it is infinite), so it is enough to deal with \(K^{u}(y)\). Assume first that the \(y_{n}\) belong to the same local leaf and \(y_{n}\to y_{\infty}\). If \(K^{u}(y_{\infty})\) is leafwise bounded, we can consider a closed loop \(\gamma\) enclosing it and disjoint from \(K^{+}\). Then for large enough \(n\), \(\gamma\) also encloses \(K^{u}(y_{n})\), and any cluster value of this sequence for the Hausdorff topology is a continuum contained in \(K^{+}\) and containing \(y_{\infty}\). It follows that
\[\limsup_{n\to\infty}\operatorname{Diam}_{y_{\infty}}(K^{u}(y_{n}))\leq\operatorname{Diam}_{y_{\infty}}(K^{u}(y_{\infty})),\]
hence
\[\limsup_{n\to\infty}\operatorname{Diam}_{y_{n}}(K^{u}(y_{n}))\leq\operatorname{Diam}_{y_{\infty}}(K^{u}(y_{\infty})),\]
as desired. Of course if \(K^{u}(y_{\infty})\) is leafwise unbounded, the inequality is obvious.

Assume now that the \(y_{n}\) belong to different local leaves. As before, the case where \(K^{u}(y_{\infty})\) is leafwise unbounded is obvious. If \(K^{u}(y_{\infty})\) is leafwise bounded, again we consider a closed loop \(\gamma\) enclosing it and disjoint from \(K^{+}\). In addition we can assume that \(\operatorname{Diam}_{y_{\infty}}(\gamma)\) is arbitrarily close to \(\operatorname{Diam}_{y_{\infty}}(K^{u}(y_{\infty}))\). When \(y_{n}\to y_{\infty}\), \(\gamma\) can be lifted to a loop \(\widetilde{\gamma}_{n}\) in \(W^{u}(y_{n})\), with roughly the same diameter (here we use the continuity of the leafwise distance \(d_{y}^{u}\)), and \(K^{u}(y_{n})\) is enclosed in \(\widetilde{\gamma}_{n}\). The semi-continuity of the diameter follows.

Proof of Theorem 5.20.: Fix an unstable transversal \(\Delta^{u}\), and recall that any component of \(K^{+}\cap\mathbb{B}\) (resp. \(J^{+}\cap\mathbb{B}\)) intersects \(\Delta^{u}\). By [6, Thm 7.1], \(J^{+}\cap\Delta^{u}\) admits uncountably many point components, thus the first assertion of the theorem follows from Proposition 5.9. Then Corollary 5.10 asserts that at most countably many components are non-trivial.

Let \(x\in J^{+}\cap\Delta^{u}\) and assume that \(J^{+}_{\mathbb{B}}(x)\) (or equivalently \(K^{+}_{\mathbb{B}}(x)\)) is non-trivial. Since \(\Delta^{u}\) is a global transversal, \(J^{u}(x)\) is leafwise bounded. For \(n\geq 0\), \(J^{u}(x_{n})=f^{n}(J^{u}(x))\) where \(x_{n}=f^{n}(x)\), and by (1),
\[\operatorname{Diam}_{x_{n}}(J^{u}(x_{n}))\geq Cu^{n}\operatorname{Diam}_{x}(J^{u}(x))\underset{n\to\infty}{\longrightarrow}\infty. \tag{5}\]
Let \(x_{\infty}\) be any accumulation point of \((x_{n})\).
By Lemma 5.22, \(J^{u}(x_{\infty})\) is leafwise unbounded, and so is \(K^{u}(x_{\infty})\). By local product structure, for large \(n\), the holonomy along the stable lamination defines a projection
\[D^{u}(x_{n},3/2)\cap J^{+}\to D^{u}(x_{\infty},2)\cap J^{+}\]
which we simply denote by \(\pi^{s}\). It is Lipschitz (see Lemma 5.7) and a homeomorphism onto its image. Notice that \(\pi^{s}(D^{u}(x_{n},3/2)\cap J^{+})\) contains \(D^{u}(x_{\infty},1)\cap J^{+}\) for large \(n\). For large \(n\), \(J^{u}(x_{n})\) intersects the boundary of \(D^{u}(x_{n},3/2)\), so the sets \(J^{u}(\pi^{s}(x_{n}))\) define a sequence of components of \(J^{+}\cap D^{u}(x_{\infty},1)\) of diameter bounded from below. From Theorem 3.6 we infer that this sequence is finite. Let us denote by \(C_{j}\), \(j=1,\dots,N\), these components. By the Pigeonhole Principle there exist \(n\neq n^{\prime}\) such that \(\pi^{s}(x_{n})\) and \(\pi^{s}(x_{n^{\prime}})\) belong to the same \(C_{j}\), thus \(x_{n}\) and \(x_{n^{\prime}}\) belong to the local stable saturation of \(C_{j}\). Therefore the sequence \((J^{+}_{\mathbb{B}}(x_{n}))\) is eventually periodic, and so is \((K^{+}_{\mathbb{B}}(x_{n}))\).

Consider now a non-trivial periodic component \(C\) of \(J^{+}\cap\mathbb{B}\). Then it is of the form \(J^{+}_{\mathbb{B}}(x)\) for some \(x\in\Delta^{u}\cap J^{+}\). The previous argument shows that there are points \(x^{\prime}\in C\cap J\) such that \(J^{u}(x^{\prime})\) is leafwise unbounded. By Proposition 5.13, the components of the slice \(C\cap\Delta^{u}\) have diameter uniformly bounded from below (here we use the fact that for every \(x\in\Delta^{u}\cap J^{+}\), the distance \(d^{u}_{x}\) is uniformly comparable to the ambient distance on \(\Delta^{u}\)). Thus, by Theorem 3.6 only finitely many such components can arise and we conclude that \(C\) belongs to a finite set of components. The corresponding result for components of \(K^{+}\cap\mathbb{B}\) follows from Lemma 2.3.

_Remark 5.23_.: Using techniques similar to those of §5.2 it is easily seen that any component of \(K^{+}\cap\mathbb{B}\) has finitely many preimages. In other words, the induced dynamical system on components of \(K^{+}\cap\mathbb{B}\) is finite-to-1. Indeed, assume by contradiction that \(C\) is a component such that \(f^{-1}(C)\cap\mathbb{B}\) has infinitely many components \(C_{i}\). Then by Theorem 3.6, for some \(i\), \(C_{i}\cap\Delta^{u}\) has a component of small diameter. Therefore by pushing forward, there is some \(x\in C\cap J\) such that \(\operatorname{Diam}_{x}(J^{u}(x))\) is small, that is, \(J^{+}_{\mathbb{B}}(x)\) (or equivalently \(K^{+}_{\mathbb{B}}(x)\)) is thin. But it is easy to show that a thin component admits finitely many preimages, and we arrive at the desired contradiction.

## 6. Components of \(J\) and \(K\)

We keep the same setting as before, that is, \(f\) is a uniformly hyperbolic dissipative Hénon map, with a disconnected and stably totally disconnected Julia set. In this section, we complete the proof of the main theorem by classifying the connected components of \(J\) and \(K\). We start with an easy fact. Recall the notation \(E(x)=\operatorname{Comp}_{E}(x)\).

**Proposition 6.1**.: _If \(x\in J\) is such that \(J^{u}(x)\) is leafwise bounded then \(J(x)=J^{u}(x)\)._

Proof.: First, \(J^{u}(x)\) is a connected set such that \(x\in J^{u}(x)\subset J\), so it is contained in \(J(x)\).
To prove the converse statement, let \((U_{n})\) be a sequence of open neighborhoods of \(J^{u}(x)\) in \(W^{u}(x)\) decreasing to \(J^{u}(x)\) and such that \(\partial_{\mathrm{i}}U_{n}\cap J=\emptyset\). Since \(J^{s}(x)=\{x\}\), for every \(n\) any sufficiently small loop \(\gamma\) about \(x\) in \(W^{s}(x)\) can be propagated along \(U_{n}\) to yield an open set \(\widetilde{U}_{n}\) such that \(\partial\widetilde{U}_{n}\cap J=\emptyset\). Note that we did not prove any extension result for the unstable lamination, so we cannot simply say that we propagate \(\gamma\) by using some "unstable holonomy". On the other hand we can simply use the inclination lemma, by pushing forward a small thickening of \(f^{-n}(\gamma)\) as a \(3\)-manifold transverse to \(W^{s}(f^{-n}(x))\). Finally, for every \(n\), \(\widetilde{U}_{n}\cap J\) is relatively open and closed in \(J\), so it contains \(J(x)\), and we conclude that \(J(x)=J^{u}(x)\).

To understand the structure of periodic components of \(J\), let us introduce a definition.

**Definition 6.2**.: A _quasi-solenoid_ is a saddle hyperbolic set \(\Lambda\) such that \(f^{k}(\Lambda)=\Lambda\) for some \(k\) and:

* \(\Lambda\) is connected;
* \(\Lambda\) has local product structure;
* for every \(x\in\Lambda\), \(\Lambda\cap W^{u}(x)\) is leafwise unbounded and locally connected, and \(\Lambda\cap W^{s}(x)\) is totally disconnected.

Observe that in this definition we do not require that \(\Lambda\cap W^{s}_{\operatorname{loc}}(x)\) is a Cantor set. In other words, we allow for isolated points in a stable transversal (this phenomenon will be ruled out later under appropriate hypotheses, see Theorem 8.7).

**Theorem 6.3**.: _Let \(f\) be dissipative and hyperbolic with a disconnected and stably totally disconnected Julia set and \(\mathbb{B}\) be as above. Let \(C\) be a periodic component of \(J^{+}\cap\mathbb{B}\) and \(k\) be its period. Then \(\Lambda:=\bigcap_{n\geq 0}f^{kn}(C)\) is a point or a quasi-solenoid, and it is a connected component of \(J\)._

Proof.: Replacing \(f\) by some iterate, we may assume that \(C\) is invariant, that is, \(k=1\). If \(C\) is a vertical manifold, it follows from Remark 5.21 that \(\Lambda\) is a point, and the other properties follow easily, so the interesting case is when \(C\) is non-trivial. Then, arguing as in the proof of Theorem 5.20 and using (5), \(C\) contains points \(x\) such that \(\operatorname{Diam}_{x}(J^{u}(x))\) is arbitrarily large, so it is thick in the sense of Proposition 5.13.

Define \(\Lambda:=\bigcap_{n\geq 0}f^{n}(C)=\bigcap_{n\geq 0}\overline{f^{n}(C)}\). Since by assumption \(f(C)\subset C\), \(\Lambda\) is a decreasing intersection of compact connected sets. Hence \(\Lambda\) is an invariant connected hyperbolic set contained in \(J\), and \(f(\Lambda)=\Lambda\). Let us show that it is a connected component of \(J\). For this, let \(\Lambda^{\prime}\) be the connected component of \(\Lambda\) in \(J\). By definition \(\Lambda\subset\Lambda^{\prime}\). Since \(\Lambda^{\prime}\) is connected and contained in \(J^{+}\cap\mathbb{B}\), it must be contained in \(C\). Furthermore since \(f(\Lambda)=\Lambda\) and \(f\) permutes the components of \(J\), we have that \(f(\Lambda^{\prime})=\Lambda^{\prime}\), hence for every \(n\geq 1\), \(f^{-n}(\Lambda^{\prime})\subset C\), and we conclude that \(\Lambda^{\prime}\subset\bigcap_{n\geq 0}f^{n}(C)=\Lambda\), as was to be shown.

We claim that for every \(x\in\Lambda\), \(J^{u}(x)\) is leafwise unbounded.
Indeed, for every \(x\in\Lambda\), we have that \(x=f^{n}(x_{-n})\) with \(x_{-n}=f^{-n}(x)\in C\) and since \(C\) is thick, \(\operatorname{Diam}_{x_{-n}}(J^{u}(x_{-n}))\) is uniformly bounded from below, and the result follows.

By Lemma 3.9, for every \(x\in\Lambda\), there are only finitely many components of \(J\cap D^{u}(x,1)\) intersecting \(\partial_{\operatorname{i}}D^{u}(x,1)\) and \(D^{u}(x,1/2)\). A simple compactness argument using the holonomy invariance of \(J^{+}\) shows that this number is uniformly bounded, therefore there exists a uniform \(\delta>0\) such that leafwise unbounded components of \(J^{+}\) intersecting \(D^{u}(x,1/2)\) are \(\delta\)-separated in \(D^{u}(x,1)\) relative to the distance \(d^{u}_{x}\) (or equivalently, relative to the ambient one). From this we deduce that for every \(x\in\Lambda\), there exists \(\delta>0\) such that \(\Lambda\) coincides with \(J^{u}(x)\) in \(W^{u}_{\delta}(x)\), and it follows from Theorem 3.6 that \(\Lambda\) is locally connected in the unstable direction.

Let us show that \(\Lambda\) has local product structure. For this, let \(y_{1},y_{2}\in\Lambda\) be close (i.e. \(d(y_{1},y_{2})\ll\delta\)), denote by \(\pi^{s}:W^{u}_{\operatorname{loc}}(y_{1})\to W^{u}_{\operatorname{loc}}(y_{2})\) the projection along stable leaves, and let \(z_{2}=\pi^{s}(y_{1})\). Since \(J^{u}(y_{1})\) and \(J^{u}(y_{2})\) are leafwise unbounded, if \(d(y_{1},y_{2})\) is small enough, \(J^{u}(z_{2})\) intersects \(\partial_{\operatorname{i}}D^{u}(y_{2},1)\), and so does \(J^{u}(y_{2})\). By definition of \(\delta\), it follows that \(J^{u}(y_{2})=J^{u}(z_{2})\), hence \(y_{2}\) and \(z_{2}\) belong to the same connected component of \(J\). In particular, \(z_{2}\) belongs to \(C\). Since \(f^{-1}\) contracts distances along unstable manifolds, and respects connected components of \(J\), we can repeat this argument with \(f^{-n}(y_{2})\) and \(f^{-n}(z_{2})\) for any \(n\geq 0\) and we conclude that \(z_{2}\in\Lambda\), as was to be shown.

**Theorem 6.4**.: _Let \(f\) be dissipative and hyperbolic with a disconnected and stably totally disconnected Julia set. Then every component of \(J\) is either_

1. _a point;_
2. _or of the form \(J^{u}(x)\) with \(J^{u}(x)\) non-trivial and leafwise bounded;_
3. _or a periodic quasi-solenoid._

_In addition:_

(i) _There are finitely many quasi-solenoidal components._
(ii) _Every periodic component of \(J\) is either a point or a quasi-solenoid._
(iii) _Every non-trivial component of \(J\) is attracted by a quasi-solenoid. More precisely, given a non-trivial component \(C\), for every \(\delta>0\) there exists \(n\) such that \(f^{kn}(C)\subset W^{s}_{\delta}(\Lambda)\), where \(\Lambda\) is a quasi-solenoid of period \(k\)._

Note that in assertion (_iii_), the uniformity of \(n\) as a function of \(\delta\) is not a direct consequence of the fact that \(\omega(C)\subset\Lambda\).

Proof.: To establish the announced trichotomy, by Proposition 6.1 it is enough to show that if \(C\) is a component such that for some \(x\in C\), \(J^{u}(x)\) is leafwise unbounded, then \(C\) is a periodic quasi-solenoid. Note that for every \(n\geq 1\), \(J^{u}(f^{-n}(x))\) is leafwise unbounded. Therefore the component of \(f^{-n}(x)\) in \(J^{+}\cap\mathbb{B}\) is thick in the sense of Proposition 5.13, and by Corollary 5.14, \(J^{+}_{\mathbb{B}}(f^{-n}(x))\) belongs to a finite set of semi-local components.
Thus there exists a component \(C^{+}\) of \(J^{+}\cap\mathbb{B}\) and an infinite sequence \(n_{i}\) such that \(f^{-n_{i}}(x)\in C^{+}\), hence \(C^{+}\) is periodic of some period \(k\) and, reversing time, we get that \(J^{u}(x)\) is included in \(\Lambda:=\bigcap_{n\geq 0}f^{kn}(C^{+})\). By Theorem 6.3, \(\Lambda\) is a quasi-solenoid and \(J(x)=C=\Lambda\). Since there are only finitely many periodic semi-local components of \(J^{+}\), this argument shows that \(J\) has only finitely many quasi-solenoidal components.

For assertion (_ii_), let \(C\) be a periodic component of \(J\) which is not reduced to a point, and let \(x\in C\). Without loss of generality we assume \(C\) is fixed. Expansion in the unstable direction shows that if \(J^{u}(x)\) were leafwise bounded, then \(J^{u}(x)=\{x\}\); by Proposition 6.1 this would force \(C=J(x)=\{x\}\), which is a contradiction. Thus by the first part of the proof, \(C\) is a quasi-solenoid.

To prove (_iii_), let \(C\) be a non-trivial component of \(J\), and for some large bidisk \(\mathbb{B}\), let \(C^{+}\) be the component of \(J^{+}\cap\mathbb{B}\) containing \(C\). Then by Theorem 5.20, \(C^{+}\) is ultimately periodic (with period \(k\)), thus by Theorem 6.3, \(\bigcap_{n\geq 0}f^{kn}(C^{+})\) is a periodic quasi-solenoid \(\Lambda\). This shows that \(C\) is attracted by \(\Lambda\) in the sense that for large \(n\), \(f^{kn}(C)\) is contained in a \(\delta\)-neighborhood of \(\Lambda\). To get the more precise statement that \(f^{kn}(C)\subset W^{s}_{\delta}(\Lambda)\), we have to show that \(W^{s}_{\delta}(\Lambda)\) is relatively open in \(C^{+}\cap J\). The argument is the same as for the local product structure: since large leafwise components of \(J\) are separated by some uniform distance and \(C^{+}\) is thick, if \(x\in C^{+}\cap J\) is sufficiently close to \(y\in\Lambda\), \(W^{s}_{\mathrm{loc}}(x)\cap W^{u}_{\mathrm{loc}}(y)\) must belong to a large component of \(W^{u}_{\mathrm{loc}}(y)\cap J\), therefore it belongs to \(J^{u}(y)\), and we are done.

_Remark 6.5_.: Leafwise bounded components of \(J\) are locally connected, as follows from Theorem 3.6. On the other hand a quasi-solenoid is not locally connected, since it locally has the structure of a Cantor set times a (locally) connected set.

The following result says that there is a 1-1 correspondence between components of \(K\) and \(J\), so that the previous theorems yield a description of components of \(K\) as well.

**Proposition 6.6**.: _Every component of \(K\) contains a unique component of \(J\)._

For polynomials in one variable, the analogous statement is the fact that every component of \(K\) has a connected boundary, which follows from polynomial convexity. Here, components of \(K\) have empty interior so this has to be formulated differently.

Proof.: Every component of \(K\) contains a point of \(J\), for otherwise it would be contained in \(\operatorname{Int}(K^{+})\), so it is of the form \(K(x)\) for some \(x\in J\). If \(J(x)=\{x\}\) the result is obvious. Now assume that \(J^{u}(x)\) is leafwise bounded. By Lemma 2.4, \(K^{u}(x)\) is obtained by filling the holes of \(J^{u}(x)\) in \(W^{u}(x)\simeq\mathbb{C}\), so \(J^{u}(x)\) is equal to the intrinsic boundary of \(K^{u}(x)\) and the result follows. The most interesting case is when \(J(x)\) is a quasi-solenoid. Replacing \(f\) by \(f^{k}\) for some \(k\geq 1\), we may assume that \(J(x)\) is fixed. We proved in Theorem 6.3 that \(J(x)=\bigcap_{n\geq 0}f^{n}(J^{+}_{\mathbb{B}}(x))\). The very same proof shows that \(K(x)=\bigcap_{n\geq 0}f^{n}(K^{+}_{\mathbb{B}}(x))\).
By Lemma 2.3, \(K^{+}_{\mathbb{B}}(x)\) contains a unique component of \(J^{+}\cap\mathbb{B}\) (namely, its boundary \(J^{+}_{\mathbb{B}}(x)\)), and we conclude by arguing that if \(K(x)\) contained two distinct components \(J(x)\) and \(J(y)\) of \(J\), then \(K^{+}_{\mathbb{B}}(x)\) would contain \(J^{+}_{\mathbb{B}}(x)\) and \(J^{+}_{\mathbb{B}}(y)\), which must be distinct because \(\bigcap_{n\geq 0}f^{n}(J^{+}_{\mathbb{B}}(x))\neq\bigcap_{n\geq 0}f^{n}(J^{+}_{\mathbb{B}}(y))\), and this is impossible.

## 7. Complements

We keep the setting as in Sections 5 and 6. Here we prove a number of complementary facts which do not enter into the proof of the main theorem, so we sometimes allow the presentation to be a little sketchy.

### Transitivity

A desirable property of quasi-solenoids is transitivity, or chain transitivity. At this stage we are not able to show that quasi-solenoidal components are transitive, but let us already explain a partial result in this direction. The full statement will be obtained in Theorem 8.7 under an additional assumption.

**Proposition 7.1**.: _If \(\Lambda\) is a quasi-solenoidal component of \(J\) of period \(k\), there exists a quasi-solenoid \(\Lambda^{\prime}\subset\Lambda\) of period \(k\ell\), which is saturated by unstable components (that is, if \(x\in\Lambda^{\prime}\) then \(J^{u}(x)\subset\Lambda^{\prime}\)), with the property that \(f^{k\ell}|_{\Lambda^{\prime}}\) is topologically mixing. In addition, stable slices of \(\Lambda^{\prime}\) are Cantor sets and for every periodic point \(p\in\Lambda^{\prime}\), \(\Lambda^{\prime}=\overline{J^{u}(p)}\)._

This proposition follows from general facts from hyperbolic dynamics; let us recall some basics. If \(\Lambda\) is a compact hyperbolic set with local product structure, then by Smale's Spectral Decomposition Theorem (see e.g. [46, §4.2]), the non-empty closed invariant subset \[\Omega:=\mathcal{C}(f|_{\Lambda})=\overline{\operatorname{Per}(f|_{\Lambda})}\] (where by definition \(\mathcal{C}(f|_{\Lambda})\) is the chain recurrent set of \(f|_{\Lambda}\)) admits a decomposition of the form \(\Omega=\Omega_{1}\cup\cdots\cup\Omega_{N}\). The \(\Omega_{i}\) are called the basic pieces. They are closed and pairwise disjoint (hence relatively open in \(\Omega\)), \(f\) induces a permutation on the basic pieces and if \(q\) is the least integer such that \(f^{q}(\Omega_{i})=\Omega_{i}\), then \(f^{q}|_{\Omega_{i}}\) is topologically mixing. In addition, \(\Omega\) and the \(\Omega_{i}\) have local product structure.

Proof.: For notational simplicity replace \(f^{k}\) by \(f\) so that \(k=1\). Consider the \(\omega\)-limit set \(\omega(\Lambda)=\bigcup_{x\in\Lambda}\omega(x)\). Since a limit point is non-wandering, it is chain recurrent, so \(\omega(\Lambda)\subset\Omega\). Conversely, since any periodic point is an \(\omega\)-limit point, we see that \(\operatorname{Per}(f|_{\Lambda})\subset\omega(\Lambda)\), hence \(\Omega\subset\omega(\Lambda)\) and \(\omega(\Lambda)=\Omega\). Then the Shadowing Lemma implies that \(\Lambda\subset W^{s}(\Omega)=\bigcup_{x\in\Omega}W^{s}(x)\). Fix a small \(\delta>0\): then \(W^{s}(\Omega)=\bigcup_{n\geq 0}f^{-n}\left(W^{s}_{\delta}(\Omega)\right)\). By Baire's theorem, there exists \(n\) such that \(f^{-n}\left(W^{s}_{\delta}(\Omega)\right)\) has non-empty relative interior in \(\Lambda\), hence so does \(W^{s}_{\delta}(\Omega)\), and we conclude that for some \(i_{0}\), \(W^{s}_{\delta}(\Omega_{i_{0}})\) has non-empty relative interior in \(\Lambda\).
Let us show that \(\Lambda^{\prime}=\Omega_{i_{0}}\) satisfies the requirements of the proposition. If \(\ell\) is the least integer such that \(f^{\ell}(\Lambda^{\prime})=\Lambda^{\prime}\), the fact that \(f^{\ell}|_{\Lambda^{\prime}}\) is topologically mixing follows from the Spectral Decomposition Theorem. Since \(\Lambda^{\prime}\) has local product structure and \(W^{s}_{\delta}(\Lambda^{\prime})\) has non-empty relative interior in \(\Lambda\), we see that there exists a relatively open subset \(U\) in \(\Lambda^{\prime}\) such that for any \(x_{0}\in U\), a neighborhood of \(x_{0}\) in \(J^{u}(x_{0})\) is contained in \(\Lambda^{\prime}\). Since \(f^{\ell}|_{\Lambda^{\prime}}\) is topologically transitive, we may assume that \(x_{0}\) has a dense orbit under \(f^{\ell}\). So if \(y\in\Lambda^{\prime}\) is arbitrary we can find a sequence \((n_{j})\) such that \(f^{\ell n_{j}}(x_{0})\to y\). By expansion in the unstable direction, there exists a uniform \(\delta>0\) such that for every \(j\), \(f^{\ell n_{j}}(\Lambda^{\prime})=\Lambda^{\prime}\) contains a \(\delta\)-neighborhood of \(f^{\ell n_{j}}(x_{0})\) in \(J^{u}(f^{\ell n_{j}}(x_{0}))\), so by local product structure we conclude that a neighborhood of \(y\) in \(J^{u}(y)\) is contained in \(\Lambda^{\prime}\). On the other hand since \(\Lambda^{\prime}\) is closed it is also relatively closed in unstable manifolds. This shows that \(\Lambda^{\prime}\) is saturated by unstable components.

Let us show that for every periodic point \(p\in\Lambda^{\prime}\), \(\overline{J^{u}(p)}=\Lambda^{\prime}\). Let \(N=\ell m\) be the period of \(p\). Since \(f^{\ell}|_{\Lambda^{\prime}}\) is topologically mixing, \(f^{\ell m}|_{\Lambda^{\prime}}\) is topologically transitive, so there exists \(y\) arbitrarily close to \(p\) such that \((f^{\ell mn}(y))_{n\geq 0}\) is dense in \(\Lambda^{\prime}\). Let \(y^{\prime}\) be the projection of \(y\) in \(W^{u}_{\mathrm{loc}}(p)\) under stable holonomy. By local product structure, \(y^{\prime}\) belongs to \(J^{u}(p)\), and \(y^{\prime}\in W^{s}(y)\) so \((f^{\ell mn}(y^{\prime}))\) is dense, too. Since all these points belong to \(J^{u}(p)\), we conclude that \(J^{u}(p)\) is dense in \(\Lambda^{\prime}\), as asserted.

For \(p\) as above, since \(J^{u}(p)\) is leafwise unbounded, it must accumulate non-trivially in \(\Lambda^{\prime}\). More precisely, there exists \(x\in\Lambda^{\prime}\) and a sequence of points \(x_{n}\in J^{u}(p)\), with \(x_{n}\notin W^{u}_{\mathrm{loc}}(x)\) and \(x_{n}\to x\). Note that by local product structure, \(W^{u}_{\mathrm{loc}}(x_{n})\cap\Lambda^{\prime}\) corresponds to \(W^{u}_{\mathrm{loc}}(x)\cap\Lambda^{\prime}\) under local stable holonomy. Now as before there exists \(y^{\prime}\in W^{u}_{\mathrm{loc}}(p)\cap\Lambda^{\prime}\) whose orbit is dense in \(\Lambda^{\prime}\). Thus any \(z\in\Lambda^{\prime}\) is the limit of \(f^{n_{j}}(y^{\prime})\) for some subsequence \(n_{j}\). But \(f^{n_{j}}(y^{\prime})\) is an accumulation point of \(W^{s}_{\mathrm{loc}}(f^{n_{j}}(y^{\prime}))\cap\Lambda^{\prime}\), so the same holds for \(z\), and we conclude that \(\Lambda^{\prime}\) is transversally perfect in the stable direction, hence it is transversally a Cantor set.

### Basins and solenoids

Assume that \(f\) has an attracting cycle \(\{a_{1},\ldots,a_{q}\}\) of exact period \(q\). We denote by \(\mathcal{B}\) its basin of attraction, which is made of \(q\) connected components \(\mathcal{B}_{i}\) biholomorphic to \(\mathbb{C}^{2}\).
For every \(i\) we can write \(\mathcal{B}_{i}\cap\mathbb{B}\) as the (at most) countable union \((\mathcal{B}_{i,j})_{j\geq 0}\) of its components, with \(a_{i}\in\mathcal{B}_{i,0}\). We refer to these open sets as basin components and to \(\mathcal{B}_{i,0}\) as the _immediate basin_ of \(a_{i}\). Note that if we replace \(f\) by \(f^{q}\), the basin of attraction of \(a_{i}\) is now made of a single component, but \(\mathcal{B}_{i,0}\) is unchanged. By definition a _Jordan star_ in \(U\subset\mathbb{C}\) is a finite union of simple Jordan arcs in \(U\), intersecting at a single point.

**Theorem 7.2**.: _Let \(f\) be dissipative and hyperbolic with a disconnected and stably totally disconnected Julia set. Suppose that \(f\) admits an attracting fixed point with immediate basin \(\mathcal{B}_{0}\). Then:_

1. \(\partial\mathcal{B}_{0}\) _is a properly immersed topological submanifold of dimension 3, which intersects any global unstable transversal in finitely many Jordan domains._
2. \(\bigcap_{n\geq 0}f^{n}(\partial\mathcal{B}_{0})\) _is a quasi-solenoid, whose unstable slices are Jordan stars. In particular there is a (saddle) periodic point in \(\partial\mathcal{B}_{0}\)._

We can be more precise about the structure of \(\partial\mathcal{B}_{0}\): locally it is homeomorphic to the product of a 2-disk by a Jordan star. The proof of the theorem shows that if the components of \(\mathcal{B}_{0}\cap\Delta^{u}\) have disjoint closures, then these stars are reduced to Jordan arcs, that is, \(\partial\mathcal{B}_{0}\) is a topological submanifold. The following basic fact is crucial for the proof.

**Lemma 7.3**.: _The stable lamination \(\mathcal{W}^{s}\) respects basin boundaries. That is, if \(x\in J^{+}\) belongs to the boundary of an attracting basin \(\mathcal{B}\), then so does its image under stable holonomy._

Proof.: This follows readily from the existence of a local extension of the stable lamination (Lemma 5.7): indeed if a leaf of the extended foliation joined a point from \(\operatorname{Int}(K^{+})\) to a point of the complement of \(K^{+}\), it would have to intersect \(J^{+}\). (See also [16], Step 3 of the proof of the main theorem, for an alternate argument without extending the stable lamination.)

Proof of Theorem 7.2.: Fix a global unstable transversal \(\Delta^{u}\). Since every semi-local stable manifold intersects \(\Delta^{u}\), \(\mathcal{B}_{0}\cap\Delta^{u}\) is non-empty, and by the Maximum Principle each of its connected components is a topological disk. Pick such a connected component \(\Omega_{0}\). By the John-Hölder property (Theorem 3.10), \(\partial\Omega_{0}\) is locally connected, and by the Maximum Principle again there is no cut point, and it follows that \(\Omega_{0}\) is a Jordan domain (see [40, Thm 2.6]). If the diameter of \(\Omega_{0}\) is small then, by Remark 5.16, enlarging \(\mathbb{B}\) if necessary, the saturation \(\widehat{\partial\Omega_{0}}\) of \(\partial\Omega_{0}\) by semi-local stable leaves is topologically a product and we infer that \(\widehat{\partial\Omega_{0}}\cap\Delta^{u}\) has finitely many components. Otherwise the diameter is large and by the same remark, every component of \(\widehat{\partial\Omega_{0}}\cap\Delta^{u}\) has a large diameter. Then the finiteness of the number of such components follows from the John-Hölder property of \(W^{u}(x)\backslash K^{+}\), Proposition 5.17, and the finiteness statement for interior components in Lemma 3.9.
By the Maximum Principle, if \(\Omega_{0}\) and \(\Omega_{1}\) are two components of \(\mathcal{B}_{0}\cap\Delta^{u}\) such that \(\overline{\Omega_{0}}\cap\overline{\Omega_{1}}\neq\emptyset\), then \(\overline{\Omega_{0}}\cap\overline{\Omega_{1}}\) is a single point. Indeed if this set contained two distinct points \(z\) and \(z^{\prime}\), by using crosscuts of \(\Omega_{0}\) and \(\Omega_{1}\) ending at \(z\) and \(z^{\prime}\) we could construct a Jordan domain \(U\) with \(\partial U\subset\overline{\Omega_{0}}\cup\overline{\Omega_{1}}\), and \(U\) would be contained in the Fatou set, a contradiction. Create a plane graph from \(\mathcal{B}_{0}\cap\Delta^{u}\) whose vertices are its components and whose edges join pairs of touching components. The Maximum Principle again shows that this graph is a finite union of trees. Since the stable holonomy respects \(\partial\mathcal{B}_{0}\) and \(\partial\mathcal{B}_{0}\) is obtained from \(\partial\mathcal{B}_{0}\cap\Delta^{u}\) by saturating by stable manifolds, the description of \(\partial\mathcal{B}_{0}\) as a properly immersed topological submanifold of dimension 3 follows.

The proof of the second item of the theorem is similar to that of Theorem 6.3. First, \(\partial\mathcal{B}_{0}\) is connected: the argument is identical to that of Lemma 2.3. Then, for every \(x\in\partial\mathcal{B}_{0}\cap J^{-}\), there are only finitely many components of \(\mathcal{B}_{0}\cap D^{u}(x,1)\) (resp. \(\partial\mathcal{B}_{0}\cap D^{u}(x,1)\)) intersecting \(D^{u}(x,1/2)\). Indeed, observe first that it is enough to prove this in \(D^{u}(x,r)\) for some uniform \(r\). By the uniform boundedness of the degree of semi-local stable manifolds in \(\mathbb{B}\), there is a uniform \(r\) such that \(D^{u}(x,r)\) can be pushed to \(\Delta^{u}\) by stable holonomy, and then applying item (i) of the theorem completes the argument. From this point we proceed exactly as in Theorem 6.3. The existence of a periodic point in \(\partial\mathcal{B}_{0}\) follows from general hyperbolic dynamics (see the comments after Proposition 7.1).

_Remark 7.4_.: It follows from this description that if \(x\in\Lambda\) lies at the boundary of \(\mathcal{B}_{0}\), then in \(W^{u}(x)\), \(x\) belongs to the boundary of a component \(\Omega\) of \(\mathcal{B}_{0}\cap W^{u}(x)\). In particular, \(\Omega\) is a Fatou disk contained in \(\operatorname{Comp}_{K}(x)\).

_Remark 7.5_.: We do not know whether components of \(\mathcal{B}_{0}\cap\Delta^{u}\) can actually bump into each other, or equivalently if \(\bigcap_{n\geq 0}f^{n}(\partial\mathcal{B}_{0})\) does contain stars. If bumping occurs, let \(E\) be the finite set of points at which the closures of the components of \(\mathcal{B}_{0}\cap\Delta^{u}\) touch each other. Then \(W^{s}_{\mathbb{B}}(E)\) is a finite union of vertical submanifolds, and \(f(W^{s}_{\mathbb{B}}(E))\subset W^{s}_{\mathbb{B}}(E)\). It follows that \(\bigcap_{n\geq 0}f^{n}(W^{s}_{\mathbb{B}}(E))\) is a finite set of periodic points, and for any other point \(x\) in the limiting quasi-solenoid \(\Lambda:=\bigcap_{n\geq 0}f^{n}(\partial\mathcal{B}_{0})\), \(\Lambda\cap W^{u}_{\operatorname{loc}}(x)\) is a Jordan arc. Thus, roughly speaking, \(\Lambda\) _has the structure of finitely many solenoids attached at periodic "junction" points_.

### Branched Julia set model

Let \(\Lambda\) be a quasi-solenoidal component of \(J\), and without loss of generality assume that \(\Lambda\) is fixed.
Let \(J^{+}_{\mathbb{B}}(\Lambda)\) be its connected component in \(J^{+}_{\mathbb{B}}\) and consider its intersection \(D:=J^{+}_{\mathbb{B}}(\Lambda)\cap\Delta^{u}\) with some unstable transversal, which is made of finitely many thick components. Introduce a relation \(\sim\) on \(D\) by \(x\sim y\) if and only if \(W^{s}_{\mathbb{B}}(x)=W^{s}_{\mathbb{B}}(y)\), where by definition \(W^{s}_{\mathbb{B}}(x)=\bigcap_{\varepsilon>0}W^{s}_{(1+\varepsilon)\mathbb{B}}(x)\). Equivalently \(x\sim y\) iff \(W^{s}_{\mathbb{B}}(x)\cap\overline{W^{s}_{\mathbb{B}}(y)}\neq\emptyset\): concretely, this means that \(x\) and \(y\) are related when they are connected by a stable manifold which is tangent to \(\partial\mathbb{B}\). This defines a closed equivalence relation on \(D\). We denote by \(\tilde{D}:=D/\sim\) the quotient topological space, which is compact (and Hausdorff) and by \(\pi:D\to\tilde{D}\) the natural projection. Since \(f(W^{s}_{\mathbb{B}}(x))\subset W^{s}_{\mathbb{B}}(f(x))\), \(f\) descends to the quotient \(\tilde{D}\) as a well-defined continuous map \(\tilde{f}\).

Geometrically \(\tilde{D}\) has to be thought of as a _branched Julia set_, lying on the branched surface (in the sense of Williams [45]) obtained by collapsing the semi-local stable leaves of the extended stable lamination. Then \(\tilde{f}\) is expanding on the plaques of this branched manifold3, and its iterates are uniformly quasiconformal wherever defined, since they are obtained by iterating \(f\) and projecting along the stable lamination. Observe that \(\tilde{f}\) is not necessarily surjective, since for every \(x\in D\), \(f^{n}(x)\) eventually belongs to \(W^{s}_{\mathbb{B}}(\Lambda)\), which may be smaller than \(J^{+}_{\mathbb{B}}(\Lambda)\) (cf. Figure 1). On the other hand by the last assertion of Theorem 6.4, there exists a uniform \(N\) such that \(f^{N}(J^{+}_{\mathbb{B}}(\Lambda))\subset W^{s}_{\mathbb{B}}(\Lambda)\). It follows that the sequence \(\bigcap_{0\leq k\leq n}\tilde{f}^{k}(\tilde{D})\) is stationary for \(n\geq N\) and that \(\tilde{D}^{\prime}:=\pi(W^{s}_{\mathbb{B}}(\Lambda)\cap\Delta^{u})\) is an invariant, closed, and plaque-open subset of \(\tilde{D}\) on which \(\tilde{f}\) is surjective.

Footnote 3: Here by plaque we mean one of the finitely many overlapping disks which make up a local chart of a branched manifold, see [45, Def. 1.0]

**Proposition 7.6**.: _With the above definitions, the dynamical system \((\Lambda,f)\) is topologically conjugate to the natural extension of \((\tilde{D},\tilde{f})\) (or equivalently of \((\tilde{D}^{\prime},\tilde{f})\))._

Proof.: Indeed, define \(h:\varprojlim(\tilde{D},\tilde{f})\to\Lambda\) by \(h((\tilde{x}_{n})_{n\in\mathbb{Z}})=\bigcap_{n\geq 0}f^{n}(W^{s}_{\mathbb{B}}(x_{-n}))\), where \(x_{-n}\in D\) is any representative of \(\tilde{x}_{-n}\); its inverse is \(y\mapsto h^{-1}(y)=\big(\pi\big(W^{s}_{\mathbb{B}}(f^{n}(y))\cap\Delta^{u}\big)\big)_{n\in\mathbb{Z}}\).
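For the reader's convenience, we record the equivariance check behind this conjugacy. The shift notation \(\sigma((\tilde{x}_{n})_{n\in\mathbb{Z}})=(\tilde{x}_{n+1})_{n\in\mathbb{Z}}\) on \(\varprojlim(\tilde{D},\tilde{f})\) is ours, and the computation below is only a sketch; it uses the inclusion \(f(W^{s}_{\mathbb{B}}(x_{-m}))\subset W^{s}_{\mathbb{B}}(x_{-m+1})\), which holds up to the identification \(\sim\):

\[h\big(\sigma((\tilde{x}_{n})_{n})\big)=\bigcap_{n\geq 0}f^{n}\big(W^{s}_{\mathbb{B}}(x_{-n+1})\big)=\bigcap_{m\geq 0}f^{m+1}\big(W^{s}_{\mathbb{B}}(x_{-m})\big)=f\Big(\bigcap_{m\geq 0}f^{m}\big(W^{s}_{\mathbb{B}}(x_{-m})\big)\Big)=f\big(h((\tilde{x}_{n})_{n})\big).\]

In the second equality we reindexed by \(m=n-1\) and dropped the \(m=-1\) term, which is harmless since the above inclusion makes the sets \(f^{m+1}(W^{s}_{\mathbb{B}}(x_{-m}))\) decrease with \(m\). Thus \(h\circ\sigma=f\circ h\), as required for a conjugacy.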
## 8. Non-divergence of holonomy and applications

### The NDH property

We say that the property of _Non-Divergence of Holonomy_ (NDH) holds if for every pair of points \(x,y\in J\) such that \(y\) belongs to \(W^{s}(x)\), the stable holonomy, which is locally defined from a neighborhood of \(x\) in \(W^{u}(x)\) to a neighborhood of \(y\) in \(W^{u}(y)\), can be continued along any path contained in \(J^{u}(x)\).

_Remark 8.1_.:

1. The stable holonomy \(h:W^{u}(x)\to W^{u}(y)\) is independent of the choice of a path \(c\) from \(x\) to \(y\) in \(W^{s}(x)\) because \(W^{s}(x)\) is simply connected.
2. An unstable component \(J^{u}(x)\) is typically _not_ simply connected (since it may enclose the trace of an attracting basin on \(W^{u}(x)\)). So even if the stable holonomy from \(x\) to \(y\) admits an extension along continuous paths, it does not generally yield a well-defined map from \(J^{u}(x)\) to \(J^{u}(y)\).

We do not know any example where the NDH property fails. An analogue of this property was studied in the context of the classification of Anosov diffeomorphisms, where it is expected to be a crucial step in the classification program. It was established in the two-dimensional case in [20] (see also [10, 32] for related results). Back to automorphisms of \(\mathbb{C}^{2}\), we have the following simple criterion:

**Proposition 8.2**.: _A sufficient condition for the NDH property is that the stable lamination \(\mathcal{W}^{s}\) of \(J^{+}\) is transverse to \(\partial\mathbb{B}\) (No Tangency condition, NT)._

Proof.: Assume that the No Tangency condition holds and let \(x,y\in J\) be such that \(y\) belongs to \(W^{s}(x)\). Replacing \(x\) and \(y\) by \(f^{k}(x)\) and \(f^{k}(y)\) for some positive \(k\), we may assume that \(y\in W^{s}_{\mathbb{B}}(x)\). There is a germ of stable holonomy \(h\) sending a neighborhood of \(x\) in \(J^{u}(x)\) to some neighborhood of \(y\) in \(J^{u}(y)\). Let \(\gamma:[0,1]\to J^{u}(x)\) be a continuous path: we have to show that \(h\) can be continued along \(\gamma\). For this, introduce \(E\subset[0,1]\) the set of parameters \(t\) such that \(h\) can be continued along \(\gamma|_{[0,t]}\) and \(h(\gamma(t))\in W^{s}_{\mathbb{B}}(\gamma(t))\). Obviously, \(E\) is a relatively open subinterval of \([0,1]\) containing \(0\), and the proof will be complete if we show that \(E\) is closed. Thus, assume that \((t_{n})\in E^{\mathbb{N}}\) is an increasing sequence converging to \(t_{\infty}\), and let \(y_{\infty}\) be any cluster value of the sequence \((h(\gamma(t_{n})))\). The main observation is that since \(\mathcal{W}^{s}\) is transverse to \(\partial\mathbb{B}\), \(W^{s}_{\mathbb{B}}(\gamma(t_{n}))\) converges to \(W^{s}_{\mathbb{B}}(\gamma(t_{\infty}))\) in the Hausdorff topology, with multiplicity \(1\), or equivalently in the \(C^{1}\) topology.
Furthermore, by the uniform boundedness of the vertical degree, there is a uniform \(L\) such that for every \(n\), there is a path of length at most \(L\) joining \(\gamma(t_{n})\) to \(h(\gamma(t_{n}))\) in \(W^{s}(\gamma(t_{n}))\). It follows that the assignment \(\gamma(t_{n})\mapsto h(\gamma(t_{n}))\) is equicontinuous. Let \(y_{\infty}\) be a cluster value of \((h(\gamma(t_{n})))\). The equicontinuity property shows that \(h(\gamma(t_{n}))\) actually converges to \(y_{\infty}\), and also that the points \(h(\gamma(t_{n}))\) belong to the same local plaque of the unstable lamination, which must thus coincide with \(W^{u}_{\mathrm{loc}}(y_{\infty})\). From this we conclude that \(h\) extends to a neighborhood of \(\gamma(t_{\infty})\), with \(h(\gamma(t_{\infty}))=y_{\infty}\), and we are done.

One may argue that the NT condition is not intrinsic since it depends on the choice of the bidisk \(\mathbb{B}\). To get around this issue we may consider the following variant: (NT\({}_{G}\)) there exists \(R>0\) such that the stable foliation admits no tangency with the hypersurface \(\{G^{-}=R\}\). Note that the level set \(\{G^{-}=R\}\) is smooth near \(J^{+}\) for every \(R>0\): indeed by the local structure of \(G^{-}\) near infinity this is the case when \(R\) is large, and then we use invariance to propagate this property to all \(R>0\). Arguing exactly as in the previous proposition shows that the NT\({}_{G}\) property implies NDH. Using this idea also enables us to understand more precisely how the NDH property may fail. If \(x\) and \(y\) are two points in \(J\) with \(y\in W^{s}(x)\), define the _Green distance_ \[d_{G}(x,y):=\inf_{c:x\to y}\max(G^{-}|_{c})\] where the infimum runs over the set of continuous paths \(c:[0,1]\to W^{s}(x)\) joining \(x\) to \(y\). Since \(W^{s}(x)\cap J\) is totally disconnected, this indeed defines an ultrametric on \(W^{s}(x)\cap J\), which is uniformly contracted by \(f\): since \(G^{-}\circ f=d^{-1}G^{-}\) and \(c\mapsto f\circ c\) is a bijection between paths joining \(x\) to \(y\) in \(W^{s}(x)\) and paths joining \(f(x)\) to \(f(y)\) in \(W^{s}(f(x))\), we get \(d_{G}(f(x),f(y))=d^{-1}d_{G}(x,y)\). It provides an intrinsic way of measuring how far we need to go in \(\mathbb{C}^{2}\) to connect two unstable components by stable manifolds. Arguing exactly as in Proposition 8.2 shows:

**Proposition 8.3**.: _Let \(x,y\in J\) with \(y\in W^{s}(x)\) and denote by \(h\) the germ of stable holonomy \(h:W^{u}_{\rm loc}(x)\to W^{u}_{\rm loc}(y)\). Let \(\gamma:[0,1]\to J^{u}(x)\) be a continuous path and assume that \(h\) can be continued along \(\gamma([0,t^{\star}))\). Then \(h\) admits an extension to \(t^{\star}\) if and only if \(d_{G}(\gamma(t),h(\gamma(t)))\) is bounded as \(t\to t^{\star}\)._

### No queer components

**Theorem 8.4**.: _Let \(f\) be dissipative and hyperbolic, with a disconnected and stably totally disconnected Julia set. Assume further that the NDH property holds. Then any non-trivial periodic component of \(K\) contains an attracting periodic point._

Proof.: We argue by contradiction: assume that \(\Lambda\) is a component of \(K\) which does not contain any attracting periodic point. Let \(C\) be the component of \(\Lambda\) in \(K^{+}\cap\mathbb{B}\). Our hypothesis implies that \(C\) has empty interior, so \(C\) is a component of \(J^{+}\cap\mathbb{B}\) (and \(\Lambda\) is a component of \(J\)). Fix an unstable transversal \(\Delta^{u}\) and let \(E\) be a component of \(C\cap\Delta^{u}\), which must have empty interior in \(\Delta^{u}\) by Lemma 2.1. Thus \(E\) is a locally connected continuum with empty interior; moreover \(E\) contains no simple closed curve, since by the Maximum Principle such a curve would bound a disk contained in \(K^{+}\cap\Delta^{u}\), hence in \(C\), contradicting the emptiness of its interior. That is, \(E\) is a dendrite.
**Lemma 8.5**.: _For every \(x\in E\), \(W^{s}(x)\cap E=\{x\}\)._

Assuming this lemma for the moment, let us complete the proof. By the expansion in the unstable direction, for every \(x\in E\), there exists \(\delta_{1}>0\) such that for every \(n\geq 0\), \(f^{n}(E)\) is not relatively compact in \(D^{u}(f^{n}(x),\delta_{1})\), and by the John-Hölder property, there exists \(\delta_{2}>0\) such that any two components of \(f^{n}(E)\cap D^{u}(f^{n}(x),\delta_{1})\) intersecting \(D^{u}(f^{n}(x),\delta_{1}/2)\) are \(\delta_{2}\)-separated. Fix a covering of \(J\) by unstable flow boxes. By the product structure of \(J\), there exists \(\varepsilon>0\) such that if \(y,z\in f^{n}(E)\) are \(\varepsilon\)-close in \(\mathbb{C}^{2}\) but not on the same unstable plaque, then the components \(\operatorname{Comp}_{f^{n}(E)\cap D^{u}(y,\delta_{1})}(y)\) and \(\operatorname{Comp}_{f^{n}(E)\cap D^{u}(z,\delta_{1})}(z)\) are related by local stable holonomy. Finally, by expansion along the unstable direction and the previous separation property, \(f^{n}(E)\) cannot be contained in boundedly many unstable plaques as \(n\to\infty\). Thus, for sufficiently large \(n\) we can find two points in \(f^{n}(E)\) which are \(\varepsilon\)-close in \(\mathbb{C}^{2}\) but not on the same unstable plaque, so there exists \(y\in f^{n}(E)\) such that \(W^{s}_{\operatorname{loc}}(y)\) intersects \(f^{n}(E)\) in another point. This contradicts Lemma 8.5 and we are done.

Proof of Lemma 8.5.: Assume that \(W^{s}(x)\cap E\) contains another point \(y\neq x\). Then the stable holonomy defines a germ of homeomorphism \(h:E\cap U_{x}\to E\cap U_{y}\), where \(U_{x}\) (resp. \(U_{y}\)) is some neighborhood of \(x\) (resp. \(y\)). By the NDH property, \(h\) can be continued along paths in \(E\). Since \(E\) is simply connected, this extends to a globally defined map \(h:E\to E\), sending \(x\) to \(y\), which is a local homeomorphism, hence a covering, so again using the fact that \(E\) is simply connected, we conclude that \(h\) is a homeomorphism. It is a classical fact that any continuous self-map of \(E\) admits a fixed point. For the reader's convenience let us include the argument. View \(E\) as a subset of the plane. Then, by the Carathéodory theorem, the Riemann map \(\mathbb{C}\backslash\mathbb{D}\to\mathbb{C}\backslash E\) extends to a continuous and surjective map \(\partial\mathbb{D}\to\partial E=E\). From this we can construct a topological disk \(U\supset E\) and a retraction \(r:\overline{U}\to E\): indeed take the disk bounded by some equipotential and define \(r\) as collapsing each external ray to its endpoint. Now let \(g=h\circ r\). Since \(g\) maps \(\overline{U}\) into itself, by the Brouwer fixed point theorem it admits a fixed point \(x_{0}\). Finally, since \(g(\overline{U})\subset E\), \(x_{0}\) belongs to \(E\), so \(g(x_{0})=h(r(x_{0}))=h(x_{0})=x_{0}\).

To conclude the proof we show that the existence of such a fixed point contradicts the hyperbolicity of \(f\). For this, fix a continuous path \((x_{t})_{t\in[0,1]}\) joining \(x_{0}\) to \(x_{1}:=x\) and let \(t^{\star}=\max\left\{t\in[0,1],\ h(x_{t})=x_{t}\right\}\), which satisfies \(0\leq t^{\star}<1\). As \(t>t^{\star}\) tends to \(t^{\star}\), we see that the two-point set \(\left\{x_{t},h(x_{t})\right\}\) collapses to \(\left\{x_{t^{\star}}\right\}\). This means that there is a tangency between the stable lamination and \(\Delta^{u}\) at \(x_{t^{\star}}\), which is the desired contradiction.
_Remark 8.6_.: With notation as in the proof of the theorem, it is not difficult to deduce from the proof that for every \(\delta>0\), for \(n\geq n(\delta)\) there exists a non-trivial simple closed curve contained in \(W^{s}_{\delta}(f^{n}(E))\). So by the last assertion of Theorem 6.4, there is a non-trivial simple closed curve contained in \(W^{s}_{\delta}(\Lambda)\). Without the NDH property, we cannot exclude a situation where these simple closed curves do not enclose an attracting basin. We may qualify these dendrites and their limit sets as _queer components_ of \(J\). So Theorem 8.4 asserts that under the NDH property, _queer components of \(J\) do not exist_.

### Topological mixing

**Theorem 8.7**.: _If the NDH property holds and \(\Lambda\) is a quasi-solenoidal component of period \(k\), then \(f^{k}|_{\Lambda}\) is topologically mixing. In particular \(\Lambda\) is transversally a Cantor set._

Proof.: Without loss of generality we may assume \(k=1\). We resume Proposition 7.1 and its proof. Let \(\Lambda^{\prime}\) be as in Proposition 7.1, and let us show that \(\Lambda^{\prime}=\Lambda\). Since \(\Lambda^{\prime}\) is saturated in the unstable direction, \(W^{s}(\Lambda^{\prime})\) is relatively open in \(\Lambda\). The NDH property shows that if \(y\in W^{s}(\Lambda^{\prime})\), then \(J^{u}(y)\subset W^{s}(\Lambda^{\prime})\): indeed the set of points \(z\in J^{u}(y)\) such that \(z\in W^{s}(\Lambda^{\prime})\) is open because \(W^{s}(\Lambda^{\prime})\) is relatively open, and since \(J^{u}(y)\) is arcwise connected, the NDH property implies that it is closed as well. Thus by the local product structure of \(\Lambda\), we conclude that \(W^{s}(\Lambda^{\prime})\) is relatively closed in \(\Lambda\), and by connectedness we conclude that \(W^{s}(\Lambda^{\prime})=\Lambda\). Fix a small \(\delta>0\). By Baire's theorem, we infer that \(f^{-n}(W^{s}_{\delta}(\Lambda^{\prime}))\) has non-empty relative interior in \(\Lambda\) for some \(n\), hence so does \(W^{s}_{\delta}(\Lambda^{\prime})\) by invariance. Arguing as in Proposition 7.1, we see that by topological transitivity, \(W^{s}_{\delta}(\Lambda^{\prime})\) is actually relatively open in \(\Lambda\). Therefore \(\bigcup_{n\geq 0}f^{-n}\left(W^{s}_{\delta}(\Lambda^{\prime})\right)\) is an open cover of \(\Lambda\) and by compactness we conclude that \(\Lambda\) is contained in \(\bigcup_{0\leq n\leq n_{0}}f^{-n}\left(W^{s}_{\delta}(\Lambda^{\prime})\right)\) for some \(n_{0}\), and since \(f^{n_{0}}(\Lambda)=\Lambda\) we finally deduce that \(\Lambda\subset W^{s}_{\delta}(\Lambda^{\prime})\). Since \(\delta\) was arbitrary, \(\Lambda\subset\Lambda^{\prime}\), and we are done.

_Remark 8.8_.: A similar argument shows that under the NDH property, the quasi-solenoids obtained as limit sets of basin boundaries in Theorem 7.2 are transitive.

As a consequence of transitivity we can be more precise about the topological structure of periodic components of \(K\).

**Proposition 8.9**.: _Let \(f\) be dissipative and hyperbolic, with a disconnected and stably totally disconnected Julia set. Assume further that the NDH property holds. Then for any non-trivial component \(D\) of \(K\), \(D\cap\operatorname{Int}(K^{+})\) is dense in \(D\). Equivalently, for any \(x\in D\), \(D\cap W^{u}(x)\) is the closure of its interior for the intrinsic topology._

Proof.: The equivalence between the two assertions follows from Lemma 2.1, Lemma 7.3, and the local product structure.
Let \(D\) be as in the statement of the proposition and \(C\) be its component in \(K^{+}\cap\mathbb{B}\). Let also \(\Lambda\) be the unique component of \(J\) contained in \(D\) (Proposition 6.6). Without loss of generality we may assume that \(D\) (hence \(C\) and \(\Lambda\)) is fixed by \(f\). By Theorem 8.4, \(D\) contains an attracting periodic point \(a\), so the immediate basin \(\mathcal{B}_{0}\) of \(a\) is contained in \(C\). By Theorem 7.2, \(\partial\mathcal{B}_{0}\) contains a saddle periodic point \(p\), which must belong to \(\Lambda\) (indeed by Lemma 2.3 and Theorem 6.3, \(\Lambda=\bigcap_{n\geq 0}f^{n}(\partial C)\)). The topological mixing of \(f|_{\Lambda}\) (Theorem 8.7) classically implies that \(W^{s}(p)\cap\Lambda\) is dense in \(\Lambda\). Indeed let \(U\) be a product neighborhood of \(p\) in \(\Lambda\), and \(V\) be an arbitrary open subset of \(\Lambda\). Then for sufficiently large \(q\geq 0\) there exists \(y_{q}\in V\) such that \(f^{q}(y_{q})\in U\). Since \(\Lambda\) has local product structure, \([f^{q}(y_{q}),p]:=W^{u}_{\operatorname{loc}}(f^{q}(y_{q}))\cap W^{s}_{\operatorname{loc}}(p)\) belongs to \(\Lambda\), hence, increasing \(q\) again if needed, \(z_{q}:=f^{-q}([f^{q}(y_{q}),p])\) is a point in \(W^{s}(p)\cap V\). To conclude from this point, we observe that by Remark 7.4 (applied to \(f^{-q}(\mathcal{B}_{0})\)) \(z_{q}\) belongs to the boundary of a component \(\Omega\) of \(W^{u}(z_{q})\cap f^{-q}(\mathcal{B}_{0})\) contained in \(D\), and we are done.

### Concluding remarks

The non-existence problem for queer components bears some similarity with another well-known open problem: the non-existence of Herman rings for complex Hénon maps (see [4] for an early account). Indeed assume that \(f\) admits a Herman ring, that is, a Fatou component \(\Omega\) biholomorphic to the product of an annulus and \(\mathbb{C}\). More precisely there exists a biholomorphism \(h:\Omega\to A\times\mathbb{C}\), where \(A\) is a standard annulus, which conjugates \(f\) to \((x,y)\mapsto(e^{i\theta}x,\delta y)\), \(|\delta|<1\). Assume further that \(J\) is disconnected, and fix an unstable transversal \(\Delta^{u}\) (recall that its existence does not require \(f\) to be hyperbolic). Then if \(C\) is an invariant circle in \(A\), \(f\) admits an invariant "cylinder" \(\mathcal{C}=h^{-1}(C\times\mathbb{C})\). Any component of \(\mathcal{C}\cap\Delta^{u}\) is a piecewise smooth immersed curve, and a contradiction would follow if we could show that it bounds a disk in \(\Delta^{u}\) (since by the maximum principle this disk would be a Fatou disk, whose normal limits would fill up the annulus). In other words, if \(f\) admits a Herman ring, \(\mathcal{C}\cap\Delta^{u}\) is a countable union of dendrites whose saturation under the stable foliation of \(\mathcal{C}\) bounds a disk, but not a holomorphic disk (compare with Remark 8.6). Note however that a limitation to the analogy between the two problems is that the NDH property holds trivially in the Herman ring case, so the difficulty is of a different nature.

## Appendix A The core of a quasi-solenoid

In this Appendix, we sketch the construction of the _core_ of a quasi-solenoidal component, which should intuitively be understood as the space obtained from this component after removing all "bounded decorations" in unstable manifolds. Initially designed as a potential tool to prove the non-existence of queer quasi-solenoids, it also gives interesting information on the combinatorial structure of tame ones.
It would be interesting to compare it with other constructions such as Ishii's Hubbard trees (see [27]). We keep the setting as in the previous sections, that is, \(f\) is a uniformly hyperbolic dissipative Hénon map, with a disconnected and stably totally disconnected Julia set.

### Number of accesses

The discussion in this paragraph is reminiscent of [7, §7], which deals with the connected case. Pick \(x\in J\). For any \(R>0\), define \(N^{u}(x,R)\) to be the number of connected components \(\Omega\) of \(D^{u}(x,R)\backslash J\) such that \(x\in\overline{\Omega}\). Since \(K\cap D^{u}(x,R)\) has the John-Hölder property, Corollary 3.3 implies that \(N^{u}(x,R)<\infty\). Thus, \(R\mapsto N^{u}(x,R)\) is an integer-valued non-increasing function which drops when two components of \(D^{u}(x,R)\backslash J\) merge. The limit \[N^{u}_{\mathrm{loc}}(x):=\lim_{R\to 0}N^{u}(x,R)\] is the number of local accesses to \(x\), and \[N^{u}(x):=\lim_{R\to\infty}N^{u}(x,R)\] is the number of connected components of \(W^{u}(x)\backslash J\). Note that if \(J^{u}(x)\) is leafwise bounded then \(N^{u}(x)=1\), so this notion is interesting only when \(x\) belongs to a quasi-solenoidal component. We can also restrict to counting accesses from infinity, that is components of \(D^{u}(x,R)\backslash K^{+}\), and we obtain corresponding numbers \(N^{u}_{\infty}(x,R)\), \(N^{u}_{\infty,\mathrm{loc}}(x)\) and \(N^{u}_{\infty}(x)\). We have that \(N^{u}_{\infty}(x)\leqslant N^{u}(x)\) (and similarly for the other quantities), and, since every point of \(J\) is accessible from infinity, \(N^{u}_{\infty}(x)\geqslant 1\).

Footnote 4: The John-Hölder property of the basin of infinity directly guarantees the finiteness of \(N^{u}_{\infty,\mathrm{loc}}(x)\), but not that of \(N^{u}_{\mathrm{loc}}(x)\) (see Remark 3.11). This property can actually be salvaged as follows: if for small \(R\), \(N^{u}(x,R)\) is large, then for some \(k\gg 1\), \(N^{u}(f^{k}(x),1)\) is large, and projecting to some fixed transversal yields a contradiction.

**Lemma A.1**.: \(N^{u}\) _(resp. \(N^{u}_{\infty}\)) is upper semicontinuous on \(J\), that is, for any \(k\geqslant 1\), \(\{x,\ N^{u}(x)\geqslant k\}\) is closed._

Proof.: We deal with \(N^{u}\), the proof for \(N^{u}_{\infty}\) is similar. It is enough to treat the case \(k\geqslant 2\). By the local product structure of \(J\), it is enough to study the semi-continuity of \(x\mapsto N^{u}(x)\) separately along stable and unstable manifolds.

Let us start by studying this semicontinuity along a local stable transversal. We have to prove that \(\{x,\ N^{u}(x)<k\}\) is open. Indeed assume that there are \(j<k\) accesses to \(x\) in \(W^{u}(x)\backslash J\). This means that for large \(R\), \(D^{u}(x,R)\backslash J\) has \(j\) connected components accumulating at \(x\). If \(x^{\prime}\in W^{s}(x)\) then the local stable holonomy between \(W^{u}_{\mathrm{loc}}(x)\) and \(W^{u}_{\mathrm{loc}}(x^{\prime})\) is a homeomorphism, which locally preserves the number of components of \(W^{u}_{\mathrm{loc}}(x)\backslash J\). In addition if \(x^{\prime}\) is sufficiently close to \(x\), this holonomy is defined in \(D^{u}(x,R)\). Indeed for this it is enough to iterate backwards until \(f^{-n}(D^{u}(x,R))\) is contained in the domain of the extended stable lamination. Therefore, there is a large domain \(D^{\prime}\) in \(W^{u}(x^{\prime})\) such that \(D^{\prime}\backslash J\) has \(j\) connected components accumulating on \(x^{\prime}\).
Since the number of components may drop when enlarging this disk further, we conclude that \(N^{u}(x^{\prime})\leq j\). Now we work inside a given unstable manifold. Let \(R\) be such that \(N^{u}(x,s)=N^{u}(x)=j\) for \(s\geq R-1\). By the John-Hölder property, for \(R^{\prime}<R\), \(D^{u}(x,R)\backslash J\) admits finitely many components intersecting \(D^{u}(x,R^{\prime})\). So if \(N^{u}(x)=j\), there is some \(0<\varepsilon<1\) such that only \(j\) of these components reach \(D^{u}(x,\varepsilon)\), and we conclude that for \(x^{\prime}\in D^{u}(x,\varepsilon)\), \(N^{u}(x^{\prime},R-1)\leq j\), hence \(N^{u}(x^{\prime})\leq j\), as asserted.

Since \(f\) acts linearly on unstable parameterizations, \(N^{u}(x,R)=N^{u}(f(x),\lambda^{u}R)\), and we obtain:

**Corollary A.2**.: _If \(N^{u}_{\mathrm{loc}}(x)\geq k\) then for any \(y\in\omega(x)\), \(N^{u}(y)\geq k\)._

An argument similar to that of the second part of Lemma A.1 implies (compare [7, pp. 490-491]):

**Lemma A.3**.: _For any \(R>0\) and any \(x\in\Lambda\), the set \(\left\{y\in W^{u}(x),\ N^{u}(y,R)\geq 3\right\}\) is discrete for the intrinsic topology._

**Proposition A.4**.: _The set \(\left\{x\in J,\ N^{u}(x)\geq 3\right\}\) is a finite set of saddle periodic points._

Proof.: By Lemma A.3, the set \(\left\{x\in J,\ N^{u}(x,R)\geq 3\right\}\) is contained in a countable union of local stable manifolds. Since any point in \(J\) can be joined to a given unstable transversal \(\Delta^{u}\) by a stable path of uniform length, by taking small enough \(R\) we infer that the projection of this set to \(\Delta^{u}\) is actually finite. Therefore, the set \(\left\{x\in J,\ N^{u}(x)\geq 3\right\}\) is a closed invariant set contained in a finite union of semi-local stable manifolds, so it is finite, and by invariance it consists of periodic points.

### Definition(s) and properties of the core

Let \(\Lambda\) be a quasi-solenoidal component of \(J\). There are several possible definitions for the core of \(\Lambda\). It is unclear for the moment which choice is the most appropriate. We define:

* \(\mathrm{Core}(\Lambda)=\left\{x\in\Lambda,\ N^{u}(x)\geq 2\right\}\)
* \(\mathrm{Core}^{\prime}(\Lambda)=\omega\left(\left\{x\in\Lambda,\ N^{u}_{\mathrm{loc}}(x)\geq 2\right\}\right)\)

By Corollary A.2 we have the inclusion \(\mathrm{Core}^{\prime}(\Lambda)\subset\mathrm{Core}(\Lambda)\), and it is an open problem whether equality holds. It is obvious from the definition that \(\mathrm{Core}(\Lambda)\) (resp. \(\mathrm{Core}^{\prime}(\Lambda)\)) is invariant and Lemma A.1 implies that it is closed. Hence it is a closed hyperbolic set. Another natural open question is whether \(\mathrm{Core}(\Lambda)\) is connected. The core of the Julia set is the union of the cores of its finitely many quasi-solenoidal components. If \(x\in J\) is any point such that \(W^{u}(x)\backslash J\) has several local accesses at \(x\), then \(\omega(x)\subset\mathrm{Core}(J)\). We say that \(x\in\mathrm{Core}(\Lambda)\) is _regular_ if \(N^{u}(x)=2\) and _singular_ otherwise. Recall that the singular set is a finite set of periodic points. Note that if \(x\) belongs to the core, then \(J^{u}(x)\) disconnects \(W^{u}(x)\).

**Conjecture A.5**.: \(\mathrm{Core}(\Lambda)\) _has local product structure near any regular point, and is locally the product of a Jordan arc by a totally disconnected set._

On the other hand, \(\mathrm{Core}(\Lambda)\) does not have local product structure in the neighborhood of any of its singular points, unless it is locally contained in a single unstable manifold.
So the structure of the core should be that of a union of solenoids joined at finitely many branch points. It seems that in the example described in [26, Thm 4.23], one quasi-solenoidal component has a core made of two solenoids attached at a fixed saddle point. Note that if \(\Lambda\) is not a queer component, that is the associated component of \(K\) contains an attracting periodic point, then the solenoid at the boundary of the immediate basin, constructed in §7.2, is contained in the core. Indeed it is obtained by taking limits of Jordan arcs locally separating an attracting basin from the basin of infinity. So the topological structure of the core should give an account of how these various basins are organized and attached to each other in \(\Lambda\) (compare with the Hubbard tree in one-dimensional dynamics). Finally, we may also define \(\mathrm{Core}_{\infty}(\Lambda)=\{x\in\Lambda,\ N_{\infty}^{u}(x)\geq 2\}\). (If \(\Lambda\) is a queer component, then \(\mathrm{Core}_{\infty}(\Lambda)=\mathrm{Core}(\Lambda)\).) We expect that \(\mathrm{Core}_{\infty}(\Lambda)\) is a finite set. Indeed, if not, it should contain a Jordan arc such that every point is accessible from both sides by the basin of infinity, and such arcs should not exist. To see this, iterating forward and arguing as in Theorem 8.4, a large iterate of this arc must spiral and come close to itself, hence, projecting to an unstable transversal, this would cut out a Fatou disk, and we conclude that one side of the arc is contained in an attracting basin.

## Appendix B Continuity of affine structure

Here we present the following mild generalization of a theorem by Étienne Ghys [22]. Recall that the ratio of a triple \((u,v,w)\in\mathbb{C}^{3}\) is \(\frac{u-v}{u-w}\).

**Theorem B.1**.: _Let \(\psi:\mathbb{C}\to\mathbb{C}^{2}\) be an injective holomorphic immersion, and \(L=\psi(\mathbb{C})\). Assume that \((L_{n})\) is a sequence of immersed complex submanifolds converging to \(L\) in the following sense: if \(K\Subset L\) is any relatively compact subset (relative to the leafwise topology), then \(L_{n}\) contains a graph over a neighborhood of \(K\) for large \(n\), that is, there exists a neighborhood \(N(K)\) of \(K\) in \(L\) and a sequence of injective holomorphic maps \(\pi_{n}:N(K)\to L_{n}\) such that \(\pi_{n}(x)\to x\) for every \(x\). Assume further that for every \(n\), \(L_{n}\) is biholomorphic to \(\mathbb{C}\)._

_Then the affine structures on the \(L_{n}\) converge to that of \(L\) in the following sense: for any compact set \(K\Subset L\) as above and any triple \((x,y,z)\in K^{3}\), if \((x_{n},y_{n},z_{n})\in\pi_{n}(N(K))^{3}\) are close to \((\pi_{n}(x),\pi_{n}(y),\pi_{n}(z))\) and converge to \((x,y,z)\), then the corresponding ratios converge as well._

The point of this statement is to emphasize that there is no need in Ghys' theorem to work with the leaves of a Riemann surface lamination. Also, compactness of the ambient space is not required. The theorem is certainly not written in its most general form: one might assume more generally that

* the \(\pi_{n}\) are \((1+\varepsilon_{n})\)-quasiconformal for some \(\varepsilon_{n}\to 0\);
* \(L\) and the \(L_{n}\) are parabolic Riemann surfaces instead of copies of \(\mathbb{C}\).

The adaptation is left to the reader. Notice also that any submanifold \(V\) of a Stein manifold admits a neighborhood \(W\) endowed with a holomorphic retraction \(W\to V\) (see [42, Cor. 1]).
Therefore our convergence assumption essentially means that \(L_{n}\) converges to \(L\) with multiplicity \(1\).

Proof.: We follow [22, §4] closely. Pick a triple of distinct points \((x,y,z)\) in \(L\) and \(R_{0}\) such that \(\psi(D(0,R_{0}))\) contains \(x,y,z\). For \(\alpha\in L\) let \(\widetilde{\alpha}=\psi^{-1}(\alpha)\). Without loss of generality we may assume \(R_{0}=1\). Let \(R\) be a large positive number to be determined. For \(n\geqslant n(R)\), \(\pi_{n}\) is well defined in \(\psi(D(0,R))\). Let \((x_{n},y_{n},z_{n})\in\pi_{n}(\psi(D(0,1)))^{3}\) converging to \((x,y,z)\), and fix \(\varepsilon>0\). Then by assumption \((\pi_{n}^{-1}(x_{n}),\pi_{n}^{-1}(y_{n}),\pi_{n}^{-1}(z_{n}))\) converges to \((x,y,z)\) for the leafwise topology in \(L\). Let \(\psi_{n}:\mathbb{C}\to L_{n}\) be any parameterization, and let \(\widetilde{x}_{n}=\psi_{n}^{-1}(x_{n})\), \(\widetilde{y}_{n}=\psi_{n}^{-1}(y_{n})\) and \(\widetilde{z}_{n}=\psi_{n}^{-1}(z_{n})\). Without loss of generality we may assume \(\widetilde{x}_{n}=0\). We have to show that for large \(n\), the ratio of \((\widetilde{x}_{n},\widetilde{y}_{n},\widetilde{z}_{n})\) is close to that of \((\widetilde{x},\widetilde{y},\widetilde{z})\). By assumption \(h_{n}:=\psi_{n}^{-1}\circ\pi_{n}\circ\psi:D(0,R)\to\mathbb{C}\) is an injective holomorphic map. By renormalizing \(\psi_{n}\) we may assume that \(h_{n}^{\prime}(0)=1\) (we use \(L_{n}\simeq\mathbb{C}\) precisely here). Then by the Koebe distortion theorem, \(h_{n}\) is almost affine in \(D(0,1)\), that is, it distorts the ratios of points in \(D(0,1)\) by some small amount \(\varepsilon(R)\). Fix \(R\) so large that \(\varepsilon(R)<\varepsilon\). In particular for \(n\geqslant n(R)\) we get that \[\left|\frac{h_{n}(\widetilde{x})-h_{n}(\widetilde{y})}{h_{n}(\widetilde{x})-h_{n}(\widetilde{z})}-\frac{\widetilde{x}-\widetilde{y}}{\widetilde{x}-\widetilde{z}}\right|\leqslant\varepsilon.\] Now for \(\alpha\in\{x,y,z\}\), \(h_{n}(\widetilde{\alpha})\) is the parameter in \(\mathbb{C}\) corresponding to \(\pi_{n}(\alpha)\in L_{n}\), so \(\widetilde{\alpha}_{n}\) is close to \(h_{n}(\widetilde{\alpha})\) in \(\mathbb{C}\) and for large \(n\) we also get that \[\left|\frac{h_{n}(\widetilde{x})-h_{n}(\widetilde{y})}{h_{n}(\widetilde{x})-h_{n}(\widetilde{z})}-\frac{\widetilde{x}_{n}-\widetilde{y}_{n}}{\widetilde{x}_{n}-\widetilde{z}_{n}}\right|\leqslant\varepsilon,\] and we are done.
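For completeness, let us record the elementary computation (ours, and not needed in the proof above) explaining why ratios are the right quantity here: the ratio of a triple is invariant under any affine map \(A(z)=az+b\) with \(a\neq 0\), so convergence of ratios expresses exactly the convergence of the affine structures:

\[\frac{A(u)-A(v)}{A(u)-A(w)}=\frac{a(u-v)}{a(u-w)}=\frac{u-v}{u-w}.\]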
2301.00004
SESNet: sequence-structure feature-integrated deep learning method for data-efficient protein engineering
Deep learning has been widely used for protein engineering. However, it is limited by the lack of sufficient experimental data to train an accurate model for predicting the functional fitness of high-order mutants. Here, we develop SESNet, a supervised deep-learning model to predict the fitness of protein mutants by leveraging both sequence and structure information and exploiting an attention mechanism. Our model integrates the local evolutionary context from homologous sequences, the global evolutionary context encoding rich semantics from the universal protein sequence space, and the structure information accounting for the microenvironment around each residue in a protein. We show that SESNet outperforms state-of-the-art models for predicting the sequence-function relationship on 26 deep mutational scanning datasets. More importantly, we propose a data augmentation strategy that leverages the data from unsupervised models to pre-train our model. After that, our model can achieve strikingly high accuracy in predicting the fitness of protein mutants, especially for higher-order variants (> 4 mutation sites), when finetuned using only a small number of experimental mutation data (< 50). The proposed strategy is of great practical value, as the required experimental effort, i.e., producing a few tens of experimental mutation data on a given protein, is generally affordable by an ordinary biochemical group, and the approach can be applied to almost any protein.
Mingchen Li, Liqi Kang, Yi Xiong, Yu Guang Wang, Guisheng Fan, Pan Tan, Liang Hong
2022-12-29T01:49:52Z
http://arxiv.org/abs/2301.00004v1
SESNet: sequence-structure feature-integrated deep learning method for data-efficient protein engineering ###### Abstract Deep learning has been widely used for protein engineering. However, it is limited by the lack of sufficient experimental data to train an accurate model for predicting the functional fitness of high-order mutants. Here, we develop SESNet, a supervised deep-learning model to predict the fitness of protein mutants by leveraging both sequence and structure information and exploiting an attention mechanism. Our model integrates the local evolutionary context from homologous sequences, the global evolutionary context encoding rich semantics from the universal protein sequence space, and the structure information accounting for the microenvironment around each residue in a protein. We show that SESNet outperforms state-of-the-art models for predicting the sequence-function relationship on 26 deep mutational scanning datasets. More importantly, we propose a data augmentation strategy that leverages the data from unsupervised models to pre-train our model. After that, our model can achieve strikingly high accuracy in predicting the fitness of protein mutants, especially for higher-order variants (> 4 mutation sites), when finetuned using only a small number of experimental mutation data (< 50). The proposed strategy is of great practical value, as the required experimental effort, i.e., producing a few tens of experimental mutation data on a given protein, is generally affordable by an ordinary biochemical group, and the approach can be applied to almost any protein. ## Introduction Proteins are the workhorses of life. Their various functions such as catalysis, binding, and transportation undertake most of the metabolic activities in cells. In addition, they are the key components of the cytoskeleton, supporting the stable and diverse forms of organisms. Nature provides numerous proteins with great potential value for practical applications. However, natural proteins often do not have the optimal function to meet the demands of bioengineering. Directed evolution is a widely used experimental method to optimize a protein's functionality, namely fitness, by employing a greedy local search over the protein fitness landscape [1, 2]. During this process, gain-of-function mutants are achieved and optimized by mutating several amino acids (AA) in the protein; these mutations are selected and accumulated through iterative rounds of mutagenesis, testing hundreds to thousands of variants in each generation. Despite the great success directed evolution has achieved, the portion of the protein fitness landscape that can be screened by this method is rather limited. Furthermore, to acquire a mutant of excellent fitness, especially a high-order mutant with multiple AA mutated, directed evolution often requires an effective high-throughput screen or a large number of experimental tests, which is experimentally and economically challenging [3]. Since experimental screening for directed evolution is costly, particularly for high-order mutations, in silico prediction of the fitness of protein variants is highly desirable. Recently, deep learning methods have been applied to predicting the fitness landscape of protein variants [2]. By building models trained to learn the sequence-function relationship, deep learning can predict the fitness of each mutant in the whole sequence space and give a list of the most favorable candidate mutants for experimental tests.
Generally, these deep learning models can be classified into protein language models [4-11], which learn representations from global unlabeled sequences [6, 7, 12], and multiple sequence alignment (MSA) based models, which capture the evolutionary information within the family of the targeted protein [13-16]. More recent works have proposed to combine these two strategies, learning on evolutionary information together with global natural sequences as the representation [17, 18], and training the model on labelled experimental data of screened variants to predict the fitness of all possible sequences. Nevertheless, all these models are focused on protein sequence, i.e., they use the protein sequence as the input of the model. Apart from sequence information, protein structure can provide additional information on function. Due to the experimental challenge of determining protein structure, the number of reported protein structures is orders of magnitude smaller than that of known protein sequences, which hinders the development of geometric deep learning models that leverage protein structural features. Thanks to the dramatic breakthroughs in deep learning-based techniques for predicting protein structure [19, 20], especially AlphaFold 2, it is now possible to efficiently predict protein structures from sequences at a large scale [21]. Recently, some studies have directly taken protein structure features as input to train geometric deep learning models, which have been shown to achieve better or similar performance in predicting protein function compared to language models [22-24]. However, fused deep-learning methods that make use of both the sequence and structural information of a protein to map the sequence-function relationship remain largely unexplored [25]. Recently, both supervised and unsupervised models have been developed for protein engineering, i.e., prediction of the fitness of protein mutants [24, 26]. Generally speaking, a supervised model can often achieve better performance compared to an unsupervised model [26], but the former requires a large amount (at least hundreds to thousands) of experimental mutation data of the protein studied for training, which is experimentally challenging [18]. In contrast, the unsupervised model does not need any such experimental data, but its performance is relatively worse, especially for the high-order mutants, which are often the final product of a directed-evolution project. It is thus highly desirable to develop a deep-learning algorithm which can efficiently and accurately predict the fitness of protein variants, especially high-order mutants, without the need for a large set of experimental mutation data on the protein concerned. In the present work, we built a supervised deep learning model (SESNet), which can effectively fuse the protein sequence and structure information together to predict the fitness of variant sequences (Fig 1A). We demonstrated that SESNet outperforms several state-of-the-art models on 26 mutagenesis datasets. Moreover, to reduce the dependence of the model on the quantity of experimental mutation data, we proposed a data-augmentation strategy (Fig 1B), where the model is first pre-trained using a large quantity of low-quality results derived from an unsupervised model and then finetuned on a small amount of high-quality experimental results.
We showed that the proposed model can achieve very high accuracy in predicting the fitness of high-order variants of a protein, even for those with more than four mutation sites, when the experimental dataset used for finetuning is as small as 40. Moreover, our model can predict the key AA sites, which are crucial for the protein fitness, and thus the protein engineer can focus on these key sites for mutagenesis. This can greatly reduce the experimental cost of trial and error. ## Results **Deep learning-based architecture of SESNet for predicting protein fitness.** To exploit the diverse information from protein sequence, coevolution and structure, we fuse three encoder modules into our model. As shown in Fig 1A: the first one (local encoder) accounts for residue interdependence in a specific protein learned from evolution-related sequences [15, 16]; the second one (global encoder) captures the sequence feature in the global protein sequence universe [6, 12]; and the third one (structure module) captures the local structural feature around each residue learned from the 3D geometric structure of the protein [23, 24]. To integrate the information of the different modules, we first concatenate the representations of the local and global encoders and get an integrated sequence representation. This integrated sequence representation is then sent to an attention layer and becomes the sequence attention weights, which are further averaged with the structure attention weights derived from the structure module, leading to the combined attention weights. Finally, the product of the combined attention weights and the integrated sequence representation is fed into a fully connected layer to generate the predicted fitness. The combined attention weights can also be used to predict the key AA sites, critical for the protein fitness, details of which are discussed in the Method section. **SESNet outperforms state-of-the-art methods for predicting the fitness of variants on deep mutational scanning (DMS) datasets.** We compared our supervised model against the existing state-of-the-art supervised models, ECNet [17] and ESM-1b [6], and unsupervised models, ESM-1v [9], ESM-IF1 [23] and the MSA transformer [15]. As can be seen in Fig 2A, in 19 out of 20 datasets, the supervised models generally outperform the unsupervised ones as expected, and our model (SESNet) achieves the best performance among all the models. Moreover, we further explored the ability of our model to predict the fitness of higher-order variants by training it using the experimental results of the low-order variants on 6 DMS datasets. Figure 1: **Architecture of model and the schematic of data-augmentation strategy.** (A) Architecture of SESNet: The local encoder accounts for the inter-residue dependence in a protein learned from the MSA of homologous sequences using a Markov random field [27]. The global encoder captures the sequence feature in the global protein sequence universe using a protein language model [6]. The structure module accounts for the microscopically environmental feature of a residue learned from the 3D geometric structure of the protein [23, 28]. (B) Schematic of the data-augmentation strategy: We first build a mutant library containing all of the single-site mutants and numerous double-site mutants. Then, all of these mutated sequences are scored by the unsupervised model. After that, these mutants are used to pre-train the initial model (SESNet), which is further finetuned on a small number of low-order experimental mutational data.
As shown in Fig 2B&C, our model outperforms all the other models. The data in Fig 2 are presented in Supplementary Tables 1, 2 & 3. These datasets cover various proteins and different types of functionality, including catalytic rate, stability, and binding affinity to peptide, DNA, RNA and antibody, as well as fluorescence intensity (Table 4). While most of the datasets contain only single-site mutants, five of them involve both single-site and double-site mutants, and the dataset of GFP contains data up to 15-site mutants. **All three components contribute positively to the performance of SESNet.** As described in the architecture above (Fig 1A), our model integrates three different encoders or modules. To investigate how much each of the three parts contributes, we performed ablation studies on 20 datasets of single-site mutants. Briefly, we removed each of the three components and compared the performance to that of the original model. As shown in Supplementary Table 5, the average Spearman correlation of the original model is 0.672, much higher than that without the local encoder (0.639), without the global encoder (0.247) and without the structure module (0.630). The ablation study reveals that all three components contribute to the improvement of model performance, and the contribution from the global encoder, which captures the sequence feature in the global protein sequence universe, is the most significant. **The combined attention weights guide the identification of key AA sites.** The combined attention weights can be used to measure the importance of each AA site for protein fitness when mutated. To a first approximation, the higher the attention score, the more important the AA site. To test this approximation, we trained our model on the experimental data of 1084 single-site mutants in the dataset of GFP [29], a green fluorescent protein from _Aequorea victoria_. The ground-truth key sites of GFP are defined here as the experimentally discovered top 20 sites, which exhibit the largest change of protein fitness when mutated, or the AAs forming and stabilizing the chromophore, which are known to significantly affect the fluorescent function of the protein [30] but lack fitness results in the experimental dataset. Indeed, one can observe that at least 4 out of the 7 top attention-score AA sites predicted by our model are key sites, as two of them (G65 and T201) are located at the chromophore, and the other two (P73 and R71) were among the top 20 residues discovered in experiment to render the highest change of fitness when mutated (Fig 3A and Fig S1A). Interestingly, when we removed the structure module from the model, only one residue among the predicted top-7 attention-score AAs is a key site (Fig 3B and Fig S1B). To further verify this finding, we also performed these tests on the dataset of RRM, the RNA recognition motif of the _Saccharomyces cerevisiae_ poly(A)-binding protein [31]. The key sites of RRM are defined as the experimentally discovered top 20 sites, which render the largest change of fitness of the protein when mutated, or the binding sites, which are within 5 Å of the RNA molecules as revealed in the structure of PDB 6R5K. Fig 3C and Fig S2A show that 4 out of the 7 top attention-score AA sites predicted by our model are key AAs. One of them (I12) is among the top 20 residues and three of them (N7, P10 and K39) are binding sites. In contrast, no key residue was found among the predicted top-7 attention-score AAs when we removed the structure module (Fig 3D and Fig S2B).
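As an aside, the site-nomination step used throughout this subsection amounts to a top-\(k\) selection on the combined attention weights. A minimal sketch (toy numbers, not the authors' code):

```python
import numpy as np

def top_attention_sites(w, k=7):
    """Return the k residue positions (0-based) with the largest
    combined attention weights; these are the nominated key AA sites."""
    return np.argsort(np.asarray(w, dtype=float))[::-1][:k]

# Toy example: 10 residues; positions 2 and 7 carry most of the weight.
w = [0.05, 0.02, 0.30, 0.04, 0.03, 0.06, 0.05, 0.25, 0.11, 0.10]
print(top_attention_sites(w, k=3))  # -> [2 7 8]
```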
The results in Fig. 3 demonstrate that the structural module, which learns the microscopic structural information around each residue, makes an important contribution to identifying the key AAs, which are crucial for protein fitness. Although the ablation study (Supplementary Table 5) reveals that the addition of the structural module improves the average Spearman correlation over the 20 datasets by only 4 percent, Fig. 3 demonstrates an important role of the structural module, which can guide the protein engineer to identify the important AA sites in a protein for mutagenesis. **The data-augmentation strategy boosts the performance of fitness prediction when finetuned on a small amount of labelled experimental data.** Supervised models normally perform better than unsupervised models (see Fig. 2) [26]. But the accuracy of a supervised model is highly affected by the amount of experimental results used for training. However, it is experimentally challenging and costly to generate sufficient data (many hundreds or even thousands) for this purpose for every protein studied. To address this challenge, we propose a simple strategy of data augmentation: using the results generated by an unsupervised model to pre-train our model on a given protein, and then finetuning it using a limited number of experimental results on the same protein (a schematic sketch is given after this paragraph). We call the result a pre-trained model. We note that data-augmentation strategies have been applied in various earlier works and have achieved good success in protein design [23, 32, 33]. In particular, to improve the accuracy of inverse folding, Ref [23] used 16153 experimentally determined 3-D structures of proteins and 12 million structures predicted by AlphaFold 2 [19] to train the model ESM-IF1 [23]. In the present work, the data augmentation strategy is used for a different purpose: to reduce the dependence of the supervised model on the size of the experimental data when predicting the fitness of protein mutants. We took GFP as an example to illustrate our data-augmentation strategy, as GFP has a large number of experimental data for testing, particularly experimental data for high-order mutants (up to 15-site mutants). We used the fitness results of low-order mutants predicted by the unsupervised model, ESM-IF1, to pre-train our model. The pre-training dataset contains the fitness of all single-site mutants and 30,000 double-site mutants randomly selected out of tens of millions of double-site variants. Then, we finetuned the pre-trained model on a certain number of experimental results of single-site mutants. The resulting model was used to predict the fitness of high-order mutants. As can be seen in Fig. 4A-D, compared with the original model without pre-training (blue bars), the performance of the pre-trained model is significantly improved (red bars). The improvement is particularly large when only a small number of experimental data points (40) is fed for training; it is gradually reduced as more experimental data are fed in, eventually disappearing when more than 1000 experimental data points are used for training. Here, we would like to particularly highlight the case when the finetuning experimental dataset contains only 40 data points. As can be seen in Fig. 4A, the pretrained model can achieve a high Spearman correlation of 0.5-0.7 for multi-site mutants, even for high-order mutants with 5-8 mutation sites.
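A minimal sketch of the two-stage schedule described above, in PyTorch; the model, data loaders and hyperparameter values are placeholders, not the released SESNet code:

```python
import torch

def pretrain_then_finetune(model, pseudo_loader, exp_loader,
                           pre_epochs=10, ft_epochs=50, lr=1e-4):
    """Stage 1: pre-train on scores from an unsupervised model
    (pseudo-labels). Stage 2: finetune on a small experimental set."""
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=1e-2)
    mse = torch.nn.MSELoss()
    for loader, n_epochs in ((pseudo_loader, pre_epochs),
                             (exp_loader, ft_epochs)):
        for _ in range(n_epochs):
            for seqs, fitness in loader:
                opt.zero_grad()
                mse(model(seqs), fitness).backward()
                opt.step()
    return model
```

In the setting of the paper, `exp_loader` may hold as few as ~40 experimental single-site mutants.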
Such accuracy from only 40 data points is remarkably important for most protein engineers, as this experimental workload is generally affordable in an ordinary biochemical research group. However, without pre-training, the performance of the supervised model is rather low (\(\sim\)0.2). This comparison demonstrates the advantage of the data augmentation strategy proposed in the present work. Moreover, we also compared the performance of the pretrained model with the unsupervised model (green bars), which was used for generating the low-quality pretraining datasets. As can be seen, when only 40 experimental data points were used for training, the pretrained model has similar performance to the unsupervised model for low-order mutants (\(<\) 4 mutation sites), but clearly outperforms the latter for high-order mutants (\(>\) 4 mutation sites). When feeding in more experimental data, especially a few hundred points, the pretrained model outperforms the unsupervised model regardless of how many sites of the protein were mutated. The unsupervised model used for the analysis in Fig. 4 is ESM-IF1, which captures the local structural information of a residue. To demonstrate the general superiority of the data-augmentation strategy proposed here, we also tested the results using another unsupervised model to generate the augmented datasets for GFP. As can be seen in Fig. S3, we used ProGen2 [8], an unsupervised model that learns the global sequence information, for data augmentation, and still reached the same conclusion as in Fig. 4. That is, the pretrained model outperforms the original model without pretraining, especially when a small experimental dataset is used for training, and it also beats the unsupervised model, particularly for high-order mutants. To further validate the generality of the data augmentation strategy proposed here, we performed the analysis on datasets of other proteins: the toxin-antitoxin complex (F7YBW8) [34], containing data up to 4-site mutants, and adeno-associated virus capsids (CAPSD_AAV2S) [35], a deep mutational dataset including data up to 23-site mutants. We used the unsupervised model ProGen2 [8] to generate the low-quality data of F7YBW8 for pretraining, since we found that ProGen2 performs better than ESM-IF1 on this dataset. As shown in Fig 5A, the pre-trained model outperforms both the original model without pretraining and the unsupervised model in the fitness prediction of all multi-site mutants (2-4 sites) after being finetuned using only 37 experimental data points. In addition, on the dataset of CAPSD_AAV2S (Fig 5B), the pre-trained model also achieves the best performance on all of the high-order mutants ranging from 2 to 23 sites, when finetuned on only 20 experimental data points. These results further support the practical use of our data augmentation strategy, as the required experimental effort is affordable for most proteins. **Learned models provide insight into protein fitness.** SESNet projects a protein sequence into a high-dimensional latent space and represents each mutant as a vector in the last hidden layer. Thus, we can visualize the relationships between sequences in these latent spaces to reveal how the networks learn and comprehend protein fitness. Specifically, we trained SESNet on the experimental data of single-site mutants from the datasets of GFP and RRM; then we used the trained model and the untrained model to encode each variant and extracted the output of the last hidden layer as a representation of the variant sequence.
Fig S4 shows a two-dimensional projection of the high-dimensional latent space using t-SNE [36]. We found that the representations of positive and negative variants, i.e., those with experimental fitness values larger or smaller than that of the wildtype, generated by the trained SESNet are clearly clustered into distinct groups (Fig S4A and Fig S4B). In contrast, the representations from the untrained model cannot provide a distinguishable boundary between positive and negative variants (Fig S4C and Fig S4D). Therefore, with supervised training, SESNet learns to separate mutants of different functional fitness in its latent representation space. Furthermore, to explore why the data-augmentation strategy works, we performed a case study on the GFP dataset. Here, we compared the latent-space representation from the last hidden layer generated by our model with and without pre-training using the augmented data from the unsupervised model. As seen in Fig. S5A, after pretraining, even without finetuning on the experimental data, SESNet can already roughly distinguish the negative and positive mutants. One can thus deduce that the pre-training furnishes a good parameter initialization for SESNet. After further finetuning the pre-trained SESNet on only 40 experimental data points of single-site mutants, a rather clear boundary between negative and positive high-order mutants is further outlined (Fig S5B). In contrast, when we skipped the pretraining process, i.e., directly trained the model on 40 experimental data points, the separation between the positive and negative high-order mutants is rather ambiguous (Fig S5C). This comparison demonstrates the superiority of our data-augmentation strategy in distinguishing mutants of distinct fitness values when the number of available experimental data points is limited. Figure 2: **Spearman correlation of predicted fitness.** A: Comparison of our model to other models on the predicted fitness of single-site mutants on 20 datasets. We performed five-fold cross-validation with 7:1:2 as the ratio of train versus validation versus test set. B: Comparison of the predicted fitness of double-site mutants of our model to other unsupervised models (ESM-1v, ESM-IF1 and MSA transformer), or supervised models (ECNet and ESM-1b). Here, our model and other supervised models were trained on the data of single-site mutants. We used 10% of double-site mutants as the validation set and the remaining 90% as the test set. C: Comparison of our model to other models on fitness prediction of quadruple-site mutants of GFP. Here, our model and other supervised models were trained using the single-, double-, triple-site mutants and all three together. We used 10% of quadruple-site mutants as the validation set and the remaining 90% as the test set. The error bars for single-site mutants were obtained from the five-fold cross-validation. Since we cannot do five-fold cross-validation in the fitness prediction of high-order mutants trained on low-order mutants, we do not show error bars for those data.
Figure 3: **The sites with the top 7 largest attention scores on the wildtype sequence.** A&B: The key sites of GFP have been marked as red spheres. A: 4 key sites were recovered by our model. G65 and T201 are the active residues helping to form and stabilize the chromophore in GFP as described by Ref [30]. P73 and R71 are among the experimentally discovered top 20 sites, which render the highest change of fitness when mutated. B: Only one key site was identified by the model when removing the structure module, namely Y37, which is among the experimentally discovered top 20 AA sites. C&D: The key sites of RRM have been marked as red spheres. C: 4 key sites were recovered by the original model. N7, P10 and K39 are binding sites within 5 Å of the RNA molecules. I12 is among the experimentally discovered top 20 sites, which render the highest change of fitness when mutated. D: No key site was identified by the model when removing the structure module. Figure 4: **Results of models trained on different numbers of experimental variants.** A-D: The Spearman correlation of fitness prediction on multi-site (2-8 sites) mutants after finetuning using 40, 100, 400 and 1084 single-site experimental mutation results from the dataset of GFP. The red and blue bars represent the results of the pre-trained model and the original model without pretraining, respectively. The green bars correspond to the results of the unsupervised model ESM-IF1 as a control. Figure 5: **Results of models trained on different datasets.** A-B: The Spearman correlation of fitness prediction on high-order mutants after finetuning on 37 experimental single-site mutation results from the dataset of F7YBW8 and on 20 experimental single-site mutation results of CAPSD_AAV2S, respectively. The red and blue bars represent the results of the pre-trained model and the original model without pretraining. The green bars correspond to the results of the unsupervised model, which is ProGen2 for F7YBW8 and ESM-IF1 for CAPSD_AAV2S, respectively. ## Discussion In this study, we present a supervised deep learning model that leverages the information of both the sequence and structure of a protein to predict the fitness of variants. This model is found to outperform the existing state-of-the-art ones for protein engineering. Moreover, we proposed a data augmentation strategy, which pretrains our model using the results predicted by an unsupervised model, and then finetunes the model with only a small number of experimental results. We demonstrated that such data augmentation significantly improves the accuracy of the model when the experimental results are very limited (\(\sim\)40), and also for high-order mutants with \(>\)4 mutation sites. We note that our work, especially the data-augmentation strategy proposed here, will be of great practical importance, as the experimental effort it requires is generally affordable by an ordinary biochemical research group and the approach can be applied to most proteins.
## Method ### Details of Model Architecture _Local encoder._ Residue interdependencies are crucial for evaluating whether a mutation is acceptable. Several models, including ESM-MSA-1b [37], DeepSequence [14], EVE [38] and Potts models [27], such as EVmutation [16] and ECNet [39], utilize multiple sequence alignments (MSA) to extract the constraints of the evolutionary process at the residue level. In the present work, we use a Potts model to establish the local encoder. This method first searches for homologous sequences and builds the MSA of the given protein with HHsuite [40]. After that, a statistical model is used to identify the evolutionary couplings by learning a generative model of the MSA of homologous sequences using a Markov random field. In the model, the probability of each sequence depends on an energy function, which is defined as the sum of single-site constraints \(e_{i}\) and all pairwise coupling constraints \(e_{ij}\): \[E(x)=\sum_{i}\mathbf{e}_{i}(x_{i})+\sum_{i\neq j}\mathbf{e}_{ij}(x_{i},x_{j}) \tag{1}\] where \(i\) and \(j\) are position indices along the sequence. The \(i\)-th amino acid \(x_{i}\) is encoded by a vector whose elements are set to the single-site term \(\mathbf{e}_{i}(x_{i})\) and the pairwise coupling terms \(\mathbf{e}_{ij}(x_{i},x_{j})\) for \(j=1,\ldots,n\), where \(n\) is the number of residues in the sequence. These coupling parameters \(\mathbf{e}_{i}\) and \(\mathbf{e}_{ij}\) can be estimated using a regularized maximum-pseudolikelihood algorithm [41, 42]. As a result, each amino acid in the sequence is represented by a vector of length \((L+1)\), and the whole input sequence is encoded as a matrix of size \((L+1)\times L\). Since the length of the local evolutionary representation of each amino acid is close to the length of the sequence, the \((L+1)\)-dimensional vector is transformed into a new vector of fixed length \(d_{l}\) (in our local encoder, \(d_{l}\)=128) through a fully connected layer to avoid overfitting. The protein sequence is also passed through a Bi-LSTM layer and transformed into an \(L\times d_{l}\) matrix for random initialization. By concatenating the two matrices above, we obtain the output of the local encoder \(\mathbf{e}^{\prime}=<\mathbf{e}^{\prime}_{1},\mathbf{e}^{\prime}_{2},...\,\mathbf{e}^{\prime}_ {L}>\), whose size is \(L\times 2d_{l}\). _Global encoder._ Recently, large-scale pre-trained models have been successfully applied to diverse tasks for inferring protein structure or function from sequence information, such as secondary structure prediction, contact prediction and prediction of mutational effects. Thus, we take a pre-trained protein language model as the global encoder, which is responsible for extracting biochemical properties and evolutionary information from the protein sequences. Several effective language models exist, such as UniRep [12], TAPE [43], ESM-1v [44], ESM-1b [37], ProteinBERT [11], etc. We tested these language models on our validation datasets, and the results show that ESM-1b performs better than the others. Therefore, we chose ESM-1b as the global encoder. The model is a BERT-based [45] context-aware language model for proteins, trained on the UniRef50 protein sequence dataset (86 billion amino acids across 250 million protein sequences). Due to its ability to represent the biological properties and evolutionary diversity of proteins, we utilize this model as our global encoder to encode the evolutionary protein sequence.
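Before moving to the formal description of the global encoder, the Potts energy of Eq. (1) above is easy to make concrete. A toy sketch with random parameters (illustration only, not the fitted model):

```python
import numpy as np

def potts_energy(seq, e_i, e_ij):
    """Potts/Markov-random-field energy of Eq. (1):
    E(x) = sum_i e_i(x_i) + sum_{i != j} e_ij(x_i, x_j).
    seq : length-L list of amino-acid indices in [0, q)
    e_i : (L, q) single-site fields
    e_ij: (L, L, q, q) pairwise couplings (diagonal i == j ignored)."""
    L = len(seq)
    single = sum(e_i[i, seq[i]] for i in range(L))
    pair = sum(e_ij[i, j, seq[i], seq[j]]
               for i in range(L) for j in range(L) if i != j)
    return single + pair

# Toy example: L = 3 positions, q = 4 letters, random parameters.
rng = np.random.default_rng(0)
L, q = 3, 4
e_i, e_ij = rng.normal(size=(L, q)), rng.normal(size=(L, L, q, q))
print(potts_energy([0, 2, 1], e_i, e_ij))
```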
Formally, given a protein sequence \(\mathbf{x}=<x_{1},x_{2},...,x_{L}>\) as input, \(x_{i}\) is the one-hot representation of the \(i\)-th amino acid in the evolutionary sequence, \(L\) is the length of the sequence, and \(N\) is the size of the amino-acid alphabet. The global encoder first encodes each amino acid and its context to \(\mathbf{g}=<g_{1},g_{2},...,g_{L}>\), where \(g_{i}\in R^{n}\) (in ESM-1b, \(n=1280\)). Then \(g_{i}\) is projected to \(g^{\prime}_{i}\) in a hidden space \(R^{h}\) of lower dimension (in our default model configuration, \(h=256\)), \(g^{\prime}_{i}=W_{G}g_{i}+b\), where \(W_{G}\in R^{h\times n}\) is a learnable affine transform parameter matrix and \(b\in R^{h}\) is the bias. The output of the global encoder is \(\mathbf{g}^{\prime}=<g^{\prime}_{1},g^{\prime}_{2},...,g^{\prime}_{L}>\in R^{L\times h}\). We integrate the ESM-1b architecture into our model, i.e., we update the parameters of ESM-1b dynamically during the training process. _Structure module._ The structure module utilizes the microenvironmental information to guide the fitness prediction. In this part, we use the ESM-IF1 model [23] to generate the scores of mutant sequences, which evaluate their ability to fold into the wildtype structure of the given protein. Higher scores mean these mutations are more favorable than others. Specifically, all possible single mutants at each position of a sequence obtain corresponding scores. The predicted sequence distribution is an (\(L\times 20\)) matrix. We then calculate the cross-entropy at each position of the sequence between the matrix above and the one-hot encoding matrix of the mutant sequence. After passing the results through a softmax function, we obtain an (\(L\times 1\)) output vector of reconstruction perplexities \(\mathbf{p}^{\prime}=<p^{\prime}_{1},p^{\prime}_{2},...\,p^{\prime}_{L}>\) aligned with the evolutionary sequence. In the present work, we do not directly encode the distance map or the 3D coordinates of the mutated protein, since before such encoding we would need to fold every specific mutant from its sequence, which would lead to unaffordable computational cost and is impractical for the task of fitness prediction. _Intra-attention._ The outputs of the local encoder and global encoder are embedding vectors aligned with all positions of the input sequence. We utilize an intra-attention mechanism to compress the whole embedding into a context vector. The inputs of the attention layer are: (1) the global representations \(\mathbf{g}^{\prime}=<g^{\prime}_{1},g^{\prime}_{2},...\,g^{\prime}_{L}>\); (2) the local representations \(\mathbf{e}^{\prime}=<e^{\prime}_{1},e^{\prime}_{2},...,e^{\prime}_{L}>\); (3) the reconstruction perplexities \(\mathbf{p}^{\prime}=<p^{\prime}_{1},p^{\prime}_{2},...,p^{\prime}_{L}>\). First, the local representations and global representations are each normalized by layer normalization [46] over the length dimension for stable training; that is, \(\mathbf{g}^{\prime}=LayerNorm(\mathbf{g}^{\prime})\) and \(\mathbf{e}^{\prime}=LayerNorm(\mathbf{e}^{\prime})\). Second, the normalized global representations and local representations are concatenated into the joint representations \(\mathbf{r}=<r_{1},r_{2},...,r_{L}>\), where \(r_{i}=[g^{\prime}_{i};e^{\prime}_{i}]\in R^{2h}\).
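As an aside, the per-position structure scoring described above reduces to reading off the cross-entropy of the mutant residue at each position and softmaxing over positions. A minimal sketch; the inputs are placeholders, not the actual ESM-IF1 API:

```python
import numpy as np

def per_position_scores(pred_logprobs, mut_idx):
    """pred_logprobs: (L, 20) per-position log-probabilities from an
    inverse-folding model; mut_idx: (L,) indices of the mutant residues.
    Cross-entropy with a one-hot target reduces to -log p(target);
    a softmax over positions then yields the weights p'."""
    L = pred_logprobs.shape[0]
    ce = -pred_logprobs[np.arange(L), mut_idx]   # (L,) cross-entropies
    z = np.exp(ce - ce.max())                    # numerically stable softmax
    return z / z.sum()                           # (L,) weights p'

rng = np.random.default_rng(1)
logits = rng.normal(size=(5, 20))
logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
print(per_position_scores(logp, mut_idx=np.array([3, 0, 7, 7, 19])))
```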
Then we use a dot-product attention layer to compute the sequence attention weights \(a=<a_{1},a_{2},...,a_{L}>\in R^{L}\), where \(a_{i}\in R\) is the attention weight on the \(i_{th}\) position, \(a_{i}=\frac{\exp(r_{i}\cdot W_{a}r_{i})}{\sum_{k=1}^{L}\exp(r_{k}\cdot W_{a}r_ {k})}\), and \(W_{a}\in R^{2h\times 2h}\) is the learnable parameter. Besides the sequence attention weights, there are structure attention weights \(s=<s_{1},s_{2},...,s_{L}>\in R^{L}\), which are calculated from the reconstruction perplexities, \(s_{i}=\frac{\exp(p^{\prime}_{i})}{\sum_{k=1}^{L}\exp(p^{\prime}_{k})}\). We use the average of the sequence attention and the structure attention as the final combined attention weights, that is, \(w=<w_{1},w_{2},...,w_{L}>\), where \(w_{i}=\frac{a_{i}+s_{i}}{2}\). According to the combined attention weights, we get the context vector \(\mathbf{c}=\sum_{i=1}^{L}w_{i}\mathbf{r_{i}}\) as the embedding vector of the entire sequence. _Output layer._ The inputs of the output layer are the context vector \(\mathbf{c}\) from the attention aggregator and an evolutionary score \(d\) from the unsupervised model [23]. Since the evolutionary score may not be trusted in many cases, we use a dynamic weight to take the score into account. The context vector \(\mathbf{c}\) is first transformed to a hidden vector \(\mathbf{h}\), where \(\mathbf{h}=ReLU(W_{h}c+b)\), \(W_{h}\) and \(b\) are learnable parameters, and **ReLU**[47] is the activation function. Then, the hidden vector \(\mathbf{h}\) is used to calculate the weight \(p\in(0,1)\) on \(d\): \(p=Sigmoid(W_{p}[\mathbf{h};d])\). The scale of \(p\) quantifies how much the model should trust the score from the zero-shot model. At last, we use a linear layer to compute a fitness score \(y_{q}\in R\) directly from the hidden vector \(\mathbf{h}\), where \(y_{q}=W_{q}h+b\). The output of our model, i.e., the predicted fitness \(y\in R\), is computed as: \[y\ =\ (1-p)\ \times\ y_{q}\ +\ p\ \times\ d. \tag{2}\] We utilize the mean square error (MSE) as the loss function to update model parameters during back-propagation: \[loss=\frac{1}{N}\sum_{i=1}^{N}(t_{i}-y_{i})^{2}, \tag{3}\] where \(N\) is the number of samples in a mini-batch, \(t_{i}\) is the target fitness and \(y_{i}\) is the output fitness. **Dataset and experimental settings** _Benchmark dataset collection_. We first collected 20 deep mutational scanning datasets from Ref [14]. Most of them only contain the fitness data of single-site mutants, while one of them (RRM) [31] also provides data on high-order mutants. The fitness data measured in these datasets include enzyme function, growth rate, peptide binding, viral replication and protein stability. We also collected the mutant data of the WW domain of human Yap1, the GB1 domain of protein G in _Streptococcus sp. group G_ and the FOS-JUN heterodimer from Ref [48], and the prion-like domain of TDP-43 from Ref [49] to evaluate the ability of our model to predict the effect of double-site mutants by learning from the data of single-site mutants. Besides, the ability to predict the fitness of higher-order mutants (more than 2 sites) is tested on the dataset from Ref [29]. This study analyzed the local fitness landscape of the green fluorescent protein from _Aequorea victoria_ (avGFP) by measuring the native function (fluorescence) of tens of thousands of derivative genotypes of avGFP. The detailed information on these datasets is provided in Table 4 in the Supplementary Information.
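Assembling the Method components above (intra-attention, combined weights, and the output layer of Eq. (2)) into one sketch; shapes follow the text, and this is our illustrative reading of the equations rather than the authors' released code:

```python
import torch

def fuse_and_predict(r, p_struct, d, W_a, W_h, b_h, W_p, b_p, W_q, b_q):
    """r: (L, 2h) joint representations; p_struct: (L,) reconstruction
    perplexities; d: scalar (0-dim tensor) evolutionary score.
    Returns the predicted fitness y of Eq. (2)."""
    scores = torch.einsum('id,de,ie->i', r, W_a, r)   # r_i . (W_a r_i)
    a = torch.softmax(scores, dim=0)                  # sequence attention
    s = torch.softmax(p_struct, dim=0)                # structure attention
    w = 0.5 * (a + s)                                 # combined weights
    c = (w.unsqueeze(1) * r).sum(dim=0)               # context vector
    h = torch.relu(W_h @ c + b_h)                     # hidden vector
    p = torch.sigmoid(W_p @ torch.cat([h, d.view(1)]) + b_p)  # trust in d
    y_q = W_q @ h + b_q                               # direct fitness head
    return (1 - p) * y_q + p * d                      # Eq. (2)
```

Training then minimizes the MSE loss of Eq. (3) between this output and the measured fitness, end to end.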
_Prediction of single-site mutation effects_. We compared our model to ECNet, ESM-1b, ESM-1v and the MSA transformer on the DMS datasets. For the supervised models (ECNet and ESM-1b), we performed five-fold cross-validation on these datasets, and 12.5% of each training set is randomly selected as the validation set. The Spearman correlation was used to evaluate the performance of the different models. _Prediction of high-order mutation effects_. We evaluated the performance for predicting the fitness of high-order mutants by models trained on low-order mutants. The training set for the prediction of double-site mutants only contains the experimental fitness of single-site mutants. The models used to predict the fitness of quadruple mutants of avGFP are trained on single-, double-, triple-site, and all three types of mutants, respectively. In both the prediction of the effect of double mutants and that of quadruple mutants, we chose 10% of the high-order mutant data as the validation set. The performance of the models was evaluated by the Spearman correlation. _Data-augmentation strategy_. The data augmentation was conducted by pre-training our model on the results predicted by the unsupervised model. To be specific, we first built a mutant library, which contains all of the single-site mutants and 30,000 double-site mutants randomly selected from tens of millions of saturated double-site mutants. Then, we used ESM-IF1 (or ProGen2) to score all of these sequences. These sequence-score data were used to pre-train our model; we used 90% of the data as the training set and 10% as the validation set. After that, we finetuned the pre-trained model on experimental single-site mutants, with the high-order mutants as the test set. **Training details.** SESNet was trained with the Adam optimizer with weight decay (equal to an L2 penalty). Hyperparameters of the model were tuned with a local grid search on the validation set. Since conducting 5-fold cross-validation and grid search on 20 datasets is costly, we only searched on two representative datasets. We performed the grid search on the GFP dataset for the multi-site case and on the RRM dataset for the single-site case to obtain the best hyperparameter configuration, and applied the search results to the other datasets. We tested hidden sizes of [128, 256, 512], learning rates of [1e-3, 5e-4, 1e-4, 5e-5, 1e-5], and dropout values of [0.1, 0.2, 0.4]. Table 7 in the SI shows the details of the hyperparameter configuration. All experiments were conducted on a GPU server with 10 RTX 3090 GPUs (24GB VRAM) and 2 Intel Gold 6226R CPUs with 2TB RAM. _Model comparison._ The source code of the ECNet model used for comparison was downloaded from the GitHub repository ([https://github.com/luoyunan/ECNet](https://github.com/luoyunan/ECNet)) provided by Ref [17]. The ESM-1b model was also reproduced on our local computers with the architecture described in its publication [6]. The code of ESM-IF1, ESM-1v and the MSA transformer (ESM-MSA-1b) was obtained from the GitHub repository of Facebook Research ([https://github.com/facebookresearch/esm](https://github.com/facebookresearch/esm)). For each assay, all experiments with the three different models were performed on the same dataset. ## Ethical Approval Not applicable ### Competing interests The authors declare no competing interests. ### Authors' contributions LH and PT designed this project, PT and ML proposed the model, and ML, LQ, YX, YGW and GF implemented the method and performed the calculations. All of the authors read and approved the final manuscript.
### Author's notes \(\dagger\)These authors contributed equally to this work. *To whom correspondence should be addressed: [email protected]; [email protected]. ### Funding This work was financially supported by the Natural Science Foundation of China (Grant No. 12104295, 11974239, 31630002, 61872094, 61832019), the Innovation Program of the Shanghai Municipal Education Commission, and the Shanghai Jiao Tong University multidisciplinary research fund of medicine and engineering YG 2016QN13. The computing hardware resource was supported by the Center for High Performance Computing at Shanghai Jiao Tong University. ### Availability of data and materials Source code for SESNet and all the datasets used in the present work can be found in the supplemental materials. The original sources of the datasets are declared and cited in the main text.
2309.05986
Some attempts on $L^{2}$ boundedness for 1-D wave equations with time variable coefficients
We consider the $L^2$-boundedness of the solution itself of the Cauchy problem for wave equations with time-dependent wave speeds. We treat it in the one-dimensional Euclidean space. To study this, we adopt a simple multiplier method by using a special property of the one-dimensional space.
Ryo Ikehata
2023-09-12T06:31:09Z
http://arxiv.org/abs/2309.05986v1
# Some attempts on \(L^{2}\) boundedness for \(1\)-D wave equations with time variable coefficients ###### Abstract We consider the \(L^{2}\)-boundedness of the solution itself of the Cauchy problem for wave equations with time-dependent wave speeds. We treat it in the one-dimensional Euclidean space \(\mathbf{R}\). To study this, we adopt a simple multiplier method by using a special property of the one-dimensional space. + Footnote †: 2020 Mathematics Subject Classification. Primary 35L05; Secondary 35L10, 35B45, 35B40. ## 1 Introduction We consider the Cauchy problem for wave equations with time-dependent wave speeds in the one-dimensional Euclidean space \(\mathbf{R}\) \[u_{tt}(t,x)-a(t)^{2}u_{xx}(t,x)=0,\quad(t,x)\in(0,\infty)\times\mathbf{R}, \tag{1.1}\] \[u(0,x)=u_{0}(x),\ \ u_{t}(0,x)=u_{1}(x),\quad x\in\mathbf{R}, \tag{1.2}\] where the initial data \([u_{0},u_{1}]\) are taken from the test function space (for simplicity) \[u_{0}\in C^{\infty}_{0}(\mathbf{R}),\quad u_{1}\in C^{\infty}_{0}(\mathbf{R}),\] \(a\in\mathrm{C}^{1}([0,\infty))\) and we denote \[u_{t}=\frac{\partial u}{\partial t},\quad u_{tt}=\frac{\partial^{2}u}{ \partial t^{2}},\quad u_{xx}=\frac{\partial^{2}u}{\partial x^{2}}.\] Throughout this paper, \(\|\cdot\|\) stands for the usual \(L^{2}(\mathbf{R})\)-norm. The total energy \(E_{u}(t)\) of the solution \(u(t,x)\) to problem (1.1) is defined by \[E_{u}(t)=\frac{1}{2}(\|u_{t}(t,\cdot)\|^{2}+a(t)^{2}\|u_{x}(t,\cdot)\|^{2}). \tag{1.3}\] We shall impose the following two assumptions on \(a(t)\): \(\mathbf{(A.1)}\,a\in\mathrm{C}^{1}([0,\infty))\) and \(a(t)>0\) for all \(t\geq 0\), and \(a_{m}:=\sup\{a(t)\,:\,t\geq 0\}<+\infty\), \(\mathbf{(A.2)}\,a^{\prime}(t)\geq 0\) for all \(t\geq 0\).
Under these conditions, it is known that the problem (1.1)-(1.2) admits a unique strong solution \(u\in{\rm C}([0,\infty);H^{2}({\bf R}))\cap{\rm C}^{1}([0,\infty);H^{1}({\bf R}) )\cap{\rm C}^{2}([0,\infty);L^{2}({\bf R}))\), which has the finite-speed propagation of waves with propagation speed \(a_{m}>0\) (cf. [7]). This is a sufficient class to deploy the multiplier method. In fact, the solution is smoother than is actually needed. The purpose of the present paper is to consider whether the \(L^{2}\)-boundedness of the solution itself for problem (1.1)-(1.2) can be observed or not in the one-dimensional case. The problem itself is by no means trivial, in the sense that one has no Hardy or Poincaré inequality. Furthermore, one can imagine that the time-dependent coefficient \(a(t)\) can be an obstacle in many ways when integrating by parts. There have been many interesting papers published on the estimate of the total energy and the asymptotic behavior of solutions of the wave equation, not only in the case of constant coefficients, but also in the case of time and space variable dependence (see [1, 4, 6, 8, 9, 12, 15, 19] and the references therein). Furthermore, a series of sharp research results have been published on the \(L^{p}\)-\(L^{q}\)-estimate of the solution itself, starting with Strichartz [17] (see e.g. [2, 3, 13, 14, 16, 18]). However, when we look at the \(L^{2}\) estimate of the solution itself, there seems to be a gap from the viewpoint of optimality. In such a situation, the author [10] published the following result for the case \(a(t)=1\) (constant coefficient) in problem (1.1)-(1.2): \[\int_{-\infty}^{\infty}u_{1}(x)dx\neq 0\Rightarrow\|u(t,\cdot)\|\sim\sqrt{t}, \quad(t\to\infty). \tag{1.4}\] This shows a growth estimate of the \(L^{2}\)-norm of the solution itself to problem (1.1)-(1.2) with \(a(t)=1\). Thus, when one wishes to observe the \(L^{2}\) boundedness of the solution itself, it is necessary to consider the general time-variable coefficient problem (1.1) by factoring in the information that the zero-order moment of the initial velocity may or may not vanish. Herein lies the difficulty of deriving the solution's own \(L^{2}\) estimate in the case of general variable coefficients. Furthermore, with variable coefficients, it is not possible to describe and evaluate the solution "explicitly" using the Fourier transform as in the method of [10], and this makes the prospect of deriving an \(L^{2}\) estimate from below for the solution itself appear rather hopeless. Therefore, in order to construct a "general theory" including constant coefficients to observe the \(L^{2}\) boundedness of the solution itself, the condition that the zero-order moment of the initial velocity vanishes can never be removed. We now state our results. To do so, we define \[v_{1}(x):=\int_{-\infty}^{x}u_{1}(y)dy.\] Our result then can be stated as follows. **Theorem 1.1**: _Suppose_ (**A.1**)_,_ (**A.2**) _and assume \(v_{1}\in L^{2}({\bf R})\). Then, the corresponding solution \(u(t,x)\) to problem_ (1.1)-(1.2) _with initial data \([u_{0},u_{1}]\in C_{0}^{\infty}({\bf R})\times C_{0}^{\infty}({\bf R})\) satisfies_ \[\|u(t,\cdot)\|^{2}\leq I_{0}^{2}a(0)^{-2},\quad(t\geq 0)\] _with a constant \(I_{0}\geq 0\) defined by_ \[I_{0}:=\left(\|v_{1}\|^{2}+a(0)^{2}\|u_{0}\|^{2}\right)^{1/2}.\] **Remark 1.1**: Theorem 1.1 essentially imposes a new condition on \(u_{1}\) through \(v_{1}\in L^{2}({\bf R})\). For example, if \(u_{1}\in C_{0}^{\infty}({\bf R})\) is odd about the origin, then we see that \(\int_{\bf R}u_{1}(x)dx=0\), and in this case \(v_{1}\in C_{0}^{\infty}({\bf R})\). Thus one has \(v_{1}\in L^{2}({\bf R})\). The condition \(v_{1}\in L^{2}({\bf R})\) on the initial velocity \(u_{1}\) of the theorem therefore makes sense. **Example 1.** We can find an appropriate function \(a(t)\) satisfying (**A.1**) and (**A.2**): \[a(t)=\left\{\begin{array}{ll}1+e^{-1/t},&t>0,\\ \\ 1,&t=0.\end{array}\right.\] From the proof of Theorem 1.1 we see that the condition **(A.2)** can be replaced by the following one: \(\mathbf{(A.3)}\,a^{\prime}(t)\leq 0\) for all \(t\geq 0\), and \(A_{0}:=\inf\{a(t)\,:\,t\geq 0\}>0\). Then, one can also derive the following corollary. A monotone "decreasing" function \(a(t)\) can also be covered by our theory. **Corollary 1.1**: _Suppose_ (**A.1**) _and_ (**A.3**) _and assume \(v_{1}\in L^{2}(\mathbf{R})\). Then, the corresponding solution \(u(t,x)\) to problem (1.1)-(1.2) with initial data \([u_{0},u_{1}]\in C_{0}^{\infty}(\mathbf{R})\times C_{0}^{\infty}(\mathbf{R})\) satisfies_ \[\|u(t,\cdot)\|^{2}\leq I_{0}^{2},\quad(t\geq 0).\] **Example 2.** We can also choose \(a(t):=1+e^{-t}\), and/or \(a(t):=\frac{2+t}{1+t}\).
Then, the statement of Corollary 1.1 implies \[\|u(t,\cdot)\|^{2}\leq I_{0}^{2},\quad(t\geq 0).\] **Remark 1.2**: The conditions \(\sup\{a(t)\,:\,t\geq 0\}<+\infty\) and \(\inf\{a(t)\,:\,t\geq 0\}>0\) assumed in **(A.2)** and **(A.3)** express the finiteness of the propagation speed of the wave and the ellipticity of the operator \(u\mapsto a(t)^{2}\partial_{xx}u\), respectively. By modifying the proof of Theorem 1.1 one can also present another version of the result. For this purpose we set one more assumption. \(\mathbf{(A.4)}\,A_{0}>0\), and \(a^{\prime}\in L^{1}(0,\infty)\), where \(A_{0}\) is the constant already defined in **(A.3)**. Then, one can derive one more corollary. **Corollary 1.2**: _Suppose_ (**A.1**) _and_ (**A.4**) _and assume \(v_{1}\in L^{2}(\mathbf{R})\). Then, the corresponding solution \(u(t,x)\) to problem (1.1)-(1.2) with initial data \([u_{0},u_{1}]\in C_{0}^{\infty}(\mathbf{R})\times C_{0}^{\infty}(\mathbf{R})\) satisfies_ \[\|u(t,\cdot)\|^{2}\leq\frac{1}{A_{0}^{2}}I_{0}^{2}e^{\frac{2}{A_{0}}\int_{0}^ {\infty}|a^{\prime}(s)|ds},\quad(t\geq 0).\] **Example 3.** We can present an additional oscillating example: \[a(t):=2+\frac{\sin t}{(1+t)^{2}}.\] The above theorem and examples include the case of constant coefficients \(a(t)=1\). Therefore, what is still a concern is the fear that some moment conditions for the initial velocity \(u_{1}\) may contradict (1.4). Let us discuss this situation below. Suppose \(u_{1}\in C_{0}^{\infty}(\mathbf{R})\). Then, there is a large number \(L>0\) such that one can assume that \(\operatorname{supp}u_{1}\subset[-L,L]\). Since \[v_{1}(x)=\int_{-\infty}^{x}u_{1}(y)dy,\] if \(x>2L\), then one sees that \[v_{1}(x)=\int_{-L}^{L}u_{1}(y)dy=\int_{-\infty}^{\infty}u_{1}(y)dy=:c_{0}\quad (\forall x>2L).\] Assume for the moment that \(c_{0}\neq 0\). Then, it follows that \[\|v_{1}\|^{2}=\int_{-\infty}^{2L}|v_{1}(x)|^{2}dx+\int_{2L}^{\infty}|v_{1}(x)|^{ 2}dx\geq\int_{2L}^{\infty}|v_{1}(x)|^{2}dx=\int_{2L}^{\infty}c_{0}^{2}dx=\infty,\] which contradicts the assumption that \(v_{1}\in L^{2}({\bf R})\) in Theorem 1.1 and Corollary 1.1. Thus one must have \(c_{0}=0\). Note that \(u_{1}\in C_{0}^{\infty}({\bf R})\) and \(v_{1}\in L^{2}({\bf R})\) imply \(v_{1}\in C_{0}^{\infty}({\bf R})\) and \(v_{1}(\infty)=0\). Thus, under the conditions of Theorem 1.1, there is no contradiction with (1.4) because of the additional assumption that the zero-order moment of the initial velocity vanishes. Conversely, if the zero-order moment of the initial velocity does not vanish, then the growth property as in (1.4) may still be expected, but this remains unresolved at this time. Note that the theorem takes the initial data from the class of test functions, but this is not essential to the theorem; this point is, however, beyond the scope of the present paper. **Remark 1.3**: With our method, it is difficult at present to handle higher-dimensional cases. Also, by analogy with the results in [10] for constant coefficients, the moment vanishing condition for the initial velocity \(u_{1}\) would be required even in two dimensions, but that condition would not be necessary for three or more dimensions. The \(L^{2}\)-boundedness is more likely to hold in three or more dimensions.__ **Remark 1.4**: From the above discussion, we expect the following in the 1-D case (and probably in the 2-D case as well): under the condition \[\int_{\bf R}u_{1}(x)dx\neq 0,\] one has \[\lim_{t\to\infty}\|u(t,\cdot)\|=+\infty\] with some growth rate.
This is still open in the time variable coefficient case (cf. (1.4)).__ The remainder of this paper is organized as follows. In Section 2, we shall prove Theorem 1.1 and Corollaries 1.1 and 1.2. ## 2 Proof of results. In this section, let us prove Theorem 1.1 and then Corollary 1.1 by using a multiplier method inspired by the idea in [5]. Incidentally, it seems that the method used in [5] is itself partially inspired by an idea developed in [11], as can be observed from their proofs. _Proof of Theorem 1.1._ First of all, define a function \(v(t,x)\) by \[v(t,x):=\int_{-\infty}^{x}u(t,y)dy.\] Note that the function \(v(t,x)\) is well-defined because of the finite propagation speed of waves. Furthermore, for each \(j=0,1\) we set \[v_{j}(x):=\int_{-\infty}^{x}u_{j}(y)dy.\] Incidentally, we see that \(v_{j}\in C^{\infty}({\bf R})\) (\(j=0,1\)). Then, the function \(v(t,x)\) satisfies \[v_{tt}(t,x)-a(t)^{2}v_{xx}(t,x)=0,\quad(t,x)\in(0,\infty)\times{\bf R}, \tag{2.1}\] \[v(0,x)=v_{0}(x),\ \ v_{t}(0,x)=v_{1}(x),\quad x\in{\bf R}, \tag{2.2}\] where one has just used the fact that \[\lim_{y\to-\infty}u_{y}(t,y)=0\quad(t\geq 0).\] Multiplying both sides of (2.1) by \(v_{t}\) and integrating by parts yields the equality \[\frac{d}{dt}E_{v}(t)=a(t)a^{\prime}(t)\|v_{x}(t,\cdot)\|^{2},\] where one has used the fact that \(v_{x}(t,x)=u(t,x)=0\) for large \(|x|\gg 1\) and each \(t\geq 0\). Integrating the above equality over \([0,t]\), it follows that \[E_{v}(t)=E_{v}(0)+\int_{0}^{t}a(s)a^{\prime}(s)\|v_{x}(s,\cdot)\|^{2}ds \tag{2.3}\] \[=E_{v}(0)+2\int_{0}^{t}\frac{a^{\prime}(s)}{a(s)}\left(\frac{1}{2}a(s)^{2}\|v _{x}(s,\cdot)\|^{2}\right)ds \tag{2.4}\] \[\leq E_{v}(0)+2\int_{0}^{t}\frac{a^{\prime}(s)}{a(s)}E_{v}(s)ds,\] where we have just used the assumption **(A.2)**. Thus, by using the Gronwall inequality one has \[E_{v}(t)\leq E_{v}(0)e^{2\int_{0}^{t}\frac{a^{\prime}(s)}{a(s)}ds}. \tag{2.5}\] Therefore, from the definition of the total energy and (2.5) one can get the inequality \[\|v_{x}(t,\cdot)\|^{2}\leq 2E_{v}(0)a(t)^{-2}e^{2W(t)},\] where \[W(t):=\int_{0}^{t}\frac{a^{\prime}(s)}{a(s)}ds=\int_{0}^{t}\frac{d}{ds}\log a (s)ds=\log\frac{a(t)}{a(0)}.\] Thus one has \[\|v_{x}(t,\cdot)\|^{2}\leq 2E_{v}(0)a(t)^{-2}\left(\frac{a(t)}{a(0)}\right)^{ 2}=2E_{v}(0)a(0)^{-2}.\] Now, since \(v_{x}(t,x)=u(t,x)\) (this is a crucial idea in [11] and [5]), we can arrive at the desired estimate: \[\|u(t,\cdot)\|^{2}\leq 2E_{v}(0)a(0)^{-2}\quad(t\geq 0).\] Set \[I_{0}:=\left(2E_{v}(0)\right)^{1/2}=\left(\|v_{1}\|^{2}+a(0)^{2}\|u_{0}\|^{2 }\right)^{1/2}. \tag{2.6}\] These imply the desired statement of Theorem 1.1. \(\Box\) **Remark 2.1**: The function \(v(t,x):=\int_{-\infty}^{x}u(t,y)dy\) used in the proof above can be considered as \[v(t,x)\sim v(t,\infty)\quad(x\to\infty),\] and \[v(t,\infty)=\int_{\mathbf{R}}u(t,y)dy=\left.\int_{\mathbf{R}}e^{-iy\xi}u(t,y)dy \right|_{\xi=0}=\mathcal{F}(u(t,\cdot))(0)=\hat{u}(t,0),\] where \(\mathcal{F}\) denotes the Fourier transform. Thus, the function \(v(t,x)\) is approximately equal to \(\hat{u}(t,0)\) as \(x\to\infty\). The estimate for \(v(t,x)\) may correspond to the low-frequency estimate near \(\xi=0\) of \(\hat{u}(t,\xi)\). _Proof of Corollary 1.1._ From (2.3) in the proof of Theorem 1.1 and the assumption **(A.3)** we see that \[E_{v}(t)\leq E_{v}(0).\] The rest of the proof is similar.
\(\square\) _Proof of Corollary 1.2._ It follows from (2.4) in the proof of Theorem 1.1 and assumption **(A.4)** that \[E_{v}(t)\leq E_{v}(0)+2\int_{0}^{t}\frac{|a^{\prime}(s)|}{a(s)}E_{v}(s)ds\leq E _{v}(0)+\frac{2}{A_{0}}\int_{0}^{t}|a^{\prime}(s)|E_{v}(s)ds.\] The use of the Gronwall inequality yields the desired estimate similarly. \(\square\) _Acknowledgement._ The work of the author (R. IKEHATA) was supported in part by Grant-in-Aid for Scientific Research (C) 22540193 of JSPS.
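_Numerical illustration._ The boundedness asserted in Theorem 1.1 is easy to observe numerically. The following sketch (an illustration added here, not part of the original argument; the leapfrog scheme, grid sizes, and Gaussian initial data are assumptions) solves \(u_{tt}=a(t)^{2}u_{xx}\) with the oscillating coefficient of Example 3 and an initial velocity with vanishing zero-order moment, and monitors the \(L^{2}\) norm of \(u\):

```python
import numpy as np

# Illustrative sketch only: leapfrog solver for u_tt = a(t)^2 u_xx with the
# oscillating coefficient of Example 3; the initial velocity u1 has a
# vanishing zero-order moment, as required by Theorem 1.1.
def a(t):
    return 2.0 + np.sin(t) / (1.0 + t) ** 2

L, N, T = 200.0, 4000, 40.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
dt = 0.4 * dx / 3.0                      # CFL margin: a(t) < 3 for all t

u0 = np.exp(-x ** 2)                     # initial displacement
u1 = -2.0 * x * np.exp(-x ** 2)          # initial velocity, int u1 dx = 0
v1 = np.exp(-x ** 2)                     # v1(x) = int_{-inf}^{x} u1(y) dy
I0 = np.sqrt(np.trapz(v1 ** 2, x) + a(0.0) ** 2 * np.trapz(u0 ** 2, x))

def lap(w):                              # discrete Laplacian, zero at ends
    return np.concatenate(([0.0], w[2:] - 2 * w[1:-1] + w[:-2], [0.0])) / dx ** 2

u_prev = u0.copy()
u_curr = u0 + dt * u1 + 0.5 * (a(0.0) * dt) ** 2 * lap(u0)  # Taylor start-up
sup_norm, t = np.sqrt(np.trapz(u_curr ** 2, x)), dt
while t < T:
    u_next = 2 * u_curr - u_prev + (a(t) * dt) ** 2 * lap(u_curr)
    u_prev, u_curr, t = u_curr, u_next, t + dt
    sup_norm = max(sup_norm, np.sqrt(np.trapz(u_curr ** 2, x)))

print(f"sup_t ||u(t)|| = {sup_norm:.4f}  vs  I0/a(0) = {I0 / a(0.0):.4f}")
```

For this choice of data the computed norm should remain below the bound \(I_{0}/a(0)\) obtained in the proof of Theorem 1.1 for all simulated times.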
2309.04440
Registration of terahertz irradiation with silicon carbide nanostructures
The response to external terahertz (THz) irradiation from the silicon carbide nanostructures prepared by the method of substitution of atoms on silicon is investigated. The kinetic dependence of the longitudinal voltage is recorded at room temperature by varying the drain-source current in the device structure performed in a Hall geometry. In the framework of the proposed model based on the quantum Faraday effect, the incident radiation results in a generated current in the edge channels accompanied by a change in the number of magnetic flux quanta, and hence in features in the kinetic dependence of the longitudinal voltage. The generation of intrinsic terahertz irradiation inside the silicon carbide nanostructures is also revealed by electrically detected electron paramagnetic resonance (EDEPR), measured via the longitudinal voltage as a function of the magnetic field value.
N. T. Bagraev, S. A. Kukushkin, A. V. Osipov, L. E. Klyachkin, A. M. Malyarenko, V. S. Khromov
2023-09-07T11:53:09Z
http://arxiv.org/abs/2309.04440v1
# Registration of terahertz irradiation with silicon carbide nanostructures ###### Abstract The response to external terahertz (THz) irradiation from the silicon carbide nanostructures prepared by the method of substitution of atoms on silicon is investigated. The kinetic dependence of the longitudinal voltage is recorded at room temperature by varying the drain-source current in the device structure performed in a Hall geometry. In the framework of the proposed model based on the quantum Faraday effect, the incident radiation results in a generated current in the edge channels accompanied by a change in the number of magnetic flux quanta, and hence in features in the kinetic dependence of the longitudinal voltage. The generation of intrinsic terahertz irradiation inside the silicon carbide nanostructures is also revealed by electrically detected electron paramagnetic resonance (EDEPR), measured via the longitudinal voltage as a function of the magnetic field value. ## 1 Introduction Terahertz (THz) radiation can be used in security systems to detect concealed objects without any detriment to human health. It is also worth noting that the processes in materials subjected to THz irradiation (rotational transitions in molecules, lattice vibrations in solids, large-amplitude vibrational motion in organic compounds) may be utilized in detection of explosives and chemical and biological weapons [6]. At first, thermometers were used to detect infrared (IR) and THz radiation. Thermal effects induced by incident radiation form the physical basis for a class of so-called thermal THz radiation detectors. A Golay cell, where heat from an absorber irradiated by incident radiation is transferred to a monoatomic gas (argon or xenon) isolated within a cell with a membrane wall, belongs to this class. The gas expands and induces deformation of the membrane wall with a mirror surface, which is irradiated by a light-emitting diode. Radiation reflected from the mirror wall is incident on a photodiode. The illumination intensity measured by it is proportional to the degree of mirror deformation [9]. In solid-state samples, temperature variations translate into changes in spontaneous polarization of samples made from pyroelectric materials, of which triglycine sulfate (TGS) and deuterated triglycine sulfate (DTGS) are common examples. The absorption of heat induces a change in intensity of the electric field between the opposite faces of a sample. This variation may be detected [9]. While thermal detectors of this kind are suitable for room-temperature operation, they are not sufficiently sensitive to detect low-intensity radiation and are affected by displacement and vibrations (this is especially true for pyroelectric detectors, which are also piezoelectric [9]). Owing to carrier freeze-out, the resistance increases significantly in semiconductor bolometers cooled to low temperatures. Following absorption of THz quanta, carriers enter the conduction band, thus inducing a detectable resistance reduction [9]. Mercury cadmium telluride (MCT) is one of the commonly used materials for such devices. Although these detectors feature a fast response and high sensitivity, their application is limited by the need for cryogenic cooling [9]. Several attempts at circumventing this limitation with the use of device structures based on HTSC materials have been made [10], but technological difficulties associated with the production of devices with reproducible characteristics hindered the progress in this research. Terahertz irradiation may also interact with carriers in a detector, which is then said to belong to the class of so-called electron detectors. Effects underlying their operation may be related to the collective behavior of carriers (interaction of THz radiation with plasma waves in the channel of field-effect transistors, which may be characterized using a hydrodynamic analogy [11]) or to the interaction of photons with individual carriers (e.g., the passage of a carrier through a potential barrier in a Schottky diode). Such detectors usually operate at room temperature and are compact, thus allowing one to construct array structures. 
The narrow-band nature of detection is one of their drawbacks; in the case of transistor-type detectors, this necessitates the use of several antenna structures [9]. Terahertz irradiation receivers may be regarded as devices that are either separate from the source or are associated closely with it. Specifically, the technique of self-mixing involves feeding a part of radiation emitted by a laser back into the cavity and modifying the laser operation parameters. The amplitude and the phase of this reradiated signal may be measured [12]. This interferometry technique implemented with a quantum cascade laser was used to visualize the state of biological tissues [13]. It was demonstrated in recent studies that semiconductor nanosandwiches based on silicon heavily doped with boron may be used as sources of THz radiation in diagnostics of oncological diseases performed in accordance with the above method [14]. Radiation is generated in these nanosandwiches in edge channels confined by dipole centers that form networks of Josephson junctions [15]. It was found recently that silicon carbide, which provides an opportunity to raise the radiation power, may be used as the base material for a nanosandwich emitter [16]. Therefore, examining the possibility of detecting THz radiation at room temperature with a nanosandwich of this kind is a topical issue. ## 2 Experimental methods Semiconductor samples with single-crystalline SiC films grown on the surface of single-crystalline silicon were used as the basis for a detector. The procedure of their fabrication by coordinated substitution of atoms with the use of a chemical reaction between silicon and carbon monoxide gas was detailed in [17, 18, 19]. An in-depth description of the accompanying processes was given in reviews [20, 21]. This SiC film growth mechanism differs from the other methods in that the structure of the initial cubic Si lattice persists, thus providing for the growth of the cubic 3\(C\)-SiC [20, 21, 22] polytype. This was verified in electron microscopic studies, which also revealed the lack of lattice misfit dislocations at the 3\(C\)-SiC(111)/Si(111) interface; instead, stacking faults with interlayers of hexagonal phases are present at the interphase boundary [23]. The term "coordinated" implies that the processes of removal of a Si atom from the lattice and introduction of a C atom into the vacant position in reaction \[2\text{Si (crystal)}+\text{CO (gas)}=\text{SiC (crystal)}+\text{SiO (gas)}\uparrow \tag{1}\] are concurrent [22]. With the mentioned lack of misfit dislocations, epitaxy of silicon carbide films on silicon due to the coordinated substitution of one half of Si atoms with C atoms ensures high crystalline perfection of SiC films [20, 21, 22, 23]. The synthesis of silicon carbide in reaction (1) is a two-stage process. Silicon vacancy-interstitial carbon atom complexes form first. Carbon atoms then shift toward silicon vacancies with the formation of silicon carbide. Activated complexes transform into silicon carbide, and free vacancies assemble in pores below the SiC layer. The end result is the formation of a silicon carbide film partially suspended above pores in silicon. This is the reason why films formed this way are free from elastic strain [17, 18, 19, 20, 21, 22, 23]. In contrast to traditional growth methods, the film orientation is set not just by the substrate surface, but by the "old" crystal structure of the initial silicon matrix. 
The emergence of a layer with unusual optical and electrophysical properties and a thickness on the order of several nanometers at the SiC/Si interface is an important feature of this technique of synthesis of silicon carbide by coordinated substitution of atoms. Its formation is induced by the process of contraction of the initial Si lattice with a parameter of 0.543 nm, which "collapses" to a cubic SiC lattice with a parameter of 0.435 nm, at the final stage of transformation of silicon into silicon carbide. This process occurs in the substrate plane [20, 23]. Silicon carbide detached from the silicon matrix subjects it to anomalously strong compression (exceeding 100 GPa in magnitude). It would be impossible to produce SiC with a structure ordered so tightly under such pressures if every fifth lattice cell of silicon carbide did not align accurately with every fourth cell of silicon. As a result of material contraction, every fifth chemical bond of SiC is positioned in a coordinated fashion with every fourth bond of Si. The other bonds either get disrupted, thus producing vacancies and pores, or are subjected to compression, which alters the structure of surface bands of silicon carbide adjacent to Si and leads to its transformation into a "semimetal". This effect was observed for the first time in a recent study performed using the spectral ellipsometry technique in the 0.5-9.3 eV range of photon energies [24]. It follows from the results of quantum-chemical calculations [24] that, in the process of dislocation-free matching of SiC and Si lattices that differ by 20%, a SiC film with its Si surface facing the substrate attracts one of the 16 Si atoms in the proximal double layer of substrate atoms. Out of 25 Si atoms, 22 form chemical bonds with Si substrate atoms, while the remaining three (i.e., 12%) do not form bonds, since they are located too far (\(>3\) Å) from substrate atoms. These are the Si atoms in SiC whose \(p\) electrons produce the primary contribution to the narrow and well-pronounced peak of the electron state density located in the vicinity of the Fermi energy at the 3\(C\)-SiC(111)/Si(111) interface. In other words, the 3\(C\)-SiC(111)/Si(111) interface should exhibit unusual electrophysical properties; specifically, it should be a fine electric conductor. This interface also exhibits certain unusual magnetic properties. A sample with a silicon carbide film grown on (110) silicon was examined in [25]. Following the formation of this film, doping with boron in the conditions of non-equilibrium gas-phase diffusion was performed; the technological parameters for this sample (F5) are given in [25]. The observed "dia-para-hysteresis" of the magnetic susceptibility amounts to an experimental demonstration of the Meissner-Ochsenfeld effect. Oscillations of the magnetic susceptibility, which signify the fulfillment of conditions of quantum interference in the vicinity of microdefects in the sample plane, were also noted. The examination of these oscillations allowed us to identify the de Haas-van Alphen (DHVA) and Aharonov-Bohm (AB) effects, which are associated with quantization of the moment and the magnetic flux at room temperature, respectively, due to the effective suppression of the electron-electron interaction at high temperatures. 
This is made possible by the formation of dipole boron centers with a negative correlation energy that confine the edge channels of the studied structure and, owing to their interaction with carriers, govern the characteristics of DHVA and AB oscillations [26]. It should be noted that edge channels may be formed not only by dipole boron centers, but also by dipole centers of the "silicon vacancy-interstitial carbon atom" type, which are always present in SiC/Si structures grown by coordinated substitution of atoms on the (111) and (110) silicon substrate surfaces [22]. Thus, the observation of quantum interference in edge channels is interrelated with the above-mentioned "dia-para-hysteresis" of the magnetic susceptibility. Surface contacts were formed in the Hall geometry (see Fig. 1) to carry out experiments with current flowing through the detector sample based on silicon carbide.
Figure 1: Hall geometry of contacts on the surface of the studied SiC-based structure. Parameters (\(\mu\)m): \(a=50\), \(b=200\), \(d=200\), \(l=4200\), \(f=1000\). Dashed contours denote the positions of vertical contacts \(b\times b\) in size formed above the structure surface.
The measurement diagram presented in Fig. 2, \(a\) was implemented in these experiments. The studied detector sample based on silicon carbide was positioned at distance \(s\) from the THz radiation emitter. The detector was housed in a protective metallic box with an aperture. A Tydex LPF14.3 THz filter blocking radiation with frequencies \(>\) 14.3 THz was mounted in front of the aperture. A nanosandwich based on silicon heavily doped with boron was used as the THz radiation emitter. It contained an ultra-narrow silicon quantum well (2 nm in width) confined by two \(\delta\)-barriers containing dipole boron centers with a negative correlation energy [27]. This emitter has the geometry of a Hall bridge (similar to the one presented in Fig. 1) and is characterized by the following parameters (\(\mu\)m): \(a=50\), \(b=200\), \(d=200\), \(l=4720\), and \(f=1000\). The generation of THz radiation is induced by the passage of longitudinal source-drain current \(I_{ds\ (emi)}\) (Fig. 2, \(b\)) of the milliampere range through the emitter [27]. Both samples were housed in aluminum enclosures with an aperture and were positioned with their plane surfaces facing each other. Longitudinal voltage \(U_{ds}\) of the detector was measured in the experiment as a function of time with longitudinal source-drain current \(I_{ds}=1.5\) \(\mu\)A flowing through the sample (Fig. 2, _c_). These measurements were performed at room temperature. A total of 580-1500 measurements with an overall duration of 245-634 s, respectively, were carried out in a single run. Terahertz generation was lacking in the first 200 measurements; the detector was irradiated (with longitudinal current \(I_{ds\,(emi)}=30\) mA applied to the emitter) in all measurements starting from the 201st one. In view of this, the quantity \[\Delta U_{ds}=U_{ds}-\left<U_{ds}\left(1-200\right)\right>, \tag{2}\] which is equal to the difference between the experimental value of \(U_{ds}\) and its mean value determined in the first 200 measurements, was used for analysis. ## 3 Results and discussion The results of measurements are presented in Fig. 3, where the vertical line at Counts\(=200\) marks the initial moment of generation when the emitter current is switched on. The characteristic step features of the kinetic \(\Delta U_{ds}(t)\) dependence are reproduced at different distances \(s=42\) (Fig. 
3, _a_), 36 (Fig. 3, _b_), and 30 mm (Fig. 3, _c_). A feasible model relating these \(\Delta U_{ds}(t)\) features to the detected radiation intensity via the Faraday formula is proposed below. The discussed experiment is specific in that the trapping of single magnetic flux quanta in quantum interference regions is possible. This is manifested in the discovered magnetic susceptibility oscillations with their period depending on the size of the quantum interference region [25]. Thus, it is fair to say that we observe the quantum Faraday effect induced by the trapping of single magnetic flux quanta (magnetic field lines) in quantum interference regions. Characteristic size \(L\) of the region of quantum carrier interference may be estimated based on the results of measurements in a magnetic field. According to the magnetic-field dependences of longitudinal voltage \(U_{xx}\) of the studied sample, the region of interference of a single carrier in the edge channel is 134 \(\mu\)m\(\times\)1.54 nm in size [16]. Since the electron-electron interaction may be suppressed strongly under high pressures on the order of several hundred GPa at the silicon substrate-silicon carbide interface (see above), the formation of interference regions containing a pair of carriers with \(L_{1}=268\) \(\mu\)m (i.e., twice as big) is possible.
Figure 2: \(a\) — Mutual positioning of the Si-based radiation emitter and the SiC-based THz radiation detector; \(s\) — distance between the samples. \(b\) — Diagram of passage of longitudinal source–drain current \(I_{ds(emi)}\) through the emitter sample. \(c\) — Diagram of passage of longitudinal source–drain current \(I_{ds}\) through the detector sample and measurement of longitudinal detector voltage \(U_{ds}\).
The characteristic radius \(R\) of a region allowing for interference of carrier pairs may be estimated based on the measurement data on magnetic susceptibility oscillations: \[R^{2}=\frac{\Phi_{0}}{\pi\Delta B}\,, \tag{3}\] where \(\Phi_{0}=h/2e\) is the magnetic flux quantum and \(\Delta B\) is the period of magnetic susceptibility oscillations. The determined periods of 13 and 300 Oe [25] translate into \(R\) values of 0.712 and 0.148 \(\mu\)m, respectively. Characteristic size \(L=2R\) of the interference region is then \(L_{2}=1.424\) \(\mu\)m and \(L_{3}=0.296\) \(\mu\)m. Knowing the characteristic sizes of regions of quantum interference of carrier pairs, one may relate the observed \(\Delta U_{ds}(t)\) features to the incident radiation frequency using the Faraday formula: \[I_{gen}=\frac{\Delta E}{\Delta\Phi}=\frac{h\nu}{\Phi_{0}}\,, \tag{4}\] where \(I_{gen}\) is the generation current produced after the introduction of additional energy \(\Delta E\) into the system in the presence of magnetic flux variation \(\Delta\Phi\). The relation \(\Delta\Phi=\Phi_{0}=h/2e\) (\(\Delta\Phi=\Delta B\cdot S\), where \(\Delta B\) is the field variation upon trapping of a single magnetic flux quantum \(\Phi_{0}\) in a quantum interference region with area \(S\)) holds true for a carrier pair in the context of trapping of isolated field lines in a quantum interference region.
Figure 3: Dependence \(\Delta U_{ds}(t)\) for the SiC-based detector at distances \(s=42\) (\(a\)), 36 (\(b\)), 30 mm (\(c\)). Step features at 400 and 1200 nV correspond to the detection of frequencies of 2.745 THz and 9.096 GHz. The vertical line marks the initial moment of generation when emitter current \(I_{ds(emi)}=30\) mA is switched on. Temperature \(T=300\) K, \(I_{ds}=1.5\) \(\mu\)A. 
Therefore, the generation current induced by radiation incident on the detector is related to the corresponding voltage via the conductance quantum \(G_{0}=2e^{2}/h\) in the following way: \(I_{gen}=2G_{0}U\). The end result is the following expression that may be used to estimate the incident radiation frequency: \[\nu=N\cdot G_{0}\Delta U_{ds}/e\,, \tag{5}\] where \(N=L_{0}/L_{i}\) is the number of quantum interference regions with characteristic size \(L_{1,2,3}\) connected in parallel within distance \(L_{0}\) between the measurement contacts. In the case of \(ds\) contacts, \(L_{0}=l=4200\) \(\mu\)m. The feature at \(\Delta U_{ds}=400\) nV characterizes the contribution of regions with \(L_{3}=0.296\) \(\mu\)m to the generation current and corresponds to \(\nu=2.745\) THz. The feature at \(\Delta U_{ds}=1200\) nV characterizes the contribution of regions with \(L_{1}=268\) \(\mu\)m to the generation current and corresponds to \(\nu=9.096\) GHz. In order to resolve finer features, one may perform measurements in the same geometry, but using \(xx\) contacts as measurement ones (see Fig. 1). The results of such measurements are presented in Fig. 4. In this scenario, \(L_{0}=2f=2000\) \(\mu\)m; the feature at \(\Delta U_{xx}=220\) nV characterizes the contribution of regions with \(L_{2}=1.424\) \(\mu\)m to the generation current and corresponds to \(\nu=0.15\) THz. The obtained values agree well with the key frequencies of the emitter sample (2.8 and 0.12 THz and 9.3 GHz [27]). It should be noted that a component associated with the intrinsic generation of THz radiation due to the passage of longitudinal current along edge channels may be present in the detector response to external THz irradiation. The energy variation in formula (4) is then defined by the load resistance in the quantum interference region. In other words, electrical characteristics in the quantum interference region govern the frequency of generation, which may be estimated by recording the electrically detected electron paramagnetic resonance (EDEPR) spectrum in measurements of magnetic-field dependences of the longitudinal voltage [28]. When nanoampere-range source-drain current flows through the sample with the Hall geometry, microwave generation is observed in the edge channel if embedded microcavities are present in it [28]. In this context, the EDEPR spectrum of point centers localized within edge channels is obtained by scanning over the magnetic field.
Figure 4: Dependence \(\Delta U_{xx}(t)\) for the SiC-based detector at \(s=30\) mm. The step feature at 220 nV corresponds to the detection of a frequency of 0.15 THz. The vertical line marks the initial moment of generation when emitter current \(I_{ds(emi)}=30\) mA is switched on. Temperature \(T=300\) K, \(I_{ds}=1.5\) \(\mu\)A.
The results of measurements are presented in Fig. 5, where the room-temperature dependence of voltage \(U_{xx}\) at the studied detector sample on magnetic field \(B\) applied perpendicularly to the sample plane with \(I_{ds}=10\) nA is shown. It follows from the analysis of the EDEPR spectrum in Fig. 5 that it contains a fragment of the magnetic-field dependence corresponding to the EPR spectrum of a silicon vacancy recorded at a frequency of 9.4 GHz (Fig. 6) [29]. This verifies the presence of microcavities supporting the generation and detection of radiation with centimeter wavelengths. 
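The arithmetic behind these estimates is compact enough to verify directly; the following snippet (an illustrative sketch added here, not part of the original work) evaluates Eqs. (3) and (5) for the quoted oscillation periods, voltage steps, and contact separations:

```python
import numpy as np

h, e = 6.62607015e-34, 1.602176634e-19   # Planck constant, elementary charge
Phi0 = h / (2 * e)                       # magnetic flux quantum, Eq. (3)
G0 = 2 * e**2 / h                        # conductance quantum

# Eq. (3): radii of quantum interference regions from the oscillation periods
for dB_Oe in (13.0, 300.0):
    dB = dB_Oe * 1e-4                    # 1 Oe corresponds to 1e-4 T
    R = np.sqrt(Phi0 / (np.pi * dB))
    print(f"dB = {dB_Oe:5.0f} Oe -> R = {R * 1e6:.3f} um")   # 0.712, 0.148 um

# Eq. (5): nu = N * G0 * dU / e with N = L0 / L_i regions in parallel
cases = [  # (L0 [um], L_i [um], dU [nV]) for the features discussed above
    (4200.0, 0.296, 400.0),              # ds contacts, L3 -> ~2.745 THz
    (4200.0, 268.0, 1200.0),             # ds contacts, L1 -> ~9.096 GHz
    (2000.0, 1.424, 220.0),              # xx contacts, L2 -> ~0.15 THz
]
for L0, Li, dU in cases:
    nu = (L0 / Li) * G0 * dU * 1e-9 / e
    print(f"L_i = {Li:7.3f} um, dU = {dU:6.1f} nV -> nu = {nu:.3e} Hz")
```

The printed values reproduce the radii and frequencies quoted in the text.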
In the studied sample, the silicon substrate extending throughout its length apparently acts as a microcavity with geometric length \(l_{0}\): if one assumes the refractive index of silicon to be equal to 3.42 (NSM database [30]), the condition \(l_{0}=c/2\nu n\) for frequency \(\nu=9.3\) GHz yields \(l_{0}=4.74\) mm, which agrees closely with the value of sample length \(l+2b\) determined with account for the size of contact pads (see Fig. 1). If the formation of several types of microcavities is feasible and a number of quantum interference regions of different size for radiation generation are present, multifrequency EDEPR may be implemented instead of the single-frequency variant. The magnetic-field dependence recorded in such measurements reproduces the variety of generated frequencies in quantum interference regions. Specifically, the low-field part of the EDEPR spectrum in Fig. 5, \(g=1500\), corresponds to the generation of radiation with a frequency of 3.4 THz, which was found in the electroluminescence spectrum of the studied sample [16]. Owing to the size difference between the corresponding microcavities, 17.3 \(\mu\)m (see Fig. 9 in [16]), and the area occupied by a single carrier in the edge channel, 134 \(\mu\)m\(\times 1.54\) nm [16], the EDEPR spectrum is split into seven components. The excited states of complexes of silicon vacancies interacting with single carriers are revealed clearly as lines of different polarity in strong and weak magnetic fields. This is indicative of their strong spin polarization. It should be noted that the EDEPR signal of multicomponent vacancy centers involved in the exchange interaction with carriers is detected reliably under the condition that the effective mass of a carrier in the edge channel is small: \[\hbar\omega_{c}=2\pi\nu\hbar=\hbar\frac{e\Delta B}{m^{*}}\,, \tag{6}\] where \(m^{*}\) is the effective mass of a carrier in the edge channel, \(\nu\) is the EDEPR spectrum recording frequency, \(\Delta B\) is the FWHM of the EDEPR spectrum, and \(e\) is the charge of an electron.
Figure 5: EDEPR spectrum of the detector sample recorded by measuring longitudinal voltage \(U_{xx}\). The feature at 162.5 mT corresponds to a frequency of 3.4 THz. Temperature \(T=300\) K, \(I_{ds}=10\) nA.
The effective mass estimated using the magnetic-field dependence in Fig. 5 at \(\Delta B=6.5\) mT and \(\nu=3.4\) THz is \(m^{*}=5\cdot 10^{-35}\) kg, which agrees with the measurement data on DHVA oscillations [25]. Thus, the observation of EDEPR is actually feasible at a low value of the effective carrier mass, which corresponds to the transport conditions in edge channels and quasi-one-dimensional structures. 
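Similarly, the effective mass quoted above follows directly from Eq. (6); a small illustrative check (not from the original paper):

```python
import numpy as np

e = 1.602176634e-19   # elementary charge [C]
dB = 6.5e-3           # FWHM of the EDEPR spectrum [T]
nu = 3.4e12           # EDEPR recording frequency [Hz]

# Eq. (6): hbar * 2*pi*nu = hbar * e * dB / m*  =>  m* = e * dB / (2*pi*nu)
m_eff = e * dB / (2 * np.pi * nu)
print(f"m* = {m_eff:.2e} kg")   # ~5e-35 kg, as quoted in the text
```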
Within the proposed model, THz radiation induces current generation in an edge channel and, consequently, alters the kinetic dependences of longitudinal voltage, which are governed by the geometric parameters of the studied nanostructures.
Figure 6: \(a\) — EDEPR spectrum of a silicon vacancy in the detector sample recorded by measuring longitudinal voltage \(U_{xx}\) without an external cavity, source, and receiver of microwave radiation; \(T=300\) K, \(I_{ds}=10\) nA. \(b\) — EPR spectrum (\(X\) band) of a silicon vacancy in \(6H\)-SiC (according to [29]).
The technique of EDEPR detection via the measurement of magnetic-field dependences of longitudinal voltage revealed the generation of intrinsic THz radiation in silicon carbide nanostructures with a longitudinal source-drain current flowing through them. It was demonstrated that microcavities embedded into the edge channels of the nanostructure support the generation and detection of THz radiation. EDEPR spectra may be measured reliably at low values of the effective carrier mass in the edge channel of the examined structure. ## Funding This work was supported financially by the Russian Science Foundation (grant No. 20-12-00193). ## Acknowledgments The synthesis of a SiC layer on Si was performed using the equipment of the "Physics, Chemistry, and Mechanics of Chips and Thin Films" unique scientific unit at the Institute of Problems of Mechanical Engineering, Russian Academy of Sciences (St. Petersburg). ## Conflict of interest The authors declare that they have no conflict of interest.
2309.06257
The problem of dust attenuation in photometric decomposition of edge-on galaxies and possible solutions
The presence of dust in spiral galaxies affects the ability of photometric decompositions to retrieve the parameters of their main structural components. For galaxies in an edge-on orientation, the optical depth integrated over the line-of-sight is significantly higher than for those with intermediate or face-on inclinations, so it is only natural to expect that for edge-on galaxies, dust attenuation should severely influence measured structural parameters. In this paper, we use radiative transfer simulations to generate a set of synthetic images of edge-on galaxies which are then analysed via decomposition. Our results demonstrate that for edge-on galaxies, the observed systematic errors of the fit parameters are significantly higher than for moderately inclined galaxies. Even for models with a relatively low dust content, all structural parameters suffer offsets that are far from negligible. In our search for ways to reduce the impact of dust on retrieved structural parameters, we test several approaches, including various masking methods and an analytical model that incorporates dust absorption. We show that using such techniques greatly improves the reliability of decompositions for edge-on galaxies.
Sergey Savchenko, Denis Poliakov, Aleksandr Mosenkov, Anton Smirnov, Alexander Marchuk, Vladimir Il'in, George Gontcharov, Jonah Seguine, Maarten Baes
2023-09-12T14:17:21Z
http://arxiv.org/abs/2309.06257v1
# The problem of dust attenuation in photometric decomposition of edge-on galaxies and possible solutions ###### Abstract The presence of dust in spiral galaxies affects the ability of photometric decompositions to retrieve the parameters of their main structural components. For galaxies in an edge-on orientation, the optical depth integrated over the line-of-sight is significantly higher than for those with intermediate or face-on inclinations, so it is only natural to expect that for edge-on galaxies, dust attenuation should severely influence measured structural parameters. In this paper, we use radiative transfer simulations to generate a set of synthetic images of edge-on galaxies which are then analysed via decomposition. Our results demonstrate that for edge-on galaxies, the observed systematic errors of the fit parameters are significantly higher than for moderately inclined galaxies. Even for models with a relatively low dust content, all structural parameters suffer offsets that are far from negligible. In our search for ways to reduce the impact of dust on retrieved structural parameters, we test several approaches, including various masking methods and an analytical model that incorporates dust absorption. We show that using such techniques greatly improves the reliability of decompositions for edge-on galaxies. keywords: Galaxy: structure - fundamental parameters - formation - disc - bulge ## 1 Introduction Measuring the physical properties of galaxies is one of the cornerstones of extragalactic astrophysics, because all theories of galaxy formation and evolution should be supported by observational data. Disc galaxies which are visible in an edge-on orientation (i.e. inclined at \(i\approx 90^{\circ}\)) are of special interest in this regard because they are the only targets that facilitate a direct study of the vertical structure of disc galaxies. For example, the vertical distributions of stars, gas, and dust, as well as the possible presence of different sub-components (such as thin and thick discs), whose properties are often described via various galaxy scaling relations (such as the dependence of the disc flattening on the relative mass of a spherical component, including a dark matter halo), can only be explored in edge-on galaxies (see e.g. Kylafis & Bahcall, 1987; Xilouris et al., 1999; Mosenkov et al., 2010; Bizyaev et al., 2014; Comeron et al., 2018; Mosenkov et al., 2022). This utility of edge-on galaxies is easily recognized due to the existence of special catalogues which were created specifically for studying the three-dimensional structure of disc galaxies. For example, the RFGC catalogue (Karachentsev et al., 1999) contains 4236 thin edge-on spiral galaxies over the whole sky. The EGIS catalogue (Bizyaev et al., 2014) provides structural parameters of stellar discs (the disc scale length and scale height, as well as the disc central surface brightness) for almost 6000 galaxies using the Sloan Digital Sky Survey (SDSS, York et al., 2000) observations in several optical wavebands. The EGIPS catalogue (Makarov et al., 2022) contains 16551 edge-on galaxies from the Pan-STARRS survey (Chambers et al., 2016; Flewelling et al., 2020). The most widely used approach to acquire the structural parameters of galaxies is performing a photometric decomposition of their images. 
The main idea behind this process is to adopt an analytical model to describe the observed surface brightness distribution in a galaxy image and find the optimal parameters for such a model that yield the fewest discrepancies with the real image. There exist a number of software packages (see Peng et al., 2002; Vika et al., 2013; De Geyter et al., 2013; Erwin, 2015) which were specifically designed to perform photometric decompositions of galaxies (e.g. Gadotti, 2009; Lackner & Gunn, 2012; Bottrell et al., 2019, and many others). For example, in almost two thousand refereed publications to date, the GALFIT code has been used to retrieve the structural parameters of galaxies with various morphologies, at different wavelengths, and in a wide range of redshifts. Although at first glance the main idea behind the decomposition process looks rather straightforward, there are various obstacles that must be overcome on the way to a solid and robust estimation of galaxy parameters. For example, even the model selection can be a problem, especially when working with a large sample of objects (Lingard et al., 2020). Another complicating factor is image smearing caused by a point spread function (PSF) due to atmospheric turbulence (seeing) and the physical diffraction limit. The general rule is that the smaller the galaxy component, the larger the influence of the PSF on the retrieved structural parameters (Trujillo et al., 2001, 2009), and this is especially true for edge-on galaxies (Sandin, 2014, 2015). In this article, we focus on another important issue for galaxy photometric decompositions that manifests particularly strongly for edge-on galaxies: dust attenuation. The dust distributed in a galaxy absorbs, scatters, and re-emits its stellar light, so the observed surface brightness distribution, coupled with a mass-to-stellar luminosity ratio that varies with both radius and wavelength, does not reflect the actual mass surface density distribution over the galaxy body. This suggests that the measured structural parameters can be affected in a manner that is challenging to predict. One possible solution to this problem is to perform radiative transfer modelling that includes the interaction between the photons and dust. The complexity of such approaches has grown over time. For example, Disney et al. (1989) provided an analysis of several simple geometric models including a "slab" model where the galaxy is considered to be a flat disc with a uniform mixture of stars, dust, and gas; a "screen" model with a stellar disc covered by a dust absorbing screen lying above the stars; and a "sandwich" model where a thin uniform dust disc is located inside of a relatively thicker stellar disc. Byun et al. (1994) performed numerical radiative transfer modelling of a three-dimensional galaxy model with various dust contents visible at different viewing angles to study how dust attenuation changes the main observables of the galaxy (ellipticity, surface brightness, exponential scale, etc.). The dust's impact on attenuation in a galaxy as a function of wavelength was studied by Ferrara et al. (1999) for a set of different galaxy models (mimicking spiral and elliptical galaxies) and by Tuffs et al. (2004) who considered exponential discs and de Vaucouleurs bulges as separate components. A combined bulge+disc model was studied in Pierini et al. (2004). 
Nowadays, there are various tools that allow one to carry out radiative transfer modelling for complex multicomponent galaxies with dust. For example, Popescu et al. (2000) describe such an approach and its application to an edge-on galaxy NGC 891. Other examples of radiative transfer programs include the TRADING code (Bianchi, 2008), the DART-RAY code (Natale et al., 2014, 2017), and the FITSKIRT software (De Geyter et al., 2013). For example, using FITSKIRT, it is possible to fit a galaxy image with a predefined model consisting of multiple stellar and dust components. A significant drawback of such an approach is its extreme computational cost (Mosenkov et al., 2018), and, thus, it is only useful when trying to model individual edge-on galaxies (Xilouris et al., 1997, 1998; Popescu et al., 2000; Baes et al., 2010; Bianchi & Xilouris, 2011; De Looze et al., 2012; Schechtman-Rook et al., 2012; De Geyter et al., 2013; Mosenkov et al., 2016) or small samples of galaxies (Xilouris et al., 1999; Bianchi, 2007; De Geyter et al., 2014; Mosenkov et al., 2018; Natale et al., 2022). Another approach frequently used to investigate the effect opacity has on measured structural parameters in disc galaxies is to use radiative transfer simulations to create a mock galaxy image. This involves using a given, a-priori known model and then decomposing the image. In this case, one can explore how exactly the presence of dust affects the galactic parameters measured by using a regular decomposition technique, that is, without including the radiative transfer or any other dust compensation method. This was done by Gadotti et al. (2010), who investigated the behaviour of a couple of models of disc galaxies for a range of dust optical depth values and for inclination angles ranging from 15 to 60 degrees. A similar approach was adopted by Pastrav et al. (2013a) and Pastrav et al. (2013b), where a set of corrections for the measured decomposition parameters was computed. These corrections were applied in Pastrav (2020) to find the intrinsic (i.e. dust-corrected) parameters for several real galaxies. In this paper, we concentrate on the effects of dust attenuation on the parameters of edge-on galaxies measured via a standard decomposition analysis. Building on the work done by the aforementioned studies, we opt to go further in this analysis and add some new features, such as:
* using three-dimensional decomposition models with line-of-sight integration instead of traditional two-dimensional fitting, which allows us to treat the structure of edge-on galaxies to the fullest;
* simulating real observations by accounting for instrument PSFs, transmission curves, and noise parameters;
* running simulations for a set of models to explore how midplane dust lanes impact galaxy structural parameters.
The other important goal of this study is to investigate various techniques to compensate for the presence of dust during the decomposition process (aside from the time-consuming radiative transfer approach). Is it possible to modify the decomposition procedure to make the derived parameters more reliable without a significant increase in computational time? If so, can we apply this approach to a large sample of edge-on galaxies? The answers to these questions are of high importance for the ongoing work with the EGIPS catalogue (Makarov et al., 2022) where we aim to perform a mass decomposition of edge-on galaxies with three-dimensional models using "dust contaminated" optical observations. 
The rest of the article is organized as follows. In Section 2, we describe our algorithms for synthetic image creation and decomposition with and without correcting for dust. In Section 3, we demonstrate the results of our simulations, including the dust impact on the derived decomposition parameters and the results of applying different techniques to compensate for the presence of the dust. In Section 4, we employ the decomposition methods for retrieving the structural parameters for a couple of real galaxies, taking a dust component into account. We state our conclusions in Section 5. Appendix A contains some technical details about the training of the neural network used throughout the paper. ## 2 The algorithm In this section, we describe in detail our algorithms to investigate the dust impact on the decomposition results and propose several ways to mitigate these dust effects. The overall pipeline looks as follows. For a set of input parameters we create a three-dimensional model of a galaxy, and transform it into a FITS-file by projecting it on an image plane. To mimic real observations, we include smearing by a PSF and add read and photon noise. We then run a standard decomposition technique to obtain the observed structural parameters of a galaxy in order to compare them with the input ones. After that, we run a series of fits using various methods, with and without accounting for the presence of dust, to see how these can improve the recovered parameters. ### Model functions and their parameters In this work we consider a three-component model of a galaxy with a stellar disc, a bulge, and a dust disc. As a disc model, we adopt a three-dimensional isothermal disc that follows an exponential luminosity density profile in the radial direction and a \(\mathrm{sech}^{2}\) law perpendicular to the galaxy plane (van der Kruit & Searle, 1981): \[\rho_{\mathrm{disc}}(r,z)=\rho_{0,\mathrm{disc}}\exp\left(-\frac{r}{h_{\mathrm{ disc}}}\right)\mathrm{sech}^{2}\left(\frac{z}{z_{\mathrm{disc}}}\right). \tag{1}\] The disc model has two geometric parameters: its radial exponential scale \(h_{\mathrm{disc}}\) and the vertical scale \(z_{\mathrm{disc}}\). The third parameter is the central luminosity density \(\rho_{0,\mathrm{disc}}\), which governs the luminosity of the disc, but for our purposes it is more convenient to work directly with the disc's total luminosity: \[L_{\mathrm{disc}}=4\pi\rho_{0,\mathrm{disc}}h_{\mathrm{disc}}^{2}z_{\mathrm{ disc}}. \tag{2}\] Even though real galaxies can demonstrate more complex disc structures (such as the existence of two embedded stellar discs), we do not include such complexity in our simulations, because it can only be studied for the closest galaxies with better spatial resolution, and most decomposition studies utilise a single disc model. To model a central component, we use the well-known Sersic function (Sersic, 1963; Sersic, 1968) that is often used to describe galactic bulges, and which has the following projected surface brightness profile: \[I(r)=I_{0}\exp\left[-\nu_{n}\left(\frac{r}{r_{e}}\right)^{\frac{1}{n}}\right]. \tag{3}\] The corresponding three-dimensional density distribution that is required for our work can be found through an Abel inversion: \[\rho_{\mathrm{bulge}}(r)=-\frac{1}{\pi}\int_{r}^{+\infty}\frac{dI}{dR}\frac{dR }{\sqrt{R^{2}-r^{2}}}. 
\tag{4}\] In the literature there are various approaches to solving this integral analytically (with special functions) or numerically (Prugniel & Simien, 1997; Lima Neto et al., 1999; Baes & Gentile, 2011; Baes & van Hese, 2011; Vitral & Mamon, 2020). The bulge has three main geometric parameters (the value \(\nu_{n}\) in (3) is a normalisation constant): its effective radius \(r_{e}\), a Sersic parameter \(n\), and a bulge oblateness \(q\). A fourth parameter, the central luminosity density \(\rho_{0,\mathrm{bulge}}\), governs the overall bulge brightness, but as before, it is more convenient to use the total luminosity as a free parameter: \[L_{\mathrm{bulge}}=\rho_{0,\mathrm{bulge}}r_{e}^{3} \tag{5}\] We describe the dust component with the same isothermal disc model as the stellar disc (1), and it has the following set of geometric parameters: \(h_{\mathrm{dust}}\) and \(z_{\mathrm{dust}}\). Following the previous works of Gadotti et al. (2010); Pastrav et al. (2013b), we parameterize the dust content not by its central density \(\rho_{0,\mathrm{dust}}\), but by using the central face-on optical depth \(\tau\), that is, an integral characteristic of the galaxy opacity which can be computed by a line-of-sight integral drawn through the center of a face-on oriented model (1): \[\tau=\int_{-\infty}^{\infty}\kappa\rho_{\mathrm{dust}}(0,z)\;dz=2\kappa z_{ \mathrm{dust}}\cdot\rho_{0,\mathrm{dust}}, \tag{6}\] where \(\kappa\) is the extinction coefficient that depends on the dust mixture and the observed wavelength. Throughout this paper, we measure \(\tau\) values in the \(V\) band to be consistent with Gadotti et al. (2010), although different normalizations are used in the literature (for example, the \(B\) band in Pastrav et al., 2013a,b). The galaxy as a whole has the following free parameters (apart from parameters specific for separate components): the total bolometric luminosity \(L_{\mathrm{total}}\), the bulge-to-total luminosity ratio \(B/T\) (these two parameters define the actual values of \(L_{\mathrm{disc}}\) and \(L_{\mathrm{bulge}}\)), the luminosity distance \(D_{\mathrm{L}}\), and the inclination \(i\). Theoretically, a change in any parameter in a galaxy model can lead to changes in random and systematic errors in any other parameters, but it is difficult to make a set of models that covers this parameter space well enough to study all possible combinations of all parameters. To achieve the goal of this study, we settle upon the following strategy to create the model grid. We start with a single model which has the parameters of a typical disc galaxy (see, for example, Gadotti, 2009), listed in Tab. 1. Apart from the described set of geometric parameters above, this list also contains ages of stellar populations for the bulge and the disc such that the disc contains a younger stellar population than the bulge (4 Gyr versus 11 Gyr for the bulge), a galactic average metallicity, and a dust mixture that has mean properties found in Zubko et al. (2004). Hereafter, we will call this model a _basic model_. Then, we vary some parameters of this model leaving others fixed to see how these variations affect the decomposition results.
\begin{table} \begin{tabular}{l l} \hline Parameter & Value \\ \hline Bulge effective radius, \(r_{e}\) & 900 pc \\ Bulge Sérsic index, \(n\) & 4 \\ Bulge oblateness, \(q\) & 0.0 \\ Bulge stellar population age & 11 Gyr \\ Disc radial scalelength, \(h_{\mathrm{disc}}\) & 4000 pc \\ Disc vertical scalelength, \(z_{\mathrm{disc}}\) & 400 pc \\ Disc stellar population age & 4 Gyr \\ Dust radial scalelength, \(h_{\mathrm{dust}}\) & 4000 pc \\ Dust vertical scalelength, \(z_{\mathrm{dust}}\) & 150 pc \\ Dust mixture & Zubko et al. (2004) \\ Galaxy bolometric luminosity, \(L_{\mathrm{total}}\) & \(10^{11}L_{\odot}\) \\ Galaxy average metallicity & 0.02 \\ Bulge-to-total luminosity ratio, \(B/T\) & 0.2 \\ \hline \end{tabular} \end{table} Table 1: The basic model parameters
### Synthetic images Creating a galaxy model with a dust component requires accurate Monte-Carlo radiative transfer simulations. For this purpose, we use the state-of-the-art radiative transfer code SKIRT (Baes et al., 2011; Camps & Baes, 2015, 2020). 
SKIRT allows one to generate panchromatic simulations of a galaxy with provided parameters for the specified structural components (in our case, these are a bulge, disc, and dust). The output data cube (a collection of two-dimensional images) contains different layers with snapshots of the galaxy model for the chosen set of wavelengths. Next, each synthesized galaxy image from the data cube can be transformed into a new mock image to simulate observational effects which are always present in real observations. Here, we take into account specific instrument transmission curves by multiplying all \(N\) individual layers of the model data cube \(I_{j}(x,y)\) by the instrument response for the corresponding wavelength \(f_{j}\). Then we coadd the layers into a single image \(I(x,y)\), making sure to take into account the wavelength width of each layer \(W_{j}\): \[I(x,y)=\sum_{j=1}^{N}I_{j}(x,y)f_{j}W_{j}. \tag{7}\] After that, it is necessary to convolve the obtained image with a PSF to simulate the effects of atmospheric and telescopic blurring. Finally, we add Gaussian and Poisson noise to the image. For this study we decided to use the SDSS \(r\) waveband as the instrument system because this survey is widely used as a data source for galaxy decompositions. Since this is an optical survey, dust attenuation can be high for edge-on galaxies. Therefore, we generate our mock galaxy images using the instrument and PSF parameters (filter transmission curve, full width at half maximum (FWHM) of PSF, gain, and readnoise values) specific for the SDSS \(r\)-band instrument. The average values of these characteristics (gain value equal to 4.75 electrons per count and dark variance equal to 1.32 electrons) are taken from the SDSS website1. An example of a model with a face-on optical depth \(\tau=1.0\), which demonstrates a prominent dust lane, is shown in the top panel of Fig. 1. Footnote 1: [https://dr12.sdss.org/datamodel/files/BOSS_PHOTOOBJ/frames/kHMC/smr/standard/kHMC/smr_to_m_m_0.1](https://dr12.sdss.org/datamodel/files/BOSS_PHOTOOBJ/frames/kHMC/smr/standard/kHMC/smr_to_m_m_0.1) ### Regular decomposition When a mock galaxy image is ready, we use the IMFIT code (Erwin, 2015) to perform our decomposition of the image. One of the standard functions in IMFIT is a three-dimensional model of the disc which allows one to account for the projection effects and fit the galaxy inclination. Employing this function, we can reliably investigate the vertical structure of a highly inclined disc galaxy, whereas two-dimensional models of an edge-on disc work, strictly speaking, for perfect edge-on orientations only. The IMFIT package allows one to take the PSF into account during fitting. 
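A minimal sketch of this post-processing chain (an illustration with assumed array conventions; the function name is hypothetical, the gain and dark-variance values are the SDSS \(r\)-band numbers quoted above, and this is not the authors' actual pipeline) could look as follows:

```python
import numpy as np
from scipy.signal import fftconvolve

def make_mock_image(cube, f, W, psf, gain=4.75, dark_var=1.32, rng=None):
    """Collapse a SKIRT-like data cube into a mock SDSS r-band image.

    cube : (N, ny, nx) array of monochromatic layers
    f    : (N,) instrument response at each layer's wavelength
    W    : (N,) wavelength width of each layer
    psf  : 2-D kernel normalised to unit sum
    """
    rng = rng or np.random.default_rng()
    # Eq. (7): bandpass-weighted coadd of the data-cube layers
    img = np.tensordot(f * W, cube, axes=(0, 0))
    # Atmospheric and telescopic blurring
    img = fftconvolve(img, psf, mode="same")
    # Photon (Poisson) noise in electrons, plus Gaussian read/dark noise
    # (treating dark_var as the Gaussian variance in electrons is an assumption)
    electrons = rng.poisson(np.clip(img * gain, 0, None)).astype(float)
    electrons += rng.normal(0.0, np.sqrt(dark_var), size=img.shape)
    return electrons / gain   # back to counts (ADU)
```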
We provide the same PSF image that we used to blur the mock image in Sec. 2.2. The IMFIT list of models includes both a Sersic function and a three-dimensional isothermal disc, so the output results can be directly compared with the input parameters from the SKIRT model. The only necessary step is to convert the output geometrical parameters of IMFIT, which are given in pixels, back to parsecs using the model distance. We emphasize that the model of the disc component that is used for the decomposition has an actual three-dimensional volume brightness distribution. To produce a projected two-dimensional model image, IMFIT performs integration along the line-of-sight \(s\) of the volume luminosity density for each pixel \((x,y)\) of the image: \[I(x,y)=\int_{-\infty}^{+\infty}\rho(s)\;\mathrm{d}s. \tag{8}\] This approach requires considerably more computational time than directly computing the two-dimensional exponential surface brightness distribution. However, the advantage of this method is that it can give accurate results for models in an orientation _close_ to edge-on. The insufficiency of a simple two-dimensional model to describe an actual three-dimensional disc is clearly shown in Pastrav et al. (2013b), where their decomposition results diverge quickly near the edge-on orientation. In Gadotti et al. (2010), the highest inclination considered was \(60^{\circ}\), which allowed them to avoid this problem. ### Decomposition with a dust correction As we will see in Sec. 3, the effects of a dust component on the derived parameters of edge-on galaxies are enormous and in some cases make the fit results completely unreliable. To mitigate this problem, we try a number of modifications of the regular decomposition technique. In this study, we test two approaches: (i) We use different masks for the dust lane to exclude "dusty" pixels from our decomposition and (ii) we modify the decomposition model to account for the dust attenuation. Below we describe these two methods in detail. #### 2.4.1 Masking The vertical scale height of the dust component in galaxies is smaller than that of the stellar disc. As a result, dust attenuation in edge-on galaxies appears as a dark narrow lane along the mid-plane of the galactic disc, whereas below and above this lane the galaxy appears less obscured. During decomposition, the analytical model fits both attenuated and unattenuated regions of the galaxy, which results in systematic errors in the model's parameters. It therefore seems promising that, by masking out the attenuated dust lane in the galaxy image, the fitting procedure can create a better model which better restores the actual galaxy parameters by only using unmasked dust-free regions of the galaxy. The exact area to mask off is not easy to determine. A more extensive mask should better cover the regions of a galaxy affected by dust and should lead to less biased fits. On the other hand, a larger mask means less of the galaxy is actually used for fitting. In addition to that, the outer regions of galaxies are faint and have lower signal-to-noise ratios when compared to the inner galaxy regions. Thus, excessive masking is expected to make a fit less reliable. Moreover, the outermost regions of a galaxy can be dominated by other structural components, such as thick discs and halos. Therefore, masking the central galaxy region can switch the fitting, for example, from a thin disc to a thick disc and completely change the fitting results. 
To find a trade-off between our intention to better cover dust-attenuated regions and, at the same time, to use as much of the galaxy data as possible, we use several different masking strategies and compare their effects on the ability to recover the true parameters of galaxies. The simplest approach is to mask a narrow strip of a fixed height along the galaxy mid-plane. This mask has only one parameter, its height \(h_{\mathrm{mask}}\), which can be varied to govern the fraction of the galaxy area to be masked. In order to link the height of the mask to the internal model parameters, we decided to measure the height in units of the dust component vertical scale \(z_{\mathrm{dust}}\). Hereafter, we refer to this simple mask as the "flat" mask. An application of such a flat mask to a model image is demonstrated in the middle panel of Fig. 1 for \(h_{\mathrm{mask}}\) values of 2, 4, and 6. The appropriate size of the flat mask depends on the absorption strength. Even with the fixed values of \(z_{\mathrm{disc}}\) and \(z_{\mathrm{dust}}\), the area where the dust impact is significant depends on the value of \(\tau\). This is illustrated in Fig. 2 by a vertical slice made through the center of an image for a dust-free model and for a set of models with an increasing value of \(\tau\). It can be seen from the figure that, while for galaxies with a relatively small dust content a flat mask with \(h_{\mathrm{mask}}=1\) (i.e. height of a mask equal to \(z_{\mathrm{dust}}\)) may cover the affected areas of the image well enough, galaxies with prominent dust lanes require a mask several times wider. We also note that from this figure it becomes clear that even relatively transparent discs are strongly affected in an edge-on orientation: the central peak of the slice is completely obscured by the dust, and a darker depression is visible instead. The drawback of using the flat mask is that it covers the mid-plane of a galaxy evenly for all radial distances from the galaxy centre, whereas most of the attenuation happens in the central region of a galaxy and decreases towards the periphery. This means that a mask that has a larger height in the central region of a galaxy and becomes thinner toward the galaxy edges would more efficiently cover the dust-affected regions of the galaxy. The exact shape of such a mask is not easy to find, as it depends on the complex interplay between the parameters of the stellar and dust components. Luckily, when we work with simulations, it is possible to determine the optimal parameters of such masks numerically. By comparing a mock image of a modeled dusty galaxy to an image of a model with the same stellar components but without dust, we can find regions of the dusty model that are most affected by the dust attenuation to mask them out. This leads to another masking strategy (we will call it a "relative" mask): a mask that covers the regions where the relative change between the models with and without dust is higher than a given threshold. The relative mask likewise has one free parameter to vary, the relative change between the two models \(f_{\rm mask}\) above which we start our masking (in other words, a relative mask with \(f_{\rm mask}=0.5\) covers regions where the dust attenuation is higher than \(50\%\)). The areas that are covered by a relative mask with \(f_{\rm mask}\) values of 0.1, 0.25, and 0.5 are shown in the bottom panel of Fig. 1. 
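For concreteness, the two masking strategies can be sketched as follows (a simplified illustration with assumed pixel-grid conventions; the function names are hypothetical and this is not the actual pipeline code):

```python
import numpy as np

def flat_mask(shape, z0_pix, h_mask, z_dust_pix):
    """Flat mask: a horizontal strip of half-height h_mask * z_dust
    (in pixels) around the galaxy mid-plane row z0_pix."""
    ny, nx = shape
    z = np.abs(np.arange(ny)[:, None] - z0_pix)   # distance from mid-plane
    return np.broadcast_to(z <= h_mask * z_dust_pix, (ny, nx))

def relative_mask(img_dusty, img_dustfree, f_mask):
    """Relative mask: pixels where the attenuation relative to the
    dust-free model exceeds the threshold f_mask (e.g. 0.5 for >50%)."""
    attenuation = 1.0 - img_dusty / np.maximum(img_dustfree, 1e-12)
    return attenuation > f_mask
```

The relative mask requires the dust-free counterpart image, which is exactly why, as discussed below, it is only directly available in controlled numerical experiments.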
It can be seen that the relative mask covers the dust-affected regions of a galaxy more effectively: it is wider near the galaxy centre and becomes thinner outwards. Another illustration of a comparison between these two approaches for creating dust masks is shown in Fig. 3. This figure shows the fraction of the total galaxy area (defined here as an image region that contains \(99\%\) of the total model flux) covered by different masks as a function of \(\tau\). Since flat masks do not depend on \(\tau\), instead only depending upon \(z_{\rm dust}\), their covered area appears as a flat strip. Relative masks, in contrast, depend on \(\tau\) and grow as the absorption increases. From this figure it is clear that the relative masking method is a more efficient way to cover the dust lane in terms of the fraction of the galaxy image that is left for the upcoming fitting. As will be shown later in Sec. 3, this leads to considerably better fitting results in terms of the precision of the recovered galaxy parameters. Although the relative masking approach appears to be more promising, such a mask can be created easily only in the controlled conditions of a numerical experiment. In practice, one cannot readily find the relative fraction of light absorbed by dust for every pixel of a galaxy image. To make this approach applicable to conditions where there is no such information available (i.e. in real observations), we decided to train a neural network to produce a relative dust mask based on optical images of a galaxy.
Figure 1: Top panel: an \(r\)-band image of a galaxy model with a face-on optical depth \(\tau=1\). Middle panel: the same model enlarged with lines showing areas that would be covered by a flat mask with various \(h_{\rm mask}\) values (see text): 2, 4, and 6 for solid, dot-dashed, and dotted lines respectively. Bottom panel: the same but with lines showing areas that would be covered by a relative mask for various \(f_{\rm mask}\) values: 0.1, 0.25, 0.5 for dotted, dot-dashed and solid lines.
Figure 2: The impact of dust on the light distribution in a galaxy mock image illustrated by a vertical slice made through the image centre. Solid line: a dust-free (\(\tau=0\)) model. The dot-dashed lines show the set of models with increasing values of \(\tau\). The darker shaded region shows a distance of \(z_{\rm dust}\) from the model mid-plane, a lighter one – a distance of \(4z_{\rm dust}\).
Figure 3: Fraction of the galaxy covered by masks of different types as a function of the dust disc optical depth \(\tau\). Horizontal dotted lines show three flat masks, and the three curved lines show the behaviour of relative masks.
#### 2.4.2 Neural networks for mask creation A relative mask represents a binary image where pixels with a value of 1 define a masked region in the corresponding galaxy image and pixels with a value of 0 define the unmasked region. Generating such masks from galaxy images in several optical bands is a semantic segmentation problem. To tackle it, we employ a U-Net 
These network architectures are employed in medicine (Iglovikov et al., 2017; Ching et al., 2017; Ing et al., 2018; Andersson et al., 2019; Nazeran et al., 2021), biology (Kandel et al., 2020), satellite image analyses (Iglovikov et al., 2017), and astronomy (Aragon-Calvo, 2019; Bekki, 2021; Bianco et al., 2021; Wells & Norman, 2021; Vojtekova et al., 2021; Rozanski et al., 2022; Zavagno et al., 2023).

As the name suggests, the U-Net network model consists of two opposite paths. The down-sampling part, often called the encoder, is used to capture features from an input image. The encoder consists of several repeated blocks of convolution and max-pooling operations, as in a typical convolutional neural network. The up-sampling part, often called the decoder, is used to obtain precise localisations. The decoder also consists of repeated blocks of up-sampling (increasing the resolution) of the feature map followed by convolution operations. Therefore, the spatial resolution of the tensor processed in the decoder increases. To obtain a localisation, the features from the encoder are concatenated with the up-sampled features from the decoder via skip connections.

Our neural network model is implemented in the TensorFlow 2.x framework (Abadi et al., 2015). The key difference between our solution and the original U-Net architecture is the encoder. As the encoder, we used the MobileNetV2 network model (Sandler et al., 2018), which is more lightweight than the original U-Net encoder but has demonstrated a similar performance in the Galactic cirrus segmentation problem (Smirnov et al., 2023). Fig. 4 displays the encoder-decoder architecture used.

During the training experiments, we obtained accurate neural networks for the different relative masks (ones with various \(f_{\text{mask}}\) values). We created separately trained networks for a set of \(f_{\text{mask}}\) values equal to 0.1, 0.3, 0.4, and 0.5. The data preparation for training the neural networks and the results of the training experiments are described in Appendix A. To find the best neural network for reproducing each relative mask, we use the IoU metric for the masked regions:

\[\text{IoU}\ =\frac{\text{TP}}{\text{TP}+\text{FP}+\text{FN}}\,, \tag{9}\]

where TP is the number of true positive pixels (the network correctly predicts a masked pixel), FP is the number of false positive pixels (the network predicts a masked pixel that actually belongs to the unmasked region), and FN is the number of false negative pixels (the network predicts an unmasked pixel that actually belongs to the masked region).

As demonstrated in Fig. 5, the relative mask generated by our network reproduces the original relative mask quite accurately. Note that for galaxies which are not viewed perfectly edge-on, where the dust lane is shifted with respect to the galaxy centre due to projection effects, the mask generated by the neural network is also shifted and bent accordingly (see the second row in Fig. 5). Quantitative results for the different networks and training methods are shown in Table 1.
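For concreteness, the IoU of Eq. (9) reduces, for binary masks, to the ratio of the intersection to the union of the predicted and true masked regions; a minimal sketch (the function name and conventions are ours):

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Eq. (9): TP / (TP + FP + FN), i.e. |pred & true| / |pred | true|
    for boolean mask arrays of equal shape."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:  # both masks empty: define IoU as perfect agreement
        return 1.0
    return np.logical_and(pred, true).sum() / union
```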
We summarise the results of our experiments as follows.

1. As one can see in Table 1, the best networks for all considered relative masks have a similar performance (\(0.838\leq\text{IoU}\leq 0.847\)).

2. Networks trained using the "fine-tuning" strategy demonstrate the best IoU for the generation of all relative masks except the relative mask with \(f_{\text{mask}}=0.4\), but, as one can see in Table 1, the advantage of these networks over the ones trained from scratch or using the transfer learning strategy is insignificant.

3. Our networks generate relative masks for a thousand galaxies in about 80 seconds when running predictions on an AMD Ryzen 9 3900X 12-Core CPU and in about 40 seconds when running on an NVIDIA GeForce RTX 3060 GPU.

#### 2.4.3 Model with a dust component

The second approach to accounting for the dust's impact during the decomposition process that we test in this work is modifying the fitting model such that it includes a dust component. The correct treatment of this problem requires heavy and time-consuming computations of both light absorption and scattering by dust grains. Moreover, such computations are often based on a Monte-Carlo approach, so they introduce some randomness into the model computations. This can impede the use of minimisation techniques based on gradient computations, which are often employed in decompositions.

One simplification that can be made is to neglect light scattering, so that the only cause of losing photons is absorption. While scattered light can be important in disc galaxies, especially for near face-on orientations (Byun et al., 1994; Baes & Dejonghe, 2001; Gadotti et al., 2010), simulations show that for near edge-on orientations the fraction of scattered photons in the observed flux declines (Pierini et al., 2004). If a photon is scattered vertically in the disc, there is a high probability that it leaves the galaxy and cannot be observed from an edge-on orientation. If a photon is scattered along the disc plane, where the optical depth is high, it will most likely experience another interaction with the dust and be either absorbed or scattered away from the disc plane.

Under these conditions, the model flux at a given pixel \((x,y)\) can be found as a line-of-sight integral which includes an optical depth term: for each point along the line of sight, we need to integrate the dust density between this point and the observer's position to compute the absorption for photons emitted at this point:

\[I(x,y)=\int_{-\infty}^{+\infty}\rho_{\text{stellar}}(s)\,e^{-\kappa\int_{-\infty}^{s}\rho_{\text{dust}}(s^{\prime})\,ds^{\prime}}\,ds, \tag{10}\]

where \(\rho_{\text{stellar}}(s)\) is the total luminosity density of the stellar model at a given point along the line of sight, \(\rho_{\text{dust}}(s)\) is the dust density, \(\kappa\) is the extinction coefficient, and the negative line-of-sight direction points towards the observer. In this case, the dust term accounts for the decrease in observed photons due to both absorption and scattering away from the plane of the disc.

Figure 4: The encoder-decoder architecture used in this study.

To implement this approach, we modified the IMFIT code by adding a new component function that represents a combined model with a disc, bulge, and dust. The necessity of computing a double integral for every pixel of an image is a drawback of this method, since it imposes a high computational cost on the decomposition. On the other hand, it is more physically realistic than, for instance, a disc with a negative flux that was used to model a dust lane in Savchenko et al. (2017) and Smirnov and Savchenko (2020).
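As an illustration of Eq. (10), the sketch below evaluates the attenuated line-of-sight integral numerically for toy double-exponential stellar and dust density laws; the density functions, grid, and parameter values are ours and merely stand in for the actual IMFIT component.

```python
import numpy as np

def los_intensity(x, y, rho_stellar, rho_dust, kappa, s_max=30.0, n=2000):
    """Numerical version of Eq. (10): integrate the stellar emissivity
    along the line of sight s, attenuating each point by the dust
    column between it and the observer (located at s -> -infinity)."""
    s = np.linspace(-s_max, s_max, n)
    ds = s[1] - s[0]
    emissivity = rho_stellar(x, y, s)
    # cumulative dust column from the observer's side up to each point
    tau = kappa * np.cumsum(rho_dust(x, y, s)) * ds
    return np.sum(emissivity * np.exp(-tau)) * ds

# toy edge-on discs: x along the major axis, y the height above the
# mid-plane, s along the line of sight (all in radial scale lengths)
rho_stellar = lambda x, y, s: np.exp(-np.hypot(x, s) - np.abs(y) / 0.25)
rho_dust = lambda x, y, s: np.exp(-np.hypot(x, s) / 1.2 - np.abs(y) / 0.05)

print(los_intensity(0.0, 0.0, rho_stellar, rho_dust, kappa=5.0))
```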
Another advantage of implementing the approach with direct integration in IMFIT is that there is no need for Monte-Carlo simulations in our computations. As a result, Poisson noise that depends on the number of photon packages is not introduced into the results. Therefore, the output model image is smooth and can be compared with the input galaxy image using standard minimisation techniques that involve computations of the numerical derivatives of \(\chi^{2}\) (such as the Levenberg-Marquardt algorithm). Images obtained via Monte-Carlo simulations are noisy, and different realizations of the same model can have slightly different \(\chi^{2}\) values, which impedes gradient computation in the fitting procedure, so some other minimisation technique is required (such as a genetic algorithm, which does not rely on gradient computations but takes much more computational time). Using an AMD Ryzen 7 3700X 8-Core Processor, it takes less than 10 seconds to obtain a model image of a dusty galaxy with a size of \(500\times 500\) pixels using our modified IMFIT code.

## 3 Results of simulations

In this section, we present the results of our simulations. To demonstrate how the dust distorts the measured values of the decomposition parameters, we make plots where the measured value of a parameter is plotted against the face-on optical depth \(\tau\). These plots contain both decomposition results without dust correction and those obtained using different strategies to account for the dust (see Sec. 2.4), so that their outcomes can be compared. Similar simulations by Gadotti et al. (2010) and Pastrav et al. (2013a,b) were performed for \(\tau\) values up to 8, but the recent study by Mosenkov et al. (2018) shows that the total face-on absorption in disc galaxies does not reach such high values; the mean measured value for their sample of seven edge-on galaxies was found to be around \(\tau=1\) in the \(V\) band, and the highest measured value was 2.01 for NGC 5907. On the other hand, galaxies with a higher amount of dust may exist, so we test our approaches for taking the dust into account under more extreme conditions with various possible \(\tau\) values. Therefore, we decided to increase the investigated range of \(\tau\) values well above the observed \(\tau=2.01\) and set \(\tau=5\) as an upper limit for our computations. Also, we explore how different galaxy models (for example, with different bulge-to-total luminosity ratios or different relative stellar-to-dust disc scale heights) are affected by a dust component with various optical depths.

Before proceeding to the results of the simulations, we need to mention that the obtained discrepancies between the true values of the parameters and the ones that we infer via decomposition actually have two origins. The first is obviously the influence of dust, whereas the second is intrinsic decomposition biases. As was previously found in Gadotti et al. (2010) and Pastrav et al. (2013a,b), even if dust is absent in a model, the measured decomposition parameters can differ from their true input values. In those studies, this difference was attributed to a mismatch between the models. While the radiative transfer model used for the image creation was three-dimensional, the decomposition model contained a simple two-dimensional exponential disc. This two-dimensional model cannot take the disc's vertical structure into account, which results in an increase of the disc's inclination. For edge-on galaxies, where the disc thickness plays a dominant role, the two-dimensional exponential model cannot be applied to infer the disc properties, since the projected light distribution in this case is not exponential, but can be described as a combination of a Bessel function and a hyperbolic secant function (van der Kruit and Searle, 1981).
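For reference, the classical result cited here states that for a locally isothermal disc with luminosity density \(\rho(R,z)\propto e^{-R/h}\,\mathrm{sech}^{2}(z/z_{0})\), the edge-on surface brightness distribution takes the form

\[\mu(R,z)=\mu(0,0)\,\frac{R}{h}\,K_{1}\!\left(\frac{R}{h}\right)\mathrm{sech}^{2}\!\left(\frac{z}{z_{0}}\right),\]

where \(K_{1}\) is the modified Bessel function of the second kind and \(R\) is the projected distance along the major axis; this is our transcription of the van der Kruit and Searle (1981) formula, quoted only to make the statement above concrete.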
In this paper, we also find that the results of the decomposition via dust-free models do not exactly match the values of the input parameters, especially for the bulge component. A possible explanation for this error is the presence of the noise we intentionally added to our mock images. This noise has two sources. The SKIRT code operates in terms of photon packages that are emitted inside the galaxy and then propagate through the galaxy body towards the observer. Since the number of such packages is finite, a model image is not smooth but demonstrates some photon noise. The second source of noise is the one we added to convert a SKIRT-simulated image into an "observed" image. In this work, we model images in the \(r\) band of the SDSS survey; therefore, we use the noise characteristics of this instrument (see Sec. 2.2).

Figure 5: An example of using the neural network model for three simulated edge-on galaxies. Left panels: images of simulated galaxies in the \(r\) band; middle panels: the corresponding relative masks (\(f_{\rm mask}=0.3\)) computed from the models; right panels: masks generated by the neural network model. Background stars on the model images are from random SDSS fields (see text).

To examine how these two noise sources impact our decomposition results, we run a number of fits with different noise parameters. First, to study how the number of photon packages in a SKIRT simulation affects the decomposition quality, we create the same model with \(10^{6}\), \(5\cdot 10^{6}\), and \(10^{7}\) photon packages and then decompose these models using the same technique. To inspect the impact of the included camera noise, we run these three simulations with different numbers of photon packages twice: with and without adding the camera noise. The results of these experiments are listed in Table 2, where the measured value of the Sérsic index is shown along with the true value of 4.0. One can see that the measured value is almost unaffected by the number of photon packages, while adding the camera noise changes the retrieved parameters significantly. The fact that even for a noise-free model we do not recover the correct value of the Sérsic index, but obtain a somewhat lower value, probably originates from an interference between the bulge and disc components that overlap in the image, which leads to a degeneracy of their parameters. From these simulations, we conclude that the added camera noise is the main source of error when decomposing a dust-free galaxy image, and that all follow-up experiments with dusty models also include this bias. Since our main goal is to simulate the decomposition errors for real observations (which always contain noise), we do not correct our results for this bias, but emphasise that the estimated decomposition errors can have various origins apart from the dust impact, which we discuss in detail in the next sections. All simulations in the subsequent sections are made for the SDSS \(r\) band.
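For reference, the conversion of a noise-free model frame into such an "observed" image can be sketched as below; the gain, read-noise, and sky values are illustrative placeholders rather than the actual SDSS \(r\)-band characteristics used in Sec. 2.2.

```python
import numpy as np

def add_camera_noise(image, gain=4.7, read_noise=5.0, sky=120.0, rng=None):
    """Add Poisson photon noise (source + sky) and Gaussian read noise
    to a noise-free model image given in counts; all parameter values
    here are placeholders, not the SDSS r-band characteristics."""
    rng = np.random.default_rng() if rng is None else rng
    electrons = np.clip((image + sky) * gain, 0, None)
    noisy = rng.poisson(electrons).astype(float)
    noisy += rng.normal(0.0, read_noise, size=image.shape)
    return noisy / gain - sky
```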
### Dust impact on the bulge parameters

The bulges of disc galaxies show a peak intensity at their centres and, generally, a rather swift decrease in surface brightness in their outer regions. As shown in Gadotti et al. (2010), even for galaxies with inclinations far from an edge-on orientation, the bulge parameters can be strongly affected by dust. To demonstrate how the dust component affects the fit parameters of the bulge in an edge-on orientation, we run our algorithm for three models with different bulge parameters: a big, a medium, and a small bulge (their parameters are listed in Table 3). We note that all models have the same Sérsic index, equal to 4 (i.e. they represent de Vaucouleurs bulges). Bulges with different Sérsic indices have different concentrations, and thus it is natural to expect that the presence of dust affects their measurements differently, but in this article we do not consider this problem. The disc parameters of these models are the same as in the basic model (Table 1).

Fig. 6 demonstrates how the measured values of the bulge's effective radius, effective surface brightness, Sérsic index, and bulge-to-total luminosity ratio depend on the amount of dust in the model. From this figure, we note that when a galaxy is viewed edge-on, the bulge parameters deteriorate very quickly as the optical depth of the dust component increases. Although we run our simulations for a range of \(\tau\) from 0 to 5.0, in all three models the bulge component begins to diverge long before reaching the maximal value of \(\tau\). For example, the _small bulge_ model collapses to the lower limit of the Sérsic index at \(\tau\approx 0.5\); after that point the gradient descent algorithm starts to converge to random values around the initial conditions. This indicates that the bulge is obscured to such a great degree that it does not affect the total value of the \(\chi^{2}\) statistic. The same happens for the _medium bulge_ model at \(\tau\approx 0.75\) and for the _big bulge_ model at \(\tau\approx 1.5\). Since all the bulge models have collapsed by \(\tau\approx 1.5\), we do not show the results of modelling higher absorptions in this figure.

For smaller values of \(\tau\), where we managed to obtain at least some measurable values for the bulge parameters, their behaviour is similar in each model. The fit effective radius of the bulge grows with optical depth, which can be easily understood from a geometrical point of view: the central peak of the bulge is obscured, but its outer regions (outside of the plane of the disc) are essentially unaffected. Thus, the radius within which half of the total observed bulge flux is confined must be larger than in the dust-free case. The observed Sérsic index decreases for the same reason: the obscured central peak leads to a flatter apparent surface brightness distribution, i.e. lower values of \(n\). For the _small bulge_ model, the measured value of the Sérsic index drops from the true value of 4.0 to a value of 2.0 already at \(\tau\approx 0.075\), a value of \(\tau\) at which the dust component only begins to appear as a darker lane in the galaxy image (see Fig. 2). Therefore, even if a visual inspection does not reveal a dust lane in a galaxy, it can still contain enough dust to render the parameters of a small bulge completely distorted. This in turn affects some standard galaxy scaling relations that contain the parameters of bulges (e.g. the Kormendy relation) or, for example, makes the "classical bulge - pseudobulge" dichotomy less pronounced.
The other two panels in Fig. 6 show the results for the effective surface brightness (in terms of its difference from the true value) and the observed bulge-to-total luminosity ratio. It is no surprise that for models with a higher dust content our measurements show progressively fainter bulges, although for the bulge-to-total luminosity ratio the changes are not as extreme as for the Sérsic index. This probably happens because the disc component is also obscured by the dust, which suppresses the shift of the total \(B/T\) ratio to some extent.

For comparative purposes, Fig. 6 also displays the results of our simulation for the _big bulge_ model inclined by 60 degrees. The result of this simulation closely follows the results reported in Gadotti et al. (2010), where this inclination was the highest among those they considered: the effective radius, Sérsic index, and bulge fraction all decrease with optical depth (see their figures 3 and 4), which validates our simulations. It is clear from this comparison that for the edge-on orientation the impact of the dust on the bulge parameters is disproportionately higher than for mildly inclined galaxies. We note that the difference between the edge-on and non-edge-on cases is not solely quantitative; a contrasting behaviour can be observed instead. For the edge-on models, the measured value of the bulge effective radius tends to be higher than the true one, whereas for a mildly inclined model it becomes lower. The same behaviour was also confirmed for models with exponential and de Vaucouleurs bulges (Pastrav et al., 2013b) for a wide range of inclinations, from face-on to almost edge-on, but for decomposition with infinitely thin discs.

\begin{table} \begin{tabular}{c c c c} \hline \hline Photon packages & \(10^{6}\) & \(5\cdot 10^{6}\) & \(10^{7}\) \\ \hline With camera noise & 3.46 & 3.44 & 3.44 \\ Without camera noise & 3.88 & 3.86 & 3.86 \\ \hline \hline \end{tabular} \end{table} Table 2: Measured bulge Sérsic index for a set of decompositions with different noise parameters.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Model & \(r_{e}\) [pc] & \(n\) & \(q\) & \(B/T\) \\ \hline Small & 700 & 4 & 0 & 0.1 \\ Medium & 900 & 4 & 0 & 0.2 \\ Big & 1500 & 4 & 0 & 0.3 \\ \hline \hline \end{tabular} \end{table} Table 3: Parameters of the three models corresponding to the different bulges.

Fig. 7 shows the decomposition results for the _medium bulge_ model obtained with the aid of the various techniques for taking the dust into account. There are four bulge parameters shown in this figure (the effective radius \(r_{e}\), the Sérsic index \(n\), the error of the effective surface brightness \(\Delta\mu_{e}=\mu_{e}^{\rm measured}-\mu_{e}^{\rm true}\), and the bulge-to-total ratio \(B/T\)) and three dust correction methods (the flat mask, the relative mask, and the dust model), for a total of twelve panels. The columns show the different methods; the rows, the different parameters. Each panel also contains the results of the uncorrected decomposition (the same as in Fig. 6) for comparison purposes, with the true value of each parameter marked as a horizontal dotted line. Below we describe all three approaches for the dust correction separately.

#### Flat mask

The results of the decomposition with flat masks are shown in the leftmost column of Fig. 7 for three mask sizes: \(h_{\rm mask}=1\), 2, and 4 (a yellow dash-dotted line, a green dashed line, and a red solid line, respectively), along with the results of the unmasked decomposition (the blue sparsely dashed line).
As we discussed earlier, without dust correction the bulge model collapses at \(\tau\approx 0.75\). The same happens for the decomposition with a relatively narrow flat mask: for a mask with \(h_{\rm mask}=1\), the model collapses at \(\tau\approx 1\), and for \(h_{\rm mask}=2\), at \(\tau=1.5\). Therefore, a narrow flat mask provides almost no improvement compared to the unmasked case: the decomposition results become highly distorted even for low values of \(\tau\). For a wider mask (\(h_{\rm mask}=4\)), the results are considerably better. Even though the parameters still deviate from their true values, this deviation is confined to a narrower range, and moreover, the model converges successfully for all considered \(\tau\) levels up to 5.0. There is still an apparent systematic shift in the parameters, in the same direction as for the decomposition with narrower masks, but for a moderate dust content it is not larger than the typical uncertainties of decomposition results.

#### Relative mask

The middle column in Fig. 7 shows the results of the decomposition with three relative masks: \(f_{\rm mask}=0.5\), \(f_{\rm mask}=0.25\), and \(f_{\rm mask}=0.1\) (a yellow dash-dotted line, a green dashed line, and a red solid line, respectively). The first fact that should be noted is that the results of the decomposition with the relative mask are better than both those without any masking and those with a flat mask. For a model with \(\tau=1\), the total area covered by the relative mask with \(f_{\rm mask}=0.5\) is almost the same as that of the flat mask with \(h_{\rm mask}=2\) (in fact, it is \(10\%\) smaller, see Fig. 3), so the green curve in the left column can be directly compared to the yellow curve in the middle one to assess the performance improvement of the relative mask. It is clear that, despite having virtually the same total covered area, the relative mask does a better job of reducing the dust impact on the decomposition results. The same is true for all tested values of \(\tau\): the relative mask allows us to recover the bulge parameters more reliably while covering a smaller fraction of the galaxy image.

#### Dust model

The results of applying our combined IMFIT model - which contains a bulge, disc, and dust component - to decompose a synthetic image are shown in the right column of Fig. 7. This decomposition does not employ any masking; taking the dust into account happens only through the inclusion of the dust term in the line-of-sight integration (Eq. 10) for the model.

Figure 6: The dependence of the measured bulge parameters' values on the face-on optical depth \(\tau\) for the effective radius, effective surface brightness, Sérsic index, and bulge-to-total luminosity ratio. Three bulge models are shown: a "small bulge" model (\(r_{e}=700\) pc, \(B/T=0.1\)), a "medium bulge" model (\(r_{e}=900\) pc, \(B/T=0.2\)), and a "big bulge" model (\(r_{e}=1500\) pc, \(B/T=0.3\)). The grey line shows the results for a "big bulge" model with an inclination of 60 degrees.

The figure suggests that the performance of this approach is comparable with the results of the widest flat (\(h_{\rm mask}=4\)) and relative (\(f_{\rm mask}=0.1\)) masks.
Our inability to recover the precise values of the structural parameters when utilizing this approach can be attributed to two facts: i) it does not include the light scattered by the dust, and ii) it reflects the general problems of a decomposition with a complex model, where a degeneracy between some parameters can occur and lead to a systematic shift in the parameters (Gadotti et al., 2010). Although the dust model technique does not show considerably better results compared with dust masking, whilst taking an order of magnitude longer in computation time, it still has the advantage that it does not lose information about the galaxy by masking out some galaxy regions. If a galaxy has a more complex structure, such as two (thin + thick) stellar discs, the thin disc can be completely covered by a dust mask, whereas the dust model approach can still recover the disc structure with some accuracy.

### Stellar discs

Since the dust disc is embedded inside the stellar disc, it is natural to expect that the impact of dust on the fit disc parameters should depend on the scaling relations between the structural parameters of the two discs. To investigate this, we ran simulations for a set of various dust disc parameters while keeping the stellar disc parameters fixed to those given in the _basic model_ (Table 1).

The first set of simulations concerns the relation between the radial scale lengths of the stellar and dust discs. Observations show that there is a correlation between the radial scale length of the emission profile at 3.4 \(\mu\)m (which traces the bulk of the stellar mass in a galaxy) and that at 100 \(\mu\)m (which is dominated by the emission of cold dust, Mosenkov et al., 2022; see also Mosenkov et al., 2019 for a similar correlation but for the effective radius). Moreover, Casasola et al. (2017) found for 18 face-on spiral galaxies the following ratio between the disc scale lengths of the dust and stellar surface density distributions: \(h_{\rm E,dust}/h_{\rm E,star,\,3.6\mu m}=1.80\pm 0.13\), which is generally consistent with the results from radiative transfer modelling (Mosenkov et al., in prep.). In this study, we decided to consider three possible situations: a dust disc which is slightly shorter than the stellar disc (\(h_{\rm dust}=3000\) pc - some galaxies harbor a shorter dust disc than their stellar disc, e.g. NGC 4013, Mosenkov et al., 2018), both discs having the same radial scale (\(h_{\rm dust}=4000\) pc), and a dust disc which is more extended than the stellar disc (\(h_{\rm dust}=5000\) pc), to see if these differences translate into various systematic shifts in the derived parameters.

Figure 7: Results of fitting with dust correction methods for the bulge parameters. Each panel shows the measured value as a function of \(\tau\). The top row shows results for the effective radius \(r_{e}\), the second row shows the Sérsic index \(n\), the third one shows the error of the effective surface brightness \(\Delta\mu_{e}\), and the bottom row shows the measured bulge-to-total luminosity ratio. Different dust correction techniques are shown in the columns: the leftmost column shows the decomposition with flat masks with \(h_{\rm mask}=1\) (dot-dashed line), \(h_{\rm mask}=2\) (green densely dashed line), and \(h_{\rm mask}=4\) (solid line); the middle column shows relative dust masks with \(f_{\rm mask}=0.5\) (dot-dashed line), \(f_{\rm mask}=0.25\) (green densely dashed line), and \(f_{\rm mask}=0.1\) (red line); the rightmost column shows the results of decomposition with a dust model (red line). For comparison, each panel shows the uncorrected values (blue sparsely dashed line) and the true values as grey dotted lines.
The results of these simulations are demonstrated in the top row of Fig. 8. The values of the measured parameters are shown as functions of the face-on dust optical depth \(\tau\). The left panel shows the attenuation of the observed central surface brightness, in terms of the decimal logarithm of the ratio of the observed value to the true value without dust attenuation. The middle panel shows the measured radial exponential scale of the stellar disc, and the right panel shows the measured value of its vertical scale. For comparison, in the panels with the surface brightness attenuation and the radial scale, we also show the results of simulations for the _basic model_ inclined at 60 degrees. Again, the results of the decomposition of the inclined disc generally follow the results presented in Gadotti et al. (2010), and for the edge-on orientation the impact of the dust on the disc parameters is higher.

From these plots it is clear that the dust component severely changes the observed parameters of the stellar disc. All three models show a similar behaviour for the observed central surface brightness: its value drops quickly to about 20% of the original level at \(\tau\approx 0.5{-}0.75\), and then, after a short pause, a shallower attenuation is observed. A possible explanation for this behaviour is that at these values of the face-on optical depth the edge-on disc absorbs almost all observed photons near the galaxy midplane, and the following attenuation occurs in regions far from the disc plane, where the dust content is lower. From this plot it can also be seen that the general behaviour of the attenuation in edge-on discs follows what is seen in inclined discs, although in edge-on discs it is considerably stronger.

A different picture is seen for the disc radial scale length (top-middle panel). While for a disc inclined at 60 degrees the fit value of this parameter gradually increases with \(\tau\) and reaches a relative change of 10% only at \(\tau\gtrsim 2\) (i.e. this disc would demonstrate a very prominent dust lane if observed in an edge-on orientation), for an edge-on disc a very sharp increase is observed. In this case, at \(\tau\approx 1-1.5\), the measured \(h_{\rm disc}\) reaches a peak that is approximately 50% higher than the true value (a similar \(\approx 50\)% increase of the observed values of \(h_{\rm disc}\) was reported for an (almost) edge-on orientation, but for decomposition with an infinitely thin disc). After that, the observed value of \(h_{\rm disc}\) quickly drops to \(\approx 5000\) pc (25% higher than the true value), and its behaviour for higher \(\tau\) depends on the relation between \(h_{\rm dust}\) and \(h_{\rm disc}\). If \(h_{\rm dust}<h_{\rm disc}\), the curve of the measured value of \(h_{\rm disc}\) starts to grow again. If \(h_{\rm dust}\geq h_{\rm disc}\), it appears to be more stable and does not change appreciably for a wide range of \(\tau\).

The break observed at \(\tau\approx 1.5\) in the plot for \(h_{\rm disc}\) (top-middle plot of Fig. 8) is caused by issues with the bulge fitting. For high absorption levels, the Sérsic component cannot fit the bulge properly because the dark dust lane in the galaxy midplane suppresses its maximal brightness.
This leads to the appearance of two under-fit bulge "remnants" above and below the disc plane, where the bulge protrudes from the dusty disc. Their fraction in the residuals increases with \(\tau\) because the bulge fit becomes progressively worse and the disc itself becomes darker due to absorption, while these bulge regions remain almost unobscured. At some point, it becomes more efficient for the model (in terms of its achieved \(\chi^{2}\) value) to fit these bulge remnants as part of the disc component. This leads to a more concentrated disc model and, therefore, a shorter radial scale.

The results for the last disc parameter, its vertical scale height \(z_{\rm disc}\), are shown in the top-right panel of Fig. 8. Again, all three models show the same general trend in that the fit value of \(z_{\rm disc}\) increases with \(\tau\). This is easy to understand, since the dust absorbs more photons close to the disc plane, making the vertical brightness distribution flatter, so that it can be approximated with a higher value of \(z_{\rm disc}\). We also point out a clear systematic trend: the larger the dust exponential scale height, the greater its impact on the observed \(z_{\rm disc}\) value. Since in both Gadotti et al. (2010) and Pastrav et al. (2013a,b) an infinitely thin disc model was utilized during decomposition, the \(z_{\rm disc}\) value was not inferred in their simulations, and our results cannot be compared with these studies.

The bottom row of Fig. 8 shows the results of our simulations with different \(z_{\rm dust}\) values. One can see that, in general, the behaviour of all three parameters is the same as that obtained for the models with varied \(h_{\rm dust}\) values. We note that, all else being equal, a thicker dust disc leads to more distorted stellar disc parameters.

#### Flat mask

In this and the next two paragraphs, we describe the results of taking the dust into account using the three different techniques, in the same way as was done for the bulge. We begin with the flat mask approach, whose results are demonstrated in the left column of Fig. 9. The figure shows that while a flat mask allows one to enhance the quality of the decomposition, only a mask that is four times wider than \(z_{\rm dust}\) yields parameter estimates that are close to their true values. Narrower masks result in systematic shifts of the parameters (a rapid decline of the disc flux and an increase in both the radial and vertical scales) that depend on the value of \(\tau\).

#### Relative mask

For retrieving the disc parameters, a relative mask (middle column of Fig. 9) provides better results than a flat mask. Even the mask with the smallest covered area (\(f_{\rm mask}=0.5\)) gives estimates of the radial and vertical disc scales closer to the true values than the considerably larger flat masks. However, the central surface brightness is still systematically underestimated. The most extended relative mask (\(f_{\rm mask}=0.1\)) allows us to recover the radial exponential scale almost perfectly, although for this value there are still some trends present in the central surface brightness and the vertical scale; their errors are comparable to the characteristic uncertainties of the decomposition. We conclude that, as is the case for the bulge parameters, a disc decomposition with a relative mask generally gives better results for a fixed fraction of the galaxy image covered.

#### Dust model

The right column of Fig. 9 shows the results of a decomposition using a model with a dust component.
This approach allows us to almost perfectly recover both the radial and vertical exponential scales, and there is only a slight overestimation of the disc central surface brightness. The fact that our model allows us to retrieve almost exact values of the disc parameters even for a high dust content seems to confirm that light scattering has little impact on galaxy decompositions in an edge-on orientation, as was mentioned by Pierini et al. (2004). Also, for an edge-on orientation, the spatial overlap between the disc and bulge components is lower, which reduces the degeneracy of their parameters.

## 4 Demonstration on real galaxies

In the previous sections, we described the techniques we used to account for a dust component and the results of their application to synthetic images of several model galaxies. To further validate these approaches for inferring the structural parameters of disc galaxies, we present the results of decompositions for a couple of real edge-on galaxies. We use two methods to account for the dust component: a neural-network-generated relative dust mask and an IMFIT model with a dust component. For this purpose, we selected two edge-on galaxies with prominent dust lanes but without significant complex features in their discs (warps, flarings, bright halos, etc.) from the EGIPS catalog2, which contains a sample of 16551 edge-on galaxies. The selected objects are PGC 27896 and PGC 2441449. We downloaded their images from the Pan-STARRS1 survey (Chambers et al., 2016; Flewelling et al., 2020) in the \(i\) band.

Footnote 2: [https://www.sao.ru/edgeon/catalogs.php?cat=EGIPS](https://www.sao.ru/edgeon/catalogs.php?cat=EGIPS)

To compare the decomposition results with and without dust correction, we decompose these galaxies using a photometric model consisting of a Sérsic bulge and a 3D exponential disc, first without any dust correction and then taking a dust mask or a dust model into account. In all cases, we use the same PSF image, which was obtained by fitting a Moffat function to a stacked image of bright isolated stars in the galaxy frame. We masked off background and foreground objects using a segmentation map produced by the SExtractor package (Bertin & Arnouts, 1996).

The results of our decompositions are shown in Fig. 10; the left column is for PGC 27896, the right one for PGC 2441449. The top panels show the original images of these two galaxies in the \(i\) band, with yellow bars marking a \(30\arcsec\) scale. The second row in Fig. 10 shows images of the dust masks generated by our neural network for an \(f_{\rm mask}\) parameter value equal to 0.3. The next three rows show the best-fit model images, first for the simple decomposition (without dust correction), then for the decomposition with a dust mask, and finally for the decomposition with a dust model. It is evident that while the two former models can, in general, reproduce the overall shape of the galaxy, they miss the absorption lane, whereas the latter one demonstrates the presence of a dust lane that is part of the model. We note that the galaxy PGC 27896 (left column) is not oriented perfectly edge-on but is slightly inclined, which results in a shift of the dust lane below the visible disc centre and in an asymmetry of the obscured bulge region. Both of these features are reproduced by our model, which converged to an inclination value of 89 degrees. The last three rows in Fig. 10 contain residual "image - model" maps for the three decomposition methods.
The residuals for the decomposition without dust correction look as expected: because the model converges to some averaged disc+dust solution, in the galaxy plane the model is too bright, resulting in residual maps with over-subtracted negative regions. Above and below the galactic plane, where the dust absorption is weaker, the model is too faint, which results in bright regions in the residuals. The residual maps for the decompositions with a dust mask are different. Because the dust lanes of these galaxies are masked out, the fit does not converge to an attenuated disc model, and a brighter disc appears instead. As a result, the residual maps show over-subtracted regions where the dust attenuation is high, but there are no bright regions above and below, as in the previous case. This means that the disc model better fits the low-absorption regions of the images.

Figure 8: Disc parameters retrieved by decomposition for a set of models with different dust disc parameters. Top row: results for the disc central surface brightness (left panel), radial exponential scale (middle), and vertical scale (right) for three dust components with a radial scale shorter than, equal to, and larger than that of the stellar disc. For comparison, the grey lines in the panels for the surface brightness and the exponential scale show results for the same disc with \(h_{\rm dust}=4000\) pc, but with an inclination of 60 degrees. Bottom row: the same parameters obtained for dust models with different vertical scales (\(z_{\rm dust}=100\), 150, and 200 pc).

The residuals of the decomposition with our new IMFIT model are shown in the bottom row of Fig. 10. They demonstrate a better agreement between the observed images and the corresponding models. While there are still some deviations from the zero level, they are considerably smaller. The dust lane is not over-subtracted, and there are no bright regions around the disc plane. Although there are still slightly over-subtracted regions in the outer parts of the disc, they can be attributed to the fact that both galaxies have Freeman Type II discs (Freeman, 1970), whereas in our models the disc is a pure exponential (Type I). In addition, Marchuk et al. (2022) catalogued these galaxies as having a B/PS bulge in the central part, which is clearly visible in our residual images.

Numerical values of the decomposition results are listed in Table 4, where for both tested galaxies we present the fit structural parameters for a simple bulge plus disc model (marked as "not corrected" in the table), for the decomposition with a dust mask ("dust mask"), and for our model with a dust component ("dust component"). One can see that for real galaxies we observe the same systematic shifts between the uncorrected and corrected values (see Figs. 6 and 8). After correction, the discs become brighter, thinner, and have shorter radial scales, whereas the bulges become brighter (with higher bulge-to-total luminosity ratios) and have larger Sérsic index values. We also mention that the results of the decomposition with a dust model show higher discrepancies with the uncorrected parameters than the results of the decomposition with a dust mask; the latter appear to be somewhere in between the uncorrected parameters and the results of the decomposition with a dust model. This also aligns with the results of our numerical tests, which demonstrate that the decomposition with dust masks of all kinds still has higher errors and systematic shifts when compared with the dust model decomposition.
This is especially clear for the Sérsic parameters, which are almost the same as the uncorrected values.

## 5 Conclusions

In this article, we ran a number of numerical simulations in order to determine how the presence of dust impacts the measured values of the parameters of edge-on galaxies. To achieve this, we created a set of artificial galaxy images with various parameters and applied a standard decomposition method to them, to be able to compare the input and output values of the structural parameters. We also tested three different techniques by which this impact can be minimized: two of them are based on masking dust-attenuated regions of galactic images, and the third involves an analytical model that includes dust absorption. Our main conclusions can be summarized as follows.

We confirm the findings of previous authors (Gadotti et al., 2010; Pastrav et al., 2013a,b), who utilized two-dimensional decomposition to infer the general trends of how the bulge and disc parameters are altered by varied dust absorption. Using three-dimensional decomposition that accounts for the vertical structure of the stellar disc, we show that these trends hold true for perfectly edge-on galaxies, for which the disc thickness cannot be neglected. For bulges, the measured values of the effective radius tend to be larger than the true (intrinsic) values, whereas the effective surface brightness, Sérsic index, and bulge-to-total luminosity ratio tend to be lower. In other words, bulges in dusty edge-on galaxies appear fainter and less concentrated than they really are. For discs, the measured values of the central surface brightness tend to be lower than the true values, while the radial and vertical scales tend to be larger. Therefore, discs also appear fainter and less concentrated than they are in reality. For both bulges and discs, the absolute values of the parameter shifts depend on the properties of these components and on the dust content in the galaxy.

Figure 9: Results of the fitting with different dust correction methods for the disc parameters. The general structure of the figure follows that of the bulge decomposition results (see the caption of Fig. 7), except that for the disc parameters the rows correspond to the central surface brightness (top), the radial scale (middle), and the vertical scale (bottom).

Figure 10: Decomposition of two real galaxies, PGC 27896 (left column) and PGC 2441449 (right column). Top panels: images of the galaxies in the \(i\) band with 30′′ scale bars. Second row: dust masks created by our neural network. Third, fourth, and fifth rows: best-fit models for a decomposition without taking the presence of dust into account, with a dust mask, and with a dust model. Three bottom rows: residual maps for these decomposition approaches. Red hatched areas on the residuals with a dust mask show the masked areas. The fit \(\chi^{2}\) statistic is shown for the decompositions without dust correction and with a dust component (the decomposition with the dust mask has a different masked area, so its statistic cannot be directly compared with the other two methods).

Masking out the regions most affected by dust in a galactic image allows one to considerably reduce the dust's influence and obtain better estimates of the galactic parameters. Comparing different masking techniques showed that a dust mask which is more extended in the centre of the galaxy and narrower in the outer regions is better than a dust mask of constant width.
A neural network can be trained to effectively generate such masks for images of real galaxies.

An analytical model that includes an absorbing dust component in the form of a 3D exponential disc can be used to perform a decomposition of a galaxy with a prominent dust lane and to infer the galaxy structural parameters corrected for dust. Even though this model does not include light scattering, for simplicity, it performs better than any of the masking techniques that we tested.

The results of applying the proposed methods to a couple of real galaxies, whilst taking the dust component into account, are in agreement with the numerical experiments and demonstrate the validity of our approach.

We plan to continue our research on the impact of dust on the decomposition of galaxies and to consider other wavelengths (such as the ultraviolet and infrared ranges) as well as other, more complicated galaxy models. We also plan to continue the development of different algorithms to correct decomposition results for the presence of dust in galaxies that are viewed in an orientation close to edge-on.

## Acknowledgements

We acknowledge financial support from the Russian Science Foundation (grant no. 20-72-10052).

## Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2309.07149
Decoding visual brain representations from electroencephalography through Knowledge Distillation and latent diffusion models
Decoding visual representations from human brain activity has emerged as a thriving research domain, particularly in the context of brain-computer interfaces. Our study presents an innovative method that employs deep learning to classify and reconstruct images from the ImageNet dataset using electroencephalography (EEG) data from subjects that had viewed the images themselves (i.e. "brain decoding"). We analyzed EEG recordings from 6 participants, each exposed to 50 images from each of 40 unique semantic categories. These EEG readings were converted into spectrograms, which were then used to train a convolutional neural network (CNN), integrated with a knowledge distillation procedure based on a pre-trained Contrastive Language-Image Pre-Training (CLIP)-based image classification teacher network. This strategy allowed our model to attain a top-5 accuracy of 80%, significantly outperforming a standard CNN and various RNN-based benchmarks. Additionally, we incorporated an image reconstruction mechanism based on pre-trained latent diffusion models, which allowed us to generate an estimate of the images which had elicited EEG activity. Therefore, our architecture not only decodes images from neural activity but also offers a credible image reconstruction from EEG only, paving the way for e.g. swift, individualized feedback experiments. Our research represents a significant step forward in connecting neural signals with visual cognition.
Matteo Ferrante, Tommaso Boccato, Stefano Bargione, Nicola Toschi
2023-09-08T09:13:50Z
http://arxiv.org/abs/2309.07149v1
Decoding visual brain representations from electroencephalography through Knowledge Distillation and latent diffusion models

###### Abstract

Decoding visual representations from human brain activity has emerged as a thriving research domain, particularly in the context of brain-computer interfaces. Our study presents an innovative method that employs deep learning to classify and reconstruct images from the ImageNet dataset using electroencephalography (EEG) data from subjects that had viewed the images themselves (i.e. "brain decoding"). We analyzed EEG recordings from 6 participants, each exposed to 50 images from each of 40 unique semantic categories. These EEG readings were converted into spectrograms, which were then used to train a convolutional neural network (CNN), integrated with a knowledge distillation procedure based on a pre-trained Contrastive Language-Image Pre-Training (CLIP)-based image classification teacher network. This strategy allowed our model to attain a top-5 accuracy of 80%, significantly outperforming a standard CNN and various RNN-based benchmarks. Additionally, we incorporated an image reconstruction mechanism based on pre-trained latent diffusion models, which allowed us to generate an estimate of the images which had elicited EEG activity. Therefore, our architecture not only decodes images from neural activity but also offers a credible image reconstruction from EEG only, paving the way for e.g. swift, individualized feedback experiments. Our research represents a significant step forward in connecting neural signals with visual cognition.

## 1 Introduction

Electroencephalography (EEG) is increasingly recognized as a valuable instrument for decoding visual representations within the human brain. The primary advantage of EEG lies in its non-invasive nature and its ability to provide real-time insights into human brain function via electrical activity recordings from the scalp. Despite its spatial resolution constraints, its unparalleled temporal resolution renders it ideal for real-time applications. Recent technological advancements have facilitated the decoding of intricate visual stimuli from EEG signals, notably from expansive datasets such as ImageNet [1, 13]. Both convolutional (CNN) and recurrent neural networks (RNN) have demonstrated efficacy in classifying EEG signals into distinct image categories with appreciable accuracy. The successful decoding of complex visual stimuli from EEG signals can pave the way for innovative neural prosthetics and biofeedback systems. Translating brain activity patterns into decoded image categories or reconstructions could potentially offer visually impaired individuals a semblance of artificial vision. Additionally, EEG decoding can revolutionize brain-centric image searches, communication platforms, and augmented reality interfaces. Real-time visualizations of decoded brain activity can also usher in novel neurofeedback paradigms, facilitating self-regulation of brain states through integrated EEG decoding and external visual feedback mechanisms [3]. However, a predominant focus in current research is on multisubject models, which involve averaging EEG signals across multiple participants. This methodology may overlook the nuances of individual-specific neural representations. Models tailored to individual subjects could offer a more granular decoding and introduce an added dimension of data privacy, as each model is uniquely calibrated for a specific individual, precluding its application to others.
Also, in spite of recent progress, the task of reconstructing visual stimuli based on the EEG signals they elicit remains a formidable challenge. The inherent low spatial resolution of EEG poses difficulties in reconstructing detailed visual nuances. Presently, image reconstructions predominantly capture broader features, such as shapes, colors, and textures, thereby constraining the depth of visual feature decoding and image reconstructions. To overcome this obstacle, instead of attempting pixel-precise reproductions, a more pragmatic approach might be semantic image reconstruction. In this context, the emergence of techniques such as generative adversarial networks (GANs) [7] offers semantically coherent reconstructions directly from EEG signals, rather than estimating the EEG from an image and subsequently attempting to reconstruct the image from this estimated signal. Despite the challenges associated with the fidelity of image reconstruction, the rapid advancements in deep neural networks show potential. This research aims to improve existing methodologies for translating perceptual experiences from EEG patterns, with a focus on real-time applications. We present a methodology that advances this field, outlining a pipeline (as shown in Fig. 1) that facilitates the training of a single-subject model within a limited experimental timeframe, leading to near-real-time brain decoding.

Figure 1: Our pipeline can be described as follows: First, we record EEG data while the subject is viewing natural images. This data is then preprocessed and converted into spectrograms, which serve as the input for our neural network. Our EEG decoder is trained using a knowledge distillation method based on the CLIP model. The outputs from the EEG decoder, which are predictions of the image class that elicited the EEG data, are then combined with an image generation pipeline. This end-to-end approach allows us to reconstruct images from the neural activity data captured by the EEG.

## 2 Related Works

EEG signals are widely processed in the context of brain-computer interfaces (BCI) to perform brain decoding for a wide variety of tasks [22]. A number of prior works have explored decoding visual representations from EEG signals using deep learning models. Kavasidis et al. [6] were among the first to propose generating images from EEG data. They recorded EEG while subjects viewed ImageNet images and used a Long Short-Term Memory (LSTM) model combined with variational autoencoders or GANs to reconstruct images. The key difference is that they aimed for class-level image generation rather than detailed reconstruction and focused on processing data in the time domain. Spampinato et al. [19] also analyzed EEG responses to ImageNet stimuli. They trained an LSTM encoder to classify EEG signals into image categories. For reconstruction, they trained a separate CNN regressor to predict EEG features from images and replaced the EEG signal with this encoder model. Palazzo et al. [12] extended [19] using contrastive learning to align EEG and visual image features. However, their goal was improving image classification rather than reconstruction, and various challenges emerged [9]. Singh et al. [17] proposed an EEG-to-image GAN framework but focused on smaller (i.e. with fewer images) datasets of characters and shapes.
In this work, we propose a modularized pipeline for reconstructing detailed, photorealistic visual stimuli (i.e. images) directly from EEG brain signals, using a novel CLIP-based knowledge distillation of a convolutional neural network trained on time-frequency decompositions (TFD), combined with generative diffusion synthesis, to generate image reconstructions that are semantically plausible and visually similar to the original stimuli.

## 3 Material and Methods

This section delineates the methodology adopted and the dataset utilized. The dataset, sourced from ImageNet EEG [7], is publicly accessible. All computational experiments and model training were conducted on a server outfitted with four NVIDIA A100 GPU cards (each with 80 GB RAM, connected via NVLINK) and 2 TB of system RAM. The codebase was developed in Python 3.9, leveraging libraries such as PyTorch, PyTorch Lightning, and scikit-learn for model implementation.

### Data

The EEG recordings employed in this study were sourced from [18]. These recordings were obtained from six subjects who were exposed to images from 40 distinct ImageNet [2] classes, with each class comprising 50 images. The sampling rate for these recordings was 1000 Hz. The image presentation protocol involved sequential display in 25-second intervals, succeeded by a 10-second intermission. In each display interval, images are shown sequentially for 0.5 seconds each. This protocol yielded a total of 2,000 images spanning 1,400 seconds (or 23 minutes and 20 seconds) of recording time. Each subject underwent four recording sessions, each lasting 350 seconds. The experiments utilized a 128-channel cap with active, low-impedance electrodes (actiCAP 128Ch, Brainproducts) for EEG data collection. Brainvision amplifiers and data acquisition systems were used to record the EEG signals at a sampling rate of 1000 Hz with 16-bit resolution. The EEG data resulted in 11,466 sequences after the exclusion of recordings of suboptimal quality. The comprehensive nature of this experimental design facilitated the examination of EEG responses to a diverse array of visual stimuli from ImageNet. The multi-channel EEG recordings, captured during the viewing of thousands of stimuli, furnish a rich dataset conducive to training decoding models. For further details about the acquisition protocol, please see the original article [18].

### Preprocessing

Prior to utilizing the EEG signals for training our decoding models, a series of preprocessing steps were executed. Initially, a notch filter in the 49-51 Hz range was applied to mitigate power line interference. Subsequently, a second-order band-pass Butterworth filter, ranging between 14 and 70 Hz, was employed to focus on frequency bands pertinent to visual attention and object recognition. The signals were then standardized across channels. For the purpose of neural network input generation, the filtered EEG signals were segmented into 40 ms windows with a 20 ms stride. Time-frequency decompositions (TFD) were computed for these segments using the short-time Fourier transform (STFT), converting each trial into a 128-channel image that depicts the spectrum across both time and frequency dimensions. This process yielded 2,000 EEG spectrogram images, each with 128 channels, for every subject. These images were then used for the training and evaluation of our convolutional neural network tailored for EEG decoding. This multi-channel spectral representation encapsulates the spatial and temporal intricacies of the EEG, allowing our model to extract features essential for visual stimulus classification.
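A minimal sketch of this spectrogram construction is given below, assuming a filtered trial of shape (128 channels x samples) at 1000 Hz; the exact STFT settings are paraphrased from the 40 ms window / 20 ms step description, and the helper name is ours.

```python
import numpy as np
from scipy import signal

FS = 1000  # sampling rate, Hz

def trial_to_spectrogram(trial):
    """Convert one filtered EEG trial (n_channels, n_samples) into a
    multi-channel spectrogram via STFT: 40 ms windows, 20 ms step."""
    nperseg = int(0.040 * FS)             # 40 samples per window
    noverlap = nperseg - int(0.020 * FS)  # 20 ms hop -> 20-sample overlap
    specs = []
    for channel in trial:
        f, t, z = signal.stft(channel, fs=FS,
                              nperseg=nperseg, noverlap=noverlap)
        specs.append(np.abs(z))           # magnitude spectrum
    return np.stack(specs)                # (n_channels, n_freqs, n_times)

trial = np.random.randn(128, 500)         # dummy 0.5 s trial
print(trial_to_spectrogram(trial).shape)
```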
It is worth noting that the preprocessing described herein is specific to the architecture proposed in this study. Alternative baselines adopted slightly varied preprocessing techniques, such as direct time-domain data analysis, starting from the same filtered data in the time domain. These variant preprocessing methodologies are elaborated upon in Section 3.6.

### 3.3 Decoding pipeline

Our approach employs a CNN with integrated residual connections to classify EEG TFDs. The architecture begins with a series of convolutional layers, progressively increasing the number of filters to effectively extract both spatial and temporal features. Subsequent to this, global average pooling and fully-connected layers are utilized for classification tasks. For the training of the CNN, we adopt a knowledge distillation methodology [5]. Initially, an image classifier is pretrained using CLIP (Contrastive Language-Image Pre-Training) [14] features to anticipate the stimulus classes, achieving a commendable accuracy of 99%. This pretrained classifier furnishes "soft targets" to guide our EEG model. During the training phase, EEG spectrograms are fed into the CNN, while CLIP image features are directed to the teacher classifier. The objective is to train the CNN such that it aligns with the class probability distributions produced by the teacher. This distillation approach not only stabilizes the training process but also enhances the model's performance in comparison to direct training on class labels. For inference, only the EEG-based CNN is deployed to predict classes from novel time-frequency decompositions. Through the distillation of knowledge from the image model, our CNN is equipped to derive robust representations, enabling the decoding of visual stimuli solely from EEG signals.

After training, our EEG decoding model is capable of predicting ImageNet classes from fresh EEG TFDs. To validate these predictions and reconstruct images that could potentially induce analogous neural responses, we employ the Stable Diffusion generative model [15]. For every EEG prediction, a text prompt such as "an image of a ⟨predicted class⟩" is formulated. This prompt, in conjunction with random noise vectors, is input into Stable Diffusion to generate images congruent with the predicted class. This methodology facilitates the reconstruction of visual stimuli exclusively from neural activity patterns. The EEG decoder identifies the class, while Stable Diffusion fabricates a semantically coherent image. A comprehensive diagram of the decoding pipeline is depicted in Fig. 1, and the knowledge distillation procedure is illustrated in Fig. 2.

### 3.4 Reconstruction Pipeline

Diffusion models are generative frameworks trained to invert a noise diffusion process, facilitating image synthesis. Stable Diffusion operates as a latent diffusion model, proficient in generating lifelike images from random noise vectors, conditioned by textual descriptions. Training involves the iterative addition of noise to genuine images, followed by the learning of a parametric denoising function that removes the noise over multiple timesteps. By repeatedly applying the learned denoising function, the model can then synthesize new, lifelike images conditioned on textual descriptions. This iterative denoising offers tight control over image generation, guided by text at every iteration.
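For concreteness, the sampling step could be rendered as the following minimal sketch built on the Hugging Face diffusers library; the checkpoint identifier, the precision setting, and the prompt template are our assumptions rather than the authors' exact configuration.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical checkpoint: the paper does not specify which Stable Diffusion
# version was used.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def reconstruct(predicted_class: str):
    # The decoded EEG label conditions the generation through a text prompt.
    prompt = f"an image of a {predicted_class}"
    return pipe(prompt).images[0]

image = reconstruct("German shepherd")  # e.g., one of the 40 ImageNet classes
```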
In the sampling phase, Stable Diffusion accepts a text prompt and progressively denoises random noise vectors until they converge into an image that aligns semantically with the provided description. For the task of reconstructing images from EEG signals, Stable Diffusion's text conditioning capability proves invaluable. The EEG decoder outputs a label indicative of the visual stimulus class. This discrete label is then employed to generate corresponding images via Stable Diffusion, bypassing the need for direct pixel reconstruction. This approach facilitates the synthesis of plausible image reconstructions based on the decoded semantic category from neural activity patterns. This model-centric strategy also addresses the inherent resolution constraints of EEG for high-fidelity decoding. The guided diffusion modeling ensures the generation of visualizations that are both realistic and interpretable to human observers.

### 3.5 Knowledge Distillation

Knowledge distillation facilitates the transfer of insights from a comprehensive, pretrained teacher model to a more compact student model [5]. This process empowers the student model to attain performance metrics that are typically associated with larger models. Consider \(f_{t}(x)\) as the output vector of class probabilities produced by the teacher model for a given input \(x\), representing the stimulus image. Similarly, let \(f_{s}(e;\theta)\) denote the student model, characterized by parameters \(\theta\), where \(e\) represents the EEG recordings obtained during the presentation of stimulus \(x\). The student model is trained through knowledge distillation by minimizing:

\[\mathcal{L}(\theta)=\alpha\,\mathcal{L}_{\mathrm{CE}}(f_{s}(e;\theta),y)+(1-\alpha)\,\mathcal{L}_{\mathrm{KD}}(f_{s}(e;\theta),f_{t}(x)) \tag{1}\]

Here, \(\mathcal{L}_{\mathrm{CE}}\) represents the cross-entropy loss between the predictions of the student model and the actual ground truth labels \(y\). In contrast, \(\mathcal{L}_{\mathrm{KD}}\) denotes the distillation loss, capturing the difference between the outputs of the student and teacher models. The temperature parameter \(T\) is employed to modulate the probability distribution of the teacher:

\[\mathcal{L}_{\mathrm{KD}}(f_{s},f_{t})=-\sum_{c}\frac{\exp(f_{t,c}/T)}{\sum_{c^{\prime}}\exp(f_{t,c^{\prime}}/T)}\log\frac{\exp(f_{s,c}/T)}{\sum_{c^{\prime}}\exp(f_{s,c^{\prime}}/T)}, \tag{2}\]

where the sum runs over the classes \(c\). Training the student model to replicate the comprehensive probability distribution of the teacher facilitates the transfer of insights regarding inter-class relationships, offering a richer supervisory signal than mere ground truth labels. In our implementation, we set \(\alpha=0.5\) and \(T=1\).

Figure 2: Illustration of the training procedure. Knowledge distillation facilitates the training of a compact "student" model to emulate the outputs of a more extensive "teacher" model. This enables the student to achieve performance levels akin to larger models, even when initiated from distinct yet related inputs.
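To make Eqs. (1)-(2) concrete, a minimal PyTorch sketch of this objective (with our own variable names, not the authors' code) could read:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=1.0):
    """Eq. (1): weighted sum of cross-entropy and the KD term of Eq. (2)."""
    ce = F.cross_entropy(student_logits, labels)
    # Soft targets from the teacher, softened by temperature T.
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    kd = -(p_teacher * log_p_student).sum(dim=-1).mean()
    return alpha * ce + (1 - alpha) * kd

# student_logits: CNN outputs on EEG spectrograms; teacher_logits: CLIP-based
# classifier outputs on the corresponding stimulus images.
```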
For EEG decoding, a linear classifier was trained atop the CLIP [14] CLS tokens. CLIP, an acronym for Contrastive Language-Image Pre-Training, is a neural architecture trained to correlate images and text through contrastive learning. Comprising an image encoder and a text encoder, CLIP is trained to discern whether an image-text pairing is congruent or not. The image encoder in CLIP, a vision transformer (ViT), embeds images into latent representations. Throughout its training, CLIP cultivates an embedding space where semantically congruent images and texts are proximate. A pivotal element of the image encoder is the CLS token, an auxiliary token introduced to the network's input, enabling the encoder to generate a holistic representation of the entire image. A linear classifier was trained atop this CLS token for every image in the training dataset to predict the appropriate class. This amalgamation of CLIP and the classifier served as the teacher model, functioning as a bridge between EEG spectrograms and image classes. The student CNN, when exposed solely to EEG data, derives insights from both the teacher's distributions and the true labels. This distillation process accentuates the student's focus on neural patterns pertinent to visual recognition, enhancing convergence, accuracy, and generalization. By assimilating insights from a domain expert in image processing, the streamlined student decoder becomes adept at extracting visual representations from EEG signals.

### 3.6 Baselines

In order to underscore the efficacy of employing computer vision techniques for EEG signal decoding, we assessed a spectrum of baseline methodologies, spanning from conventional machine learning paradigms to contemporary neural network architectures. Initially, we employed a basic baseline wherein the raw EEG signals were standardized, squared, and subsequently averaged across channels. Following this, a Logistic Regression classifier was trained on the resultant data (a sketch of this baseline is given below). An extension of this approach involved applying the Logistic Regression classifier to EEG signals that were averaged over an 80-point sliding window. In another variant, we executed PCA on the windowed-average EEG, preserving 29 components that accounted for \(95\%\) of the variance, prior to classifier training. Notably, these methodologies overlook the inherent spatial and temporal intricacies of the EEG signal. The main advantage of using PCA is that it provides the model with orthogonal features that already integrate relevant spatiotemporal relationships. In this context, a recent proposition by CEBRA [16] demonstrated a deep learning technique that employs contrastive learning to project neural data onto lower-dimensional manifolds conducive to decoding. In alignment with this, we projected our EEG data onto a 32-dimensional manifold, utilizing CLIP features as a guiding mechanism. The dimension was chosen as the power of 2 closest to the number of PCA components. This offers a robust nonlinear neural baseline that effectively harnesses both spatial and temporal patterns.

In terms of neural network architectures that directly process EEG time series data, we examined both an LSTM model and a 1D convolutional network (CNN) equipped with temporal convolutions. Both architectures incorporated 4 layers and were regularized using dropout, ensuring a consistent parameter count across models. Further, we explored CNNs that operate on 2D representations of the EEG, thereby leveraging computer vision methodologies. One such model treated the raw EEG traces as a 2D image. Another model employed a wavelet decomposition utilizing the Daubechies db4 wavelet from PyWavelets [8], which has been recognized as an efficient time-frequency representation for EEG [20]. Our final CNN baseline ingested the short-time Fourier transform (STFT) of the EEG, processed with a 40 ms window.
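For illustration, the simplest of these baselines might be rendered as follows with scikit-learn. This is a hypothetical sketch: the array layout (X of shape (n_trials, channels, samples)) and all names are our assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def power_features(X):
    """X: (n_trials, channels, samples) filtered EEG.
    Standardize per channel, square, then average across channels."""
    Xs = (X - X.mean(axis=-1, keepdims=True)) / X.std(axis=-1, keepdims=True)
    return (Xs ** 2).mean(axis=1)  # -> (n_trials, samples)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# clf.fit(power_features(X_train), y_train)
# top1 = clf.score(power_features(X_test), y_test)
```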
This ensemble of baselines, ranging from classical signal processing to avant-garde deep learning, offers a holistic comparative framework and accentuates the significance of spatiotemporal neural network modeling in the realm of EEG decoding. The computer vision-oriented strategies adeptly harness the structural nuances present in the multi-channel EEG. For consistency, all neural networks were evaluated within a similar parameter count range (1.1-1.2 M). Each was trained using the Adam optimizer at a learning rate of \(3\times 10^{-4}\). Additional training specifications included an early stopping callback with a 10-epoch patience based on validation loss variations, a batch size of 64, gradient clipping at a magnitude of \(1.0\), and a maximum epoch count set to 50.

## 4 Results

### 4.1 Performance Evaluation

The efficacy of our model is evaluated using a comprehensive set of metrics: top-5, top-3, top-1 accuracy, F1 score, and the normalized kappa score. Figure 5 demonstrates that our knowledge distillation CNN consistently outperforms both the standard CNN baseline and a random classifier. Notably, the proposed approach (a CNN on TFD with CLIP-based knowledge distillation) exhibits superior performance compared to the same network without the distillation technique. This superiority is further evident when juxtaposed with other baselines detailed in Table 1.

Table 1: Decoding performance of all methods; metrics are reported as mean (std) across subjects.

| Method | Top-1 Accuracy | Top-3 Accuracy | Top-5 Accuracy | F1 | Kappa |
| --- | --- | --- | --- | --- | --- |
| LR on average square signal | 0.3600 (0.1313) | 0.6619 (0.1758) | 0.8156 (0.1619) | 0.3493 (0.1375) | 0.3435 (0.1345) |
| LR on windowed signal | 0.0205 (0.0058) | 0.0636 (0.0083) | 0.1092 (0.0110) | 0.0156 (0.0054) | 0.0009 (0.0061) |
| LR on PCA windowed signal | 0.0175 (0.0040) | 0.0536 (0.0084) | 0.0961 (0.0063) | 0.0097 (0.0047) | 0.0020 (0.0039) |
| CEBRA + kNN | 0.0240 (0.0050) | 0.0831 (0.0116) | 0.1402 (0.0136) | 0.0223 (0.0061) | -0.0012 (0.0056) |
| LSTM | 0.3605 (0.0938) | 0.7376 (0.1226) | 0.8868 (0.1030) | 0.3392 (0.0894) | 0.3437 (0.0960) |
| Conv1d | 0.2623 (0.0511) | 0.6013 (0.0826) | 0.7971 (0.0851) | 0.2582 (0.0520) | 0.2432 (0.0524) |
| Knowledge distillation on EEG (img) | 0.2819 (0.0836) | 0.5773 (0.1379) | 0.7295 (0.1339) | 0.2742 (0.0794) | 0.2632 (0.0857) |
| Knowledge distillation on wavelet | 0.4060 (0.1154) | 0.7490 (0.1282) | 0.8787 (0.1007) | 0.3889 (0.1148) | 0.3905 (0.1183) |
| Plain CNN on spectrograms | 0.2819 (0.0836) | 0.5773 (0.1379) | 0.7295 (0.1339) | 0.2742 (0.0794) | 0.2632 (0.0857) |
| **Knowledge distillation on STFT** | **0.4120 (0.1131)** | **0.7530 (0.1068)** | **0.8782 (0.0806)** | **0.4027 (0.1133)** | **0.3966 (0.1160)** |

Table 1 provides a summarized view of the decoding performance across various methods applied to EEG data. Clear trends in accuracy emerge across model types. Classical machine learning baselines, which utilize averaged or PCA-reduced EEG, yield near chance-level accuracy, underscoring the inadequacy of hand-engineered features for decoding intricate visual stimuli. An exception is the Logistic Regression model trained on squared data averages. Conversely, deep learning models that harness spatiotemporal EEG TFD patterns consistently achieve superior accuracy. Both convolutional and recurrent neural networks processing raw EEG time series deliver satisfactory results. Yet, the best performance is reached by models using 2D representations of the multi-channel EEG.
Specifically, CNNs fed with TFDs, computed either as wavelet transforms or as spectrogram images, both surpass \(85\%\) in top-5 accuracy, underscoring the benefits of computer vision techniques that learn directly from 2D structures in signal processing. Both wavelet and spectrogram decompositions seem to encapsulate pertinent time-frequency domain information for decoding. A closer examination of the top-3 and top-5 accuracy metrics reveals a consistent trend: deep learning models outclass classical baselines. The elite CNNs achieve over \(75\%\) in top-3 accuracy, implying that in approximately 3 out of 4 trials, the true label ranks within the top three predictions. The performance gap relative to the LSTM network is also noteworthy. This accentuates the efficacy of 2D convolutions in discerning the pertinent semantic categories from EEG patterns. The consistency of the top-5 accuracy across deep learning models suggests potential inherent challenges in precisely mapping EEG to granular image labels. However, the models adeptly identify the overarching category within their top predictions, underscoring the viability of EEG-based visual concept decoding.

From a qualitative perspective, Figures 3 and 4 showcase examples of predicted and reconstructed images. While the model predominantly identifies the correct visual concept from EEG patterns, minor category confusions do arise. For instance, "bolete" might be misinterpreted as "pizza," or "banana" as "Margherita". Nevertheless, the model's ability to accurately discern the overarching semantic category and produce corresponding reconstructions is noteworthy. In conclusion, our findings underscore the pivotal role of neural networks and image-centric representations in harnessing the rich multidimensional EEG representation. Directly classifying TFD inputs using a computer vision approach emerges as the most potent strategy for EEG-based decoding.

## 5 Discussion

The primary objective of this study was to decode and reconstruct visual representations from EEG-recorded human brain activity. By employing deep convolutional neural networks trained on EEG TFDs and guided by the CLIP-based knowledge distillation technique, we managed to predict image classes from the ImageNet dataset with an accuracy of \(87\%\) in the top-5 category. This knowledge distillation approach yielded a marked improvement in performance when compared to a baseline model and other data processing methodologies. While the model's predictions were generally reliable for the majority of subjects, it did exhibit some confusion between closely related classes. The capability to extract the semantic content of image stimuli from non-invasive EEG recordings presents significant implications for the future of brain-computer interfaces. The methodology we developed for image reconstruction could potentially pave the way for a form of artificial vision, where decoded contents from a user's neural activity are visualized in real-time.

Figure 4: On the left, the target classes are presented; each column shows results from a single subject.

Figure 5: Results for the EEG decoder. **Ours** is the CLIP-based approach, **plain** is a vanilla CNN with the same architecture trained for classification, and **chance** serves as comparison with chance level. Bars are averages across subjects and error bars are standard deviations.
Furthermore, our model introduces the possibility of innovative neurofeedback experiments, wherein subjects could receive instantaneous visual feedback of decoded EEG patterns, facilitating the voluntary self-regulation of brain states [3]. However, the study is not without limitations. EEG serves as a macroscopic lens into the brain's visual processing mechanisms. To address the limitations of EEG's spatial resolution, integrating it with other imaging techniques, such as fMRI, which boasts superior spatial resolution, is a promising avenue. Such multimodal strategies have shown potential in reconstructing images with a higher degree of detail [4, 10, 11, 21]. Also, the model in its current configuration has not been optimized for decoding images outside the 40 categories used in the experiment, suggesting a need for further refinement. The variability in EEG decoding abilities across different subjects or sessions, influenced by cognitive and neural factors, remains a topic that warrants deeper exploration. One of the significant concerns in EEG decoding revolves around the inadvertent extraction of personal perceptual data, which must be rigorously addressed. Our methodology places a strong emphasis on the creation of subject-specific models. This ensures that the decoding process is both consensual and uniquely tailored to the individual, mitigating potential ethical concerns. This approach not only necessitates voluntary participation but also minimizes the risk of misinterpretations due to the model's specificity to individual neural patterns. The rapid training methodology we have introduced also holds promise for real-time feedback paradigms using models tailored to individual subjects, with only a couple of seconds of inference time needed to predict the class and generate the image on an A100 GPU. As the field of deep learning and generative models continues to evolve, we anticipate parallel advancements in EEG decoding and reconstruction capabilities.

## 6 Conclusions

In this study, we demonstrated the potential of deep neural networks, coupled with generative diffusion models, to reconstruct visual experiences directly from non-invasive EEG recordings. The application of knowledge distillation from language-image pretraining enabled our convolutional decoder to effectively extract semantic information from brain activity patterns. This capability significantly surpassed the performance of classical signal processing baselines. By generating images based on the predicted labels, we were able to produce visualizations that closely align with the decoded neural activity. Our emphasis on creating subject-specific models not only ensures a certain degree of privacy but also underscores the unique capabilities of EEG data in decoding individual mental representations. These techniques, which focus on translating neural signals into their corresponding images, can kickstart significant advancements in the domains of brain-computer interfaces and neural prosthetics, as well as human-computer interaction research. Overall, our findings highlight the potential of non-invasive brain imaging as a tool to provide insights into the human cognitive experience.
2309.08291
Breaking down the relationship between academic impact and scientific disruption
We examine the tension between academic impact - the volume of citations received by publications - and scientific disruption. Intuitively, one would expect disruptive scientific work to be rewarded by high volumes of citations and, symmetrically, impactful work to also be disruptive. A number of recent studies have instead shown that such intuition is often at odds with reality. In this paper, we break down the relationship between impact and disruption with a detailed correlation analysis in two large data sets of publications in Computer Science and Physics. We find that highly disruptive papers tend to be cited at higher rates than average. Contrastingly, the opposite is not true, as we do not find highly impactful papers to be particularly disruptive. Notably, these results qualitatively hold even within individual scientific careers, as we find that - on average - an author's most disruptive work tends to be well cited, whereas their most cited work does not tend to be disruptive. We discuss the implications of our findings in the context of academic evaluation systems, and show how they can contribute to reconcile seemingly contradictory results in the literature.
Mingtang Li, Giacomo Livan, Simone Righi
2023-09-15T10:12:17Z
http://arxiv.org/abs/2309.08291v1
# Breaking Down the Relationship between Academic Impact and Scientific Disruption

###### Abstract

We examine the tension between academic impact - the volume of citations received by publications - and scientific disruption. Intuitively, one would expect disruptive scientific work to be rewarded by high volumes of citations and, symmetrically, impactful work to also be disruptive. A number of recent studies have instead shown that such intuition is often at odds with reality. In this paper, we break down the relationship between impact and disruption with a detailed correlation analysis in two large data sets of publications in Computer Science and Physics. We find that highly disruptive papers tend to be cited at higher rates than average. Contrastingly, the opposite is not true, as we do not find highly impactful papers to be particularly disruptive. Notably, these results qualitatively hold even within individual scientific careers, as we find that - on average - an author's most disruptive work tends to be well cited, whereas their most cited work does not tend to be disruptive. We discuss the implications of our findings in the context of academic evaluation systems, and show how they can contribute to reconcile seemingly contradictory results in the literature.

Keywords: scientific impact, scientific disruption, scientific careers

## 1 Introduction

In an increasingly competitive academic environment, the performance of researchers is constantly monitored, quantified, and ranked in a variety of dimensions. Some of these can be measured rather objectively (e.g., productivity, ability to attract funding, etc. [1, 2, 3]), while others are more elusive, such as the ability to innovate and/or to produce impactful research [4, 5]. Conventionally, these dimensions are often measured as a function of the citations received by published work [6, 7], either via simple citation counts or via more sophisticated bibliometric indicators, such as the well-known \(h\)-index [8], \(g\)-index [9], or indicators of an author's performance relative to their field (see, e.g., [10]). These indicators reflect the extent to which research outputs are recognized by the scientific community. However, they also play an increasingly pervasive role in research evaluation systems, as they influence research rankings, grant attributions, tenure and promotion decisions [11, 12, 13, 14, 15]. Given the significance of citation metrics in academic evaluation, a growing number of studies have been devoted to investigating the factors shaping the number of citations received by a paper. Among these factors, interdisciplinarity has a considerable influence on scientific impact [16; 17]. Indeed, it has been found that 'long-distance' interdisciplinary research on average attracts citations at higher rates [7], but there exists an interdisciplinary 'tipping point' beyond which highly interdisciplinary publications tend to have lower impact [2; 18]. In fact, papers that are more likely to be highly cited tend to draw heavily from conventional combinations of existing research while still integrating unusual combinations [19]. The accumulation of citations and its determinants are also studied from the viewpoint of authors and their career progression. For instance, it is well known that scientific careers are characterized by the so-called 'random impact rule', i.e., each paper within an author's publication sequence has the same likelihood of becoming their most-cited work [20].
Nevertheless, authors can experience 'hot streak' periods during which they produce a series of high-impact papers [21; 22]. Citation-based bibliometric indicators have been increasingly scrutinized by the academic community and have become somewhat controversial [11; 23; 24; 25; 26]. One of the major concerns is that such indicators -- and citations in general -- are not a comprehensive proxy of scientific innovation [6; 27; 28; 29]. To better quantify the innovativeness of scientific outputs, the CD index, also known as the disruption score, has been put forward [30; 31]. This indicator has been applied as a measure of innovation in a variety of studies [31; 32; 33; 34], and it has been proven to be effective at distinguishing between disruptive and developmental works. Despite its surging popularity, the disruption score has been criticized for being temporally biased and easily distorted by citation inflation [35]. We anticipate that in this paper we will adopt a variant of the disruption score to mitigate these potential biases (see Methods). A number of studies have leveraged the disruption score to explore scientific dynamics that cannot be explained by citations or impact. For instance, papers with a larger number of authors are more likely to be cited [36]. However, papers authored by large teams tend to be developmental, while disruptive research tends to be produced by smaller teams [31]. A recent study investigated the relationship between productivity, innovation, and impact, showing that authors typically produce more innovative work during periods of low productivity. Conversely, high-impact publications tend to be produced during stretches of high productivity [37]. Another very recent paper found that papers and patents are becoming less disruptive over time [33]. The above findings show that scientific impact and innovation exhibit rather different patterns, almost to the point that they should be treated as two distinct concepts [27]. Yet, the combination of such two concepts has also been shown to be effective, e.g., as a way to identify revolutionary scientific contributions. In fact, Nobel Prize-winning papers generally obtain more citations and achieve higher disruption scores [34]. However, such a result seems to contradict the findings by Zeng _et al._ that disruptive papers in science are losing impact [38]. Motivated by these observations, in this paper we seek to fully explore the relationship between scientific impact and innovation. We begin by breaking down the correlation between disruption scores and citations across each percentile of the top disruptive papers. Then we uncover the full picture of the relationship between disruption scores and citations by investigating whether the most cited papers in a field are also disruptive. Finally, we extend our paper-level findings to the context of career analysis, showing that the relationship between disruption scores and citations also holds at the level of entire careers.

## 2 Results

We collect papers published between 1986 and 2015 in Computer Science and Physics from the AMiner citation network dataset (version 12) and the Web of Science database, respectively (see Methods). We associate a disruption score to each paper, which characterizes a paper as more disruptive when ensuing publications in the same field cite such a paper at a higher rate than the publications in its bibliography (see Methods).
We quantify scientific impact as the citations accumulated over the first five years after publication, which is a customary proxy in the literature [33; 38; 39]. Overall, our analysis comprises 898,624 papers in Computer Science and 1,236,016 papers in Physics.

### 2.1 A detailed breakdown of the correlations between disruptions and citations

We begin our analysis with a detailed breakdown of the correlation between scientific disruption and impact. Namely, we rank all the papers in our dataset based on their disruption scores and their impact. In the following, we shall refer to the rankings computed via disruption scores and citations as the 'disruption rank' and the 'impact rank', respectively. We select all papers in the top 1% of the disruption rank, and compute the Kendall correlation coefficient with their positions in the impact rank. We then repeat this process for papers in the top 2%, top 3%, etc., until all papers in Computer Science (898,624 papers in total, 8,986 papers in each percentile) and Physics (1,236,016 papers in total, 12,360 papers in each percentile) have been included. In Fig. 1 (a) and (c) we plot the aforementioned correlation coefficients as a function of percentiles of the disruption distribution in Computer Science and Physics. We observe a positive correlation coefficient between disruption and impact for papers in the top percentiles of the disruption distribution. Such correlation increases as we incorporate more percentiles into our analysis, reaching a peak value around the top 25th percentile, then declining to negative values. To explain such a pattern, in Fig. 1 (b) and (d) we report the proportion of citations received by papers in each percentile of the disruption distribution. We find that the papers receiving the lowest share of citations are those around the 25th percentile, i.e., where we observe the peak in correlation between disruption and impact. After that, less disruptive papers progressively become more cited, which causes the correlation coefficient to decrease. Eventually, the correlation coefficient becomes negative when we consider a large enough portion of papers in our dataset, which supports the result by Zeng _et al._ on the negative correlation between disruption and impact. Fig. 1 (b) and (d) also show that the most disruptive papers are quite well recognized, as evidenced by the relatively higher proportion of citations received by the most disruptive papers in both Computer Science and Physics, although with remarkable differences. In fact, highly disruptive papers in Computer Science are cited at a rate which is much higher than one would expect from a random baseline (i.e., all percentiles receiving a 1% share of all citations). The same cannot be said for Physics, where the most disruptive papers are cited at a rate which is slightly lower than the random baseline. These differences are responsible for the positive (negative) correlation between disruption and impact observed in the top percentiles of the disruption distribution in Computer Science (Physics). We test the robustness of the aforementioned results in three different ways. First, we split all papers in our dataset into three groups based on their publication year, namely 1986-1995, 1996-2005, and 2006-2015, and then repeat the above experiment for each group. The aim of this test is to illustrate that our results are robust over different periods. As can be seen in Fig. 1 (a) and (c), we find consistent patterns across the three groups.
Second, we standardize the disruption score (see Methods) of each paper to account for the fact that papers tend to become less disruptive over time [33]. We perform the same experiment with the standardized disruption scores, obtaining consistent results across the two disciplines (see Appendix Fig. 5). Third, we run the same experiments with a null model created by reshuffling the 5 years of accumulated citations received by each paper while keeping their disruptions intact. By reshuffling citations, we randomize the position of top disruptive papers in the impact rank, thus the new correlation coefficients are calculated under the null model. We find that the correlation patterns cannot be explained by the null model, and the correlation coefficients across different percentiles of disruptive papers are around 0 (see Appendix Fig. 6).

### 2.2 Most-cited papers are relatively less disruptive

After examining the relationship between disruption and impact across various percentiles of the disruption distribution, we now explore the relationship from the opposite perspective, i.e., by analysing different percentiles of the impact distribution. Similar to the procedure described in the previous section, we choose the top 1% most cited papers in the impact rank and identify their respective positions in the disruption rank. We calculate the correlation coefficient between these two position vectors and repeat the procedure for the top 2%, 3%, up to all the papers in both disciplines. As shown in Fig. 2, a negative correlation coefficient is apparent across most percentiles of the impact rank in both Computer Science and Physics. The negative correlation strengthens as we incorporate more percentiles into our analysis. Such a pattern indicates that the most-cited papers tend to be less disruptive, and vice versa. Moreover, in both disciplines, the correlation coefficients are generally higher for papers published between 1986 and 1995. In particular, we can find a positive correlation coefficient in the top 1%-30% of the most-cited Computer Science papers. By contrast, for papers published in more recent decades (1996-2005, 2006-2015), the correlation trajectory is not only negative over the entire distribution, but the negative correlation becomes even more pronounced. To further corroborate these results, we perform the same experiment with the standardized disruption scores and with the null model, through the same methods described in the previous section. Our results are valid under both robustness tests (see Appendix Fig. 7).

### 2.3 The relationship between disruptions and citations in scientific careers

Given the aforementioned results, a natural extension of our research is to investigate the relationship between disruption and impact in scientific careers. To construct an author-centred dataset, we first match each paper in our datasets with its respective authors, and then identify long-lived researchers with an active publication record. Specifically, we only include in our analysis authors who started their careers between 1980 and 2000 and had an academic career of at least 20 years. Among these authors, we retain only those who published more than 10 papers, with a publication frequency of at least one paper every 5 years (in line with [40]). Based on these selection criteria, we are left with 27,598 Computer Scientists and 34,527 Physicists (see Methods). We first seek to generalize the findings concerning the disruptive papers to our pool of researchers.
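As a point of reference before moving to careers, the percentile-wise rank-correlation procedure of Section 2.1, which is reused below, could be sketched as follows; this is a hypothetical rendering in Python, with array names and the uniform percentile grid being our own choices.

```python
import numpy as np
from scipy.stats import kendalltau

def percentile_correlations(disruption, citations, steps=100):
    """Kendall tau between disruption-rank and impact-rank positions,
    computed over the top 1%, 2%, ..., 100% most disruptive papers."""
    order = np.argsort(-disruption)              # papers by decreasing disruption
    impact_rank = np.empty(len(citations))
    impact_rank[np.argsort(-citations)] = np.arange(len(citations))
    taus = []
    for p in range(1, steps + 1):
        top = order[: len(order) * p // steps]   # top p% of the disruption rank
        tau, _ = kendalltau(np.arange(len(top)), impact_rank[top])
        taus.append(tau)
    return taus
```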
Following the method described in 2.1, we start by creating a disruption rank and an impact rank for papers published during a scientific career. We then locate the top 1% disruptive papers within both the disruption and citation rank, and compute the correlation coefficient between these two sets of ranks. For this analysis, we do not calculate the correlation coefficient for each top percentile of papers, because doing so can produce an excessive number of repeated values due to the relatively small number of publications in the publication sequence of an author. Instead we repeat such a process exclusively for the top 1%, 5%, 10%, 15%, and so on. Finally, we collate the correlation coefficients for the same top percentile across all authors in Computer Science and Physics, and plot the mean values of each top percentile in Fig. 3 (a) and (c). Similar to section 2.1, we also show the proportion of citations received by each percentile of papers in the disruption distributions at the author level in Fig. 3 (b) and (d). We observe that the overall trends in panels (b) and (d) are comparable to the trends in the paper-level results. It is noted that the curves in (b) and (d) achieve higher values compared to the results in the paper datasets. This happens because the same papers can fall within different percentiles when considering less prolific authors. When we restrict our scope of investigation to researchers who have more than 100 publications, i.e., no overlaps between percentiles, our results are very much similar to the paper-level results (see Appendix Fig. 8). As can be seen in Fig. 3, the findings we obtain here are fairly similar to those we observe at the paper level. Specifically, the most disruptive papers in the careers of Computer Scientists and Physicists still attract a relatively high proportion of citations. The correlation trajectories in scientific careers also display a pattern of initial increase followed by a decrease, and such a trajectory can also be explained by the proportion of citations received by each percentile of papers published in a career. Furthermore, we observe a negative correlation coefficient when considering all papers in a career, indicating that the overall negative relationship between disruption and impact persists at the career level. The only significant difference between our findings in academic publications and scientific careers is that the correlation coefficients for the most disruptive papers are now positive in both disciplines.

Figure 1: (a) Correlation coefficient across different percentiles of disruptive papers in Computer Science. The thick blue curve is derived from all the papers in our datasets, while dashed lines represent the correlation coefficients corresponding to the 1986-1995, 1996-2005, and 2006-2015 groups. (b) The proportion of citations received by each percentile of Computer Science papers. The correlation pattern observed in (a) can be explained by the proportion of citations received as shown in (b). (c) and (d) are the equivalent versions of the correlation trajectory and the proportion of received citations in Physics. They can be interpreted in a similar way to (a) and (b).

Figure 2: Correlation coefficient across different percentiles of the most-cited papers in Computer Science (left) and Physics (right). The patterns of correlation coefficients are very similar in both disciplines, which indicates that the most-cited papers tend to be less disruptive.
This suggests that the most disruptive papers within a career are well rewarded in terms of impact by their respective scientific communities. We then expand our results regarding the correlations for the most-cited papers to the context of scientific careers. To achieve this, we follow the steps outlined in 2.2 and compute rank-rank correlations over an increasing number of percentiles of the impact distributions obtained at the career level. The results are illustrated in Fig. 4. In line with our results at the paper level, we can still observe negative correlation coefficients across each impact percentile in the careers of both Computer Scientists and Physicists. This reinforces our conclusion that the most-cited papers tend to be less disruptive. In order to further substantiate these findings, we repeat the aforementioned experiments using the standardized disruption score and obtain consistent results (see Appendix Fig. 9 and Fig. 11). Moreover, we construct two null models in a similar manner to the previous experiments by reshuffling the disruption score and the 5-year accumulated citations in each author's publication sequence. We then reapply the career-level analysis utilizing these null models, and find that our conclusions cannot be explained by these null models (see Appendix Fig. 10 and Fig. 11). Based on all these results, we believe that our paper-level conclusions regarding the relationship between disruptions and citations still hold in scientific careers.

Figure 3: (a) The mean value of the correlation coefficient across different percentiles of disruptive papers in the careers of Computer Scientists. (b) The average proportion of citations received by papers at each percentile in the publication sequence of Computer Science researchers. Again, the correlation pattern observed in (a) can be explained by the proportion of citations received as depicted in (b). (c) and (d) are the equivalent versions of (a) and (b) in Physics. We can observe that in both disciplines, our paper-level results are consistent in the career-level analysis.

## 3 Discussion

We examined the relationship between scientific innovation and impact, measured in terms of disruption scores and citations, respectively. Our aim is to fully capture the relationship between such two dimensions through two main research questions, namely (1) are disruptive papers highly cited?; and (2) are high-impact papers disruptive? To answer the first question, we analyzed the correlation coefficients between disruption scores and citations across different percentiles of papers ranked by their disruption scores. In both Computer Science and Physics, we find that the correlation varies when we observe different samples of disruptive papers, and that the variations in the correlation coefficients can be explained by the proportion of the citations received by each percentile of papers. Our results reconcile the seemingly contradictory conclusions between Wei _et al._ [34] and Zeng _et al._ [38]. Specifically, papers with higher levels of disruption exhibit a positive correlation between disruption scores and citations. This pattern is consistent with the finding, e.g., that Nobel Prize-winning papers typically receive more citations and are characterized by higher disruption scores [34].
However, as we incorporate more percentiles of the disruption score distribution, the correlation coefficient gradually shifts from positive to negative values, and ends up with a negative correlation when we include most of the papers in our analysis, in line with the findings by Zeng _et al._ [38]. Concerning the second question, we find a negative correlation between disruption scores and citations in both disciplines, which suggests that the most-cited papers tend to be less disruptive. Moreover, we observe that such a negative correlation intensifies over time. Having determined the relationship between disruption scores and impact at the level of academic publications, we then expand our results to the careers of Computer Scientists and Physicists, concluding that the aforementioned results remain equivalent at the level of careers. Our results suggest that there are two strategies researchers might adopt to maximize their citation rates. The first strategy aims to publish truly disruptive papers. This strategy is beneficial to the development of science as a whole but requires researchers to accumulate research experience, go through periods of focus and low productivity [37], and undertake the risk of receiving only a limited number of citations. The second strategy is to produce papers that attract a large number of citations. Such a strategy favors the career progression of individual researchers. However, it may also incentivize researchers to focus excessively on popular research topics and incremental work, which can be detrimental to the overall diversity and innovation of scientific research [41]. A common criticism of the disruption metric is that the score of a paper can be distorted upward by receiving only a small number of citations [33], i.e., high scores do not necessarily indicate high research quality but might simply reward papers that are less appreciated in terms of citations. However, our results show that the top 1% of disruptive papers not only achieve high disruption score levels but also attract a high proportion of citations. These findings are consistent in both Computer Science and Physics, and apply to both paper-level and career-level analyses. Therefore, such a criticism does not apply to papers with very high disruption scores. The evaluation of research outputs is often based on bibliometric indicators of scientific impact [42]. Our study reveals that when research assessment relies excessively on citations, papers that stand out in this regime tend to be less disruptive. Similarly, if evaluations were to be purely based on disruption scores, some high-scoring papers may also exhibit limited scientific impact. A more effective approach would integrate both innovation and impact as complementary dimensions. Such an approach would enable us to identify papers that are both disruptive and impactful. Papers that excel under those criteria are typically recognized as work of very high quality [34]. Therefore, we advocate that scientific evaluation should be carried out through a comprehensive analysis of publications [27; 43].

Figure 4: Correlation coefficient across different percentiles of the most-cited papers in the careers of Computer Scientists (left) and Physicists (right). It can be seen that our paper-level results hold true in scientific careers.

## 4 Methods

### 4.1 Data

We collect publication and citation data pertinent for Computer Science from the AMiner citation network dataset (version 12).
The AMiner dataset extracts papers published between the 1960s and 2020 from DBLP, ACM, MAG, and other sources in Computer Science [44], and it records a total of 4,894,081 papers and 45,564,149 citation relationships. The AMiner dataset has been utilized in a variety of bibliometric studies [45; 46; 47; 37]. For publications in Physics, we retrieve data from the Web of Science (WOS) database. We extract the papers published by long-lived researchers who maintain an active publication record, along with the citation network related to their publications. In total, we collect 1,619,039 papers and 12,621,175 citation relationships from 1985 to 2020. It is important to note that the WOS database does not provide unique author identifiers. To link authors with their respective publications, we employ the method proposed by Caron _et al._ to disambiguate author names [48]. This method determines a similarity score between pairs of authors by considering various attributes, such as ORCID identifiers, names, affiliations, emails, coauthors, grant numbers, etc. If a pair of authors has a higher similarity score, they are more likely to be identified as the same person. The effectiveness of this method has been validated by a recent study, with precision and recall scores higher than 90% [49]. In our analysis, we only calculate disruption scores for papers published before 2016, thereby allowing papers in our pool to accumulate citations for at least 5 years. We set filtering criteria for researchers in line with [40], performing our career analysis on a total of 27,598 and 34,527 researchers in Computer Science and Physics, respectively.

### 4.2 The disruption score

We employ the disruption score to quantify the disruption level of each paper in our datasets. The fundamental idea of the disruption score is that a highly disruptive publication can overshadow preceding papers in the same field: subsequent papers are more likely to cite the disruptive work over the references listed in its bibliography. The disruption score is particularly useful in differentiating between disruptive and developmental pieces of work, and it has been validated using data from academic publications, patents, and software products [30; 31; 33]. To be more specific, we create a citation network centered on a focal paper, combined with its references (preceding papers) and subsequent papers. The subsequent papers can be further categorized into three groups: papers citing only the focal paper, those citing both the focal paper and the references, and those citing only the references of the focal paper. Let us assume that the numbers of subsequent papers in the three groups are \(n_{i}\), \(n_{j}\) and \(n_{k}\), respectively. Then the disruption score can be determined as

\[D=\frac{n_{i}-n_{j}}{n_{i}+n_{j}+n_{k}} \tag{1}\]

where \(n_{i}-n_{j}\) quantifies the extent to which the focal paper has eclipsed attention towards preceding papers, and \(n_{i}+n_{j}+n_{k}\) represents the total number of subsequent papers within the citation network.
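As an illustration of Eq. (1), a minimal sketch of the disruption score on a citation graph might look as follows; we use networkx, an edge \(u\to v\) means that \(u\) cites \(v\), and, for brevity, the restriction of subsequent papers to those published after the focal paper is omitted.

```python
import networkx as nx

def disruption_score(G: nx.DiGraph, focal):
    """CD index, Eq. (1): D = (n_i - n_j) / (n_i + n_j + n_k)."""
    refs = set(G.successors(focal))          # papers cited by the focal paper
    citers = set(G.predecessors(focal))      # papers citing the focal paper
    # Papers citing at least one reference of the focal paper (focal excluded).
    ref_citers = set().union(*(G.predecessors(r) for r in refs)) - {focal}
    n_i = len(citers - ref_citers)           # cite the focal paper only
    n_j = len(citers & ref_citers)           # cite both focal and references
    n_k = len(ref_citers - citers)           # cite only the references
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0
```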
2306.17534
Analytical prediction of the temperature and the lifetime of an evaporating spherical droplet
In this paper, we propose to predict analytically the temperature of an evaporating spherical droplet. To do so, we first review, from data in the literature, the effect of temperature on the physical parameters involved in cooling-induced evaporation, namely the saturating vapor pressure, the diffusion coefficient of vapor in air, the liquid density, the enthalpy of vaporization and the thermal conductivity of air. These data support a series of approximations that allows us to derive an implicit equation for the liquid temperature. We propose a quadratic approximation of the variation of the saturating vapor concentration with temperature to obtain an explicit prediction of the drop temperature. As a result, an analytical prediction of the droplet lifetime including the cooling effect is proposed.
Marie Corpart, Frédéric Restagno, François Boulogne
2023-06-30T10:45:28Z
http://arxiv.org/abs/2306.17534v1
# Analytical prediction of the temperature and the lifetime of an evaporating spherical droplet

###### Abstract

In this paper, we propose to predict analytically the temperature of an evaporating spherical droplet. To do so, we first review, from data in the literature, the effect of temperature on the physical parameters involved in cooling-induced evaporation, namely the saturating vapor pressure, the diffusion coefficient of vapor in air, the liquid density, the enthalpy of vaporization and the thermal conductivity of air. These data support a series of approximations that allows us to derive an implicit equation for the liquid temperature. We propose a quadratic approximation of the variation of the saturating vapor concentration with temperature to obtain an explicit prediction of the drop temperature. As a result, an analytical prediction of the droplet lifetime including the cooling effect is proposed.

## 1 Introduction

Sublimation of solid spheres has been investigated experimentally by Morse in 1910, revealing that the mass loss is not proportional to the surface area but to the radius [1]. Langmuir rationalized these findings by considering an adiabatic process where mass transfer is controlled by the diffusion of the vapor in the air [2]. The study of spherical droplet evaporation holds significant importance in diverse scientific and technical domains that involve aerosols. Aerosols are produced naturally by different phenomena such as sea spray, fog, clouds, and rain drops. Suspended droplets are also generated by animals and humans during breathing and speaking, which has recently gained attention for airborne contaminants [3, 4, 5]. Aerosols can also be produced artificially with spraying techniques for cooling, painting applications, or fuel dispersion in motor engines [6]. Therefore, understanding the mass transfer of airborne volatile drops is crucial. This phenomenon is complex due to the coupled heat and mass transfer associated with the phase change, while the transport can occur in a diffusive or a convective manner. As a result, the theoretical description of the system is more challenging than in the case of the Langmuir adiabatic model. Therefore, the physics community has proposed models to rationalize drop evaporation and predict drop lifetimes. These attempts often involve numerical resolutions of coupled equations that describe transport phenomena, including a broad range of physical effects such as convective or radiative transfer that may occur during the process [7, 8, 9, 10, 11, 12, 13, 6]. At the same time, some studies derived analytical predictions of evaporation kinetics after making several hypotheses [4, 14, 15]. These analytical predictions have the ability to suggest directly how the mechanisms at play enter the quantities of interest. In particular, the cooling effect due to the enthalpy of vaporization is known to have a significant effect on the drop lifetime [16, 17, 18, 19, 20, 9, 8, 21, 22]. The variation of temperature in the system leads to the variation of several physical quantities relevant to the process, such as the diffusion coefficient, the saturating vapor pressure, the enthalpy of vaporization, the density and the thermal conductivity. In this article, we focus our attention on a water spherical droplet evaporating at ambient temperature. We collect data available in the literature on this system to report the temperature variation of the relevant physical quantities, in order to justify upcoming approximations.
Next, we consider the evaporation of the drop in the diffusion-limited regime, and we propose to numerically solve the coupling between evaporation and cooling to obtain the interfacial temperature. We also use a quadratic approximation to describe the variation of the saturating vapor pressure, which enables us to compute analytically the drop temperature and thus its evaporation rate and lifetime. We compare our description with two other approximations used in the literature, and in particular a linear description of the saturating vapor pressure. We show that this linear approximation leads to significant differences with the numerical solution, while a quadratic approximation provides an excellent analytical description.

## 2 Temperature variation of some physical constants

In this section, we present data available in the literature on the temperature variation of the relevant physical constants. Whenever possible, we present experimental data and reference data extracted from a Handbook of chemistry and physics [23] and a meteorological table [24]. We consider the saturating vapor pressure, the diffusion coefficient of vapor in air, the liquid density, the enthalpy of vaporization and the thermal conductivity of air. We limit our study to an ambient range of temperature between 0 and 30 \({}^{\circ}\)C.

### 2.1 Temperature variation of water physical constants

#### 2.1.1 Saturating vapor pressure \(P_{\rm sat}\) and concentration \(c_{\rm sat}\)

The measurements of saturating vapor pressure are generally carried out in a closed chamber, containing only the compound to be analyzed, where the temperature is set and the equilibrium pressure is measured. The evolution of the saturating vapor pressure of water \(P_{\rm sat}\) as a function of the temperature \(T\) is presented in figure 1(a), where reference data extracted from a Handbook of chemistry and physics [23] are symbolized by the plus symbols. It shows a significant increase of the saturating vapor pressure with the temperature. The variation of the saturating vapor pressure \(P_{\rm sat}\) with temperature satisfies the Clausius-Clapeyron equation [15, 8]

\[\frac{{\rm d}P_{\rm sat}}{{\rm d}T}=\frac{\Delta H_{\rm vap}M_{\rm w}P_{\rm sat}}{{\cal R}T^{2}}, \tag{1}\]

where \(\Delta H_{\rm vap}\) is the enthalpy of vaporization of the considered material, here water, \(M_{\rm w}\) its molar mass and \({\cal R}\simeq 8.314\) J\(\cdot\)mol\({}^{-1}\cdot\)K\({}^{-1}\) the ideal gas constant. Assuming that the enthalpy of vaporization does not depend on temperature, the Clausius-Clapeyron equation becomes

\[\frac{P_{\rm sat}}{p^{\circ}}=\exp{\left(\frac{-\Delta H_{\rm vap}M_{\rm w}}{{\cal R}}\left(\frac{1}{T}-\frac{1}{T^{\circ}}\right)\right)}, \tag{2}\]

where \(p^{\circ}\) and \(T^{\circ}\) are the pressure and temperature of a reference boiling point. A more robust equation, known as the Antoine equation, can be obtained with an additional fitting parameter [31]

\[P_{\rm sat}(T)=p^{\circ}\,10^{A-B/(C+T)}, \tag{3}\]

where \(p^{\circ}=10^{5}\) Pa and \(A\), \(B\), \(C\) are constants. For water at \(T\in[0,30]\) \({}^{\circ}\)C, \(A\), \(B\), \(C\) are obtained by fitting the data extracted from [23], with \(A=5.341\pm 0.003\), \(B=1807.5\pm 1.6\) K, and \(C=-33.9\pm 0.1\) K. Figure 1(a) shows a nice agreement between the data and the model. The typical error between the data and the fit with these parameters is about 0.1 Pa. Alternative expressions are also available in the literature, such as Buck's relation (see for instance [9]), without any noticeable improvements.
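As a quick numerical check, the Antoine fit of Eq. (3) with the constants quoted above can be evaluated as follows; this is a minimal sketch, and the comparison value at 20 \({}^{\circ}\)C is the standard tabulated one.

```python
A, B, C = 5.341, 1807.5, -33.9   # fitted constants; B and C in kelvin
P0 = 1e5                          # reference pressure p° in Pa

def P_sat(T):
    """Antoine fit, Eq. (3), for water; T in kelvin, valid for 0-30 °C."""
    return P0 * 10 ** (A - B / (C + T))

print(P_sat(273.15 + 20))  # ~2.3e3 Pa, close to the tabulated 2339 Pa at 20 °C
```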
The vapor saturating concentration \(c_{\rm sat}\), expressed in kg.m\({}^{-3}\), can be obtained from the ideal gas law \[c_{\rm sat}(T)=\frac{P_{\rm sat}(T)M_{\rm w}}{{\cal R}T}, \tag{4}\] where \(M_{\rm w}=18.02\cdot 10^{-3}\) kg/mol.

#### 2.1.2 Diffusion coefficient \({\cal D}\) of water vapor in air

To calculate theoretically the diffusion coefficient of a binary system, a molecular theory of diffusion was developed based on collisions of hard spheres in a gas. The resulting Boltzmann equation is solved with the Chapman-Enskog method, which gives at first order for a molecule \(A\) diffusing in \(B\)[32] \[{\cal D}(A,B)=\frac{8.258\cdot 10^{-3}}{\sqrt{2}}\,\frac{T^{3/2}\sqrt{\frac{1}{ M_{\rm A}}+\frac{1}{M_{\rm B}}}}{P_{\rm atm}\cdot\overline{\Omega}_{A,B}}, \tag{5}\] where \(M_{\rm A}\) and \(M_{\rm B}\) are the molar masses, \(T\) is the temperature, \(P_{\rm atm}\) is the atmospheric pressure and \(\overline{\Omega}_{A,B}\) is the diffusion collision integral for hard spheres. However, this equation fails to capture the variation of \({\cal D}(A,B)\) with the temperature. Indeed, the 3/2 power-law dependence is obtained with an ideal hard-sphere model, but experimentally the measured exponent lies between 1.5 and 2 [33]. To obtain a more accurate description, additional details on the inter-molecular interactions are required, which depend on the temperature range considered and greatly increase the complexity of the calculation. That is why, most of the time, the estimation of the diffusion coefficient relies on semi-empirical correlations. For example, the diffusion coefficient of non-polar gases may be estimated by the Fuller, Schettler, and Giddings method [33, 34], which gives \[{\cal D}(A,B)=\frac{T^{1.75}\sqrt{\frac{1}{M_{\rm A}}+\frac{1}{M_{\rm B}}}}{P_ {\rm atm}\left(V_{A}^{1/3}+V_{B}^{1/3}\right)^{2}}\cdot 10^{-6}. \tag{6}\] The diffusion coefficient \({\cal D}\) is expressed in m\({}^{2}\)/s, the molar mass of compound \(i\) is in g/mol and \(V_{i}\) is the diffusion volume of the molecule \(i\), where \(V_{i}=\sum_{j}n_{j}V_{j}\) with \(j\) a given atom composing the molecule. The atomic parameters were determined by regression analysis of experimental data and are available in [34]. For water diffusing in air, the diffusion coefficient is written \({\cal D}\) and we have \(V_{\rm air}=19.7\cdot 10^{-6}\) m\({}^{3}\)/mol, \(V_{\rm w}=13.1\cdot 10^{-6}\) m\({}^{3}\)/mol, \(M_{\rm w}=18.02\) g/mol, and \(M_{\rm air}=28.96\) g/mol [34, 35]. To test the validity of Fuller's method, we gathered measurements from different studies [25, 26, 27, 28, 29] together with reference values [24, 23] in figure 1(b) and compared them to equation (6). There are many different ways to measure a diffusion coefficient, with varying accuracy. However, to measure a diffusion coefficient in air at room temperature, the most used method is the evaporation tube method [26, 27, 28, 29]. Water partially fills a capillary and evaporates into the stagnant gas filling the rest of the tube. The evaporation rate is measured from the variation of the liquid height or of the system weight. Under the assumptions of quasi-steady evaporation and of vapor and air behaving as ideal gases, the diffusion coefficient is obtained from the evaporation rate at the temperature of the experiment. To get precise measurements, the liquid must be carefully kept at constant temperature at all times to avoid evaporative cooling of the liquid.
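As an aside, Fuller's correlation (6) can be evaluated directly with the numerical values quoted above. The sketch below is ours; the unit conventions (pressure in Pa, diffusion volumes in m\({}^{3}\)/mol, result in m\({}^{2}\)/s) are inferred from the quoted values.

```python
def diff_coeff(T, p_atm=101325.0):
    """Fuller's estimate (Eq. (6)) of the diffusion coefficient of water vapor
    in air, in m^2/s, with T in Kelvin and p_atm in Pa."""
    m_w, m_air = 18.02, 28.96      # molar masses in g/mol
    v_w, v_air = 13.1e-6, 19.7e-6  # diffusion volumes in m^3/mol
    return (T**1.75 * (1 / m_w + 1 / m_air) ** 0.5
            / (p_atm * (v_w ** (1 / 3) + v_air ** (1 / 3)) ** 2)) * 1e-6

# At 20 °C this gives roughly 2.4e-5 m^2/s, within the dispersion of figure 1(b).
print(f"D(293.15 K) = {diff_coeff(293.15):.2e} m^2/s")
```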
Indeed, small errors on the estimation of the temperature of the interface, and thus on the values of other physico-chemical parameters (such as \(P_{\rm sat}\)), can lead to significant errors on the estimated diffusion coefficient. Finally, surface contamination or convection effects can also lead to inaccurate estimates of the diffusion coefficient. This explains the rather large dispersion of the experimental data in figure 1(b). According to [32], at best, the reliability of the measurements by the evaporative tube method is several percent (\(\approx 10\) %). Equation (6) thus provides a correct estimation of the diffusion coefficient value and its temperature variation, even if it underestimates most of the experimental results plotted in figure 1(b) by about 5 %. Other empirical models [9, 20, 36, 15] exist to calculate the diffusion coefficient of a compound \(A\) in \(B\) at a temperature \(T\), but they require the value of \(\mathcal{D}(A,B)\) at a given reference temperature, and their use does not significantly improve the estimation of the diffusion coefficient. Moreover, Fuller's method has the advantage of being easily applicable to the study of other chemical compounds for which experimental measurements of the diffusion coefficient are limited, unreliable or non-existent.

Figure 1: Temperature effect on some physical constants of water. (a) Saturating vapor pressure \(P_{\rm sat}\). Plus symbols are reference data extracted from a Handbook of chemistry and physics [23], solid blue curve is equation (3). (b) Diffusion coefficient \(\mathcal{D}\) of water vapor in air. Experimental data are extracted from Brown and Escombe [25] (\(\star\)), Gilliland [26] (\(\diamond\)), Brookfield _et al._[27] (\(\square\)), Kimpton and Wall [28] (\(\circ\)), Lee and Wilke [29] (\(\triangle\)). Reference data are extracted from List [24] (\(\triangledown\)) and Lide [23] (\(+\)). The deep blue curve is Fuller’s equation (Eq. (6)). (c) Enthalpy of vaporization of water \(\Delta H_{\rm vap}\). Experimental data are extracted from Osborne [30] (\(\bigcirc\)) and reference data are extracted from List [24] (\(\triangledown\)) and Lide [23] (\(+\)), the deep blue curve is equation (7). (d) Water density \(\rho\). The reference data are extracted from Lide [23] (\(+\)). The deep blue curve is a quadratic fit of the data. Fitting parameters are given in equation (8).

#### 2.1.3 Enthalpy of vaporization

Experimental data obtained from calorimetry measurements [30] as well as reference data for the enthalpy of vaporization of water \(\Delta H_{\rm vap}\) are plotted in figure 1(c) [24, 23]. To predict the evolution of the enthalpy of vaporization of water with temperature, we choose to use the empirical equation given by Fleagle [37] and Andreas [9]: \[\Delta H_{\rm vap}=-2.274\cdot 10^{3}\,T+3.121\cdot 10^{6}, \tag{7}\] where \(\Delta H_{\rm vap}\) is expressed in J/kg for \(T\) in Kelvin [37, 9]. Equation (7) is plotted as a solid blue curve in figure 1(c). There is a good agreement between both experimental and reference data and equation (7), the difference between equation (7) and the data of the literature being less than 1 %.

#### 2.1.4 Liquid density

In figure 1(d), we plot reference values for water density extracted from [23] as a function of temperature.
We fit the experimental data with a quadratic equation for \(T\in[10,30]\)\({}^{\circ}\)C and we extend the fit to the entire temperature range, which gives \[\rho=-5.3\cdot 10^{-3}\,T^{2}+2.9\,T+6.0\cdot 10^{2}, \tag{8}\] where \(\rho\) is in kg.m\({}^{-3}\) and \(T\) in Kelvin. Equation (8) gives a good estimation of the water density with an error of the order of \(5\cdot 10^{-2}\) % for \(T\in[0;10]\)\({}^{\circ}\)C and \(5\cdot 10^{-3}\) % for \(T\in[10;30]\)\({}^{\circ}\)C.

### Temperature variation of the thermal conductivity of air

In figure 2, we plot experimental data of the thermal conductivity of dry air \(\lambda_{\rm air}\) measured with the hot wire method [38, 39, 40]. This method consists of recording the temperature variation of a heated wire placed in the fluid of interest to determine its thermal conductivity. We also plot in figure 2 the reference data for \(\lambda_{\rm air}\) extracted from [24, 23]. The equation describing the evolution of the thermal conductivity of dry air with temperature, given by Andreas [9], \[\lambda_{\rm air}=-3.47\cdot 10^{-8}\,T^{2}+9.88\cdot 10^{-5}\,T-2.75\cdot 10^{-4}, \tag{9}\] is also represented in figure 2. This equation describes, with a good accuracy, the data from the literature, with an error of less than 1 % between equation (9) and the reference data [24, 23] and an error of 1 to 2 % with the experimental data [38, 39, 40]. There are other models in the literature to predict the evolution of the thermal conductivity of moist air as a function of temperature and relative humidity [36, 20, 41], but they are more tedious to compute and do not lead to a significant improvement of the description of the data. At 20 \({}^{\circ}\)C, the relative difference between the thermal conductivity of dry and saturated air is about 2 %, so we consider that \(\lambda_{\rm air}\) is independent of \(\mathcal{R}_{\rm H}\) and is equal to the thermal conductivity of dry air. Moreover, the expressions given in [36, 20, 41] for the thermal conductivity of dry and moist air slightly underestimate (error of 1 %) the reference data [24, 23].

Figure 2: Temperature effect on thermal conductivity of air. Experimental data obtained by hot wire measurements extracted from Taylor and Johnston [38] (\(\star\)), Kannuluik and Carman [39] (\(\circ\)), Rastorguev and Geller [40] (\(\circ\)) and reference data extracted from List [24] (\(\triangledown\)) and Lide [23] (\(+\)). Deep blue curve is equation (9).

### Summary

The evolution of the physical parameters of the system with temperature is evaluated with equations (3) and (4) for \(c_{\rm sat}(T)\), (6) for \(\mathcal{D}\), (7) for \(\Delta H_{\rm vap}\), (8) for \(\rho\) and (9) for \(\lambda_{\rm air}\). These equations are in good agreement with data from the literature. From this, we can evaluate the variation of all the important parameters when the temperature increases from 0 to 30 \({}^{\circ}\)C. This analysis shows that, when \(T\) varies from 0 to 30 \({}^{\circ}\)C, \(c_{\rm sat}\) increases by 250 %, \(\mathcal{D}\) increases by 20 %, \(\Delta H_{\rm vap}\) decreases by 3 %, \(\rho\) decreases by 0.5 %, and \(\lambda_{\rm air}\) increases by 8 %.

## 3 Model for thermal effect on drop lifetime

### Equation of mass transfer

We consider the mass transfer of the water vapor in the atmosphere surrounding the spherical drop of radius \(R(t)\) and we assume that this process is limited by diffusion, which is valid in a quiescent atmosphere. This is true for droplet radii significantly larger than the mean-free path of the vapor molecules, _i.e._\(R\) larger than a few micrometers [15]. Over a timescale \(R_{0}^{2}/\mathcal{D}\), where \(R_{0}\) is the initial radius, the transfer can be considered to be in a stationary regime. In practice, we can check that this timescale is short compared to the total evaporating time, such that the contribution of the starting non-stationary regime is negligible.
Thus, the concentration field \(c\) is the solution of the Laplace equation \(\triangle c=0\), which reads in spherical coordinates \[\frac{1}{r^{2}}\frac{\mathrm{d}}{\mathrm{d}r}\left(r^{2}\frac{\mathrm{d}c}{ \mathrm{d}r}\right)=0. \tag{10}\] This equation is supplemented by two boundary conditions on the concentration, respectively at the liquid-vapor interface and far from the interface, \[c(r=R) =c_{\mathrm{sat}}(T_{\mathrm{i}}), \tag{11}\] \[c(r\rightarrow\infty) =c_{\infty}, \tag{12}\] where \(T_{\mathrm{i}}\) is the temperature of the interface. The relative humidity is defined as \(\mathcal{R}_{\mathrm{H}}=p_{\infty}/P_{\mathrm{sat}}(T_{\infty})\approx c_{ \infty}/c_{\mathrm{sat}}(T_{\infty})\) in the ideal gas approximation, where \(T_{\infty}\) is the air temperature far from the droplet. By integrating (10), the local evaporative flux given by Fick's law, \(j=-\mathcal{D}\left.\frac{\mathrm{d}c}{\mathrm{d}r}\right|_{r=R}\), reads \[j=\mathcal{D}(T_{\mathrm{i}})\frac{\Delta c^{\star}}{R}, \tag{13}\] with \(\Delta c^{\star}=c_{\mathrm{sat}}(T_{\mathrm{i}})-c_{\infty}\). The integration of the local flux over the evaporating surface gives \(Q_{\mathrm{ev}}=\int j\,\mathrm{d}S=4\pi R\mathcal{D}(T_{\mathrm{i}})\Delta c ^{\star}\), which can be rewritten \[Q_{\mathrm{ev}}=4\pi R\mathcal{D}(T_{\mathrm{i}})c_{\mathrm{sat}}(T_{\infty })\left(\frac{c_{\mathrm{sat}}(T_{\mathrm{i}})}{c_{\mathrm{sat}}(T_{\infty} )}-\mathcal{R}_{\mathrm{H}}\right). \tag{14}\] To compute the evaporation rate \(Q_{\mathrm{ev}}\), the temperature of the liquid must be determined. To do so, we describe in the next paragraph the heat transfer between the atmosphere and the drop.

### Equation of heat transfer

As for the mass transfer, we consider a diffusion-limited process in a stationary regime, for which the air temperature field is a solution of the Laplace equation \(\triangle T=0\) with the boundary conditions \(T(r=R)=T_{\mathrm{i}}\) and \(T(r\rightarrow\infty)=T_{\infty}\). The steady-state assumption also implies that the temperature in the drop has reached its equilibrium value \(T_{\mathrm{i}}\) and is uniform in the liquid. This is validated if the timescale over which heat diffuses through the liquid, \(R_{0}^{2}/\kappa_{\ell}\), with \(\kappa_{\ell}\) the thermal diffusivity of the liquid, is short compared to the evaporative time [8]. In practice, this is valid for water droplets evaporating under ambient conditions [19, 9, 8, 3, 14]. The integration of the Laplace equation leads to a total heat flux \[Q_{\mathrm{h}}=-4\pi R\lambda_{\mathrm{air}}(\overline{T})\Delta T^{\star}, \tag{15}\] where \(\Delta T^{\star}=T_{\infty}-T_{\mathrm{i}}\) and \(\overline{T}\) is the average air temperature \(\overline{T}=(T_{\infty}+T_{\mathrm{i}})/2\)[19]. We assume that the air temperature can be approximated by the effective temperature \(\overline{T}\), as done in various studies [15, 6].
The heat and mass fluxes are coupled through the enthalpy of vaporization \(\Delta H_{\mathrm{vap}}(T_{\mathrm{i}})\), \(\Delta H_{\mathrm{vap}}\,Q_{\mathrm{ev}}=-Q_{\mathrm{h}}\), which gives \[T_{\infty}-T_{\mathrm{i}}=\frac{\Delta H_{\mathrm{vap}}(T_{\mathrm{i}}) \mathcal{D}\left(T_{\mathrm{i}}\right)c_{\mathrm{sat}}(T_{\infty})}{\lambda _{\mathrm{air}}\left(\overline{T}\right)}\left(\frac{c_{\mathrm{sat}}(T_{ \mathrm{i}})}{c_{\mathrm{sat}}(T_{\infty})}-\mathcal{R}_{\mathrm{H}}\right). \tag{16}\] By finding the root of this equation, we can obtain the interface temperature \(T_{\mathrm{i}}\). We remark that this temperature is independent of the droplet radius.

## 4 Discussion

We aim to provide an analytical expression of the interfacial temperature \(T_{\mathrm{i}}\). First, we present the results obtained with a numerical approach without further approximation. Then, we recall approximations found in the literature, and we present a solution based on the quadratic approximation of \(c_{\mathrm{sat}}(T)\). All these solutions are compared to the numerical prediction, and we also provide expressions for the drop lifetime.

### Numerical approach

In this section, we consider the resolution of equation (16) to obtain the temperature of the interface \(T_{\mathrm{i}}\) for given atmospheric conditions, namely the temperature \(T_{\infty}\) and the relative humidity \(\mathcal{R}_{\mathrm{H}}\). From the interfacial temperature, the concentration ratio \(c_{\mathrm{sat}}(T_{\mathrm{i}})/c_{\mathrm{sat}}(T_{\infty})\) can be computed, and thus the drop evaporation rate and lifetime. The typical evolution of the concentration ratio with the temperature is plotted in figure 3(a) for \(T_{\infty}=20\)\({}^{\circ}\)C, where the reference data extracted from [23] are represented by the \(+\) symbols and the Antoine equation (3) is plotted with a deep blue line. A numerical approach can be employed to determine the root \(T_{\mathrm{i}}\) of equation (16) by using Newton's method from scipy [42], together with equations (3)-(9) to get the temperature evolution of the physical parameters. The temperature of the liquid obtained by the full numerical resolution is plotted as a solid gray line in the inset of figure 3(b). Nevertheless, the complexity of equation (16) prevents analytical solutions. Thus, further approximations must be made to solve equation (16). Due to the weak variations of \(\Delta H_{\mathrm{vap}},\mathcal{D}\) and \(\lambda_{\mathrm{air}}\) with temperature, we assume that these parameters are independent of the temperature, and we choose to evaluate them at the ambient temperature \(T_{\infty}\). In this framework, equation (16) becomes \[T_{\infty}-T_{\mathrm{i}}=\chi\left(\frac{c_{\mathrm{sat}}(T_{\mathrm{i}})}{c _{\mathrm{sat}}(T_{\infty})}-\mathcal{R}_{\mathrm{H}}\right), \tag{17}\] where \(\chi=\Delta H_{\mathrm{vap}}\mathcal{D}c_{\mathrm{sat}}(T_{\infty})/\lambda _{\mathrm{air}}\). To test the validity of this hypothesis, we solve numerically (with Newton's method from scipy) equation (17), where \(c_{\rm sat}(T)\) is given by the Antoine equation (Eq. (3)) under the ideal gas approximation (Eq. (4)). The temperature of the liquid obtained is plotted as a dashed-dotted black line in figure 3(b). Results are in excellent agreement with the full numerical resolution (see the inset of Fig. 3(b)), the maximum error being about \(0.4\)\({}^{\circ}\)C for \({\cal R}_{\rm H}=0\).
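The numerical resolution just described can be reproduced with a short script. The sketch below is our own, reusing the `p_sat` and `diff_coeff` functions from the earlier fragments, and solves the simplified balance (17) with Newton's method from scipy, with \(\Delta H_{\rm vap}\), \({\cal D}\) and \(\lambda_{\rm air}\) evaluated at \(T_{\infty}\).

```python
from scipy.optimize import newton

R_GAS, M_W = 8.314, 18.02e-3  # gas constant (J/mol/K) and molar mass (kg/mol)

def c_sat(T):
    """Saturating vapor concentration (Eq. (4)) in kg/m^3, T in Kelvin."""
    return p_sat(T) * M_W / (R_GAS * T)

def interface_temperature(T_inf, RH):
    """Root of Eq. (17), with Delta_H_vap, D and lambda_air taken at T_inf."""
    dH = -2.274e3 * T_inf + 3.121e6                         # Eq. (7), J/kg
    lam = -3.47e-8 * T_inf**2 + 9.88e-5 * T_inf - 2.75e-4   # Eq. (9), W/m/K
    chi = dH * diff_coeff(T_inf) * c_sat(T_inf) / lam
    f = lambda Ti: T_inf - Ti - chi * (c_sat(Ti) / c_sat(T_inf) - RH)
    return newton(f, T_inf - 5.0)  # initial guess: 5 K of cooling

# Roughly 13 °C at T_inf = 20 °C and 50 % relative humidity, consistent
# with the trend of figure 3(b).
print(f"T_i = {interface_temperature(293.15, 0.5) - 273.15:.1f} °C")
```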
In the rest of the paper, we will thus work under the assumption that \(\Delta H_{\rm vap},{\cal D}\) and \(\lambda_{\rm air}\) are independent of the temperature, their values being taken at \(T_{\infty}\). Applying this approximation to equation (14), we get the evaporation rate of the drop \[Q_{\rm ev}=Q_{0}\left(\frac{c_{\rm sat}(T_{\rm i})}{c_{\rm sat}(T_{\infty})}-{ \cal R}_{\rm H}\right) \tag{18}\] with \[Q_{0}=4\pi R{\cal D}c_{\rm sat}(T_{\infty}), \tag{19}\] the evaporation rate of a spherical drop without evaporative cooling (\(T_{\rm i}=T_{\infty}\)) and placed in a dry atmosphere (\({\cal R}_{\rm H}=0\)). The droplet lifetime is obtained from the conservation of the drop volume \(\Omega=\frac{4}{3}\pi R^{3}\), \[Q_{\rm ev}=-\rho\frac{{\rm d}\Omega}{{\rm d}t}, \tag{20}\] where \(\rho\) is the liquid density at the temperature of the liquid \(T_{\rm i}\). We assume that \(\rho(T_{\rm i})=\rho(T_{\infty})\), which is fairly reasonable, as \(\rho\) decreases by only 0.5 % when the air temperature varies from 0 to 30\({}^{\circ}\)C. After integration from \(R(0)=R_{0}\) to \(R(\tau)=0\), we have the dynamics of the droplet radius \(R(t)=R_{0}\sqrt{1-t/\tau}\), where the droplet lifetime is \[\tau=\tau_{0}\left(\frac{c_{\rm sat}(T_{\rm i})}{c_{\rm sat}(T_{\infty})}-{ \cal R}_{\rm H}\right)^{-1}. \tag{21}\] We denote by \(\tau_{0}=\rho R_{0}^{2}/[2{\cal D}c_{\rm sat}(T_{\infty})]\) the lifetime of a spherical drop evaporating in a dry atmosphere (\({\cal R}_{\rm H}=0\)) without cooling effect.

Figure 3: (a) Ratio \(c_{\rm sat}(T)/c_{\rm sat}(T_{\infty})\) as a function of \(T_{\infty}-T\) for \(T_{\infty}=20\)\({}^{\circ}\)C. Black crosses are data from literature presented in Fig. 1 combined with equation (4). The deep blue curve is obtained from the Antoine equation (Eq. (3)). The green curve is computed from the linear approximation given by eq. (24). The light blue curve is the quadratic approximation defined by equation (31) with the fitting coefficients \(\alpha_{1}=-5.5\cdot 10^{-2}\) K\({}^{-1}\) and \(\alpha_{2}=9.8\cdot 10^{-4}\) K\({}^{-2}\) at \(T_{\infty}=20\)\({}^{\circ}\)C. (b–d) Results obtained for spherical droplets evaporating at \(T_{\infty}=20\)\({}^{\circ}\)C as a function of the relative humidity. (b) Interfacial temperature \(T_{\rm i}\) obtained with the numerical resolution of equation (17) (dashed-dotted black line), the linear approximation (Eq. (26)) (green), and the quadratic approximation (Eq. (33)) (blue). The inset shows the interfacial temperature obtained by the numerical resolution of equation (16) (gray) and of equation (17) (black) as a function of the relative humidity \({\cal R}_{\rm H}\). Dimensionless (c) evaporative flux \(Q_{\rm ev}/Q_{0}\) and (d) lifetime \(\tau/\tau_{0}\) as a function of \(1-{\cal R}_{\rm H}\). The numerical resolution is represented in black. Results obtained with the linear approximation are plotted in green lines and results obtained with the quadratic approximation are represented in light blue lines. Solid lines are equations (c) (27) (green) and (34) (blue) for the evaporative flux and (d) (28) (green) and (35) (blue) for the drop lifetime. In dashed lines we check the internal coherence of the two approximations by plotting equations (c) (29) and (d) (30) in which \(T_{\rm i}\) is given by either the linear approximation (Eq. (26)) (dashed green lines) or the quadratic approximation (Eq. (33)) (dashed blue lines).
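A direct consequence is a numerical estimate of the lifetime from equations (8) and (18)-(21). The sketch below builds on the functions defined above; the droplet radius in the example is an arbitrary illustrative value.

```python
def lifetime(T_inf, RH, R0):
    """Droplet lifetime tau (Eq. (21)) in seconds, with rho from Eq. (8)."""
    Ti = interface_temperature(T_inf, RH)
    rho = -5.3e-3 * T_inf**2 + 2.9 * T_inf + 6.0e2   # Eq. (8), kg/m^3
    tau0 = rho * R0**2 / (2 * diff_coeff(T_inf) * c_sat(T_inf))
    return tau0 / (c_sat(Ti) / c_sat(T_inf) - RH)

# Example: a 50 µm radius droplet at 20 °C and 50 % relative humidity
# evaporates in a few tens of seconds.
print(f"tau = {lifetime(293.15, 0.5, 50e-6):.1f} s")
```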
Results for the evaporation rate and drop lifetime calculated with \(T_{\rm i}\) obtained by numerical resolution of equation (17) are plotted as dashed-dotted black lines in figures 3(c) and 3(d), respectively, as a function of the relative humidity. The complexity of the Antoine equation still prevents us from solving equation (17) analytically. Therefore, we need to establish further approximations on \(c_{\rm sat}(T)\) to pursue analytical calculations. Next, we consider two approximations from the literature, namely a Taylor expansion of the Clausius-Clapeyron equation in the limit \(\Delta T\to 0\)[15] and a linearized approximation of the evolution of the saturating vapor concentration with the temperature [3, 14]. We also propose to use a quadratic approximation of the variation of \(c_{\rm sat}(T)\) and we discuss the level of accuracy of each approach.

### Taylor expansion of the Clausius-Clapeyron equation

In the case \(T_{\rm i}\approx T_{\infty}\) and \((T_{\infty}-T_{\rm i})/T_{\infty}\ll 1\), Fuchs [15] provides an analytical solution of equation (17) by performing a Taylor expansion of the Clausius-Clapeyron equation (2), which gives at first order \[\frac{P_{\rm sat}(T_{\rm i})}{P_{\rm sat}(T_{\infty})}\approx\frac{c_{\rm sat }(T_{\rm i})}{c_{\rm sat}(T_{\infty})}\approx 1-\frac{M_{\rm w}\cdot\Delta H_{ \rm vap}}{{\cal R}}\frac{T_{\infty}-T_{\rm i}}{T_{\infty}^{2}}. \tag{22}\] In figure 3(a), we plot this equation as a yellow line for \(T_{\infty}=20\)\({}^{\circ}\)C. The Taylor expansion of the Clausius-Clapeyron equation provides an excellent approximation at \(T\approx T_{\infty}\) but leads to errors of the order of 30 % at \(T=10\)\({}^{\circ}\)C and 200 % at \(T=0\)\({}^{\circ}\)C in the estimation of \(c_{\rm sat}(T)/c_{\rm sat}(T_{\infty})\). The substitution of equation (22) into equation (17) gives an explicit expression for the temperature of the liquid \[T_{\rm i}=T_{\infty}-\frac{\chi}{1+\chi\frac{\Delta H_{\rm vap}M_{\rm w}}{{ \cal R}T_{\infty}^{2}}}\left(1-{\cal R}_{\rm H}\right). \tag{23}\] This equation is represented as a yellow line in figure 3(b). The temperature drop in a water droplet is of the order of 10 \({}^{\circ}\)C, which invalidates the assumptions used to perform the Taylor expansion of the Clausius-Clapeyron equation and therefore leads to an incorrect estimation of the temperature in the drop. Equation (23) underestimates the interface temperature by about 4 \({}^{\circ}\)C for \({\cal R}_{\rm H}=0\). Thus, this method only provides an analytical prediction of the cooling effect in the vicinity of \(\Delta T\to 0\). Nevertheless, for water, the cooling effect can be significant, such that we seek a more robust prediction.

### Linear approximation of \(c_{\rm sat}(T)\)

In [3, 14], Netz and Eaton suggest using a linear approximation of \(c_{\rm sat}(T)\) to perform an analytical resolution of equation (17). We reproduce here this valuable approach and comment on it afterwards. The linearized concentration reads \[c_{\rm sat}(T)=c_{\rm sat}(T_{\infty})\left[1-\Gamma(T_{\infty}-T)\right], \tag{24}\] where \[\Gamma=\frac{1}{(T_{\infty}-T_{\rm m})}\frac{c_{\rm sat}(T_{\infty})-c_{\rm sat }(T_{\rm m})}{c_{\rm sat}(T_{\infty})}, \tag{25}\] with \(T_{\rm m}\) the melting temperature of the liquid. In figure 3(a), we plot this equation as a green line for \(T_{\infty}=20\)\({}^{\circ}\)C.
The linear approximation gives a good description of the saturation concentration close to \(T_{\rm m}\) and \(T_{\infty}\) but leads to errors higher than 10 % for \(T\in[2;15.5]\)\({}^{\circ}\)C in the estimation of \(c_{\rm sat}(T)/c_{\rm sat}(T_{\infty})\), with a maximum error of the order of 20 % for \(T\approx 8^{\circ}\)C. As shown in figure 3(b), the interfacial temperature of the droplet is in the range \([4.5;15]\)\({}^{\circ}\)C for \({\cal R}_{\rm H}\in[0;60]\) %; the overestimation of \(c_{\rm sat}(T_{\rm i})\) will therefore lead to errors when calculating the evaporative flux and the drop lifetime in this humidity range. Combining equations (17) and (24) leads to an analytical prediction of the temperature difference \[T_{\infty}-T_{\rm i}=\frac{\chi}{1+\chi\Gamma}\ (1-{\cal R}_{\rm H})\,. \tag{26}\] In figure 3(b), we plot the interfacial temperature as a function of the relative humidity and compare it to the numerical solution at \(T_{\infty}=20\)\({}^{\circ}\)C. We observe that, in the range \([0,80]\) % of relative humidity, the linear approximation overestimates the cooling effect by about 1.5 \({}^{\circ}\)C. Next, to compute the concentration ratio \(c_{\rm sat}(T_{\rm i})/c_{\rm sat}(T_{\infty})\), and therefore the evaporative flux \(Q_{\rm ev}\) and the drop lifetime \(\tau\), two approaches can be considered. The first approach consists of keeping the linear approximation (Eq. (24)) and using it in equation (18), which leads to the evaporation rate \[Q_{\rm ev}=Q_{0}\frac{1-{\cal R}_{\rm H}}{1+\chi\Gamma}, \tag{27}\] where \(Q_{0}\) is given by equation (19). In figure 3(c), the dimensionless evaporative flux \(Q_{\rm ev}/Q_{0}\) is plotted against the relative humidity, and we again compare the linear approximation given by equation (27) (solid green line) to the numerical solution (black line). By substituting equation (27) in equation (20), we obtain the lifetime of the drop \[\tau=\tau_{0}\frac{1+\chi\Gamma}{1-\mathcal{R}_{\rm H}}. \tag{28}\] Equation (28) is plotted in figure 3(d) as a solid green line. Figures 3(c) and (d) (solid green lines) show that the evaporation rate is overestimated by about 20 %, leading to an underestimation of the drop lifetime by about 15 % over the range \(\mathcal{R}_{\rm H}\in[0,80]\) %. The second approach is to use the more accurate Antoine equation (Eq. (3)) to compute the concentration ratio after using equation (24) to calculate \(T_{\rm i}\), which comes at the expense of internal coherence. The evaporative flux and the drop lifetime read, respectively, \[Q_{\rm ev} =Q_{0}\left(\frac{T_{\infty}}{T_{\rm i}}10^{B\left([C+T_{\infty} ]^{-1}-[C+T_{\rm i}]^{-1}\right)}-\mathcal{R}_{\rm H}\right), \tag{29}\] \[\tau =\tau_{0}\left(\frac{T_{\infty}}{T_{\rm i}}10^{B\left([C+T_{ \infty}]^{-1}-[C+T_{\rm i}]^{-1}\right)}-\mathcal{R}_{\rm H}\right)^{-1}, \tag{30}\] with \(T_{\rm i}\) given by equation (26). The results are plotted as dashed green lines in figure 3(c) for the evaporation rate and in figure 3(d) for the drop lifetime. Due to the overestimation of the cooling effect observed in figure 3(b) and the accurate description of the saturated pressure in the second step of the calculation, the evaporative flux is now underestimated by 30 % and the lifetime is overestimated by about 50 %, and up to 170 % at high relative humidity (\(\mathcal{R}_{\rm H}=80\) %). As a result, we conclude that the linear approximation used with the two previous approaches predicts the correct trends for the evaporation rate and drop lifetime but leads to significantly inaccurate values.
The two approaches are also inconsistent in their predictions. Naturally, the difference in these physical quantities depends on the atmospheric temperature and relative humidity. We limited ourselves to a common situation of \(T_{\infty}=20\)\({}^{\circ}\)C. Indeed, for conditions where the interfacial temperature tends either to the atmospheric temperature, _i.e._ at high relative humidity values, or to the melting temperature \(T_{\rm m}\), the linear approximation performs better. In the next paragraph, we propose to refine the model of the saturated pressure while still allowing analytical calculations.

### Quadratic approximation of \(c_{\rm sat}(T)\)

We refine the model by introducing a quadratic approximation of \(c_{\rm sat}(T)\), defined as \[c_{\rm sat}(T)=c_{\rm sat}(T_{\infty})\left(1+\alpha_{1}(T_{\infty}-T)+\alpha _{2}(T_{\infty}-T)^{2}\right), \tag{31}\] where \(\alpha_{1}\) and \(\alpha_{2}\) are obtained by fitting the data from the literature, as shown by the light blue curve in figure 3(a). For \(T_{\infty}=20\)\({}^{\circ}\)C, we have \(\alpha_{1}=-5.5\cdot 10^{-2}\) K\({}^{-1}\) and \(\alpha_{2}=9.8\cdot 10^{-4}\) K\({}^{-2}\). The additional order provides a better description of the saturation concentration, as shown in figure 3(a), and equation (31) is an excellent approximation of the Antoine equation. Combining equations (17) and (31), we get \[\chi\alpha_{2}\left(T_{\infty}-T_{\rm i}\right)^{2}+(\chi\alpha_{1}-1)\,\left( T_{\infty}-T_{\rm i}\right)+\chi\left(1-\mathcal{R}_{\rm H}\right)=0. \tag{32}\] Among the two roots admitted by equation (32), we keep the one for which \(T_{\infty}-T_{\rm i}\) decreases as \(\mathcal{R}_{\rm H}\) increases, _i.e._ \[T_{\infty}-T_{\rm i}=\frac{1-\chi\alpha_{1}-\sqrt{\left(1-\chi\alpha_{1} \right)^{2}-4\chi^{2}\alpha_{2}\left(1-\mathcal{R}_{\rm H}\right)}}{2\chi \alpha_{2}}. \tag{33}\] The previous equation is plotted as a light blue line in figure 3(b) and is in excellent agreement with the numerical solution (black line) of equation (17). The quadratic approximation provides a correct description of the liquid temperature over the entire relative humidity range, with a maximum deviation of 0.1 \({}^{\circ}\)C. Then, using again equation (31), the evaporative flux (Eq. (18)) and the drop lifetime (Eq. (21)) can be written \[Q_{\rm ev} =Q_{0}\left[\alpha_{2}(T_{\infty}-T_{\rm i})^{2}+\alpha_{1}(T_{ \infty}-T_{\rm i})+1-\mathcal{R}_{\rm H}\right], \tag{34}\] \[\tau =\tau_{0}\left[\alpha_{2}(T_{\infty}-T_{\rm i})^{2}+\alpha_{1}(T_ {\infty}-T_{\rm i})+1-\mathcal{R}_{\rm H}\right]^{-1}, \tag{35}\] where \(T_{\infty}-T_{\rm i}\) is directly provided by equation (33). These equations are plotted as solid blue lines in figures 3(c) and 3(d), respectively; they compare exceptionally well with the numerical resolution and mitigate the error observed with the linear approximation. Comparison with numerical results shows that the quadratic approximation underestimates the drop lifetime by about 1 %. We also checked that, by getting the liquid temperature with equation (33) and inserting it into equations (29) and (30), we get the same results for \(Q_{\rm ev}\) and \(\tau\) as those obtained with equations (34) and (35). The two approaches are consistent in their predictions, as shown by the superposition of the dashed and solid blue lines in figures 3(c) and (d).

## 5 Conclusion

In this paper, we developed an analytical method to predict the lifetime of a spherical drop evaporating in still air by taking into account the evaporative cooling of the liquid.
Here, we focused on water droplets evaporating in still air at ambient temperature, but this study can easily be extended to other liquids and other atmospheric conditions.
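To make the quadratic route of section 4.4 concrete, the whole chain — fit of \(\alpha_{1}\) and \(\alpha_{2}\) (Eq. (31)), physical root (33) and lifetime (35) — can be condensed as follows. This is our illustrative sketch, reusing the functions defined in the previous code fragments.

```python
import numpy as np

def lifetime_quadratic(T_inf, RH, R0):
    """Analytical lifetime via the quadratic approximation (Eqs. (31)-(35))."""
    # Fit alpha_1, alpha_2 on the Antoine-based ratio over [0 °C, T_inf].
    dT = np.linspace(0.0, T_inf - 273.15, 100)  # values of T_inf - T
    alpha2, alpha1, _ = np.polyfit(dT, c_sat(T_inf - dT) / c_sat(T_inf), 2)
    dH = -2.274e3 * T_inf + 3.121e6                         # Eq. (7)
    lam = -3.47e-8 * T_inf**2 + 9.88e-5 * T_inf - 2.75e-4   # Eq. (9)
    chi = dH * diff_coeff(T_inf) * c_sat(T_inf) / lam
    # Physical root of Eq. (32), i.e. Eq. (33).
    cooling = (1 - chi * alpha1
               - np.sqrt((1 - chi * alpha1) ** 2
                         - 4 * chi**2 * alpha2 * (1 - RH))) / (2 * chi * alpha2)
    rho = -5.3e-3 * T_inf**2 + 2.9 * T_inf + 6.0e2          # Eq. (8)
    tau0 = rho * R0**2 / (2 * diff_coeff(T_inf) * c_sat(T_inf))
    return tau0 / (alpha2 * cooling**2 + alpha1 * cooling + 1 - RH)  # Eq. (35)

# Agrees with the numerical lifetime to about 1 %, as reported above.
print(f"tau_quad = {lifetime_quadratic(293.15, 0.5, 50e-6):.1f} s")
```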
2309.11918
Multi-Passive/Active-IRS Enhanced Wireless Coverage: Deployment Optimization and Cost-Performance Trade-off
Both passive and active intelligent reflecting surfaces (IRSs) can be deployed in complex environments to enhance wireless network coverage by creating multiple blockage-free cascaded line-of-sight (LoS) links. In this paper, we study a multi-passive/active-IRS (PIRS/AIRS) aided wireless network with a multi-antenna base station (BS) in a given region. First, we divide the region into multiple non-overlapping cells, each of which may contain one candidate location that can be deployed with a single PIRS or AIRS. Then, we show several trade-offs between minimizing the total IRS deployment cost and enhancing the signal-to-noise ratio (SNR) performance over all cells via direct/cascaded LoS transmission with the BS. To reconcile these trade-offs, we formulate a joint multi-PIRS/AIRS deployment problem to select an optimal subset of all candidate locations for deploying IRS and also optimize the number of passive/active reflecting elements deployed at each selected location to satisfy a given SNR target over all cells, such that the total deployment cost is minimized. However, due to the combinatorial optimization involved, the formulated problem is difficult to solve optimally. To tackle this difficulty, we first optimize the reflecting element numbers with given PIRS/AIRS deployed locations via sequential refinement, followed by a partial enumeration to determine the PIRS/AIRS locations. Simulation results show that our proposed algorithm achieves better cost-performance trade-offs than other baseline deployment strategies.
Min Fu, Weidong Mei, Rui Zhang
2023-09-21T09:30:49Z
http://arxiv.org/abs/2309.11918v1
# Multi-Passive/Active-IRS Enhanced Wireless Coverage: Deployment Optimization and Cost-Performance Trade-off

###### Abstract

Both passive and active intelligent reflecting surfaces (IRSs) can be deployed in complex environments to enhance wireless network coverage by creating multiple blockage-free cascaded line-of-sight (LoS) links. In this paper, we study a multi-passive/active-IRS (PIRS/AIRS) aided wireless network with a multi-antenna base station (BS) in a given region. First, we divide the region into multiple non-overlapping cells, each of which may contain one candidate location that can be deployed with a single PIRS or AIRS. Then, we show several trade-offs between minimizing the total IRS deployment cost and enhancing the signal-to-noise ratio (SNR) performance over all cells via direct/cascaded LoS transmission with the BS. To reconcile these trade-offs, we formulate a joint multi-PIRS/AIRS deployment problem to select an optimal subset of all candidate locations for deploying IRS and also optimize the number of passive/active reflecting elements deployed at each selected location to satisfy a given SNR target over all cells, such that the total deployment cost is minimized. However, due to the combinatorial optimization involved, the formulated problem is difficult to solve optimally. To tackle this difficulty, we first optimize the reflecting element numbers with given PIRS/AIRS deployed locations via sequential refinement, followed by a partial enumeration to determine the PIRS/AIRS locations. Simulation results show that our proposed algorithm achieves better cost-performance trade-offs than other baseline deployment strategies.

Intelligent reflecting surface (IRS), active IRS, IRS deployment, network coverage, cost-performance trade-off, graph theory.

## I Introduction

Intelligent reflecting surface (IRS) has received increasing attention in wireless communications due to its passive, full-duplex, and controllable signal reflection, which can improve the spectral and energy efficiency of future wireless networks cost-effectively [1, 2]. Specifically, an IRS consists of a large array of passive reflecting elements, each of which can be dynamically tuned to alter the amplitude/phase of its reflected signal [1]. However, an IRS incurs multiplicative path loss over its cascaded channel, which may limit the signal coverage performance. To compensate for the multiplicative path loss, a new active IRS (AIRS) architecture has recently been proposed, where each reflecting element is equipped with an active load (also called a negative resistance), such that it can reflect the incident signal with additional power amplification [3, 4, 5, 6, 7]. However, compared to the conventional passive IRS (PIRS), the AIRS incurs higher hardware cost and non-negligible amplification noise in its reflected signals, which may degrade the communication performance, especially when the number of reflecting elements is large and/or its amplification gain is low [7]. As such, the joint use of PIRS and AIRS emerges as an appealing solution to reap their complementary advantages [8]. Furthermore, to practically reap the beamforming gain of PIRS/AIRS, their deployment should be carefully designed to ensure reflected signal coverage, considering their half-space reflection constraints as well as practical environment conditions. Therefore, there have been some prior works addressing the deployment optimization for PIRS/AIRS.
For example, in the case of PIRS, the authors in [9] formulated and solved a PIRS location optimization problem to maximize the weighted sum rate in multi-user communications under three different multiple access schemes. Moreover, the authors in [10] optimized the PIRS's location in a secure wireless communication system to maximize the secrecy rate. A deployment optimization problem for an unmanned aerial vehicle (UAV)-mounted PIRS was formulated and solved in [11] to maximize the worst-case signal-to-noise ratio (SNR) among all user locations in a target area. In addition to the PIRS's location optimization, the authors in [12] and [13] further optimized its rotation and showed their joint effectiveness in terms of performance enhancement. Furthermore, the authors in [14, 15, 16, 17] delved into the multi-PIRS deployment design in a given area and aimed to optimize the locations and/or number of PIRSs deployed. In the case of AIRS, the authors in [7] and [8] aimed to optimize the location of an AIRS to maximize the achievable rate in a single-user system, which revealed that the optimal location of an AIRS generally differs from that of a PIRS. All the above works, however, focused only on the deployment design for either PIRS or AIRS, rather than their joint use.

Fig. 1: Multi-IRS aided wireless network in a typical indoor environment.

Furthermore, they took into account only a single reflection by a PIRS/AIRS, while multi-IRS reflections may also be exploited to create blockage-free cascaded line-of-sight (LoS) signal paths between multiple base stations (BSs) and distributed user locations, which helps enhance the wireless coverage performance, particularly in complex environments with dense and scattered obstacles (e.g., the indoor environment shown in Fig. 1). Moreover, the severe multiplicative path loss due to multi-IRS signal reflections can be compensated for by the AIRS amplification gain and/or the pronounced cooperative passive beamforming (CPB) gain via successive PIRS reflections [2]. Due to the above benefits, the authors in [18] and [19] studied a new beam routing problem in a multi-PIRS-reflection aided wireless network, which aims to select an optimal multi-PIRS-reflection path from a BS to each user and optimize the beamforming at the BS and the selected IRSs, such that the received SNR at each user is maximized. Such a beam routing problem was later extended to the case with joint use of an AIRS and multiple PIRSs in [20]. However, the above works [18, 19, 20] assumed known IRSs' locations without their deployment optimization. Meanwhile, our recent work [21] considered a given multi-IRS reflection path formed by one AIRS and multiple PIRSs and optimized the AIRS's location over this path jointly with all IRSs' beamforming designs to maximize the received SNR at the user. However, this work only considered a single AIRS and assumed fixed PIRS locations. Thus, it remains unknown how to properly deploy multiple PIRSs and AIRSs in a general multi-IRS-reflection aided wireless network. It is worth noting that in [22], we have studied a relevant multi-IRS deployment problem to jointly optimize the locations of multiple PIRSs in a multi-PIRS-reflection aided wireless network. Nevertheless, its results cannot be directly applied to the case with joint use of PIRSs and AIRSs due to their different signal reflection models.
In addition, the communication performance in [22] was evaluated in terms of the number of signal reflections per multi-PIRS link, which may not accurately indicate the SNR performance of each user. To tackle the above issues, in this paper, we investigate a new multi-IRS deployment problem in a general multi-PIRS/AIRS-reflection-aided wireless network. The main contributions of this paper are summarized as follows:

* First, we propose a graph-based system model for the joint PIRS and AIRS deployment by dividing the considered region into multiple non-overlapping cells, each of which may contain one predetermined candidate location for deploying a single PIRS or AIRS with a maximum allowable size, as shown in Fig. 1. By this means, we then characterize the total cost of any given IRS deployment design and its resulting communication performance for each cell, which is measured by the maximum worst-case received SNR among all desired user locations within this cell over all its possible direct and cascaded LoS links with the BS. It is also shown that there exist several cost-performance trade-offs based on the above characterization.
* Next, to optimally resolve these trade-offs, we formulate a joint PIRS and AIRS deployment problem, which aims at selecting an optimal subset of all candidate locations to deploy PIRSs/AIRSs and optimizing the number of reflecting elements deployed at each selected location, so that a given SNR target can be met for all cells while minimizing the total deployment cost. However, such a joint deployment problem is NP-hard and generally difficult to solve optimally. To address this challenge, we first optimize the number of reflecting elements per candidate location for given PIRS/AIRS deployed locations using the sequential refinement method. Then, based on the obtained solution, we propose a partial enumeration method to determine the PIRS/AIRS deployed locations. Numerical results show that our proposed algorithm can achieve near-optimal performance compared to full enumeration and yields better cost-performance trade-offs than other baseline deployment strategies.

The remainder of this paper is organized as follows. Section II describes the system model. Section III characterizes the total IRS deployment cost and SNR performance in the considered system. Section IV presents the problem formulation for the joint PIRS and AIRS deployment design. Section V presents our proposed algorithms to solve this problem. Section VI presents numerical results to show the effectiveness of our proposed algorithms. Section VII concludes this paper and discusses future work directions. _Notations_: \(\binom{n}{k}\) denotes the number of combinations to choose \(k\) elements from a set of \(n\) elements. \(\mathbb{N}^{+}\) denotes the set of positive integers. \(\jmath\) denotes the imaginary unit. \(x\sim\mathcal{CN}(0,\sigma^{2})\) represents a circularly symmetric complex Gaussian random variable \(x\) with zero mean and variance \(\sigma^{2}\). \(\mathbb{E}(\cdot)\) denotes the statistical expectation. \((\cdot)^{\mathsf{H}}\) and \((\cdot)^{\mathsf{T}}\) denote the conjugate transpose and transpose, respectively. For a complex-valued vector \(\mathbf{x}\), \(\|\mathbf{x}\|\) denotes its Euclidean norm, and \(\text{diag}(\mathbf{x})\) denotes a diagonal matrix with each diagonal entry being the corresponding element in \(\mathbf{x}\). For a set \(\mathcal{S}\), \(|\mathcal{S}|\) denotes its cardinality. \(\emptyset\) denotes an empty set.
For two sets \(\mathcal{S}\) and \(\mathcal{S}^{\prime}\), \(\mathcal{S}\cap\mathcal{S}^{\prime}\) denotes their intersection, \(\mathcal{S}\cup\mathcal{S}^{\prime}\) denotes their union, and \(\mathcal{S}\backslash\mathcal{S}^{\prime}\) is the set of elements that belong to \(\mathcal{S}\) but are not in \(\mathcal{S}^{\prime}\).

## II System Model

As shown in Fig. 1, in this paper, we study a wireless communication system in a given region, denoted by \(\mathcal{D}\), where dense obstacles severely block a large portion of communication links. Assume that a single BS (or access point (AP)) equipped with \(M\) antennas has already been deployed in \(\mathcal{D}\) to establish direct LoS links with as many desired user locations as possible. To further boost network coverage, we consider that multiple IRSs, including PIRSs and AIRSs, can be deployed in \(\mathcal{D}\) to create virtual LoS paths from the BS to other desired user locations in \(\mathcal{D}\). To facilitate their deployment, we assume that a number of candidate locations, denoted by \(I_{0}\), have been identified in \(\mathcal{D}\), each of which may be deployed with either a PIRS or an AIRS. Furthermore, it is assumed that deploying IRSs at all these candidate locations would enable achieving global LoS coverage from the BS to any user location in \(\mathcal{D}\); however, such a deployment incurs practically prohibitive deployment cost. Therefore, due to the substantial deployment cost, only a subset of these candidate locations may be selected for deploying IRSs depending on prescribed coverage and communication performance requirements, as pursued in this paper. To ease IRS deployment, similarly to [22], we divide \(\mathcal{D}\) into \(J\) (\(J\geq I_{0}\)) non-overlapping cells, where the BS is deployed in cell \(0\), and each of the remaining cells contains at most one candidate IRS location, as shown in Fig. 2(a). Additionally, we assume that LoS coverage can be locally achieved between the candidate IRS location (or the BS) and any possible user locations in its located cell. Let \(\mathcal{J}\triangleq\{0,\ldots,J-1\}\) denote the set of all cells and \(\mathcal{I}_{0}\) denote the set of cells containing candidate IRS locations, with \(\mathcal{I}_{0}\subseteq\mathcal{J}\) and \(|\mathcal{I}_{0}|=I_{0}\). Moreover, we denote by \(\mathcal{P}\) and \(\mathcal{A}\) the sets of cells deployed with PIRSs and AIRSs, respectively, with \(\mathcal{P}\subseteq\mathcal{I}_{0},\mathcal{A}\subseteq\mathcal{I}_{0}\), and \(\mathcal{P}\cap\mathcal{A}=\emptyset\). For convenience, we refer to the IRS deployed in cell \(i,i\in\mathcal{I}_{0}\), as IRS \(i\). In particular, it is also referred to as PIRS \(i\) or AIRS \(i\) if \(i\in\mathcal{P}\) or \(i\in\mathcal{A}\), respectively. Furthermore, to mount the IRS efficiently at each candidate location, we consider that each IRS is assembled from a number of tiles (or subsurfaces) of the same fixed size. Let \(N\) denote the number of reflecting elements in both horizontal and vertical dimensions per tile and \(T_{i}\) denote the number of tiles on IRS \(i\); hence, its total number of reflecting elements is given by \(N_{i}\triangleq T_{i}N^{2}\). Note that due to the practically limited size of IRS deployment, we assume that there is a maximum allowable number of tiles that can be deployed at each candidate location, denoted as \(T_{0}^{\max}\). As such, we have \(T_{i}\leq T_{0}^{\max},i\in\mathcal{I}_{0}\).
For each cell \(i\) without an IRS deployed (i.e., in the set \(\mathcal{I}_{0}\backslash(\mathcal{P}\cup\mathcal{A})\)), we set \(T_{i}=0\). For convenience, in the sequel of this paper, tile numbers refer to those in the cells deployed with IRSs, i.e., in \(\mathcal{P}\cup\mathcal{A}\). Accordingly, let \(\mathcal{T}\triangleq\{T_{i}|i\in\mathcal{P}\cup\mathcal{A}\}\) denote the ensemble of numbers of tiles deployed in these cells. For PIRS \(p,p\in\mathcal{P}\), let \(\boldsymbol{\Phi}_{p}=\mathrm{diag}\{e^{\jmath\theta_{p,1}},\ldots,e^{\jmath \theta_{p,N_{p}}}\}\in\mathbb{C}^{N_{p}\times N_{p}}\) denote its reflection coefficient matrix, where \(\theta_{p,n}\in[0,2\pi]\) denotes the phase shift of the \(n\)-th reflecting element, and its amplitude is set to one to maximize the reflected signal power [1]. For AIRS \(a,a\in\mathcal{A}\), we denote its reflection coefficient matrix as \(\boldsymbol{\Phi}_{a}=\mathrm{diag}\{\eta_{a}e^{\jmath\theta_{a,1}},\ldots, \eta_{a}e^{\jmath\theta_{a,N_{a}}}\}\in\mathbb{C}^{N_{a}\times N_{a}}\), where \(\eta_{a}\) (\(\eta_{a}>1\)) denotes a common amplification factor for all of its reflecting elements. Unlike the PIRSs, each AIRS introduces non-negligible amplification noise into its reflected signal. For AIRS \(a,a\in\mathcal{A}\), we denote its amplification noise by \(\boldsymbol{n}_{a}\in\mathbb{C}^{N_{a}\times 1}\), where \(\boldsymbol{n}_{a}\sim\mathcal{CN}(\boldsymbol{0}_{N_{a}},\sigma^{2}\boldsymbol {1}_{N_{a}})\) with \(\sigma^{2}\) denoting the noise power. For convenience, we refer to the BS and the candidate IRS location in cell \(i,i\in\mathcal{I}_{0}\), as nodes 0 and \(i\), respectively. To describe the LoS availability between any two nodes in the network, we define a set of binary variables \(\mu_{i,i^{\prime}}\), \(i\neq i^{\prime},i,i^{\prime}\in\mathcal{I}\triangleq\{0\}\cup\mathcal{I}_{0}\) to indicate the LoS availability between nodes \(i\) and \(i^{\prime}\) (by setting \(\mu_{i,i}=0\)). In particular, \(\mu_{i,i^{\prime}}=1\) holds if and only if (iff) the candidate IRS location in cell \(i\) (or the BS if \(i=0\)) can achieve an LoS path with that in cell \(i^{\prime}\) (or the BS if \(i^{\prime}=0\)). To further describe the LoS availability from any candidate IRS location or the BS to all possible user locations within each cell \(j,j\in\mathcal{J}\), we define a virtual node \(J+j\) to represent the latter in cell \(j\), as shown in Fig. 2(a). Accordingly, we define additional binary variables \(\mu_{i,J+j}\), \(\forall i\in\mathcal{I},\forall j\in\mathcal{J}\), which is equal to 1 iff the candidate IRS location in cell \(i\) (or the BS if \(i=0\)) can achieve LoS paths with all possible user locations in cell \(j\). Let \(\mathcal{U}\triangleq\{J,J+1,\ldots,2J-1\}\) denote the set of all virtual nodes. It is not difficult to verify that \(\mu_{i,i^{\prime}}=\mu_{i^{\prime},i},i,i^{\prime}\in\mathcal{I}\cup\mathcal{U}\). It should be mentioned that since local LoS coverage can be achieved within each single cell with a BS or candidate IRS location, we have \(\mu_{j,J+j}=1,\forall j\in\mathcal{I}\). In practice, the above LoS indicators \(\mu_{i,i^{\prime}}\) can be measured offline in the region of interest through various techniques such as ray tracing [23]. In this paper, we focus on the downlink from the BS to all possible users in different cells. Hence, we can set \(\mu_{i,0}=0,i\in\mathcal{I}\) and \(\mu_{J+j,i}=0,j\in\mathcal{J},i\in\mathcal{I}\cup\mathcal{U}\).
The results can be easily extended to the uplink scenario, as a direct/cascaded LoS path from the BS to any user in the downlink is also available for its uplink communication to the BS in practice.

**Remark 1**: It is worth noting that in our previous work [22], since we focused on a network-level performance metric to infer the user communication performance (i.e., the minimum number of PIRS reflections required for the BS to establish an LoS link with each cell for any given PIRS deployment), it suffices to represent the candidate IRS location and all possible user locations in each cell by a single node without accounting for the actual channel conditions. However, this cannot be applied to this work, which focuses on a fine-grained signal-level performance metric (i.e., user SNRs).

Next, we characterize the LoS channel between any two nodes in the system (if any). To this end, let \(d_{i,i^{\prime}}\), \(i\neq i^{\prime},i,i^{\prime}\in\mathcal{I}\) be the distance between nodes \(i\) (BS/IRS) and \(i^{\prime}\) (BS/IRS). Since there exists an infinite number of possible user locations within each cell \(j\), to describe the distances between node \(i\) (BS/IRS) and node \(J+j\), we consider the worst-case user location in cell \(j\) that achieves the largest distance from node \(i\). Accordingly, let \(d_{i,J+j}^{\max},i\in\mathcal{I},j\in\mathcal{J}\), denote the distance between node \(i\) and its associated worst-case user location in cell \(j\) (or node \(J+j\)).

Fig. 2: Illustrations for (a) the cells in region \(\mathcal{D}\) and the nodes representing the BS or candidate IRS locations, (b) LoS paths created by PIRSs and/or AIRSs deployed at partial candidate locations, and (c) the corresponding graph \(G\) of the considered region \(\mathcal{D}\).

Define \(\mathbf{H}_{0,i},i\in\mathcal{I}_{0}\) as the baseband equivalent channel from node \(0\) (BS) to node \(i\) (IRS), \(\mathbf{S}_{i,i^{\prime}},i,i^{\prime}\in\mathcal{I}_{0},i\neq i^{\prime}\) as that from node \(i\) (IRS) to node \(i^{\prime}\) (IRS), and \(\mathbf{g}_{i,J+j}^{\mathsf{H}}\), \(i\in\mathcal{I}\), \(j\in\mathcal{J}\) as that from node \(i\) (BS/IRS) to node \(J+j\) (or node \(i\)'s associated worst-case user location in cell \(j\)). Without loss of generality, we assume that the BS and each IRS are equipped with a uniform linear array (ULA) and a uniform planar array (UPA) parallel to the \(x\)-\(z\) plane, respectively. Moreover, it is assumed that far-field propagation can be achieved over all LoS links; hence, the LoS channel between any two nodes can be modeled as the product of the transmit and receive array responses at the two sides. For the ULA at the BS, its transmit array response with respect to (w.r.t.) IRS \(i\) is written as \[\tilde{\mathbf{h}}_{0,i,t}=\mathbf{u}(\frac{2d}{\lambda}\cos\varphi_{0,i}^{t},M)\in \mathbb{C}^{M\times 1}, \tag{1}\] where \(\varphi_{0,i}^{t}\) denotes the angle-of-departure (AoD) from the BS to IRS \(i\), \(\lambda\) denotes the signal wavelength, \(d\) denotes the spacing between two adjacent antennas/elements at the BS/each IRS, and \(\mathbf{u}\) is the steering vector function defined as \[\mathbf{u}(\varsigma,M^{\prime})=[1,e^{-\jmath\pi\varsigma},\ldots,e^{-\jmath\pi(M^{ \prime}-1)\varsigma}]^{\mathsf{T}}\in\mathbb{C}^{M^{\prime}\times 1}. \tag{2}\] For the UPA at each IRS, we establish a local coordinate system on it and assume that it is parallel to the \(x\)-\(z\) plane.
Hence, the transmit/receive array response vector can be expressed as the Kronecker product of two steering vector functions in the \(x\)- and \(z\)-axis directions, respectively. In particular, let \(N_{i,x}\) and \(N_{i,z}\) denote the numbers of elements at IRS \(i\) in the \(x\)- and \(z\)-axis directions, respectively, with \(N_{i}=N_{i,x}\times N_{i,z}\). Then, the transmit array response of IRS \(i\) w.r.t. node \(j\) (IRS/user) is expressed as \[\tilde{\mathbf{s}}_{i,j,t}=\mathbf{u}(\frac{2d}{\lambda}\cos(\varphi_{i,j}^{t})\sin( \vartheta_{i,j}^{t}),N_{i,x})\otimes\mathbf{u}(\frac{2d}{\lambda}\cos(\vartheta_{ i,j}^{t}),N_{i,z}), \tag{3}\] where \(\varphi_{i,j}^{t}\) and \(\vartheta_{i,j}^{t}\) denote the azimuth and elevation AoDs from IRS \(i\) to node \(j\), respectively. Similarly, we can define the receive array response of node \(j\) w.r.t. node \(i\) (IRS/BS) and denote it as \(\tilde{\mathbf{s}}_{j,i,r}\). Based on the above, if \(\mu_{0,i}=1\), \(i\in\mathcal{I}_{0}\), the BS\(\rightarrow\)IRS \(i\) LoS channel is expressed as \[\mathbf{H}_{0,i}=e^{-\jmath\frac{2\pi d_{0,i}}{\lambda}}\kappa_{0,i}\tilde{\mathbf{s}}_{0,i,r}\tilde{\mathbf{h}}_{0,i,t}^{\mathsf{H}}\in\mathbb{C}^{N_{i}\times M}, \tag{4}\] where \(\kappa_{0,i}\triangleq\sqrt{\beta_{0}}/d_{0,i}^{\alpha/2}\) denotes the LoS path gain from the BS to IRS \(i\); \(\alpha\) and \(\beta_{0}\) respectively denote the LoS path-loss exponent and the path gain at the reference distance of one meter (m). Similarly, if \(\mu_{i,i^{\prime}}=1\), \(i,i^{\prime}\in\mathcal{I}_{0}\), the IRS \(i\rightarrow\)IRS \(i^{\prime}\) LoS channel is given by \[\mathbf{S}_{i,i^{\prime}}=e^{-\jmath\frac{2\pi d_{i,i^{\prime}}}{\lambda}}\kappa_{i,i^{\prime}}\tilde{\mathbf{s}}_{i^{\prime},i,r}\tilde{\mathbf{s}}_{i,i^{\prime},t}^{\mathsf{H}}\in\mathbb{C}^{N_{i^{\prime}}\times N_{i}}. \tag{5}\] Finally, if \(\mu_{i,J+j}=1\), \(i\in\mathcal{I}\), \(j\in\mathcal{J}\), the LoS channel from node \(i\) (BS/IRS) to node \(J+j\) is given by \[\mathbf{g}_{i,J+j}^{\mathsf{H}}=\begin{cases}\kappa_{0,J+j}e^{-\jmath\frac{2\pi d_{0,J+j}^{\max}}{\lambda}}\tilde{\mathbf{h}}_{0,J+j,t}^{\mathsf{H}}\in\mathbb{C}^{1 \times M}&i=0,\\ \kappa_{i,J+j}e^{-\jmath\frac{2\pi d_{i,J+j}^{\max}}{\lambda}}\tilde{\mathbf{s}}_{i,J+j,t}^{\mathsf{H}}\in\mathbb{C}^{1\times N_{i}}&i\in\mathcal{I}_{0},\end{cases} \tag{6}\] where \(\kappa_{i,J+j}\triangleq\sqrt{\beta_{0}}/(d_{i,J+j}^{\max})^{\alpha/2}\) denotes the corresponding worst-case LoS path gain. Next, we model the considered region and all LoS paths inside it based on a directed LoS graph \(G=(V,E)\), where the vertex set \(V\) consists of the nodes in \(\mathcal{I}\) and \(\mathcal{U}\), i.e., \(V=\mathcal{I}\cup\mathcal{U}\), and the edge set is given by \(E=\{(i,i^{\prime})|\mu_{i,i^{\prime}}=1,i\neq i^{\prime},i,i^{\prime}\in V\}\), i.e., there is an edge from vertex \(i\) to vertex \(i^{\prime}\) iff \(\mu_{i,i^{\prime}}=1\). Note that for each cell without any candidate IRS location or BS, e.g., cell \(i,i\in\mathcal{J}\backslash\mathcal{I}\), we only need to consider its possible user locations in \(G\), i.e., node \(J+i\) with \(J+i\in\mathcal{U}\). By this means, we can establish a one-to-one mapping between the LoS path from the BS to any user location in \(\mathcal{D}\) and a path in \(G\). Fig. 2(c) depicts one graph \(G\) generated based on the considered region \(\mathcal{D}\) in Fig. 2(a) and the pairwise binary LoS indicators \(\mu_{i,i^{\prime}}\)'s.
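To illustrate the graph construction, the directed LoS graph \(G\) can be assembled from the measured indicators with a few lines of Python; the sketch below is ours and uses networkx with a hypothetical 0/1 matrix `mu` as input.

```python
import networkx as nx

def build_los_graph(mu):
    """Directed LoS graph G = (V, E): edge (i, k) exists iff mu[i][k] = 1.
    mu is a hypothetical (2J) x (2J) 0/1 matrix over the BS/IRS nodes and the
    virtual user nodes defined above, obtained offline (e.g., by ray tracing)."""
    n = len(mu)
    G = nx.DiGraph()
    G.add_nodes_from(range(n))
    G.add_edges_from((i, k) for i in range(n) for k in range(n) if mu[i][k])
    return G

# Candidate direct/cascaded LoS paths from the BS (node 0) to the virtual
# user node J + j can then be enumerated, e.g.:
#   paths = nx.all_simple_paths(G, source=0, target=J + j)
```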
Hereafter, we use vertices and nodes interchangeably and refer to a path as both the LoS path in \(\mathcal{D}\) and its corresponding path in \(G\) without ambiguity.

## III Deployment Cost and Communication Performance Characterization

In this section, we characterize the total IRS deployment cost and the communication performance in the considered system to reveal the fundamental trade-offs between them.

### _Deployment Cost_

We consider that the total IRS deployment cost depends on both the number of cells selected for IRS deployment and the number of tiles deployed in the selected cells. Accordingly, we express the total deployment cost as \[c(\mathcal{P},\mathcal{A},\mathcal{T})=\underbrace{c_{P,0}|\mathcal{P}|+c_{A,0}|\mathcal{A}|}_{\text{Cell-use cost}}+\underbrace{c_{P}\sum_{p\in\mathcal{P}}T_{p}+c_{A}\sum_{a\in\mathcal{A}}T_{a}}_{\text{Hardware cost}}, \tag{7}\] where \(c_{P,0}\) (\(c_{A,0}\)) denotes a fixed cost for deploying a PIRS (an AIRS) in any cell, and \(c_{P}\) (\(c_{A}\)) denotes the cost per passive (active) tile. It is important to note that \(c_{P,0}\) (\(c_{A,0}\)) captures the cell-use cost of deploying a PIRS (AIRS), including the mounting cost, controller cost, power cost, etc., which is independent of the size of the IRS (or the number of tiles deployed), while \(c_{P}\) (\(c_{A}\)) captures the hardware cost of deploying a passive (active) tile. Note that \(c_{A}>c_{P}\) and \(c_{A,0}>c_{P,0}\) generally hold in practice, due to the higher power consumption and per-tile cost (with power amplification) of an AIRS compared with a PIRS.

### _Communication Performance_

By properly deploying PIRSs/AIRSs in \(\mathcal{D}\), multiple direct/cascaded LoS links may be achieved from the BS to each cell and the desired user locations within it, which enables us to properly select one or multiple LoS paths to serve them. As such, in this paper, we consider the maximum worst-case SNR among all desired user locations within each cell, achievable by selecting one LoS path from all possible direct and cascaded LoS paths from the BS to this cell, as the communication performance metric. This SNR performance depends on the number and locations of the cells deployed with IRSs (i.e., \(\mathcal{P}\) and \(\mathcal{A}\)), as well as the numbers of tiles deployed (i.e., \(\mathcal{T}\)). This is because deploying IRSs in more cells may create more signal paths, which helps enhance the coverage of the BS, while for any given \(\mathcal{P}\) and \(\mathcal{A}\), increasing the number of tiles per IRS boosts the strength of the signal over each multi-IRS-reflection path thanks to the improved CPB gain. Furthermore, for each cell, if a multi-IRS-reflection LoS link needs to be selected for the user locations inside it, we allow the presence of at most one AIRS over this link, which ensures a sufficiently high amplification gain of each deployed AIRS and also simplifies the IRS reflection design [21]. Based on the above, there exist three types of LoS transmissions from the BS to any cell, i.e., direct transmission, hybrid PIRS and AIRS enabled transmission, and all-PIRS enabled transmission, as shown in Fig. 2(b). Next, we derive the worst-case SNR performance for each cell under each type of transmission over any LoS path for any given \(\mathcal{P}\), \(\mathcal{A}\), and \(\mathcal{T}\). To this end, we first define \(\Omega=\{0,b_{1},b_{2},\ldots,b_{L},J+j\}\) as any LoS path from vertex \(0\) (BS) to vertex \(J+j,j\in\mathcal{J}\) (i.e., the worst-case user location in cell \(j\) w.r.t.
IRS \(b_{L}\)) in \(G\), where \(L\geq 0\) denotes the number of intermediate vertices in \(\Omega\) and \(b_{l}\in\mathcal{P}\cup\mathcal{A}\) denotes the index of the \(l\)-th intermediate vertex (PIRS/AIRS). Since at most one AIRS can exist over any LoS path, it should hold that \(|\Omega\cap\mathcal{A}|\leq 1\). For convenience, we define \(b_{0}=0\) and \(b_{L+1}=J+j\). Let \(\mathbf{w}\in\mathbb{C}^{M\times 1}\) and \(P_{0}\) denote the BS's beamforming vector and transmit power, respectively, with \(\|\mathbf{w}\|^{2}=P_{0}\).

#### III-B1 Type 1: Direct Transmission

In this case, we have \(L=0\) and \(\mu_{0,J+j}=1\). Through the direct link, the received signal at the worst-case user location in cell \(j\) (w.r.t. the BS) is given by \[y_{J+j}=\mathbf{g}_{0,J+j}^{\mathsf{H}}\mathbf{w}s+z, \tag{8}\] where \(s\) denotes the transmitted symbol at the BS with \(\mathbb{E}[|s|^{2}]=1\), and \(z\in\mathbb{C}\) denotes the additive white Gaussian noise (AWGN) at the user receiver, with \(z\sim\mathcal{CN}(0,\sigma^{2})\); the AWGN power is assumed to be identical to the amplification noise power \(\sigma^{2}\) of each AIRS. It is known that maximum-ratio transmission, i.e., \(\mathbf{w}=\sqrt{P_{0}}\tilde{\mathbf{h}}_{0,J+j,t}/\|\tilde{\mathbf{h}}_{0,J+j,t}\|\), maximizes the received signal power \(|\mathbf{g}_{0,J+j}^{\mathsf{H}}\mathbf{w}|^{2}\). Therefore, the worst-case received SNR in cell \(j\) in this case is given by \[\gamma_{0,J+j}=C_{0}\kappa_{0,J+j}^{2}, \tag{9}\] where \(C_{0}=\frac{P_{0}M}{\sigma^{2}}\). It can be observed from (9) that \(\gamma_{0,J+j}\) is independent of \(\mathcal{P}\), \(\mathcal{A}\), and \(\mathcal{T}\). Note that if \(\mu_{0,J+j}=0\), we can set \(\gamma_{0,J+j}=-\infty\).
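As a quick numerical check of the Type 1 case (with a hypothetical channel vector as input; `g_h` plays the role of \(\mathbf{g}_{0,J+j}^{\mathsf{H}}\) in (6)), the MRT beamformer and the SNR (9) can be computed as follows.

```
import numpy as np

def direct_snr(g_h, p0, sigma2):
    """MRT beamforming w = sqrt(P0) g / ||g|| and the direct-link SNR (9)."""
    w = np.sqrt(p0) * g_h.conj() / np.linalg.norm(g_h)   # maximum-ratio transmission
    return np.abs(g_h @ w) ** 2 / sigma2                 # = P0 * ||g||^2 / sigma2
```

Since \(\|\tilde{\mathbf{h}}_{0,J+j,t}\|^{2}=M\), this evaluates to \(C_{0}\kappa_{0,J+j}^{2}\), in agreement with (9).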
#### III-B2 Type 2: Hybrid PIRS and AIRS Enabled Transmission

In this case, we have \(L\geq 1\) and \(|\Omega\cap\mathcal{A}|=1\). Let \(b_{l}\) denote the index of the AIRS node in \(\Omega\), i.e., \(\Omega\cap\mathcal{A}=\{b_{l}\}\). Then, the BS\(\rightarrow\)AIRS \(b_{l}\) and the worst-case AIRS \(b_{l}\rightarrow\)cell \(j\) multi-reflection channels under \(\Omega\) can be respectively expressed as \[\mathbf{h}_{0,b_{l}}(\Omega)=\Big(\prod_{m=1}^{l-1}\mathbf{S}_{b_{m},b_{m+1}}\mathbf{\Phi}_{b_{m}}\Big)\mathbf{H}_{0,b_{1}}\mathbf{w}, \tag{10}\] \[\tilde{\mathbf{g}}_{b_{l},J+j}^{\mathsf{H}}(\Omega)=\mathbf{g}_{b_{L},J+j}^{\mathsf{H}}\Big(\prod_{m=l}^{L-1}\mathbf{\Phi}_{b_{m+1}}\mathbf{S}_{b_{m},b_{m+1}}\Big). \tag{11}\] Through the multi-reflection link \(\Omega\), the received signal at the worst-case user location in cell \(j\) (w.r.t. IRS \(b_{L}\)) is given by \[\bar{y}_{0,J+j}=\tilde{\mathbf{g}}_{b_{l},J+j}^{\mathsf{H}}(\Omega)\mathbf{\Phi}_{b_{l}}(\mathbf{h}_{0,b_{l}}(\Omega)s+\mathbf{n}_{b_{l}})+z, \tag{12}\] where \(\mathbf{n}_{b_{l}}\) denotes the amplification noise introduced at AIRS \(b_{l}\). Then, the corresponding worst-case received SNR in cell \(j\) under \(\Omega\) is given by \[\bar{\gamma}_{0,J+j}(b_{l},\Omega,\mathcal{P},\mathcal{A},\mathcal{T})=\frac{|\tilde{\mathbf{g}}_{b_{l},J+j}^{\mathsf{H}}(\Omega)\mathbf{\Phi}_{b_{l}}\mathbf{h}_{0,b_{l}}(\Omega)|^{2}}{|\tilde{\mathbf{g}}_{b_{l},J+j}^{\mathsf{H}}(\Omega)\mathbf{\Phi}_{b_{l}}|^{2}\sigma^{2}+\sigma^{2}}. \tag{13}\] Let \(P_{A}\) denote the maximum amplification power per reflecting element of the AIRS. As shown in [21], to maximize (13), the optimal AIRS amplification factor, BS beamforming, and IRS passive beamforming are respectively given by \[\eta_{b_{l}}^{2}=\frac{P_{A}}{\kappa_{0,b_{1}}^{2}|\tilde{\mathbf{h}}_{0,b_{1},t}^{\mathsf{H}}\mathbf{w}|^{2}\big(\prod_{m=1}^{l-1}\kappa_{b_{m},b_{m+1}}^{2}|A_{b_{m}}|^{2}\big)+\sigma^{2}}, \tag{14}\] \[\mathbf{w}=\sqrt{P_{0}}\tilde{\mathbf{h}}_{0,b_{1},t}/\|\tilde{\mathbf{h}}_{0,b_{1},t}\|, \tag{15}\] \[\theta_{b_{m},n}=\arg\big([\tilde{\mathbf{s}}_{b_{m-1},b_{m},r}^{\mathsf{H}}]_{n}[\tilde{\mathbf{s}}_{b_{m},b_{m+1},t}]_{n}\big),\ \forall m,\forall n. \tag{16}\] It is worth noting that in practice, the beamforming designs in (14)-(16) can be determined by applying distributed and local cooperation among different nodes [21]. By substituting (14)-(16) into (13), (13) is simplified as \[\bar{\gamma}_{0,J+j}(b_{l},\Omega,\mathcal{P},\mathcal{A},\mathcal{T})=\Big(\frac{\kappa_{0,b_{1}}^{-2}}{C_{0}N^{2}T_{b_{l}}}\prod_{m=1}^{l-1}\frac{\kappa_{b_{m},b_{m+1}}^{-2}}{N^{4}T_{b_{m}}^{2}}+\frac{1}{C_{A}}\prod_{m=l}^{L}\frac{\kappa_{b_{m},b_{m+1}}^{-2}}{N^{4}T_{b_{m}}^{2}}+\frac{\kappa_{0,b_{1}}^{-2}}{C_{0}C_{A}}\prod_{m=1}^{L}\frac{\kappa_{b_{m},b_{m+1}}^{-2}}{N^{4}T_{b_{m}}^{2}}\Big)^{-1}, \tag{17}\] with \(C_{A}=\frac{P_{A}}{\sigma^{2}}\). For any given \(b_{l}\), \(\mathcal{P}\), \(\mathcal{A}\), and \(\mathcal{T}\), we should select an optimal reflection path \(\Omega\) that maximizes (17). It has been shown in [20] that such a path selection problem can be equivalently transformed into two subproblems, aiming to select the optimal BS-to-AIRS \(b_{l}\) and AIRS \(b_{l}\)-to-cell \(j\) sub-paths, respectively. These two subproblems can be further simplified to minimizing \(\kappa_{0,b_{1}}^{-2}\prod_{m=1}^{l-1}\frac{\kappa_{b_{m},b_{m+1}}^{-2}}{N^{4}T_{b_{m}}^{2}}\) and \(\prod_{m=l}^{L}\frac{\kappa_{b_{m},b_{m+1}}^{-2}}{N^{4}T_{b_{m}}^{2}}\) [20], respectively, after discarding irrelevant constant scalars. By further taking their logarithms, this is equivalent to minimizing \[\bar{\lambda}_{0,b_{l}}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})=\ln\kappa_{0,b_{1}}^{-2}+\sum_{m=1}^{l-1}\ln\frac{\kappa_{b_{m},b_{m+1}}^{-2}}{N^{4}T_{b_{m}}^{2}}, \tag{18}\] \[\bar{\lambda}_{b_{l},J+j}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})=\sum_{m=l}^{L}\ln\frac{\kappa_{b_{m},b_{m+1}}^{-2}}{N^{4}T_{b_{m}}^{2}}. \tag{19}\] Next, to find the optimal reflection path \(\Omega\) that minimizes both (18) and (19), we define the following weight function for each edge \((i,i^{\prime})\in E\), i.e., \[W_{i,i^{\prime}}(b_{l},\mathcal{P},\mathcal{A},\mathcal{T})=\begin{cases}\ln\kappa_{0,i^{\prime}}^{-2}&i=0,i^{\prime}\in\mathcal{I}\cup\mathcal{U},\\ \ln\frac{\kappa_{i,i^{\prime}}^{-2}}{T_{i}^{2}N^{4}}&i\in\mathcal{P},i^{\prime}\in\mathcal{I}\cup\mathcal{U},\\ \ln\frac{\kappa_{i,i^{\prime}}^{-2}}{T_{i}^{2}N^{4}}&i=b_{l},i^{\prime}\in\mathcal{I}\cup\mathcal{U},\\ \infty&\text{otherwise}.\end{cases} \tag{20}\] Given (20) and the path \(\Omega\), it can be verified that \[\bar{\lambda}_{0,b_{l}}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})=\sum_{m=0}^{l-1}W_{b_{m},b_{m+1}}(b_{l},\mathcal{P},\mathcal{A},\mathcal{T}), \tag{21}\] \[\bar{\lambda}_{b_{l},J+j}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})=\sum_{m=l}^{L}W_{b_{m},b_{m+1}}(b_{l},\mathcal{P},\mathcal{A},\mathcal{T}). \tag{22}\] Note that in (20), the weight of an edge is set to infinity in several scenarios. First, the weight of any edge starting from an AIRS vertex other than \(b_{l}\) is set to infinity, since at most one AIRS can be involved in \(\Omega\). Second, any edge ending at vertex \(0\) (BS) should not be selected by any desired LoS path, since the downlink scenario is considered. Third, if no IRS is deployed in cell \(i\), then vertex \(i\) should not be selected as an intermediate vertex by any desired LoS path. As such, in all the scenarios above, edge \((i,i^{\prime})\) should be removed. However, to keep the topology of \(G\), we equivalently set their weights to infinity. As a result, if either \(\bar{\lambda}_{0,b_{l}}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})\) or \(\bar{\lambda}_{b_{l},J+j}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})\) is equal to infinity, it implies that one of the two sub-paths involves at least one undesired vertex (e.g., another AIRS or a candidate location without any IRS deployed). Consequently, the path \(\Omega\) cannot be used to serve users in cell \(j\) in this case.
Let \(\Gamma_{0,b_{l}}\) and \(\Gamma_{b_{l},J+j}\) denote the sets of all LoS paths from vertex \(0\) to vertex \(b_{l}\) and those from vertex \(b_{l}\) to vertex \(J+j\) in \(G\), respectively. With the weights defined in (20), we select the optimal sub-paths of \(\Omega\) to minimize \(\bar{\lambda}_{0,b_{l}}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})\) and \(\bar{\lambda}_{b_{l},J+j}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})\), respectively, i.e., \[\bar{\lambda}_{0,b_{l}}(\mathcal{P},\mathcal{A},\mathcal{T})=\min_{\Omega\in\Gamma_{0,b_{l}}}\bar{\lambda}_{0,b_{l}}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T}), \tag{23}\] \[\bar{\lambda}_{b_{l},J+j}(\mathcal{P},\mathcal{A},\mathcal{T})=\min_{\Omega\in\Gamma_{b_{l},J+j}}\bar{\lambda}_{b_{l},J+j}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T}), \tag{24}\] both of which can be efficiently calculated on \(G\) using classical shortest-path algorithms in graph theory (e.g., the Bellman-Ford algorithm [25]). Note that if \(\Gamma_{0,b_{l}}=\emptyset\) or \(\Gamma_{b_{l},J+j}=\emptyset\), i.e., there is no path from vertex \(0\) to vertex \(J+j\) through vertex \(b_{l}\), we equivalently set \(\bar{\lambda}_{0,b_{l}}(\mathcal{P},\mathcal{A},\mathcal{T})=\bar{\lambda}_{b_{l},J+j}(\mathcal{P},\mathcal{A},\mathcal{T})=\infty\). By substituting (23) and (24) into (17), we can obtain the maximum worst-case SNR in cell \(j\) via hybrid-PIRS/AIRS LoS path selection for any given \(b_{l}\), \(\mathcal{P}\), \(\mathcal{A}\), and \(\mathcal{T}\) as \[\bar{\gamma}_{0,J+j}(b_{l},\mathcal{P},\mathcal{A},\mathcal{T})=\Big(\frac{e^{\bar{\lambda}_{0,b_{l}}(\mathcal{P},\mathcal{A},\mathcal{T})}}{C_{0}N^{2}T_{b_{l}}}+\frac{e^{\bar{\lambda}_{b_{l},J+j}(\mathcal{P},\mathcal{A},\mathcal{T})}}{C_{A}}+\frac{e^{\bar{\lambda}_{0,b_{l}}(\mathcal{P},\mathcal{A},\mathcal{T})+\bar{\lambda}_{b_{l},J+j}(\mathcal{P},\mathcal{A},\mathcal{T})}}{C_{0}C_{A}}\Big)^{-1},\ j\in\mathcal{J}. \tag{25}\] Then, the maximum worst-case SNR in cell \(j\) via hybrid-PIRS/AIRS LoS path selection for given \(\mathcal{P}\), \(\mathcal{A}\), and \(\mathcal{T}\) can be obtained by comparing (25) under different \(b_{l},b_{l}\in\mathcal{A}\), i.e., \[\bar{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})=\max_{b_{l}\in\mathcal{A}}\bar{\gamma}_{0,J+j}(b_{l},\mathcal{P},\mathcal{A},\mathcal{T}),\ j\in\mathcal{J}. \tag{26}\]

#### III-B3 Type 3: All-PIRS Enabled Transmission

In this case, we have \(L\geq 1\) and \(\Omega\cap\mathcal{A}=\emptyset\). Consequently, the worst-case BS\(\rightarrow\)cell \(j\) multi-reflection channel is expressed as \[\tilde{g}_{0,J+j}(\Omega)=\mathbf{g}_{b_{L},J+j}^{\mathsf{H}}\mathbf{\Phi}_{b_{L}}\Big(\prod_{m=1}^{L-1}\mathbf{S}_{b_{m},b_{m+1}}\mathbf{\Phi}_{b_{m}}\Big)\mathbf{H}_{0,b_{1}}\mathbf{w}. \tag{27}\] To maximize the end-to-end channel power gain \(|\tilde{g}_{0,J+j}(\Omega)|^{2}\), it can be shown that the optimal beamforming at the BS and the IRSs should satisfy (15) and (16), respectively. Therefore, in this case, the worst-case received SNR in cell \(j\) under \(\Omega\) is given by \[\tilde{\gamma}_{0,J+j}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})=\left(\frac{\kappa_{0,b_{1}}^{-2}}{C_{0}}\prod_{m=1}^{L}\frac{\kappa_{b_{m},b_{m+1}}^{-2}}{N^{4}T_{b_{m}}^{2}}\right)^{-1}. \tag{28}\]
It has been shown in [18] and [19] that selecting the optimal path for maximizing (28) is equivalent to minimizing \[\tilde{\lambda}_{0,J+j}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})=\ln\kappa_{0,b_{1}}^{-2}+\sum_{m=1}^{L}\ln\frac{\kappa_{b_{m},b_{m+1}}^{-2}}{N^{4}T_{b_{m}}^{2}}. \tag{29}\] Next, we show that minimizing (29) can still be equivalently reformulated as a shortest-path problem under a proper weight assignment, similar to (20). However, different from (20), the desired path \(\Omega\) in \(G\) must not involve any AIRS node. Accordingly, for each \((i,i^{\prime})\in E\), we modify the weights in (20) as \[\tilde{W}_{i,i^{\prime}}(\mathcal{P},\mathcal{A},\mathcal{T})=\begin{cases}\ln\kappa_{0,i^{\prime}}^{-2}&i=0,i^{\prime}\in\mathcal{I},\\ \ln\frac{\kappa_{i,i^{\prime}}^{-2}}{T_{i}^{2}N^{4}}&i\in\mathcal{P},i^{\prime}\in\mathcal{I}\cup\mathcal{U},\\ \infty&\text{otherwise}.\end{cases} \tag{30}\] Given (30) and the path \(\Omega\), it can be verified that \[\tilde{\lambda}_{0,J+j}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})=\sum_{m=0}^{L}\tilde{W}_{b_{m},b_{m+1}}(\mathcal{P},\mathcal{A},\mathcal{T}). \tag{31}\] Note that in addition to the fourth case in (20), we also set the weight of an edge to infinity in (30) in the following scenarios. First, we set the weights of all edges starting from a vertex \(i,i\in\mathcal{A}\) to infinity, as no AIRS node can be involved in \(\Omega\). Second, if \((0,J+j)\in E\) (or \(\mu_{0,J+j}=1\)), we also set its weight to infinity, as this corresponds to the direct LoS transmission from the BS to the users in cell \(j\), which has already been accounted for in Section III-B1. Similarly to (21) and (22), if \(\tilde{\lambda}_{0,J+j}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})\) is equal to infinity, it implies that the path \(\Omega\) cannot be used to serve users in cell \(j\). Let \(\Gamma_{0,J+j}\) denote the set of all LoS paths from vertex \(0\) to vertex \(J+j,j\in\mathcal{J}\) in \(G\). With the weights defined in (30), we select the optimal \(\Omega\) to minimize \(\tilde{\lambda}_{0,J+j}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T})\), i.e., \[\tilde{\lambda}_{0,J+j}(\mathcal{P},\mathcal{A},\mathcal{T})=\min_{\Omega\in\Gamma_{0,J+j}}\tilde{\lambda}_{0,J+j}(\Omega,\mathcal{P},\mathcal{A},\mathcal{T}), \tag{32}\] which can still be calculated using shortest-path algorithms in graph theory. Note that if \(\Gamma_{0,J+j}=\emptyset\), we can equivalently set \(\tilde{\lambda}_{0,J+j}(\mathcal{P},\mathcal{A},\mathcal{T})=\infty\). By substituting (32) into (28), we can obtain the maximum worst-case SNR in cell \(j\) via all-PIRS-reflection path selection for any given \(\mathcal{P}\), \(\mathcal{A}\), and \(\mathcal{T}\) as \[\tilde{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})=e^{-\tilde{\lambda}_{0,J+j}(\mathcal{P},\mathcal{A},\mathcal{T})}C_{0},\ j\in\mathcal{J}. \tag{33}\] Finally, by comparing the maximum worst-case SNRs under the above three types of transmissions in (9), (26), and (33), respectively, we set the maximum worst-case SNR for cell \(j\) under any given \(\mathcal{P}\), \(\mathcal{A}\), and \(\mathcal{T}\) as \[\gamma_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})=\max\{\gamma_{0,J+j},\bar{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T}),\tilde{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})\}. \tag{34}\]
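Computationally, (23), (24), and (32) are single-source shortest-path problems on \(G\) with the weights (20) or (30). The sketch below uses our own (hypothetical) data layout, with `weights[(i, ip)]` storing the edge weight and `math.inf` marking forbidden edges; the helper then evaluates the combined metric (25) from the two sub-path distances. Bellman-Ford relaxation is used since the logarithmic weights may be negative.

```
import math

def bellman_ford(nodes, weights, src):
    """Single-source shortest-path distances on G, as needed in (23), (24), (32)."""
    dist = {v: math.inf for v in nodes}
    dist[src] = 0.0
    for _ in range(len(nodes) - 1):                     # |V| - 1 relaxation rounds
        for (i, ip), w in weights.items():
            if dist[i] + w < dist[ip]:
                dist[ip] = dist[i] + w
    return dist

def hybrid_snr(lam_0_bl, lam_bl_j, c0, ca, n, t_bl):
    """Maximum worst-case SNR (25) for a given AIRS b_l; infinite metrics give 0."""
    return 1.0 / (math.exp(lam_0_bl) / (c0 * n ** 2 * t_bl)
                  + math.exp(lam_bl_j) / ca
                  + math.exp(lam_0_bl + lam_bl_j) / (c0 * ca))
```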
### _Cost-Performance Trade-off_

From (7) and (34), if we increase \(|\mathcal{P}|\) and/or \(|\mathcal{A}|\), the cell-use cost in (7) will increase, resulting in a higher total deployment cost. Nonetheless, by this means, the IRSs can cover a larger portion of \(\mathcal{D}\), which helps create a larger number of cascaded LoS paths. This results in larger sets \(\Gamma_{0,b_{l}}\) and \(\Gamma_{b_{l},J+j}\) in (23) and (24), as well as \(\Gamma_{0,J+j}\) in (32), which is beneficial for improving the SNR performance in (34) thanks to the enhanced LoS path diversity. Furthermore, even for a fixed number of cells used for IRS deployment (i.e., \(|\mathcal{P}|+|\mathcal{A}|\)), the cell-use cost (and the total deployment cost) in (7) may also be increased by increasing \(|\mathcal{A}|\) due to \(c_{A,0}>c_{P,0}\). This, on the other hand, may also help improve the SNR performance in (34), thanks to the AIRS's capability of amplifying the incident signal [21]. It thus follows that _the number of cells used for PIRS/AIRS deployment should optimally balance the cell-use cost in (7) and the SNR performance._ To validate this trade-off, we next consider a simplified example, where the BS transmits a signal to cell \(j\) over a multi-IRS-reflection path formed by \(L\) IRSs, denoted as \(\hat{\Omega}=\{0,b_{1},\ldots,b_{l},\ldots,b_{L},J+j\}\). We assume that the distance between any two adjacent nodes in \(\hat{\Omega}\) is identical to \(d_{0}\) and consider the following two cases. In the first case, all IRSs over \(\hat{\Omega}\) are PIRSs (each equipped with \(T_{0}\) tiles), while in the second case, PIRS \(b_{l}\) in the first case is replaced by an AIRS equipped with \(T_{0}c_{P,0}/c_{A,0}\) (assumed to be an integer for convenience) tiles, such that its total hardware cost is the same as in the first case, while its cell-use cost increases. Next, we show that the second case may achieve a higher received SNR in cell \(j\) under \(\hat{\Omega}\) than the first case. Specifically, in the first case, it follows from (28) that the received SNR in cell \(j\) under \(\hat{\Omega}\) is given by \[\gamma_{P}=C_{0}\kappa_{0}^{2(L+1)}(N^{2}T_{0})^{2L}, \tag{35}\] where \(\kappa_{0}\triangleq\sqrt{\beta_{0}}/d_{0}^{\alpha/2}\) denotes the LoS path gain between any two adjacent nodes over \(\hat{\Omega}\). In the second case, it follows from (17) that the received SNR in cell \(j\) under \(\hat{\Omega}\) is given by \[\gamma_{A}=\Big(\frac{\kappa_{0}^{-2l}}{C_{0}c^{\prime}N_{0}N_{0}^{2(l-1)}}+\frac{\kappa_{0}^{-2(L-l+1)}}{C_{A}c^{\prime 2}N_{0}^{2(L-l+1)}}+\frac{\kappa_{0}^{-2(L+1)}}{C_{0}C_{A}c^{\prime 2}N_{0}^{2L}}\Big)^{-1}, \tag{36}\] where \(c^{\prime}=c_{P,0}/c_{A,0}\) and \(N_{0}=N^{2}T_{0}\). By comparing \(\gamma_{P}\) and \(\gamma_{A}\), we have \[\frac{\gamma_{A}}{\gamma_{P}}=\frac{C_{A}c^{\prime 2}N_{0}^{2}}{C_{A}c^{\prime}N_{0}(\kappa_{0}N_{0})^{2(L+1-l)}+C_{0}(\kappa_{0}N_{0})^{2l}+N_{0}^{2}}. \tag{37}\] It is observed that the right-hand side (RHS) of (37) decreases with \(N_{0}\) (or \(T_{0}\)) and increases with \(\kappa_{0}^{-1}\) (or \(d_{0}\)), which implies that the second case can achieve a higher received SNR than the first case if \(T_{0}\) is small and/or \(d_{0}\) is large, thus validating our previous claim.
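The comparison in (37) is easy to evaluate numerically; the following one-function sketch (with illustrative arguments only) returns \(\gamma_{A}/\gamma_{P}\), so values above one indicate that replacing PIRS \(b_{l}\) by an AIRS is beneficial.

```
def snr_gain_ratio(c0, ca, c_prime, kappa0, n0, big_l, l):
    """gamma_A / gamma_P in (37); c_prime = c_{P,0}/c_{A,0}, n0 = N^2 * T0."""
    denom = (ca * c_prime * n0 * (kappa0 * n0) ** (2 * (big_l + 1 - l))
             + c0 * (kappa0 * n0) ** (2 * l)
             + n0 ** 2)
    return ca * c_prime ** 2 * n0 ** 2 / denom
```

Consistent with the discussion of (37), the returned ratio grows as `n0` shrinks (small \(T_{0}\)) or as `kappa0` shrinks (large \(d_{0}\)).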
## IV Problem Formulation

To resolve the fundamental trade-offs shown in Section III-C, in this paper, we aim to jointly optimize the PIRS and AIRS deployment (i.e., \(\mathcal{A}\) and \(\mathcal{P}\)) and the deployed tile number per candidate location (i.e., \(\mathcal{T}\)) to minimize the total deployment cost in (7), subject to constraints on the SNR performance in (34). The associated optimization problem is formulated as \[\text{(P1)}\ \underset{\mathcal{P},\mathcal{A},\mathcal{T}}{\text{minimize}}\ c(\mathcal{P},\mathcal{A},\mathcal{T}) \tag{38}\] \[\text{subject to}\ \mathcal{A}\subseteq\mathcal{I}_{0},\mathcal{P}\subseteq\mathcal{I}_{0},\mathcal{P}\cap\mathcal{A}=\emptyset,\] \[\gamma_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})\geq\gamma_{0},\forall j\in\mathcal{J}, \tag{39}\] \[0<T_{p}\leq T_{0}^{\max},T_{p}\in\mathbb{N}^{+},\forall p\in\mathcal{P}, \tag{40}\] \[0<T_{a}\leq T_{0}^{\max},T_{a}\in\mathbb{N}^{+},\forall a\in\mathcal{A}, \tag{41}\] where \(\gamma_{0}\geq 0\) denotes a prescribed SNR target. Note that (P1) is a non-convex combinatorial optimization problem with discrete variables and the non-convex SNR constraints in (39), which is challenging to solve optimally using conventional optimization algorithms in general. A straightforward approach to optimally solve (P1) is to enumerate all possible PIRS and AIRS location combinations as well as their tile number combinations, which, however, results in practically formidable computational complexity. In Section V, we propose an efficient partial enumeration method that properly discards solution sets that cannot achieve optimal performance, thereby significantly reducing the overall computational complexity. Note that due to the maximum tile number constraints in (40) and (41), there exists a maximum \(\gamma_{0}\) for which (P1) remains feasible. In the special case of (P1) where only PIRSs are allowed to be deployed, i.e., \(\mathcal{A}=\emptyset\), the maximum SNR in (34), and thus the maximum feasible SNR target \(\gamma_{0}\) in (39), is achieved by deploying PIRSs in all cells in \(\mathcal{I}_{0}\), each equipped with the maximum number of (passive) tiles, i.e., \(\mathcal{P}=\mathcal{I}_{0}\) and \(T_{i}=T_{0}^{\max},i\in\mathcal{P}\). In the general case with joint PIRS and AIRS deployment, a larger \(\gamma_{0}\) is expected to be supportable thanks to the power amplification of the AIRSs.

## V Proposed Solution to (P1)

In this section, we first optimize the tile numbers for any given IRS locations, based on which a partial enumeration algorithm is proposed to solve (P1).

### _Tile Number Optimization_

First, we consider the tile number optimization for any given locations of PIRSs and AIRSs (i.e., \(\mathcal{P}\) and \(\mathcal{A}\)). In this case, (P1) is simplified as \[\text{(P2)}\ \underset{\mathcal{T}}{\text{minimize}}\ c_{P}\sum_{p\in\mathcal{P}}T_{p}+c_{A}\sum_{a\in\mathcal{A}}T_{a}+C\] \[\text{subject to}\ \gamma_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})\geq\gamma_{0},\forall j\in\mathcal{J}, \tag{42}\] \[\text{(40) and (41)},\] where \(C\triangleq c_{P,0}|\mathcal{P}|+c_{A,0}|\mathcal{A}|\) denotes the cell-use cost, which is a constant. Since the maximum SNR can be achieved by deploying the maximum number of tiles at each IRS location, (P2) is feasible iff \(\gamma_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T}^{\max})\geq\gamma_{0}\), \(\forall j\in\mathcal{J}\), where \(\mathcal{T}^{\max}=\{T_{i}|T_{i}=T_{0}^{\max},i\in\mathcal{P}\cup\mathcal{A}\}\). However, (P2) is still a non-convex combinatorial optimization problem. Although it can be optimally solved by enumerating all possible tile number combinations, this results in a worst-case complexity in the order of \((T_{0}^{\max})^{|\mathcal{P}|+|\mathcal{A}|}\), which becomes unaffordable if \(T_{0}^{\max}\) and/or \(|\mathcal{P}|+|\mathcal{A}|\) is practically large.
To avoid such high computational complexity, we propose a sequential refinement algorithm to solve (P2), which first obtains an approximate solution by relaxing (P2) into a convex optimization problem and then sequentially refines the number of tiles deployed in each cell based on this approximate solution, as detailed below.

#### V-1 Relaxing (P2) into a Convex Optimization Problem

The main challenge in solving (P2) arises from the non-convex constraints in (42). In particular, such constraints cannot be transformed into a convex form by simply relaxing each \(T_{i}\) to be a continuous variable, due to the "max" and "min" operators involved in \(\gamma_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})\), which arise from the path selection in both Type 2 and Type 3 transmissions (see Section III-B). To tackle this challenge, we consider simplifying \(\gamma_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})\) by considering only a single path from the BS to each cell in each of the Type 2 and Type 3 transmissions, so as to avoid path selection and thus the "max" and "min" operations involved. To determine such a single path, we propose to set the tile numbers at all IRS locations to their minimum (i.e., \(\mathcal{T}=\mathcal{T}^{\min}\triangleq\{T_{i}|T_{i}=1,i\in\mathcal{P}\cup\mathcal{A}\}\)) and then perform the path selection procedures in (23)-(26) and (32) for Type 2 and Type 3 transmissions, respectively. Such a single-path selection is motivated by the fact that the overall SNR performance in (34) monotonically increases with each \(T_{i}\), due to the increasing CPB and amplification gains of the PIRSs and AIRSs, respectively. The effect of these reflection gains can therefore be eliminated by setting the tile numbers to the minimum, which enables us to find paths with sufficiently high end-to-end LoS path gains and low noise power amplification. Although this may reduce the size of the feasible solution set of (P2), it greatly simplifies (P2), and we will further refine the resulting solution sequentially in the next part of this subsection. After setting \(\mathcal{T}\) to \(\mathcal{T}^{\min}\), based on (26) and (33), we obtain the resulting maximum worst-case SNRs in cell \(j\) with Type 2 and Type 3 transmissions as \(\bar{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T}^{\min})\) and \(\tilde{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T}^{\min})\), respectively, and denote the corresponding selected paths as \(\bar{\Omega}_{j}\) and \(\tilde{\Omega}_{j}\). Then, based on (17) and (28), it can be shown that in actual communications, the worst-case received SNRs over the two paths \(\bar{\Omega}_{j}\) and \(\tilde{\Omega}_{j}\) can be respectively expressed as \[\bar{\gamma}_{0,J+j}(b_{l},\bar{\Omega}_{j},\mathcal{P},\mathcal{A},\mathcal{T})=\Big(\frac{\bar{C}_{0}}{N^{2}T_{b_{l}}}\prod_{m=1}^{l-1}\frac{1}{T_{b_{m}}^{2}}+\bar{C}_{A}\prod_{m=l}^{L}\frac{1}{T_{b_{m}}^{2}}+\bar{C}_{0}\bar{C}_{A}\prod_{m=1}^{L}\frac{1}{T_{b_{m}}^{2}}\Big)^{-1}, \tag{43}\]
\[\tilde{\gamma}_{0,J+j}(\tilde{\Omega}_{j},\mathcal{P},\mathcal{A},\mathcal{T})=\tilde{C}_{0}^{-1}\prod_{m=1}^{L}T_{b_{m}}^{2}, \tag{44}\] where we assume that \(\bar{\Omega}_{j}\cap\mathcal{A}=\{b_{l}\}\) in (43), \(\bar{C}_{0}=\frac{e^{\bar{\lambda}_{0,b_{l}}(\bar{\Omega}_{j},\mathcal{P},\mathcal{A},\mathcal{T}^{\min})}}{C_{0}}\), \(\bar{C}_{A}=\frac{e^{\bar{\lambda}_{b_{l},J+j}(\bar{\Omega}_{j},\mathcal{P},\mathcal{A},\mathcal{T}^{\min})}}{C_{A}}\), and \(\tilde{C}_{0}=\frac{e^{\tilde{\lambda}_{0,J+j}(\tilde{\Omega}_{j},\mathcal{P},\mathcal{A},\mathcal{T}^{\min})}}{C_{0}}\). Furthermore, in the case of Type 1 transmission, i.e., there is a direct LoS path \(\{0,J+j\}\) from the BS to cell \(j\), the actual worst-case received SNR in cell \(j\) is given by \(\gamma_{0,J+j}\) in (9). Next, we replace \(\gamma_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})\) in (42) with the actual worst-case received SNR in cell \(j\) over the following path, \[\Omega_{j}=\begin{cases}\bar{\Omega}_{j}&\text{if }\bar{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T}^{\min})\geq\max\{\gamma_{0,J+j},\tilde{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T}^{\min})\},\\ \tilde{\Omega}_{j}&\text{if }\tilde{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T}^{\min})>\max\{\gamma_{0,J+j},\bar{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T}^{\min})\},\\ \{0,J+j\}&\text{otherwise},\end{cases} \tag{45}\] i.e., the path corresponding to the maximum worst-case received SNR under \(\mathcal{T}=\mathcal{T}^{\min}\) among the three types of LoS transmission. Hence, \(\gamma_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})\) in (42) becomes \[\hat{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})=\begin{cases}\bar{\gamma}_{0,J+j}(b_{l},\bar{\Omega}_{j},\mathcal{P},\mathcal{A},\mathcal{T})&\text{if }\Omega_{j}=\bar{\Omega}_{j},\\ \tilde{\gamma}_{0,J+j}(\tilde{\Omega}_{j},\mathcal{P},\mathcal{A},\mathcal{T})&\text{if }\Omega_{j}=\tilde{\Omega}_{j},\\ \gamma_{0,J+j}&\text{otherwise},\end{cases} \tag{46}\] and the simplified version of (P2) is given by \[\text{(P2.1)}\ \underset{\mathcal{T}}{\text{minimize}}\ c_{P}\sum_{p\in\mathcal{P}}T_{p}+c_{A}\sum_{a\in\mathcal{A}}T_{a}+C\] \[\text{subject to}\ \hat{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})\geq\gamma_{0},\forall j\in\mathcal{J}, \tag{47}\] \[\text{(40) and (41)}.\] Although (P2.1) is still a non-convex discrete problem, it can be efficiently relaxed into a convex one. To this end, we define a set of variables \(x_{i}\triangleq\ln(T_{i}),i\in\mathcal{P}\cup\mathcal{A}\). By substituting them into (43) and (44), \(\hat{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})\) in (46) can be rewritten as \[\hat{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathbf{x})=\begin{cases}\Big(\frac{\bar{C}_{0}}{N^{2}}e^{-x_{b_{l}}-\sum_{m=1}^{l-1}2x_{b_{m}}}+\bar{C}_{A}e^{-\sum_{m=l}^{L}2x_{b_{m}}}+\bar{C}_{0}\bar{C}_{A}e^{-\sum_{m=1}^{L}2x_{b_{m}}}\Big)^{-1}&\text{if }\Omega_{j}=\bar{\Omega}_{j},\\ \tilde{C}_{0}^{-1}e^{\sum_{m=1}^{L}2x_{b_{m}}}&\text{if }\Omega_{j}=\tilde{\Omega}_{j},\\ \gamma_{0,J+j}&\text{otherwise}.\end{cases} \tag{48}\] By substituting (48) into (P2.1) and relaxing each \(x_{i}\) into a continuous variable, we obtain the following optimization problem w.r.t.
\(\mathbf{x}\), i.e., \[\text{(P2.2)}\ \underset{\mathbf{x}}{\text{minimize}}\ c_{P}\sum_{p\in\mathcal{P}}e^{x_{p}}+c_{A}\sum_{a\in\mathcal{A}}e^{x_{a}}+C\] \[\text{subject to}\ \hat{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathbf{x})\geq\gamma_{0},\forall j\in\mathcal{J}, \tag{49}\] \[0\leq x_{p}\leq\ln T_{0}^{\max},\forall p\in\mathcal{P}, \tag{50}\] \[0\leq x_{a}\leq\ln T_{0}^{\max},\forall a\in\mathcal{A}. \tag{51}\] To solve (P2.2), it is noted that (49) can be equivalently transformed into \(\hat{\gamma}_{J+j}^{-1}(\mathcal{P},\mathcal{A},\mathbf{x})\leq\gamma_{0}^{-1}\), with \(\hat{\gamma}_{J+j}^{-1}(\mathcal{P},\mathcal{A},\mathbf{x})\) being convex w.r.t. \(\mathbf{x}\) based on (48). As a result, (P2.2) is a convex problem, which can be optimally solved by the interior-point algorithm [26]. Let \(\mathbf{x}^{*}\triangleq\{x_{i}^{*}\}\) denote the optimal solution to (P2.2). Next, we reconstruct an integer tile number \(T_{i}^{*}\) from each \(x_{i}^{*}\) as \[\mathcal{T}^{*}=\{T_{i}^{*}|T_{i}^{*}=\lceil e^{x_{i}^{*}}\rceil,i\in\mathcal{P}\cup\mathcal{A}\}, \tag{52}\] where \(\lceil x\rceil\) denotes the least integer greater than or equal to \(x\). Note that \(\mathcal{T}^{*}\) must be a feasible solution to (P2.1), since \(\hat{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})\) in (46) is an element-wise increasing function of \(\mathcal{T}\) and \(T_{i}^{*}\geq e^{x_{i}^{*}},\forall i\in\mathcal{P}\cup\mathcal{A}\).

#### V-2 Sequentially Refining the Tile Numbers

Due to the rounding in (52), \(\mathcal{T}^{*}\) may not be optimal for (P2.1). To further reduce the total deployment cost, we next sequentially refine the tile number deployed in each cell in \(\mathcal{P}\cup\mathcal{A}\). Specifically, let \(\mathcal{I}^{\prime}\) denote the set of cells whose tile numbers have already been refined, which is initialized as an empty set. In each iteration, we select one cell \(s,s\in\mathcal{P}\cup\mathcal{A}\backslash\mathcal{I}^{\prime}\), and successively remove its deployed tiles one by one while re-optimizing the tile numbers in the other cells outside \(\mathcal{I}^{\prime}\). The tile number of each cell in \(\mathcal{I}^{\prime}\) is fixed (instead of being optimized), and the case \(T_{s}=0\) is not considered, since this may always result in an infeasible problem due to the SNR constraints in (P2). In particular, when the \(t\)-th tile is removed in cell \(s\), i.e., \(T_{s}=T_{s}^{*}-t,1\leq t\leq T_{s}^{*}-1\), the associated optimization problem is given by \[\text{(P2.3.}t)\ \underset{\mathcal{T}}{\text{minimize}}\ c_{P}\sum_{p\in\mathcal{P}}T_{p}+c_{A}\sum_{a\in\mathcal{A}}T_{a}+C\] \[\text{subject to}\ \hat{\gamma}_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T})\geq\gamma_{0},\forall j\in\mathcal{J}, \tag{53}\] \[0<T_{i}\leq T_{0}^{\max},\forall i\in\mathcal{P}\cup\mathcal{A}\backslash\mathcal{I}^{\prime},i\neq s, \tag{54}\] \[T_{s}=T_{s}^{*}-t, \tag{55}\] \[T_{i}=T_{i}^{*},\forall i\in\mathcal{I}^{\prime}. \tag{56}\] For (P2.3.\(t\)), we can introduce a similar variable transformation as in (P2.2) and solve the resulting problem w.r.t. \(\mathbf{x}\) (denoted as (P2.4.\(t\)) and omitted here for brevity) optimally using the interior-point algorithm. Let \(\mathbf{x}_{t}^{\prime}\triangleq\{x_{t,i}^{\prime}\}\) denote the optimal solution to (P2.4.\(t\)). Similarly to (52), the corresponding reconstructed tile numbers are given by \[\mathcal{T}_{t}^{\prime}=\{T_{i}|T_{i}=\lceil e^{x_{t,i}^{\prime}}\rceil,i\in\mathcal{P}\cup\mathcal{A}\}. \tag{57}\] Next, we check whether the incumbent total deployment cost, i.e., \(c(\mathcal{P},\mathcal{A},\mathcal{T}^{*})\), can be reduced by replacing \(\mathcal{T}^{*}\) with \(\mathcal{T}_{t}^{\prime}\), i.e., whether \(c(\mathcal{P},\mathcal{A},\mathcal{T}_{t}^{\prime})<c(\mathcal{P},\mathcal{A},\mathcal{T}^{*})\). If this condition is satisfied, we update \(\mathcal{T}^{*}\) as \(\mathcal{T}_{t}^{\prime}\) and proceed to solve (P2.3.\(t+1\)) (as well as (P2.4.\(t+1\))). Otherwise, the successive tile removal terminates, and we update \(\mathcal{I}^{\prime}\) as \(\mathcal{I}^{\prime}\cup\{s\}\) and proceed to the tile refinement for the next cell. It is evident that by applying the above sequential refinement method, the objective value of (P2), i.e., the total deployment cost, is non-increasing over the iterations.
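For concreteness, the relaxation step can be prototyped with a disciplined convex programming tool such as cvxpy; (P2.4.\(t\)) differs only in that some variables are fixed. This is a sketch under our own (hypothetical) data layout: `terms[j]` encodes the inverse of (48) for cell \(j\) as a list of pairs \((c_{k},\{(i,a_{k,i})\})\) representing \(\sum_{k}c_{k}e^{-\sum_{i}a_{k,i}x_{i}}\), and Type 1 cells, whose SNR in (46) is a constant, are assumed to be pre-checked and omitted from `terms`.

```
import numpy as np
import cvxpy as cp

def solve_p22(cells, unit_cost, terms, gamma0, t_max):
    """Sketch of (P2.2) and the rounding (52); all names are illustrative."""
    x = {i: cp.Variable() for i in cells}            # x_i = ln(T_i)
    cons = []
    for cell_terms in terms:                         # one constraint (49) per cell j
        inv_snr = sum(c * cp.exp(-sum(a * x[i] for i, a in lin))
                      for c, lin in cell_terms)
        cons.append(inv_snr <= 1.0 / gamma0)         # convex form of (49)
    for i in cells:                                  # bounds (50) and (51)
        cons += [x[i] >= 0, x[i] <= np.log(t_max)]
    prob = cp.Problem(cp.Minimize(sum(unit_cost[i] * cp.exp(x[i]) for i in cells)),
                      cons)
    prob.solve()
    return {i: int(np.ceil(np.exp(x[i].value))) for i in cells}   # rounding (52)
```

Fixing \(x_{s}=\ln(T_{s}^{*}-t)\) and \(x_{i}=\ln T_{i}^{*}\) for \(i\in\mathcal{I}^{\prime}\) turns the same sketch into (P2.4.\(t\)).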
Nonetheless, the ultimate performance of the sequential refinement over all cells in \(\mathcal{P}\cup\mathcal{A}\) depends critically on the order in which the cells are selected. In this paper, we propose to determine this order based on the following procedure. First, at the beginning of the sequential refinement, we calculate the increase in the tile number of each cell \(i,i\in\mathcal{P}\cup\mathcal{A}\) incurred by the reconstruction in (52), denoted as \(\delta_{i}^{*}=\lceil e^{x_{i}^{*}}\rceil-e^{x_{i}^{*}},i\in\mathcal{P}\cup\mathcal{A}\). It is evident that a larger \(\delta_{i}^{*}\) indicates that \(e^{x_{i}^{*}}\) is closer to \(\lfloor e^{x_{i}^{*}}\rfloor\) than to \(\lceil e^{x_{i}^{*}}\rceil\), and thus it may be more likely that the tile number deployed in cell \(i\) can be reduced. As such, we first refine the tile number in cell \(s\) with \(s=\arg\max_{i\in\mathcal{P}\cup\mathcal{A}}\delta_{i}^{*}\). Next, in each iteration, if \(\mathcal{T}^{*}\) can be updated as \(\mathcal{T}_{t}^{\prime}\) in (57), we update \(\delta_{i}^{*}=\lceil e^{x_{t,i}^{\prime}}\rceil-e^{x_{t,i}^{\prime}},i\in\mathcal{P}\cup\mathcal{A}\). After the successive tile removal in this iteration terminates, we update \(s=\arg\max_{i\in\mathcal{P}\cup\mathcal{A}\backslash\mathcal{I}^{\prime}}\delta_{i}^{*}\) and refine the tile number in cell \(s\) in the next iteration. The main procedures of the proposed sequential refinement algorithm are summarized in Algorithm 1. The computational complexity of Algorithm 1 is mainly due to the use of the Dijkstra algorithm for the path selections in (23), (24), and (32) and to solving the convex problems (P2.2) and (P2.4.\(t\)), which yields polynomial complexity in the order of \(|\mathcal{P}|+|\mathcal{A}|\). As a result, Algorithm 1 is ensured to incur much lower complexity than full enumeration, which requires exponential complexity in the order of \((T_{0}^{\max})^{|\mathcal{P}|+|\mathcal{A}|}\).

```
1: Input: \(\mathcal{P}\), \(\mathcal{A}\), and \(\gamma_{0}\).
2: Initialize: \(\mathcal{T}^{\max}=\{T_{i}|T_{i}=T_{0}^{\max},i\in\mathcal{P}\cup\mathcal{A}\}\).
3: if \(\gamma_{J+j}(\mathcal{P},\mathcal{A},\mathcal{T}^{\max})\geq\gamma_{0},\forall j\in\mathcal{J}\) then
4:   Calculate \(\mathcal{T}^{*}\) in (52) via solving (P2.2).
5:   Calculate \(\delta_{i}^{*}=\lceil e^{x_{i}^{*}}\rceil-e^{x_{i}^{*}},i\in\mathcal{P}\cup\mathcal{A}\) and set \(s=\arg\max_{i\in\mathcal{P}\cup\mathcal{A}}\delta_{i}^{*}\).
6:   Initialize: \(\mathcal{I}^{\prime}=\emptyset\).
7:   while \(|\mathcal{I}^{\prime}|\leq|\mathcal{P}|+|\mathcal{A}|\) do
8:     Let \(t=1\).
9:     while \(t\leq T_{s}^{*}-1\) do
10:       Calculate \(\mathcal{T}_{t}^{\prime}\) in (57) via solving (P2.4.\(t\)).
11:       if \(c(\mathcal{P},\mathcal{A},\mathcal{T}_{t}^{\prime})<c(\mathcal{P},\mathcal{A},\mathcal{T}^{*})\) then
12:         Update \(\mathcal{T}^{*}=\mathcal{T}_{t}^{\prime}\) and \(\delta_{i}^{*}=\lceil e^{x_{t,i}^{\prime}}\rceil-e^{x_{t,i}^{\prime}},i\in\mathcal{P}\cup\mathcal{A}\).
13:         Update \(t=t+1\).
14:       else
15:         Break.
16:       endif
17:     endwhile
18:     Update \(\mathcal{I}^{\prime}=\mathcal{I}^{\prime}\cup\{s\}\) and \(s=\arg\max_{i\in\mathcal{P}\cup\mathcal{A}\backslash\mathcal{I}^{\prime}}\delta_{i}^{*}\).
19:   endwhile
20: else
21:   (P2) is infeasible.
22: endif
23: Output \(\mathcal{T}^{*}\) as the optimized solution to (P2).
```
**Algorithm 1** Sequential Refinement Algorithm for (P2)

### _IRS Location Optimization_

In this subsection, we aim to solve the original problem (P1) based on the proposed solution to (P2) via Algorithm 1.
Due to the inherently complex structure of (P1), we enumerate all possible PIRS and AIRS location combinations, and for each location combination, Algorithm 1 is invoked to obtain the tile number solution. Then, the solution to (P1) is obtained as the one achieving the minimum objective value. Although such an enumeration involves the search over \(\sum_{i=0}^{I_{0}}\sum_{j=0}^{I_{0}-i}\binom{I_{0}}{i}\binom{I_{0}-i}{j}\) possible IRS location combinations, most of them can be safely discarded for the following two reasons. First, many location combinations cannot yield a feasible solution to (P1). Second, the total deployment cost of some location combinations may exceed the incumbent minimum cost even if we set the tile number per location to its minimum (i.e., \(\mathcal{T}=\mathcal{T}^{\min}\)), especially when \(|\mathcal{P}|\) and/or \(|\mathcal{A}|\) is large. In both cases, there is no need to run the sequential refinement process in Algorithm 1. Hence, only a partial enumeration of all possible location combinations is needed, which incurs considerably lower computational complexity than full enumeration; a sketch is given below. Furthermore, (P1) can be optimized offline, and thus the above computational time is practically tolerable.
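The partial enumeration just described can be organized as follows. This is a sketch with assumed helper callables, `cost(P, A, T)` implementing (7) and `solve_p2(P, A)` wrapping Algorithm 1; both names are ours.

```
import itertools
import math

def partial_enumeration(candidates, cost, solve_p2):
    """Enumerate disjoint (P, A) subsets of the candidate cells, pruning
    combinations whose minimum possible cost (all tile numbers set to 1)
    already exceeds the incumbent best cost."""
    best, best_sol = math.inf, None
    for p in range(len(candidates) + 1):
        for P in itertools.combinations(candidates, p):
            rest = [i for i in candidates if i not in P]
            for a in range(len(rest) + 1):
                for A in itertools.combinations(rest, a):
                    t_min = {i: 1 for i in P + A}
                    if cost(P, A, t_min) >= best:
                        continue                      # prune: cannot beat incumbent
                    feasible, tiles = solve_p2(P, A)  # Algorithm 1
                    if feasible and cost(P, A, tiles) < best:
                        best, best_sol = cost(P, A, tiles), (P, A, tiles)
    return best, best_sol
```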
## VI Numerical Results

In this section, we provide numerical results to demonstrate the effectiveness of our proposed algorithms for joint PIRS and AIRS deployment optimization. We consider a typical indoor environment with an area of 40 m \(\times\) 40 m that is divided into \(J=16\) cells, each with an area of 10 m \(\times\) 10 m, among which there exist \(I_{0}=10\) candidate locations. The BS is assumed to be equipped with \(M=10\) antennas and deployed in cell 0 to cover the maximum number of cells. In addition, the set of cells containing candidate IRS locations is set to \(\mathcal{I}_{0}=\{2,3,4,5,6,7,8,9,11,12\}\). The LoS graph \(G\) of the considered environment is shown in Fig. 3, where the nodes representing the candidate IRS locations and the BS are marked by black squares (\(\square\)), while the virtual nodes representing all possible user locations in each cell are marked by red triangles (\(\triangle\)).

Fig. 3: LoS graph \(G\) of the considered environment.

The communication system is assumed to operate at a carrier frequency of 3.5 GHz, with the wavelength \(\lambda=0.087\) m and the reference LoS path gain \(\beta_{0}=(\lambda/4\pi)^{2}=-43\) dB. The LoS path-loss exponent is set to \(\alpha=2\). The maximum transmit power of the BS is set to \(P_{0}=30\) dBm, while the maximum amplification power per AIRS reflecting element is set to \(P_{A}=-5\) dBm. The user receiver and AIRS noise powers are assumed to be identical, with \(\sigma^{2}=-60\) dBm. The number of reflecting elements in each passive/active tile's dimension is set to \(N=10\). The maximum number of tiles that can be deployed at each candidate location is set to \(T_{0}^{\max}=9\). The hardware costs per passive and active tile are set to \(c_{P}=1\) and \(c_{A}=3\), respectively, while the cell-use costs for PIRS and AIRS deployment are set to \(c_{P,0}=5\) and \(c_{A,0}=12\), respectively. In the simulation, to evaluate the performance of the proposed deployment algorithms, we consider the following three benchmarks.

* _Benchmark 1_: All-PIRS deployment with tile number optimization for candidate locations. The performance of this benchmark can be obtained by setting \(\mathcal{A}=\emptyset\) in (P1).
* _Benchmark 2_: All-PIRS deployment with equal tile number at all candidate locations. The performance of this benchmark can be obtained by setting \(\mathcal{A}=\emptyset\) and \(T_{p}=4,\forall p\in\mathcal{P}\) in (P1).
* _Benchmark 3_: Joint PIRS and AIRS deployment with equal tile number at all candidate locations. The performance of this benchmark can be obtained by setting \(T_{p}=4,\forall p\in\mathcal{P}\) and \(T_{a}=1,\forall a\in\mathcal{A}\) in (P1).

### _Optimized Tile Numbers with Given IRS Locations_

Fig. 4 shows the total deployment cost obtained by Algorithm 1 and by full enumeration versus the SNR target \(\gamma_{0}\) with given IRS locations. Here, we consider a single AIRS deployed in cell 3 (i.e., \(\mathcal{A}=\{3\}\)) with \(|\mathcal{A}|=1\) and multiple PIRSs with the following two deployment designs, i.e., \(\mathcal{P}=\{2,7,8,11\}\) with \(|\mathcal{P}|=4\) and \(\mathcal{P}=\{2,6,7,8,11\}\) with \(|\mathcal{P}|=5\). Accordingly, the total cell-use cost is a constant, equal to \(c_{A,0}+4c_{P,0}=32\) and \(c_{A,0}+5c_{P,0}=37\) in the cases of \(|\mathcal{P}|=4\) and \(|\mathcal{P}|=5\), respectively.

Fig. 4: Total deployment cost versus SNR target with given IRS locations.

First, it is observed from Fig. 4 that the proposed algorithm achieves the same performance as full enumeration in both PIRS deployment designs. Moreover, in both designs, the total deployment cost (or the hardware cost) monotonically increases with \(\gamma_{0}\), as expected. To further manifest the computational efficiency of the proposed Algorithm 1, we compare its running time (in seconds) with that of full enumeration in Table I under \(\gamma_{0}=36\) dB. It is observed that the proposed algorithm incurs much shorter computational time than full enumeration while yielding a near-optimal solution to (P2), especially in the case of \(|\mathcal{P}|=5\), which validates the efficiency of Algorithm 1 in solving (P2).

### _Jointly Optimized PIRS and AIRS Deployment_

Next, we show the performance of our proposed algorithm for joint IRS deployment and tile number optimization. Figs. 5 and 6 show the optimized IRS deployment solutions to (P1) under different values of \(\gamma_{0}\) and by different benchmark schemes. In both figures, the BS's cell (cell 0) is marked by a red star (\(\star\)), while the cells deployed with PIRSs and AIRSs are marked by red circles (\(\bullet\)) and blue diamonds (\(\blacklozenge\)), respectively. Moreover, the optimized tile number at each selected candidate location (i.e., \(T_{i},i\in\mathcal{P}\cup\mathcal{A}\)) and the selected path from the BS to each cell to satisfy the SNR target are also shown. Fig. 5 shows the optimized IRS deployment solutions under different schemes with \(\gamma_{0}=15\) dB. First, it is observed from Fig. 5(a) that our proposed deployment solution deploys one AIRS (with 2 active tiles) and 4 PIRSs (with 8 passive tiles in total) in 5 cells, while leaving the other 5 candidate locations unused. This implies that the proposed deployment solution can substantially reduce the cell-use cost and thus the total deployment cost. Second, it is found in Fig. 5(a) that the selected cascaded LoS paths from the BS to all cells outside its direct LoS coverage go through the AIRS, which implies its pivotal role in LoS coverage enhancement. This is because the AIRS is capable of amplifying the incident signal strength and is thus used in most LoS paths to satisfy the SNR target. Moreover, it is observed from Fig. 5(b) that Benchmark 1 replaces the AIRS in our proposed deployment solution with a PIRS, which yields a lower cell-use cost than ours.
However, to satisfy the SNR target, Benchmark 1 needs to consume 25 passive tiles in total, which results in a much higher hardware cost than our proposed deployment solution. As a result, its total deployment cost is higher than ours (50 versus 46). This observation shows that the joint use of AIRSs and PIRSs may help reduce the total deployment cost by reducing the number of passive tiles thanks to the signal strength enhancement by AIRSs, which is in accordance with our analysis in Section III-C. In addition, it is observed from Fig. 5(c) that compared to our proposed deployment solution, Benchmark 3 deploys AIRSs in two cells with the same total number of active tiles as ours and consumes more passive tiles than ours. Hence, it incurs both a higher hardware cost and a higher cell-use cost than our proposed solution, with a total deployment cost equal to 53, which is even higher than that of Benchmark 1. This is because Benchmark 3 fails to exploit the design degree of freedom in tile number optimization.

Fig. 5: Jointly optimized PIRS/AIRS deployment under different schemes with \(\gamma_{0}=15\) dB.

Fig. 6 compares the optimized deployment solutions by different schemes under \(\gamma_{0}=25\) dB. It is observed from Fig. 6(a) that by increasing \(\gamma_{0}\) from 15 dB to 25 dB, the optimized IRS locations of the proposed deployment solution remain the same, while the optimized numbers of both active and passive tiles increase to satisfy the more stringent SNR constraints. As such, the total deployment cost increases from 46 to 54, which validates our discussion in Section III-C. Even so, its total deployment cost is still smaller than those of Benchmarks 1 and 3. It is also observed from Figs. 6(a) and 6(b) that the increment in the number of passive tiles in the proposed deployment solution is much smaller than that in Benchmark 1 (i.e., 5 versus 18), thanks to the use of the AIRS.

Fig. 6: Jointly optimized PIRS/AIRS deployment under different schemes with \(\gamma_{0}=25\) dB.

Fig. 7 shows the total deployment costs of different schemes versus the SNR target \(\gamma_{0}\). First, by comparing the total deployment costs of Benchmarks 1 and 2, it is observed that a larger SNR target can be satisfied by the former. It is also observed that the performance gap between the proposed deployment solution and Benchmark 1 increases with \(\gamma_{0}\). In particular, the former can satisfy a much larger SNR target than the latter at a much lower cost. This is because the proposed joint PIRS and AIRS deployment design can significantly reduce the use of passive tiles by deploying AIRSs at several pivotal locations and exploiting their amplification gains, as shown in Figs. 5 and 6. Furthermore, the proposed deployment solution outperforms Benchmark 3, especially in the low-to-moderate SNR regime, by fully exploiting the design degree of freedom in tile number optimization.

Fig. 7: Total deployment cost versus target SNR.

Finally, Fig. 8 shows the total deployment cost versus the hardware cost per active tile (i.e., \(c_{A}\)) with \(\gamma_{0}=25\) dB. Note that the performance of Benchmark 2 is not shown in Fig. 8, as it fails to satisfy the SNR target with \(\gamma_{0}=25\) dB. It is observed that the total deployment costs of the proposed deployment solution and Benchmark 3 monotonically increase with \(c_{A}\).
Nonetheless, even as \(c_{A}\) increases from 2 to 5, the proposed solution considerably outperforms Benchmark 1, which uses no AIRS. In addition, it is observed that the proposed deployment solution yields more significant performance gains over Benchmark 3 as \(c_{A}\) increases, by optimally balancing the tile numbers at different locations. Accordingly, despite the higher hardware and cell-use costs incurred by an AIRS compared with a PIRS, incorporating AIRSs is beneficial for reducing the total deployment cost if their deployment is optimally designed.

## VII Conclusion

In this paper, we studied a joint PIRS and AIRS deployment problem to enhance the communication performance in a given region by exploiting multi-PIRS/AIRS reflections. Based on the proposed graph-based system modeling, it was shown that there exist fundamental trade-offs between minimizing the total deployment cost and maximizing the SNR performance over all cells via the LoS path selection. To optimally reconcile these trade-offs, we jointly optimized the locations of PIRSs and AIRSs and the tile number at each location. This combinatorial problem was solved by applying a partial enumeration method. Our numerical results showed that the proposed algorithm achieves near-optimal performance without full enumeration and significantly outperforms other baseline deployment schemes. It was also shown that the joint use of AIRSs and PIRSs can reduce the total deployment cost by dispensing with a large number of passive reflecting elements for a given SNR target. This paper can be extended in various promising directions for future work. For example, the user SNRs in this paper are calculated based only on the theoretical LoS channel model; it would be more practical to design the IRS deployment based on actual channel measurements involving non-LoS channels. In addition, how to conduct the cell division and find good candidate IRS locations efficiently in practice is also worth investigating in future work.
2310.20318
Formation of a rapidly rotating classical Be-star in a massive close binary system
This paper investigates the spin-up of a mass-accreting star in a close binary system passing through the first stage of mass exchange in the Hertzsprung gap. Inside an accreting star, angular momentum is carried by meridional circulation and shear turbulence. The circulation carries part of the angular momentum entering the accretor to its surface. The greater the rate of arrival of angular momentum in the accretor is, the greater this part. It is assumed that this part of the angular momentum can be removed by the disk further from the accretor. If the angular momentum in the matter entering the accretor is more than half the Keplerian value, then the angular momentum obtained by the accretor during mass exchange stage does not depend on the rate of arrival of angular momentum. The accretor may have the characteristics of a Be-star immediately after the end of mass exchange.
Evgeny Staritsin
2023-10-31T09:53:13Z
http://arxiv.org/abs/2310.20318v1
# Formation of a rapidly rotating classical Be-star in a massive close binary system ###### Abstract This paper investigates the spin-up of a mass-accreting star in a close binary system passing through the first stage of mass exchange in the Hertzsprung gap. Inside an accreting star, angular momentum is carried by meridional circulation and shear turbulence. The circulation carries part of the angular momentum entering the accretor to its surface. The greater the rate of arrival of angular momentum in the accretor is, the greater this part. It is assumed that this part of the angular momentum can be removed by the disk further from the accretor. If the angular momentum in the matter entering the accretor is more than half the Keplerian value, then the angular momentum obtained by the accretor during the mass exchange stage does not depend on the rate of arrival of angular momentum. The accretor may have the characteristics of a Be-star immediately after the end of mass exchange. stars: binaries: close - stars: rotation - stars: early-type - stars: emission line, Be ## 1 Introduction Classical Be-stars include OBA stars with observed or previously observed emission in the Balmer lines of hydrogen (Porter & Rivinius 2003). These stars are not supergiants and have large rotational velocities. Among Be-stars, there is a group of early spectral subclasses (B3-O9). The surface rotational velocities of these stars range widely. The lower range limit is 40%-60% of the Keplerian value, while the upper limit is 90%-100% (Cranmer 2005). The origin of the large rotational velocities of Be-stars is not clear. Young B-stars of the early spectral subclasses and O-stars are characterized by lower rotational velocities (Huang et al. 2010). 70% of these stars are observed in binary and multiple systems (Chini et al. 2012; Sana et al. 2012). When selection effects are taken into account, practically all these stars are expected to belong to binary and multiple systems. Mass exchange in a binary system may be the reason for the rapid rotation of the star receiving mass. Population synthesis of Be-stars in binary systems makes it possible to reproduce the observed number of these stars in the Galaxy (Pols et al. 1991; Portegies Zwart 1995; Van Bever & Vanbeveren 1997; Shao & Li 2014; Hastings et al. 2021). A simple estimation, made assuming the instantaneous redistribution of angular momentum in the stellar interior to solid-state rotation, shows that a \(5\%-10\%\) increase in the star's mass due to accretion of mass with Keplerian velocity leads to a critical rotation state (Packet 1981). The question of what happens when there is continued accretion onto a star close to a state of critical rotation has been discussed in Paczynski (1991), Popham & Narayan (1991), Colpi et al. (1991), and Bisnovatyi-Kogan (1993). Paczynski (1991), Popham & Narayan (1991), and Colpi et al. (1991) used different approaches. All authors agree that accretion does not stop when the star's speed of rotation reaches a critical value. Paczynski (1991) studied the whole star-boundary layer-accretion disk system for various rotations of the central star. For models rotating slightly above critical, mass accretion is accompanied by the loss of angular momentum from the star to the disk, mediated by viscous stresses. However, the solutions obtained in Paczynski (1991), Popham & Narayan (1991), and Colpi et al. (1991) are not self-consistent.
The condition for a self-consistent solution for a system consisting of a star in a state of critical rotation and an accretion disk is that "the star absorbs accreted matter with a certain angular momentum, such that the star remains in a state of critical rotation" (Bisnovatyi-Kogan 1993). Let \(J(M)\) be the angular momentum of a star with mass \(M\) in a state of critical rotation and let \(j_{e}^{Kep}\) be the specific Keplerian angular momentum at the equator of the star. Then \(j_{a}=dJ/dM<j_{e}^{Kep}\). A mass-accreting star can move along the sequence of stars in a state of critical rotation \(J(M)\) if the excess angular momentum \(\triangle j=j_{e}^{Kep}-j_{a}\) is eliminated. Bisnovatyi-Kogan (1993) constructed models of accretion disks that remove excess angular momentum from the surface of a star. At the same time, the speed of rotation at the star's surface remains critical. Thus, an increase in the mass and angular momentum of a star in a critical rotation state may occur due to the removal of excess angular momentum from the star by the accretion disk (Paczynski 1991; Bisnovatyi-Kogan 1993). Physical processes such as meridional circulation and turbulence require finite amounts of time to transfer angular momentum (Staritsin 2019, 2021). At the very beginning of accretion, only the outer layers of the star, including the accreted mass, rotate rapidly. The star's surface gains critical rotation shortly after the start of accretion. Later, at the accretion stage in a state of accretor critical rotation, the circulation carries part of the angular momentum brought in along with the accreted mass from the subsurface layers to the star's surface (Staritsin 2022). Thus, accreted layers can shrink, as usually happens during accretion. The angular momentum transferred by circulation to the star's surface can be removed through an accretion disk (Paczynski 1991; Bisnovatyi-Kogan 1993). In this way, the mass and angular momentum of an accretor in a state of critical rotation increase due to the removal of excess angular momentum from the accreted layers to the accretor surface and the further removal of this angular momentum from the star. In Staritsin (2022), the transfer of angular momentum in the accretor interior was carried out only by meridional circulation; the turbulence was artificially suppressed. This made it possible to elucidate the transport properties of circulation in an accreting star's interior, but the role of turbulence in angular momentum transport within the accretor remained unclear. As for the angular momentum input, only one option was considered: the effective transport of angular momentum from the disk's boundary layer to the accretor's upper layer. In this paper, we consider two mechanisms of angular momentum transfer in an accreting star interior, namely circulation and turbulence. This allows us to find the role of turbulence in the spinning up of a star. We also take into account a possible reduction of the input angular momentum. This decrease can be attributed both to the transfer of angular momentum from the boundary layer to the outer parts of the disk and to sub-Keplerian rotation in the disk. The accretor's rotation obtained as a result of mass exchange is studied as a function of the angular momentum brought in during the mass exchange.

## 2 Basic Equations and Simplifications

### The angular momentum input

The matter lost by the donor due to the filling of the Roche lobe falls into the accretor's gravitational field and swirls around it.
The formation of gas structures around the accretor, in particular the formation of a disk and of the velocity field in it, depends on the ratio between three quantities: the size of the accretor \(R\), the minimum distance \(\omega_{min}\) from the center of the accretor to the central line of the stream of matter flowing from the donor through the inner Lagrangian point \(L_{1}\), and the distance \(\omega_{d}\) from the accretor's center to the edge of the inviscid disk (Lubow & Shu 1975).

Transient disks with sub-Keplerian rotation have been found in direct-impact systems (\(\omega_{d}<R\)), for example, RW Tau (Kaitchuck & Honeycutt 1982) and \(\beta\) Per (Cugier & Molaro 1984; Richards 1992). Three-dimensional hydrodynamic calculations show disk formation in such systems; the rotation velocity is 80% and 60% of the Keplerian value at the inner and outer edges of the disk, respectively (Raymer 2012). Both transient disks (SW Cyg) and permanent, but variable, accretion disks (RY Gem, TT Hya, AU Mon) have been discovered in grazing-impact systems (\(\omega_{min}<R<\omega_{d}\)). The velocity fields in the transient disk of the SW Cyg system and in the permanent disk of the RY Gem system are sub-Keplerian (Kaitchuck 1988, 1989). Asymmetric parts were found in the disks of the TT Hya and AU Mon systems; the gas in the asymmetric part of the disk in the AU Mon system moves at a sub-Keplerian velocity (Richards et al. 2014). Hydrodynamic calculations also show the possibility of disk formation at sub-Keplerian velocities in these systems (Richards & Ratliff 1998). Permanent disks are found in systems with \(R<\omega_{min}\). The radial component of the matter velocity in the disk is directed towards the accretor and is 10-30 \(km/s\). The change of the tangential component with distance from the accretor may differ from the Keplerian one (Etzel et al. 1995).

The aforementioned observational data and the results of hydrodynamic calculations relate to systems with low-mass accreting components (\(M\leq 6\)\(M_{\odot}\)) and with a ratio of donor mass to accretor mass within the range of 0.2 to 0.3. The formation of Be-stars of the early spectral subclasses occurs in systems with larger component masses, and the ratio of donor mass to accretor mass varies widely there. Mass transfer in such systems is non-conservative (Van Rensbergen et al. 2011; Deschamps et al. 2015). The star receiving mass increases in volume (Benson 1970; Kippenhahn & Meyer-Hofmeister 1977). The distance between the two stars depends on how the system loses mass and angular momentum. So the formation of gas structures in the Roche lobe of an accretor depends on the loss of mass and angular momentum from the system. A quantitative theory of mass and angular momentum losses from a close binary system has not yet been developed. The formation of conditions for sub-Keplerian rotation in an accretion disk due to the loss of mass and angular momentum from the binary system cannot be ruled out. Thus, the possibility of mass accretion with sub-Keplerian velocities of rotation should be considered.

At the very beginning of accretion, when the accretor's rotation velocity is low, the rotation velocity of the disk matter decreases in the narrow boundary layer from the maximum value in the disk \(\Omega_{max}\) to the value on the star's surface \(\Omega_{s}\) (Paczynski 1991).
Turbulence can remove angular momentum from the boundary layer to the accretor's upper layers at a rate of

\[\frac{dJ}{dt}\ =\ \frac{2}{3}R^{2}(\Omega_{max}-\Omega_{s})\dot{M}, \tag{1}\]

where \(J\) is the angular momentum of the accretor, \(t\) is time, and \(\dot{M}\) is the mass accretion rate. Supersonic shear flow in the boundary layer is a source of acoustic waves. The waves can carry angular momentum out of the boundary layer both into the accretor's outer part and into the disk's outer part (Dittmann 2021; Coleman et al. 2022). In this case, the amount of angular momentum coming from the boundary layer into the accretor is less than the Keplerian one.

In an earlier study (Staritsin 2022), we assumed that at the stage of subcritical rotation angular momentum enters the accretor through two channels: together with matter rotating at the same velocity as the accretor's surface, and via turbulence at the rate (1). This is a case of high efficiency of angular momentum transfer from the boundary layer to the accretor's upper part. The transfer of angular momentum in the accretor's interior was carried out by meridional circulation only; turbulence was artificially suppressed. In the current calculations, angular momentum transfer in the accretor's interior can be carried out both by meridional circulation and by turbulence.

We have studied two variants for the arrival of angular momentum into the accretor. In the first variant, to clarify the influence of angular momentum transport by turbulence in the accretor's interior on the spinning up of the accretor, we calculated accretion with the same rate of arrival of angular momentum into the accretor as in Staritsin (2022). At the stage of subcritical rotation, the parameter \(\Omega_{max}\) in the angular momentum source (1) is equal to \(\alpha\ \Omega^{Kep}\), where \(\alpha=0.8\); here, \(\Omega^{Kep}\) is the Keplerian angular velocity of the star's surface at the equator. After the angular velocity of the accretor's surface increases to the value \(\alpha\ \Omega^{Kep}\), the arrival of angular momentum from the boundary layer (1) stops. The angular velocity of the added matter is set equal to \(\alpha\ \Omega^{Kep}\) for the remainder of the mass exchange. In the second variant, the case of extremely low efficiency of angular momentum transfer from the boundary layer to the accretor's upper part is considered. The angular momentum source (1) is assumed to be zero in this case. As long as the angular velocity of the star's surface is less than \(\alpha\ \Omega^{Kep}\), the star accretes matter with the same angular velocity as that of the star's surface. After the surface angular velocity increases to the value \(\alpha\ \Omega^{Kep}\), the angular velocity of the added matter remains equal to \(\alpha\ \Omega^{Kep}\). In order to determine the dependence of the accretor's rotation state after the end of mass exchange on the angular momentum content of the added mass, calculations were carried out at four values of \(\alpha\): \(0.8\), \(0.6\), \(0.4\), and \(0.2\).
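As a concrete reading of Eq. (1) and of the switch-off rule in the first variant above, a minimal sketch (in Python; the function name, the cgs unit convention, and the combination of the two rules into one function are ours) might look as follows:

```python
import numpy as np

G = 6.674e-8  # gravitational constant, cgs units

def boundary_layer_torque(R, M, omega_s, mdot, alpha=0.8):
    """Angular momentum input rate from the boundary layer, Eq. (1), with
    Omega_max = alpha * Omega_Kep. The source switches off once the surface
    angular velocity omega_s reaches alpha * Omega_Kep (first variant)."""
    omega_kep = np.sqrt(G * M / R**3)   # Keplerian angular velocity at the equator
    omega_max = alpha * omega_kep
    if omega_s >= omega_max:
        return 0.0
    return (2.0 / 3.0) * R**2 * (omega_max - omega_s) * mdot
```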
### Angular momentum transfer in the accretor's interior

Angular momentum transfer in the radiative layers of a star is taken into account in the framework of the shellular rotation model (Zahn 1992). In terms of this model, two mechanisms of angular momentum transfer are considered: meridional circulation and shear turbulence. The angular momentum transfer is described by the law of conservation of angular momentum (Tassoul 1978):

\[\frac{\partial(\rho\varpi^{2}\Omega)}{\partial t}+\mathrm{div}(\rho\varpi^{2}\Omega\mathbf{u})=\mathrm{div}(\rho\nu_{v}\varpi^{2}\,\mathrm{grad}\,\Omega).\]

The meridional circulation velocity \(\mathbf{u}\) is determined from the law of conservation of energy in its stationary form (Maeder & Zahn 1998):

\[\rho T\mathbf{u}\cdot\mathrm{grad}\,s=\rho\varepsilon_{n}+\mathrm{div}(\chi\,\mathrm{grad}\,T)-\mathrm{div}\,\mathbf{F}_{h}.\]

In these equations, \(\rho\) is the density, \(\varpi\) the distance to the axis of rotation, \(\Omega\) the angular velocity, \(\nu_{v}\) the turbulent viscosity in the vertical direction, \(T\) the temperature, \(s\) the specific entropy, \(\varepsilon_{n}\) the nuclear energy release rate, \(\chi\) the thermal conductivity, and \(\mathbf{F}_{h}\) the turbulent enthalpy flux in the horizontal direction, \(\mathbf{F}_{h}=-\nu_{h}\rho T\,\nabla_{h}s\), with \(\nu_{h}\) the turbulent viscosity in the horizontal direction. The coefficients of turbulent viscosity were determined by Talon & Zahn (1997), Maeder (2003), and Mathis et al. (2004). The convective core rotates as a solid body. These equations are solved together with the equations of stellar structure and evolution. We used a set of programs from Paczynski (1970), modified to calculate the evolution of rotating stars (Staritsin 1999, 2005, 2007, 2009, 2014).
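To make the role of the diffusive (shear-turbulence) term concrete, a schematic one-dimensional update of \(\Omega\) on a radial grid might read as follows. This is a sketch only: advection by meridional circulation, the horizontal flux \(\mathbf{F}_{h}\), and any time-step stability control are omitted, and the discretization is ours, not the one used in the evolution code:

```python
import numpy as np

def diffuse_omega(r, rho, nu_v, omega, dt):
    """One explicit step for the diffusive part of the shellular angular
    momentum equation, written in 1-D radial form:
        rho * d(r^2 Omega)/dt = (1/r^2) d/dr ( rho * nu_v * r^4 * dOmega/dr ).
    Zero-flux boundaries; meridional-circulation advection is not included."""
    r_h = 0.5 * (r[1:] + r[:-1])            # interface radii
    rho_h = 0.5 * (rho[1:] + rho[:-1])
    nu_h = 0.5 * (nu_v[1:] + nu_v[:-1])
    # turbulent angular momentum flux through each interface
    flux = rho_h * nu_h * r_h**4 * np.diff(omega) / np.diff(r)
    d_omega = np.zeros_like(omega)
    dr_c = 0.5 * (r[2:] - r[:-2])           # widths of interior cells
    d_omega[1:-1] = dt * np.diff(flux) / (rho[1:-1] * r[1:-1]**4 * dr_c)
    return omega + d_omega
```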
## 3 Calculation Results

### Binary system parameters

We consider mass exchange in a binary system with component masses of 13.4 \(M_{\odot}\) and 10.7 \(M_{\odot}\) and period \(P=35^{d}\), as in Staritsin (2022). By the beginning of mass exchange, the rotation of the 10.7 \(M_{\odot}\) star is synchronized with the orbital motion; its angular momentum is equal to \(1.3\times 10^{51}\,g\cdot cm^{2}s^{-1}\). The star with a mass of 13.4 \(M_{\odot}\) loses 10.5 \(M_{\odot}\) over 12,000 years. After that, the star detaches from its Roche lobe and the mass exchange stage ceases. The second star accretes 5.3 \(M_{\odot}\), so the final mass of the accretor is 16.0 \(M_{\odot}\). The accretion rate was set constant, equal to the average value of \(\sim 4.4\times 10^{-4}\,M_{\odot}/year\). We consider the case when the entropy of the added matter is the same as that of the surface layers of the second star. The thermal timescale of the second star is longer than the duration of mass exchange. The star's reaction to the increase in mass in this case is well understood (Benson 1970; Flannery & Ulrich 1977; Neo et al. 1977). The second star is driven out of thermal equilibrium by mass accretion. Nuclear power output in the center of the second star increases, and some of the nuclear energy release is spent on an increase in entropy in the second star's central parts. Gravitational energy release in the surface layers is added to the nuclear energy release in the center. The typical luminosity distribution in the second star's interior is shown in Staritsin (2022) (see Fig. 4 there). The remaining part of the mass lost by the first star leaves the system. The tidal interaction between the two stars is unable to synchronize the accreting star with the orbit due to the long period of the system and the short accretion timescale. The accretion of matter and angular momentum, as well as transport processes inside the accretor and in the disk, determine the accretor's angular momentum.

### The case of high efficiency of angular momentum transfer from the boundary layer to the accretor's upper part

With the beginning of mass exchange, a circulation cell is formed in the subsurface layer of the accretor, in which the circulation carries the incoming angular momentum downwards. The cell consists of the accreted layers and the spun-up layers of the accretor located below. In the cell's upper part, the angular velocity has an almost constant value, but near the bottom of the cell it drops sharply to the initial value (Fig. 1). Therefore, in the lower part of the cell, the contribution of turbulence to angular momentum transfer exceeds the contribution of meridional circulation (Fig. 2). The bottom of the cell goes down into the star faster than when turbulence is artificially suppressed. The angular momentum entering the accretor is distributed over a larger mass of matter than in the case of suppressed turbulence. The rotation of the accretor's surface becomes critical when its mass increases to 11.3 \(M_{\odot}\) (in the case of suppressed turbulence, to 11.0 \(M_{\odot}\) (Staritsin 2022)). The distribution of angular velocity in the accretor's interior at this moment is shown in Fig. 3. At the stage of critical rotation, the mass of the accretor increases by another 4.7 \(M_{\odot}\). Another circulation cell is formed in the accreted matter. In this cell, the circulation transfers part of the angular momentum that came along with the accreted mass to the surface of the accretor (Fig. 4). It is assumed that this part of the angular momentum is removed from the accretor by the accretion disk (Paczynski 1991; Bisnovatyi-Kogan 1993).

Figure 1: Angular velocity at the bottom of the outer cell of the meridional circulation at the beginning of mass exchange.

Figure 2: Turbulent (dashed line), advective (dot-and-dashed line), and total (solid line) angular momentum flux in the accretor's interior at the stage of subcritical rotation.

Table 1: Angular momentum balance (in units of \(10^{52}\ g\cdot cm^{2}s^{-1}\)).

|           | Case 1 | Case 2 | Case 3 | Case 4 | Case 5 | Case 6 |
|-----------|--------|--------|--------|--------|--------|--------|
| \(J_{1}\) | 17.2   | 17.2   | 13.3   | 10.3   | 6.9    | 3.7    |
| \(J_{2}\) | 11.2   | 9.7    | 6.0    | 3.0    | 0.0    | 0.0    |
| \(J_{3}\) | 5.2    | 5.4    | 5.3    | 5.3    | 5.1    | 2.9    |
| \(J_{4}\) | 0.8    | 2.1    | 2.0    | 2.0    | 1.8    | 0.8    |
| \(J_{5}\) | 6.1    | 7.6    | 7.4    | 7.4    | 7.0    | 3.8    |

The rows show: (\(J_{1}\)) the angular momentum that entered the accretor during mass exchange; (\(J_{2}\)) the angular momentum removed from the accretor during mass exchange; (\(J_{3}\)) the angular momentum remaining in the accreted mass; (\(J_{4}\)) the angular momentum transferred to the part of the accretor that made up the star initially; (\(J_{5}\)) the angular momentum of the accretor after mass exchange. The columns show the results for the following cases: (Case 1) angular momentum transfer from the boundary layer is considered, turbulence in the accretor's interior is artificially suppressed (Staritsin 2022); (Case 2) angular momentum transfer from the boundary layer is considered, turbulence in the accretor's interior is considered; (Cases 3-6) angular momentum transfer from the boundary layer is not considered, turbulence in the accretor's interior is present, and \(\alpha\) is equal to 0.8, 0.6, 0.4, and 0.2, respectively.
As a result of the decrease in angular momentum, the accreted layers contract, and the velocity of their rotation remains permanently lower than the Keplerian velocity. In the circulation cell formed at the beginning of mass exchange, the transfer of angular momentum into the accretor's interior continues. The mass of matter in this cell increases as the upper boundary of the cell moves up in mass coordinate and the bottom of the cell moves down. The bottom of the cell reaches the convective core when the accretor mass increases to 11.9 \(M_{\odot}\) (in the case of suppressed turbulence, to 15 \(M_{\odot}\) (Staritsin 2022)). The role of turbulence thus lies in the rapid lowering of the bottom of the circulation cell in which the circulation carries angular momentum into the star's interior.

The amount of angular momentum removed from the accretor during mass exchange depends only slightly on the processes of angular momentum transfer within the accretor (Fig. 5). When turbulence is present, the amount of angular momentum transferred to the accretor's inner layers increases, and the amount of angular momentum carried to the accretor's surface decreases, compared to the case of suppressed turbulence (Table 1). The angular momentum brought into the accretor during mass exchange is \(1.72\times 10^{53}\,g\cdot cm^{2}s^{-1}\); 12% of this value enters the inner layers that made up the accretor initially, 31% remains in the accreted mass, and 57% is carried to the accretor's surface and is removed by the disk. In the case of suppressed turbulence, the corresponding values are 5%, 30%, and 65% (Staritsin 2022). After the end of mass exchange, the accretor's angular momentum is greater when turbulence is present (Table 1).

Figure 3: Angular velocity when the rotation of the accretor's surface becomes critical, with active turbulence (solid line) and with artificially suppressed turbulence (Staritsin 2022) (dashed line).

### The case of extremely low efficiency of angular momentum transfer from the boundary layer to the accretor's upper part

At the beginning of mass exchange, the rotation velocity of the incoming mass and of the accretor's surface coincide. The rate of angular momentum arrival into the accretor is significantly lower than when turbulence and/or waves transfer angular momentum from the boundary layer to the accretor's outer part. Due to this low rate, the accretor's angular momentum increases slowly at the beginning of accretion (Fig. 5). The general picture of angular momentum transfer in the accretor's interior at \(\alpha\) equal to 0.8 and 0.6 is the same as when angular momentum flows from the boundary layer to the accretor's upper part. The difference is that the total amount of angular momentum that enters the accretor during mass exchange decreases (Table 1). The reasons for the decrease are the absence of the source (1) and the decrease of the parameter \(\alpha\). However, with \(\alpha\) equal to 0.8 and 0.6, the rotation velocity of the accretor's surface still increases to the critical value. This occurs when the accretor mass increases to 12.9 \(M_{\odot}\) and 13.3 \(M_{\odot}\), respectively (Fig. 6). In these cases, a circulation cell is formed in the accretor's outer layer, in which the circulation transfers part of the angular momentum of the accreted layers to the accretor's surface.
The amount of angular momentum removed from the accreted layers and lost by the accretor in these cases is less than in the calculations with the source (1) (Table 1). The state of accretor rotation once mass exchange finishes is approximately the same as when the arrival of angular momentum from the boundary layer to the accretor's upper part was considered. A decrease in the angular momentum entering the accretor only results in a decrease in the angular momentum taken out of the accretor at the stage of accretion during critical rotation.

Figure 4: Angular momentum flux in the accretor's interior when its mass is equal to 12 \(M_{\odot}\), 14 \(M_{\odot}\), and 16 \(M_{\odot}\).

At \(\alpha\) equal to 0.4 and 0.2, a smaller amount of angular momentum enters the accretor (Table 1). The rotation velocity of the accretor's surface remains subcritical throughout the entire mass exchange stage; at \(\alpha\) equal to 0.4, it approaches the critical value by the end of this stage. In both cases, at the beginning of mass exchange, a circulation cell is formed in the accretor's subsurface layer, in which the angular momentum of the accreted matter is transferred into the accretor's interior. The bottom of the cell reaches the convective core when the mass of the accretor increases to 13.1 \(M_{\odot}\) at \(\alpha\) equal to 0.4 and to 13.9 \(M_{\odot}\) at \(\alpha\) equal to 0.2. In both cases, the angular momentum of the accreted mass is transferred into the star's interior throughout the mass exchange stage, and the accretor retains all the angular momentum received with the accreted mass (Fig. 6). Once mass exchange finishes, the angular momentum of the accretor at \(\alpha\) equal to 0.4 is a little less than at \(\alpha\) equal to 0.6 and 0.8, and at \(\alpha\) equal to 0.2 it is significantly less (Table 1).

### Accretor rotation state after mass exchange

The distribution of the angular velocity of rotation in the accretor's interior immediately after the end of mass exchange is shown in Fig. 7. In all cases, the angular velocity decreases rapidly in a layer of variable chemical composition located between the chemically homogeneous part of the radiative envelope and the convective core. A similar jump forms whenever angular momentum enters the accretor in a short time - on the donor's thermal timescale or faster (Staritsin 2021). The thermal timescale of the accretor is longer than that of the donor in the cases considered in Staritsin (2021, 2022). After the end of mass exchange, the jump gradually decreases and disappears over the thermal timescale of the accretor (see, for example, Fig. 3 in Staritsin (2021)). The angular velocity in the accretor's interior after mass exchange at \(\alpha\) equal to 0.8 and 0.6 is almost independent of the angular momentum content of the added mass and of whether angular momentum is transferred from the boundary layer to the accretor's upper layer (Fig. 7). In these cases, the accretor's angular momentum after mass exchange is almost equal to \(\sim 7.5\times 10^{52}\,g\cdot cm^{2}s^{-1}\) (Table 1).

Figure 5: The amount of angular momentum entering the accretor (dashed line) and the angular momentum of the accretor (solid line), depending on its mass, at \(\alpha=0.8\), when the source of angular momentum (1) is considered (blue) and not taken into account (black). The case of artificially suppressed turbulence (Staritsin 2022) is also shown (dot-and-dashed line).
An isolated star with a mass of 16 \(M_{\odot}\) and this value of angular momentum has critical rotation throughout the stage of hydrogen burning in the core (Staritsin 2007). Consequently, due to the exchange of mass, the accretor acquires a rotation typical for Be-stars. At \(\alpha\) equal to 0.4, the accretor receives with the accreted mass almost the same angular momentum as remains in the accretor when \(\alpha\) is 0.8 or 0.6 (Table 1). Therefore, at \(\alpha\) equal to 0.4, the accretor also has a rotation typical for Be-stars. At \(\alpha\) equal to 0.2, the accreted mass brings much less angular momentum (Table 1). The angular velocity in the accretor's interior is lower than in the other cases (Fig. 7), and the rotation of the accretor's surface immediately after mass exchange is slower than that of Be-type stars. In an isolated star with the same mass and angular momentum as the accretor, the removal of angular momentum from the inner layers to the outside proceeds intensively at the stage of hydrogen burning in the core (Staritsin 2007, 2009). The angular velocity of the star's surface, expressed in units of the Keplerian angular velocity, increases; at the last steps of this stage, the star acquires a rotation typical for Be-stars of the early spectral subclasses. If tidal interaction is weak, then even in this case the accretor can obtain the characteristics of a Be-star after the end of mass exchange, but only after a long period of time, of the order of a part of the hydrogen burning stage in the core.

Figure 6: The amount of angular momentum entering the accretor (dashed lines) and the angular momentum of the accretor (solid lines), depending on its mass, at \(\alpha=0.8\) (black), \(\alpha=0.6\) (red), \(\alpha=0.4\) (green), and \(\alpha=0.2\) (orange), when the source of angular momentum (1) is not considered. At \(\alpha=0.4\) and \(\alpha=0.2\), the accretor retains the entire angular momentum obtained with the accreted mass, so the dashed lines coincide with the solid ones.

## 4 Conclusions

Meridional circulation is a flexible mechanism for the transfer of angular momentum in the interior of a rotating star. The direction and rate of angular momentum transfer by circulation may vary widely at the stage of mass accretion, depending on the star's rotation state and on the rate at which angular momentum arrives along with the accreted mass, with waves, and/or due to turbulence. Two main circulation cells are formed due to the accretion of mass and angular momentum. In the cell formed at the stage of subcritical rotation of the accretor, circulation transfers angular momentum into the accretor's interior. Only in the lower part of this cell does turbulence make the main contribution to the transfer of angular momentum; due to turbulence, the cell bottom quickly moves downwards into the accretor's interior. In the cell formed at the stage of critical rotation, circulation transfers part of the angular momentum of the accreted mass to the surface of the star; the greater the angular momentum content of the incoming matter, the greater this part.

We have considered the case of mass exchange in a binary system in which half of the mass lost by the donor falls onto the accretor. If the specific angular momentum of the mass falling onto the accretor exceeds half the Keplerian value at the accretor's boundary, the state of rotation of the accretor after the end of mass exchange does not depend on the angular momentum entering the accretor.
In other words, processes that reduce the angular momentum of the matter located around the accretor, but not below half the Keplerian value, do not affect the angular momentum and the state of rotation that the accretor acquires by the end of the mass exchange stage. These processes only affect the amount of angular momentum carried by circulation from the accreted mass to the accretor's surface and removed further from the accretor by a disk or other processes. In the considered system, with initial component masses of 13.4 \(M_{\odot}\) and 10.7 \(M_{\odot}\), the accretor has a rotation typical for Be-stars immediately after the end of mass exchange if, during mass exchange, the specific angular momentum of the mass added to the accretor exceeded 40% of the Keplerian value.

Figure 7: The distribution of angular velocity in the accretor's interior after the end of mass exchange in the cases when the source of angular momentum (1) is considered (blue) and not considered, at \(\alpha=0.8\) (black), \(\alpha=0.6\) (red), \(\alpha=0.4\) (green), and \(\alpha=0.2\) (orange).

###### Acknowledgements.

This work was supported by the Ministry of Science and Education, FEUZ-2023-0019.
2309.13869
PRiSM: Enhancing Low-Resource Document-Level Relation Extraction with Relation-Aware Score Calibration
Document-level relation extraction (DocRE) aims to extract relations of all entity pairs in a document. A key challenge in DocRE is the cost of annotating such data which requires intensive human effort. Thus, we investigate the case of DocRE in a low-resource setting, and we find that existing models trained on low data overestimate the NA ("no relation") label, causing limited performance. In this work, we approach the problem from a calibration perspective and propose PRiSM, which learns to adapt logits based on relation semantic information. We evaluate our method on three DocRE datasets and demonstrate that integrating existing models with PRiSM improves performance by as much as 26.38 F1 score, while the calibration error drops as much as 36 times when trained with about 3% of data. The code is publicly available at https://github.com/brightjade/PRiSM.
Minseok Choi, Hyesu Lim, Jaegul Choo
2023-09-25T04:42:39Z
http://arxiv.org/abs/2309.13869v1
# PRiSM: Enhancing Low-Resource Document-Level Relation Extraction with Relation-Aware Score Calibration

###### Abstract

Document-level relation extraction (DocRE) aims to extract relations of all entity pairs in a document. A key challenge in DocRE is the cost of annotating such data, which requires intensive human effort. Thus, we investigate the case of DocRE in a low-resource setting, and we find that existing models trained on low data overestimate the NA ("no relation") label, causing limited performance. In this work, we approach the problem from a calibration perspective and propose PRiSM, which learns to adapt logits based on relation semantic information. We evaluate our method on three DocRE datasets and demonstrate that integrating existing models with PRiSM improves performance by as much as 26.38 F1 score, while the calibration error drops as much as 36 times when trained with about 3% of data. The code is publicly available at [https://github.com/brightjade/PRiSM](https://github.com/brightjade/PRiSM).

## 1 Introduction

Document-level relation extraction (DocRE) is a fundamental task in natural language understanding, which aims to identify relations between entities that exist in a document. A major challenge in DocRE is the cost of annotating such documents, requiring annotators to consider relations of all possible entity combinations Yao et al. (2019); Zaporojets et al. (2021); Tan et al. (2022). However, there is a lack of ongoing studies investigating the low-resource setting in DocRE Zhou et al. (2023), and we discover that most of the current DocRE models show subpar performance when trained with a small set of data. We argue that the reason is two-fold. First, the long-tailed distribution of DocRE data encourages models to be overly confident in predicting frequent relations and less sure about infrequent ones Du et al. (2022); Tan et al. (2022). Out of the 96 relations in DocRED Yao et al. (2019), a widely-used DocRE dataset, the 7 most frequent relations account for 55% of the total relation triples. Under the low-resource setting, chances to observe infrequent relations become much slimmer. Second, DocRE models predict the NA ("no relation") label if an entity pair does not express any relation. In DocRED, about 97% of all entity pairs have the NA label. With limited data, there is much less signal for ground-truth (GT) labels during training, resulting in models overpredicting the NA label instead.

High confidence in common relations and the NA label and low confidence in rare relations suggest that models may be miscalibrated. We hypothesize that lowering the former and raising the latter would improve the overall RE performance. At a high level, we wish to penalize logits of frequent labels (including NA) and supplement logits of infrequent labels such that models are able to predict them without seeing them much during training. To implement such behavior, we leverage relation semantic information, which has proved to be effective in low-resource sentence-level RE Yang et al. (2020); Dong et al. (2021); Zhang and Lu (2022). In this work, we propose the **P**air-**R**elation **S**imilarity **M**odule (PRiSM) that learns to adapt logits by exploiting semantic information from label descriptions, as depicted in Figure 1. Specifically, we compute a similarity function for each entity pair embedding, constructed from the two entities of interest, with relation embeddings, built from the corresponding label descriptions.
Figure 1: An overview of our proposed method. Top represents the original DocRE framework. PRiSM (bottom) leverages relation descriptions to compute scores for each relation triple. These scores are then used to reweight the prediction logits.

PRiSM then learns relation representations to output adaptive scores for each relation triple. Note that previous work mostly utilized relation representations for self-supervised learning Dong et al. (2021); Du et al. (2022); Zhou et al. (2023), whereas PRiSM uses them to directly adjust logits, which brings a calibration effect. To elaborate further, let us say that classification logits are statistical scores and similarities are semantic scores. We then have four scenarios: 1) the relation is common and GT, 2) the relation is common but not GT, 3) the relation is uncommon but GT, and 4) the relation is uncommon and not GT. In Cases 1 and 4, the statistical and semantic scores are both either high or low, and thus appending PRiSM mostly does not affect the original RE predictions. In Case 2, the statistical score is high, but the semantic score is low, possibly negative, penalizing the statistical score; this is the case of PRiSM decreasing the confidence of common relations and the NA label. In Case 3, the statistical score is low, but the semantic score is high; this is the case of PRiSM increasing the confidence of uncommon relations. As such, PRiSM incorporates both statistical and semantic scores such that confidence is adjusted regardless of relation frequency.

Our technical contributions are three-fold. First, we propose PRiSM, a relation-aware calibration technique that improves model performance and adjusts model confidence in low-resource DocRE. Second, we demonstrate the performance improvement across various state-of-the-art models integrated with PRiSM. Third, we validate the effectiveness of our method on widely-used long-tailed DocRE datasets and calibration metrics.

## 2 Methodology

### Problem Formulation

Given a document \(d\), a set of \(n\) annotated entities \(\mathcal{E}=\{e_{i}\}_{i=1}^{n}\), and a pre-defined set of relations \(\mathcal{R}\cup\{\texttt{NA}\}\), the task of DocRE is to extract the relation triple set \(\{(e_{h},r,e_{t})|e_{h}\in\mathcal{E},r\in\mathcal{R},e_{t}\in\mathcal{E}\}\subseteq\mathcal{E}\times\mathcal{R}\times\mathcal{E}\) from all possible relation triples, where \((e_{h},r,e_{t})\) denotes that a relation \(r\) holds between head entity \(e_{h}\) and tail entity \(e_{t}\). An entity \(e_{i}\) may appear \(k\) times in the document, in which case we denote the corresponding instances as entity mentions \(\{m_{ij}\}_{j=1}^{k}\). A relation \(r\) exists between an entity pair \((e_{h},e_{t})\) if any pair of their mentions expresses the relation; if they do not express any relation, the entity pair is labeled as NA.

### Document-Level Relation Extraction

Given a document \(d\) as an input token sequence \(\mathbf{x}=[x_{t}]_{t=1}^{l}\), where \(l\) is the length of the token sequence, we explicitly locate the position of entity mentions by inserting a special token "*" before and after each mention. The presence of the entity marker has proved to be effective in previous studies Zhang et al. (2017); Shi and Lin (2019); Baldini Soares et al. (2019). The entity-marked document is then fed into a pre-trained language model (PLM) encoder, which outputs the contextual embeddings: \([\mathbf{h}_{1},\mathbf{h}_{2},...,\mathbf{h}_{l}]=\text{Encoder}(\mathbf{x})\).
We take the embedding of "*" at the start of each mention as the mention-level representation \(\mathbf{h}_{m_{ij}}\) of the entity \(e_{i}\). To extract the entity-level representation, we apply logsumexp pooling over all mentions \(\{m_{ij}\}_{j=1}^{k}\) of the entity \(e_{i}\):

\[\mathbf{h}_{e_{i}}=\log\sum_{j=1}^{k}\exp\left(\mathbf{h}_{m_{ij}}\right). \tag{1}\]

The logsumexp pooling is a smooth version of max pooling and has been shown to accumulate weak signals from each different mention representation, which results in better performance Jia et al. (2019). We pass the embeddings of head and tail entities through a linear layer followed by a non-linear activation to obtain the hidden representations: \(\mathbf{z}_{h}=\tanh(\mathbf{W}_{h}\mathbf{h}_{e_{h}}+\mathbf{b}_{h})\) and \(\mathbf{z}_{t}=\tanh(\mathbf{W}_{t}\mathbf{h}_{e_{t}}+\mathbf{b}_{t})\), where \(\mathbf{W}_{h},\mathbf{W}_{t},\mathbf{b}_{h},\mathbf{b}_{t}\) are learnable parameters. Then we calculate a score for relation \(r\) between entities \(h\) and \(t\) by taking a bilinear function:

\[s_{(h,r,t)}=\mathbf{z}_{h}^{\top}\mathbf{W}_{r}\mathbf{z}_{t}+\mathbf{b}_{r}, \tag{2}\]

where \(\mathbf{W}_{r},\mathbf{b}_{r}\) are learnable parameters.

### PRiSM

Following previous work Zhang and Lu (2022), we feed relation descriptions to a PLM encoder to obtain the relation embedding \(\mathbf{z}_{r}\) for relation \(r\). The details of the relation descriptions used can be found in Appendix A.4. We then construct the entity pair-level representation \(\mathbf{z}_{(h,t)}\) by mapping the head and tail embeddings to a linear layer followed by a non-linear activation: \(\mathbf{z}_{(h,t)}=\tanh(\mathbf{W}_{(h,t)}[\mathbf{z}_{h};\mathbf{z}_{t}]+\mathbf{b}_{(h,t)})\), where \(\mathbf{z}_{h}\), \(\mathbf{z}_{t}\) are concatenated and \(\mathbf{W}_{(h,t)}\), \(\mathbf{b}_{(h,t)}\) are learnable parameters. An adaptive score for relation \(r\) between entities \(h\) and \(t\) is computed by taking a similarity function between the entity pair embedding and the relation embedding: \(s^{\prime}_{(h,r,t)}=sim(\mathbf{z}_{(h,t)},\mathbf{z}_{r})\), where \(sim(\cdot)\) is cosine similarity. Formally, the probability of relation \(r\) between entities \(h\) and \(t\) is simply an addition of the two scores followed by sigmoid activation:

\[P(r\mid e_{h},e_{t})=\sigma(s_{(h,r,t)}+\lambda s^{\prime}_{(h,r,t)}), \tag{3}\]

where \(\lambda\) is the scale factor. Finally, we optimize our model with the binary cross-entropy (BCE) loss:

\[\mathcal{L}=-\frac{1}{T}\sum_{<h,t>}\sum_{r}\texttt{BCE}(P(r|e_{h},e_{t}),\bar{y}_{(h,r,t)}), \tag{4}\]

where \(\bar{y}\) is the target label and \(T\) is the total number of relation triples.
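Since the implementation below is built on PyTorch, the scoring path of Eqs. (1)-(3) can be sketched directly. This is our illustrative rendering, not the released code: the hidden size, the number of relation scores, the scale factor `lam`, and the assumption of one description embedding per scored relation are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PRiSMScorer(nn.Module):
    """Sketch of Eqs. (1)-(3): a bilinear relation score s plus a cosine
    pair-relation similarity s' that reweights the logits. The values of
    hidden, num_rel, and lam are illustrative, not the paper's exact ones."""

    def __init__(self, hidden=768, num_rel=96, lam=1.0):
        super().__init__()
        self.head_proj = nn.Linear(hidden, hidden)            # z_h = tanh(W_h h + b_h)
        self.tail_proj = nn.Linear(hidden, hidden)            # z_t = tanh(W_t h + b_t)
        self.bilinear = nn.Bilinear(hidden, hidden, num_rel)  # Eq. (2)
        self.pair_proj = nn.Linear(2 * hidden, hidden)        # z_(h,t)
        self.lam = lam

    @staticmethod
    def entity_embedding(mention_embs):
        # Eq. (1): logsumexp pooling over the mention embeddings of one entity
        return torch.logsumexp(mention_embs, dim=0)

    def forward(self, h_head, h_tail, rel_embs):
        # h_head, h_tail: (batch, hidden); rel_embs: (num_rel, hidden)
        z_h = torch.tanh(self.head_proj(h_head))
        z_t = torch.tanh(self.tail_proj(h_tail))
        s = self.bilinear(z_h, z_t)                           # statistical score
        z_ht = torch.tanh(self.pair_proj(torch.cat([z_h, z_t], dim=-1)))
        s_prime = F.cosine_similarity(z_ht.unsqueeze(1),      # semantic score
                                      rel_embs.unsqueeze(0), dim=-1)
        return torch.sigmoid(s + self.lam * s_prime)          # Eq. (3)
```

Training would then minimize the BCE of Eq. (4) over all relation triples, e.g. `F.binary_cross_entropy(probs, targets)`.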
## 3 Experiments

### Dataset

We evaluate our framework on three public DocRE datasets. DocRED Yao et al. (2019) is a widely-used human-annotated DocRE dataset constructed from Wikipedia and Wikidata. Re-DocRED Tan et al. (2022b) is a revised version of DocRED, addressing its incomplete annotation problem. DWIE Zaporojets et al. (2021) is a multi-task document-level information extraction dataset consisting of news articles collected from Deutsche Welle. Dataset statistics are shown in Table 5.

### Implementation Details

Our framework is built on PyTorch and Huggingface's Transformers library Wolf et al. (2020). We use the cased BERT Devlin et al. (2019) and RoBERTa Liu et al. (2019) for encoding the text and optimize their weights with AdamW Loshchilov and Hutter (2019). We tune our hyperparameters to maximize the \(F_{1}\) score on the development set. Additional implementation details are included in Appendix B. During inference, we predict all relation triples that have probabilities higher than the F1-maximizing threshold found on the development set. We conduct our experiments with three different random seeds and report the averaged results. Following Yao et al. (2019), all models are evaluated on \(F_{1}\) and Ign \(F_{1}\), where Ign \(F_{1}\) excludes the relations shared by the training and development/test sets. Moreover, we measure **Macro**, which computes the average of per-class \(F_{1}\), and **Macro@500**, **Macro@200**, and **Macro@100**, targeting rare relations whose frequency count in the training data is less than 500, 200, and 100, respectively.

Table 1: Dev and test results (Ign \(F_{1}\) and \(F_{1}\)) on DocRED and Re-DocRED with \(N=100\) and \(N=305\) training documents, for BERT\({}_{\text{BASE}}\), RoBERTa\({}_{\text{BASE}}\), SSAN, and ATLOP, each with and without PRiSM.
### Experimental Results

To simulate the low-data setting, we reduce the number of training documents \(N\) to 100 and 305, which is about 3% and 10% of the original data. To create each of the settings, we repeat random sampling until the label distribution resembles that of the full data. As shown in Table 1, we observe that performance increases consistently across different models when appended with PRiSM. In particular, PRiSM improves performance by a large margin when trained with just 3% of data, by as much as 24.43 Ign \(F_{1}\) and 26.38 \(F_{1}\) on the test set of DocRED for BERT\({}_{\text{BASE}}\). We also test PRiSM on RoBERTa\({}_{\text{BASE}}\) and two state-of-the-art models, SSAN (Xu et al., 2021) and ATLOP (Zhou et al., 2021), and notice a similar trend, indicating that our method is effective on various existing models. We additionally evaluate PRiSM using macro metrics in Table 2 and observe that adding PRiSM improves performance on infrequent relations, especially in the low-data setting. Lastly, we validate our method on a different dataset, DWIE, as illustrated in Table 3.

Table 3: Performance (%) on the DWIE dataset.

### Calibration Evaluation

We measure model calibration on two metrics: expected calibration error (ECE) (Naeini et al., 2015) and adaptive calibration error (ACE) (Nixon et al., 2019). ECE partitions predictions into a fixed number of bins and computes a weighted average of the difference between accuracy and confidence over the bins, while ACE puts the same number of predictions in each bin. We compare with general calibration methods such as temperature scaling (TS) (Guo et al., 2017) and class-distribution-aware TS (CDA-TS) (Islam et al., 2021). As reported in Table 4, PRiSM outperforms the other methods on both metrics, while also maintaining a comparable RE performance.

\begin{table}
\begin{tabular}{l|ccc|ccc}
\hline
 & \multicolumn{3}{c|}{\(N=100\)} & \multicolumn{3}{c}{\(N=305\)} \\
\hline
**Method** & \(F_{1}\)(\(\uparrow\)) & **ECE(\(\downarrow\))** & **ACE(\(\downarrow\))** & \(F_{1}\)(\(\uparrow\)) & **ECE(\(\downarrow\))** & **ACE(\(\downarrow\))** \\
\hline
Uncalibrated & 10.82 & 0.359\% & 0.379\% & 42.56 & 0.137\% & 0.164\% \\
TS & **38.19** & 0.144\% & 0.173\% & 48.49 & 0.053\% & 0.062\% \\
CDA-TS & 37.82 & 0.139\% & 0.167\% & **48.54** & 0.057\% & 0.078\% \\
PRiSM (ours) & 37.84 & **0.010\%** & **0.020\%** & 48.10 & **0.023\%** & **0.020\%** \\
\hline
\end{tabular}
\end{table}
Table 4: Comparison of calibration errors (with 10 bins) under a low-resource setting of DocRED.

We also visualize calibration with a reliability diagram (DeGroot and Fienberg, 1983; Niculescu-Mizil and Caruana, 2005) in Figure 2. We observe that PRiSM effectively lowers the confidence of the NA label and raises the confidence of low-frequency relations (bottom 89). For high-frequency relations (top 7), confidence is adjusted in both directions. In all cases, PRiSM displays the most stable curve, closest to perfect calibration (blue line).

Figure 2: Reliability diagram for BERT\({}_{\text{BASE}}\).
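For reference, ECE as reported in Table 4 can be computed in a few lines of NumPy. The equal-width-bin version below follows Naeini et al. (2015); ACE is obtained by instead filling each bin with the same number of sorted predictions. The function is our sketch, taking 0/1 correctness indicators as input:

```python
import numpy as np

def expected_calibration_error(confidence, correct, n_bins=10):
    """ECE: weighted average of |accuracy - confidence| over equal-width
    confidence bins. `correct` holds 0/1 indicators of whether each
    prediction matches its label."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidence[in_bin].mean())
            ece += in_bin.mean() * gap   # weight by the fraction of samples in the bin
    return ece
```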
## 4 Related Work

With the introduction of DocRED (Yao et al., 2019), many approaches were proposed to extract relations from a document (Wang et al., 2019; Ye et al., 2020; Zhang et al., 2021; Xu et al., 2021; Zhou et al., 2021; Xie et al., 2022). The long-tailed data problem of DocRE has been addressed in several studies Du et al. (2022); Tan et al. (2022), as has low-resource DocRE Zhou et al. (2023); however, most of these require additional pretraining, which is compute- and cost-intensive, whereas PRiSM only requires adjusting logits in existing models. Low-resource RE has been extensively studied at the sentence level, and we specifically focus on leveraging label information Yang et al. (2020); Dong et al. (2021); Zhang and Lu (2022), which PRiSM applies at the document level. In contrast to prior work on calibration Guo et al. (2017); Islam et al. (2021), our approach is relation-aware, updating logits at a much finer granularity.

## 5 Conclusion and Future Work

In this work, we propose a simple modular framework, PRiSM, which exploits relation semantics to update logits. We empirically demonstrate that our method effectively improves and calibrates DocRE models where the data is long-tailed and the NA label is overestimated.
For future work, we can apply PRiSM to more tasks such as event extraction and dialogue state tracking, which also involve long-tailed data and overestimated "null" labels.

## Limitations

Although our approach is resilient to data scarcity, quite a few annotated documents are still required for the model to learn the pattern. The ultimate goal of DocRE is undoubtedly to build a model that performs well zero-shot, but we believe our approach takes a step in that direction. Moreover, we process long documents (> 512 tokens) in a very naive way, as described in Appendix A.3, and we think that exploration of long-sequence modeling on longer document data could further enrich the field of DocRE.

## Acknowledgements

This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), the National Supercomputing Center with supercomputing resources including technical support (KSC-2022-CRE-0312), and Samsung Electronics Co., Ltd. We thank the anonymous reviewers for their constructive feedback.
2304.00143
Regression and Classification of Compositional Data via a novel Supervised Log Ratio Method
Compositional data in which only the relative abundances of variables are measured are ubiquitous. In the context of health and medical compositional data, an important class of biomarkers is the log ratios between groups of variables. However, selecting log ratios that are predictive of a response variable is a combinatorial problem. Existing greedy-search based methods are time-consuming, which hinders their application to high-dimensional data sets. We propose a novel selection approach called the supervised log ratio method that can efficiently select predictive log ratios in high-dimensional settings. The proposed method is motivated by a latent variable model and we show that the log ratio biomarker can be selected via simple clustering after supervised feature screening. The supervised log ratio method is implemented in an R package, which is publicly available at \url{https://github.com/drjingma/slr}. We illustrate the merits of our approach through simulation studies and analysis of a microbiome data set on HIV infection.
Jing Ma, Kristyn Pantoja, David E. Jones
2023-03-31T21:41:21Z
http://arxiv.org/abs/2304.00143v1
# Regression and Classification of Compositional Data via a novel Supervised Log Ratio Method

###### Abstract

Compositional data in which only the relative abundances of variables are measured are ubiquitous. In the context of health and medical compositional data, an important class of biomarkers is the log ratios between groups of variables. However, selecting log ratios that are predictive of a response variable is a combinatorial problem. Existing greedy-search based methods are time-consuming, which hinders their application to high-dimensional data sets. We propose a novel selection approach called the supervised log ratio method that can efficiently select predictive log ratios in high-dimensional settings. The proposed method is motivated by a latent variable model, and we show that the log ratio biomarker can be selected via simple clustering after supervised feature screening. The supervised log ratio method is implemented in an R package, which is publicly available at [https://github.com/drjingma/slr](https://github.com/drjingma/slr). We illustrate the merits of our approach through simulation studies and analysis of a microbiome data set on HIV infection.

_Keywords:_ balances; clustering; compositional data; log ratios; supervised learning; variable screening

## 1 Introduction

In many high-throughput sequencing studies, it is often costly to measure the absolute abundance of each feature, e.g. microbial taxa, metabolites, or genes (Barlow et al., 2020). In contrast, feature relative abundances are easily generated in metagenomics, metabolomics, and single-cell transcriptomics, resulting in compositional data. A key objective of these studies is to identify interpretable biomarkers that can predict a health outcome (Sepich-Poore et al., 2021).

Methods for predictive modeling of compositional data date back to the pioneering work of Aitchison and Bacon-Shone (1984), which proposed the log contrast regression model. The log contrast model is invariant to the scale of compositional data and has recently been extended to high-dimensional settings by incorporating \(\ell_{1}\) (Lin et al., 2014; Shi et al., 2016) or tree-based regularization (Wang and Zhao, 2017, 2017; Bien et al., 2021). However, the log contrast model selects individual features as biomarkers, which can be difficult to interpret because the underlying set of features is subject to the unit-sum constraint. Alternatively, one can select interpretable biomarkers in the form of log ratios (Rivera-Pinto et al., 2018; Quinn and Erb, 2020). Ratios circumvent the limitation of not knowing the absolute abundances (Morton et al., 2019) and also offer robustness against multiplicative bias that arises from the sequencing process (McLaren et al., 2019). Log ratios enforce symmetry around zero and constitute interpretable biomarkers that preserve the principles of compositional data analysis, e.g. scale-invariance. However, selecting log ratio biomarkers is a combinatorial problem. The greedy search algorithm (Rivera-Pinto et al., 2018) is computationally prohibitive and scales poorly to high-dimensional settings. Another work by Bates and Tibshirani (2019) considers pairwise log ratios, which unfortunately has identifiability issues.

In this paper, we propose a new approach, called the _supervised log ratio_ (SLR) method, for selecting log ratio biomarkers in high-dimensional regression and classification problems.
SLR is motivated by a latent variable model where the latent variable is directly associated with both the response and a sparse set of predictors. The problem of selecting the log ratio biomarker thus reduces to inference for the latent variable. The implementation of SLR consists of two main steps. In the first step, SLR screens for active variables by performing univariate regression of the response on each predictor after a log ratio transformation. In the second step, SLR clusters the active variables into two groups on a suitably defined dissimilarity measure and defines a biomarker using the log ratio between the two groups. Intuitively, the screening step removes spurious variables so that a simple clustering can effectively define the log ratio biomarker. As a result, SLR is able to select sparse and interpretable log ratios. We compare SLR with several existing methods in simulation studies, and show that SLR outperforms the competing approaches in both prediction of the response and variable selection. When applied to two microbiome data sets, SLR yields more robust variable selection than existing methods (Rivera-Pinto et al., 2018; Gordon-Rodriguez et al., 2022).

The rest of the paper is organized as follows. Section 2 reviews existing models for regression and classification of compositional data. In Section 3, we introduce the proposed method and discuss its properties. We then illustrate the merits of SLR via simulation studies in Section 4 and two real data analyses in Section 5. Discussion can be found in Section 6.

## 2 Regression Analysis of Compositional Data

Suppose we have \(n\) independent and identically distributed observations \((\mathbf{x}_{i},y_{i})\) for \(i=1,\ldots,n\), where \(\mathbf{x}_{i}=(x_{i,1},\ldots,x_{i,p})^{\intercal}\in\mathcal{S}^{p}\) is a vector of relative abundances and \(y_{i}\in\mathbb{R}\) is a continuous response variable. Here \(\mathcal{S}^{p}=\{(x_{1},\ldots,x_{p})^{\intercal}:x_{j}\geq 0\ (j=1,\ldots,p),\ x_{1}+\ldots+x_{p}=1\}\) denotes the \(p\)-dimensional simplex. Our goal is to identify a set of biomarkers, defined as functions of \(\mathbf{x}_{i}\), that can predict the response \(y_{i}\). We begin by reviewing existing models that are scale-invariant and hence suitable for compositional data.

### Linear log contrast model

The linear log contrast model (Aitchison and Bacon-Shone, 1984) transforms the composition \(\mathbf{x}_{i}\) in \(\mathcal{S}^{p}\) into a \((p-1)\)-dimensional real space via the additive log ratio (alr) transformation \(z_{i,j}=\log(x_{i,j}/x_{i,p})\) for \(j=1,\ldots,p-1\). Under the linear log contrast model, the mean of the response \(y_{i}\) is given by

\[\eta(\mathbf{x}_{i})=\sum_{j=1}^{p-1}\beta_{j}z_{i,j}. \tag{1}\]

The model in (1) requires the choice of a reference component, but this requirement can be relaxed by incorporating a zero-sum constraint on the coefficients:

\[\eta(\mathbf{x}_{i})=\sum_{j=1}^{p}\beta_{j}\log(x_{i,j})\quad(\beta_{1}+\ldots+\beta_{p}=0). \tag{2}\]

In high-dimensional settings, the model in (2) can be solved by the constrained Lasso algorithm (Lin et al., 2014). Properties of the constrained Lasso and its variations have been studied extensively in the literature (Shi et al., 2016; Wang and Zhao, 2017, 2017, 2021). The linear log contrast model has also been studied in the context of classification problems (Lu et al., 2019). Lasso penalized linear log contrast regression with the zero-sum constraint on the coefficients is referred to as _coda-lasso_ in Susin et al. (2020). While coda-lasso emphasizes the selection of individual variables, Susin et al. (2020) pointed out that its output can also be understood as the log ratio between two weighted geometric means. A closely related method that is very popular in the applied literature is to perform Lasso penalized regression on centered log ratio transformed predictors without the zero-sum constraint, referred to as _clr-lasso_ in Susin et al. (2020). For a compositional vector \(\mathbf{x}_{i}\), the centered log ratio (clr) transformation is defined as \(z_{i,j}=\log(x_{i,j}/g(\mathbf{x}_{i}))\) (\(j=1,\ldots,p\)), where \(g(\cdot)\) is the geometric mean function. However, variables selected by clr-lasso are difficult to interpret because the reference, which is the geometric mean, is defined by all variables.
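Because clr-lasso is just an ordinary Lasso fit on clr-transformed predictors, it can be sketched in a few lines, here in Python with scikit-learn. The penalty level `alpha` is an arbitrary placeholder, zeros must be handled before taking logs, and the zero-sum constraint of coda-lasso is *not* enforced by this shortcut:

```python
import numpy as np
from sklearn.linear_model import Lasso

def clr(X):
    """Centered log ratio transform; rows of X are compositions without zeros."""
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

def clr_lasso(X, y, alpha=0.1):
    """clr-lasso (Susin et al., 2020): plain Lasso on clr-transformed predictors.
    Note: no zero-sum constraint on the coefficients, unlike coda-lasso."""
    return Lasso(alpha=alpha).fit(clr(X), y).coef_
```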
### Pairwise log ratio model

Log ratio based regression models have been proposed to address the interpretability limitation of coda-lasso and clr-lasso. Pairwise log ratio regression (Bates and Tibshirani, 2019) considers the following linear model for the expected response:

\[\eta(\mathbf{x}_{i})=\sum_{1\leq j<k\leq p}\theta_{j,k}^{\mathrm{plr}}\log\frac{x_{i,j}}{x_{i,k}}. \tag{3}\]

The coefficient vector \(\mathbf{\theta}^{\mathrm{plr}}=(\theta_{1,2}^{\mathrm{plr}},\ldots,\theta_{p-1,p}^{\mathrm{plr}})^{\intercal}\) is connected to the log contrast coefficient vector in (2) by the relation \(\mathbf{\beta}=C^{\intercal}\mathbf{\theta}^{\mathrm{plr}}\), where \(C\in\mathbb{R}^{p^{*}\times p}\) (\(p^{*}=p(p-1)/2\)) is a matrix with entries belonging to \(\{-1,0,1\}\). For example, when \(p=4\), we have

\[C^{\intercal}=\left(\begin{array}{cccccc}1&1&1&0&0&0\\ -1&0&0&1&1&0\\ 0&-1&0&-1&0&1\\ 0&0&-1&0&-1&-1\end{array}\right).\]

The relationship \(\mathbf{\beta}=C^{\intercal}\mathbf{\theta}^{\mathrm{plr}}\) implies that \(\mathbf{\beta}\) satisfies the constraint \(\sum_{j=1}^{p}\beta_{j}=0\) automatically. Although scale-invariant and interpretable, \(\mathbf{\theta}^{\mathrm{plr}}\) is not identifiable even when the sample size \(n>p^{*}\), because the design matrix of the \(p^{*}\) pairwise log ratios is not of full column rank. As illustrated in Bates and Tibshirani (2019), adding an \(\ell_{1}\) penalty on \(\mathbf{\theta}^{\mathrm{plr}}\) is not sufficient to guarantee uniqueness of the solution.

### Balance regression model

Balance regression (Rivera-Pinto et al., 2018) seeks to find subsets \(I_{+}\) and \(I_{-}\) such that the mean of the response \(y_{i}\) is given by

\[\eta(\mathbf{x}_{i})=\theta_{0}+\theta_{1}B(\mathbf{x}_{i};I_{+},I_{-}), \tag{4}\]

where the balance between two groups of variables is defined as

\[B(X;I_{+},I_{-})=\log\frac{g(X_{I_{+}})}{g(X_{I_{-}})}=\frac{1}{|I_{+}|}\sum_{j\in I_{+}}\log X_{j}-\frac{1}{|I_{-}|}\sum_{j\in I_{-}}\log X_{j}. \tag{5}\]

Here \(|I|\) denotes the size of the subset \(I\), and \(X_{I}\) denotes the sub-vector of \(X\) whose elements are the variables indexed by \(I\). Without loss of generality, assume \(\theta_{1}\geq 0\) so that \(I_{+}\) includes the variables in the numerator of the balance. The original definition of balances in Egozcue and Pawlowsky-Glahn (2005) also has a normalizing constant in front of the log ratio in (5), which is unnecessary here because it will be absorbed into the regression coefficient \(\theta_{1}\) in (4).
### 2.3 Balance regression model

Balance regression (Rivera-Pinto et al., 2018) seeks to find the subsets \(I_{+}\) and \(I_{-}\) such that the mean of the response \(y_{i}\) is given by

\[\eta(\mathbf{x}_{i})=\theta_{0}+\theta_{1}B(\mathbf{x}_{i};I_{+},I_{-}), \tag{4}\]

where the balance between two groups of variables is defined as

\[B(X;I_{+},I_{-})=\log\frac{g(X_{I_{+}})}{g(X_{I_{-}})}=\frac{1}{|I_{+}|}\sum_{j\in I_{+}}\log X_{j}-\frac{1}{|I_{-}|}\sum_{j\in I_{-}}\log X_{j}. \tag{5}\]

Here \(|I|\) denotes the size of the subset \(I\), and \(X_{I}\) denotes the sub-vector of \(X\) whose elements are the variables indexed by \(I\). Without loss of generality, assume \(\theta_{1}\geq 0\) so that \(I_{+}\) includes variables in the numerator of the balance. The original definition of balances in Egozcue and Pawlowsky-Glahn (2005) also has a normalizing constant in front of the log ratio in (5), which is unnecessary here because it will be absorbed into the regression coefficient \(\theta_{1}\) in (4). It is straightforward to expand Equation (4) into the linear log contrast model with coefficients \(\mathbf{\beta}\) defined as \(\beta_{j}=\theta_{1}/|I_{+}|\) for \(j\in I_{+}\), \(\beta_{j}=-\theta_{1}/|I_{-}|\) for \(j\in I_{-}\), and zero elsewhere. Finding the optimal active sets \(I_{+}\) and \(I_{-}\) is a combinatorial problem. Existing work (Rivera-Pinto et al., 2018) considers greedy search for the best subsets, which is computationally expensive. More recently, Gordon-Rodriguez et al. (2022) used a continuous relaxation to approximate the underlying combinatorial problem, which is similar in spirit to approximating the \(\ell_{0}\) penalty with the \(\ell_{1}\) penalty. The resulting procedure is computationally more efficient than greedy search, but tends to yield a high false positive rate, as we will explore in our simulation studies.

## 3 Supervised Log Ratio Method

We propose a new approach, called the supervised log ratio (SLR) method, to identify the active sets \(I_{+},I_{-}\), and hence the predictive balance. The method consists of the following steps:

1. Compute the univariate regression coefficients for clr transformed variables. Form a reduced data matrix with variables whose univariate coefficients exceed a threshold \(\tau\geq 0\) in absolute value (\(\tau\) is chosen via cross-validation).
2. Perform clustering of the variables on a suitable dissimilarity derived from the reduced data matrix to get two clusters. Use the resulting balance to predict \(y\).

Step 1 of SLR reduces the dimensionality of the predictors, while Step 2 partitions active variables into two subsets which are then used to define a balance biomarker. SLR and clr-lasso both operate on clr transformed variables, but there are two important distinctions between the two methods. First, SLR uses univariate feature screening as opposed to Lasso penalization to perform dimensionality reduction. Second, SLR uses an extra clustering step to define a balance biomarker, which is more interpretable compared to biomarkers that are defined with respect to the geometric mean of all variables. We now describe the procedure in detail. Let \(\widetilde{\mathbf{x}}_{i}=\text{clr}(\mathbf{x}_{i})\) denote the clr transformed version of the \(i\)-th observation. Let \(\widetilde{\mathbf{x}}^{j}=(\widetilde{x}_{1,j},\ldots,\widetilde{x}_{n,j})^{\intercal}\) denote the vector of observations for the \(j\)-th feature. Let \(\bar{\mathbf{y}}\) denote the sample mean of \(\mathbf{y}\) and \(\widehat{\psi}_{j}\) denote the univariate regression coefficient measuring the univariate effect of \(\widetilde{\mathbf{x}}^{j}\) on \(\mathbf{y}\):

\[\widehat{\psi}_{j}=\frac{(\mathbf{y}-\bar{\mathbf{y}})^{\intercal}(\widetilde{\mathbf{x}}^{j}-\bar{\widetilde{\mathbf{x}}}^{j})}{\|\widetilde{\mathbf{x}}^{j}-\bar{\widetilde{\mathbf{x}}}^{j}\|^{2}}, \tag{6}\]

where \(\bar{\widetilde{\mathbf{x}}}^{j}\) denotes the sample mean of \(\widetilde{\mathbf{x}}^{j}\). Note that any scale estimate \(\widehat{\sigma}\) common to all variables cancels out. Let \(C_{\tau}\) be the collection of indices such that \(|\widehat{\psi}_{j}|\geq\tau\), i.e. the variables selected by Step 1 of SLR. For Step 2, a variety of dissimilarity measures can be used to cluster the variables in \(C_{\tau}\).
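The two ingredients used so far admit a compact sketch; the snippet below (our own illustrative code, assuming a clr transformed matrix) implements the screening statistic (6) and the balance (5).

```python
import numpy as np

def screen(Z, y, tau):
    """Step 1: univariate coefficients of y on each clr column (Eq. 6),
    thresholded at tau to give the candidate active set C_tau."""
    yc = y - y.mean()
    Zc = Z - Z.mean(axis=0)
    psi = (yc @ Zc) / (Zc ** 2).sum(axis=0)   # one coefficient per column
    return np.flatnonzero(np.abs(psi) >= tau), psi

def balance(X, I_plus, I_minus):
    """Eq. (5): difference of mean log abundances between the two groups."""
    logX = np.log(np.asarray(X, dtype=float))
    return logX[:, I_plus].mean(axis=1) - logX[:, I_minus].mean(axis=1)

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(8), size=40)                   # toy compositions
Z = np.log(X) - np.log(X).mean(axis=1, keepdims=True)    # clr transform
y = Z[:, 0] - Z[:, 1] + 0.1 * rng.normal(size=40)
active, psi = screen(Z, y, tau=0.3)
b = balance(X, I_plus=[0], I_minus=[1])                  # biomarker for a hypothetical split
```

Given the two clusters produced by Step 2, the resulting balance is entered as the single predictor in the final regression of \(y\), as in (4).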
Since our features are proportions, we use the variation matrix (Aitchison, 1986) \(\widehat{A}(\tau)\in\mathbb{R}^{|C_{\tau}|\times|C_{\tau}|}\) defined as

\[\widehat{A}(\tau)_{j,k}=\frac{1}{n}\sum_{i=1}^{n}\Big(\log\frac{x_{i,j}}{x_{i,k}}-\frac{1}{n}\sum_{i^{\prime}=1}^{n}\log\frac{x_{i^{\prime},j}}{x_{i^{\prime},k}}\Big)^{2}\quad(j,k\in C_{\tau}).\]

Given the dissimilarity \(\widehat{A}(\tau)\), variables in \(C_{\tau}\) can be partitioned into two groups by either hierarchical clustering or spectral clustering.

### 3.1 A latent variable model

We provide some intuition underlying SLR. Consider the following latent variable model where the response \(y\) and independent variables \(X_{j}\) are simultaneously driven by a latent variable \(u\) such that

\[y =\theta_{0}+\theta_{1}u+\varepsilon, \tag{7}\]
\[\log\frac{X_{j}}{X_{p}} =\alpha_{0,j}+\alpha_{1,j}u+\epsilon_{j}. \tag{8}\]

Here \(X_{p}\) is an inactive variable whose index \(p\) belongs to \(I_{0}=\{1,\ldots,p\}\backslash(I_{+}\cup I_{-})\). For non-negative constants \(c_{1}\) and \(c_{2}\), we assume that the coefficients \(\alpha_{1,j}\) satisfy \(\alpha_{1,j}=c_{1}\) for \(j\in I_{+}\), \(\alpha_{1,j}=-c_{2}\) for \(j\in I_{-}\), \(\alpha_{1,j}=0\) for \(j\notin I_{+}\cup I_{-}\), and \(\sum_{j=1}^{p}\alpha_{1,j}=0.\) The zero-mean errors \(\varepsilon\) and \(\epsilon_{j}\) are assumed to be independent of each other and independent of \(u\). The latent variable model in (7)-(8) can also be viewed as a special case of an _errors-in-variables_ model (Griliches and Ringstad, 1970). Intuitively, we can think of \(u\) as the desired balance between the two active sets \(I_{+}\) and \(I_{-}\). Indeed, one can verify that under model (8) the balance \(B(X;I_{+},I_{-})\) is a scaled and perturbed version of the latent variable \(u\):

\[B(X;I_{+},I_{-})=\widetilde{\alpha}_{0}+(c_{1}+c_{2})u+\widetilde{\epsilon},\]

where

\[\widetilde{\alpha}_{0}=\frac{1}{|I_{+}|}\sum_{j\in I_{+}}\alpha_{0,j}-\frac{1}{|I_{-}|}\sum_{j\in I_{-}}\alpha_{0,j},\quad\widetilde{\epsilon}=\frac{1}{|I_{+}|}\sum_{j\in I_{+}}\epsilon_{j}-\frac{1}{|I_{-}|}\sum_{j\in I_{-}}\epsilon_{j}.\]

Together with model (7), it is clear that the response \(y\) is also linear in \(B(X;I_{+},I_{-})\):

\[y=\theta_{0}-\widetilde{\alpha}_{0}\frac{\theta_{1}}{c_{1}+c_{2}}+\frac{\theta_{1}}{c_{1}+c_{2}}B(X;I_{+},I_{-})+\varepsilon-\frac{\theta_{1}}{c_{1}+c_{2}}\widetilde{\epsilon}. \tag{9}\]

_Remark 1_.: Without loss of generality, the latent variable model in (8) uses the \(p\)-th variable as the reference, and hence assumes that it is not in either active set, but the reference can be any inactive variable. The coefficients \(\alpha_{1,j}\) are invariant under a change of reference. In addition, the error terms \(\epsilon_{j}\)'s are allowed to be weakly correlated, which introduces correlation between active and inactive variables.

The SLR framework is inspired by the supervised principal components approach in Bair et al. (2006). However, since our goal is to select predictive balances, the approach in Bair et al. (2006) is not directly applicable.
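Computationally, Step 2 of SLR is straightforward; a minimal sketch of the sample variation matrix and the two-group split is given below (complete-linkage hierarchical clustering via scipy is assumed; the method equally allows spectral clustering).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def variation_matrix(X):
    """Sample Aitchison variation: Var(log(x_j / x_k)) for every pair (j, k)."""
    logX = np.log(np.asarray(X, dtype=float))
    diffs = logX[:, :, None] - logX[:, None, :]   # (n, p, p) pairwise log ratios
    return diffs.var(axis=0)

def split_two_groups(A):
    """Cut a complete-linkage dendrogram of the dissimilarity into two clusters."""
    condensed = squareform(A, checks=False)       # A is symmetric with zero diagonal
    return fcluster(linkage(condensed, method="complete"), t=2, criterion="maxclust")

rng = np.random.default_rng(1)
X = rng.dirichlet(np.ones(6), size=50)            # 50 compositions on the 6-part simplex
labels = split_two_groups(variation_matrix(X))    # cluster labels 1 and 2
```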
### 3.2 Model estimation

To recover \(u\), we are faced with two challenges: the number of variables \(p\) is large and the active set \(I_{+}\cup I_{-}\) is unknown. If the active set is known, the desired balance can be defined by clustering variables in the active set into two subsets. To see this, it is instructive to examine the population Aitchison variation restricted to the active set. Indeed, assuming uncorrelated errors \(\epsilon_{j}\), we have

\[\text{Var}\Big(\log\frac{X_{j}}{X_{k}}\Big)=\begin{cases}2\sigma_{\epsilon}^{2}&(j\in I_{+},k\in I_{+}),\\ (c_{1}+c_{2})^{2}\sigma_{u}^{2}+2\sigma_{\epsilon}^{2}&(j\in I_{+},k\in I_{-}),\\ 2\sigma_{\epsilon}^{2}&(j\in I_{-},k\in I_{-}).\end{cases}\]

It is easy to see that variables within each subset \(I_{+}\) (\(I_{-}\)) are closer to each other than variables between the two subsets. The two subsets \(I_{+}\) and \(I_{-}\) can thus be identified by clustering the Aitchison variation. Unfortunately, the active set is not known _a priori_. A natural way to estimate the active set is to perform feature screening. Let \(\psi_{j}\) denote the univariate coefficient when regressing \(y\) onto the clr transformed proportions \(Z_{j}=\log(X_{j})-\log g(X)\). It is easy to derive that

\[Z_{j}-\text{E}[Z_{j}]=\alpha_{1,j}u+\frac{1}{p}\sum_{k=1}^{p}(\epsilon_{j}-\epsilon_{k}).\]

By model (7)-(8), the population coefficient \(\psi_{j}\) is nonzero if \(j\) is an active variable and zero for inactive variables. The curious reader might wonder if the desired balance can be identified by clustering the Aitchison variation of all variables directly. At first glance, this may seem reasonable because the population Aitchison variation does suggest there should be three clusters \(I_{+}\), \(I_{-}\), and \(I_{0}\). In practice, however, the observed Aitchison variation is noisy and may not correctly separate active from inactive variables. Moreover, clustering on all variables requires much more computation time compared to clustering on active variables only.

### 3.3 Extensions beyond the linear model

The SLR framework can be easily extended to accommodate other types of responses, where a generalized linear model can be used to perform feature screening. Consider, for example, a binary response variable \(y_{i}\in\{0,1\}\). In this case, the univariate effect \(\widehat{\psi}_{j}\) does not have an explicit form as in (6). Nonetheless, \(\widehat{\psi}_{j}\) can be estimated by fitting a simple logistic regression, e.g. using the glm function in R.
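As an illustration, a Python analogue of this univariate logistic screening might look as follows (scikit-learn >= 1.2 is assumed for `penalty=None`; the paper itself uses R's glm, so this is only a sketch).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def screen_binary(Z, y, tau):
    """Univariate logistic screening: one single-predictor fit per clr column."""
    psi = np.empty(Z.shape[1])
    for j in range(Z.shape[1]):
        fit = LogisticRegression(penalty=None, max_iter=200).fit(Z[:, [j]], y)
        psi[j] = fit.coef_[0, 0]
    return np.flatnonzero(np.abs(psi) >= tau), psi

rng = np.random.default_rng(2)
Z = rng.normal(size=(80, 10))                      # stand-in clr matrix
y = (Z[:, 0] + rng.normal(size=80) > 0).astype(int)
active, psi = screen_binary(Z, y, tau=0.5)
```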
## 4 Simulation Studies

We compared SLR with a number of existing approaches, including the constrained Lasso (Lin et al., 2014; Lu et al., 2019, classo), the greedy search algorithm (Rivera-Pinto et al., 2018, selbal), the log ratio Lasso (Bates and Tibshirani, 2019, lrlasso), and CoDaCoRe (Gordon-Rodriguez et al., 2022). Two versions of SLR were evaluated: SLR with spectral clustering and SLR with hierarchical clustering using complete linkage. We focus on two types of performance measures, one for prediction and another for variable selection. To evaluate the prediction performance, we simulated a training set and an independent test set. Models were fitted using the training set and prediction performance was evaluated on the test set. We computed the mean squared prediction error (MSE) defined as \(n^{-1}\sum_{i=1}^{n}(y_{i}^{\text{test}}-\hat{y}_{i})^{2}\) for continuous responses and the area under the Receiver Operating Characteristic curve (AUC) for binary responses. For each method, 10-fold cross validation with the One Standard Error Rule was used to select the optimal tuning parameter on the training set. Specifically, the One Standard Error Rule finds the model with the minimum cross validation error and then selects the most parsimonious model whose mean prediction error falls within one standard error of the minimum.

For a fair comparison of variable selection performance, we compared the estimated linear log contrast coefficients \(\widehat{\boldsymbol{\beta}}\) to the simulated truth because the linear log contrast coefficients can be recovered from the log ratio coefficients but not the other way around. For an estimate \(\widehat{\boldsymbol{\beta}}\) and its corresponding truth \(\boldsymbol{\beta}\), estimation error was evaluated as \(\|\widehat{\boldsymbol{\beta}}-\boldsymbol{\beta}\|_{2}=\sqrt{\sum_{j}(\widehat{\beta}_{j}-\beta_{j})^{2}}\). Variable selection was assessed by computing the false positive rate (FPR), true positive rate (TPR), and F1 score. FPR, TPR (i.e., recall), and precision are defined, respectively, as

\[\text{FPR}=\frac{|\{j:\widehat{\beta}_{j}\neq 0,\beta_{j}=0\}|}{|\{j:\beta_{j}=0\}|},\quad\text{TPR}=\text{recall}=\frac{|\{j:\widehat{\beta}_{j}\neq 0,\beta_{j}\neq 0\}|}{|\{j:\beta_{j}\neq 0\}|},\]

\[\text{precision}=\frac{|\{j:\widehat{\beta}_{j}\neq 0,\beta_{j}\neq 0\}|}{|\{j:\widehat{\beta}_{j}\neq 0\}|}.\]

The F1 score is the harmonic mean of the precision and recall. It takes values in the interval \([0,1]\), with a larger score indicating better performance in variable selection. All comparisons were evaluated with 100 replications. We also recorded the run time of each method on a Linux machine with an Intel Core i9 processor with 18 cores (36 threads) and 128 GB memory. The run time in each replication counts the total time elapsed during model selection and estimation.

### 4.1 Simulation setup

We first sampled \(n\) copies of independent \(u_{i}\) from a uniform distribution on \((-0.5,0.5)\). For \(i=1,\ldots,n\) and \(j=1,\ldots,p-1\), we sampled independent \(\epsilon_{i,j}\)'s from a normal distribution with mean \(0\) and variance \(0.01\). For given active sets \(I_{+}\) and \(I_{-}\), the coefficient vector \(\boldsymbol{\alpha}_{1}\) is defined with \(c_{1}=1/|I_{+}|\) and \(c_{2}=1/|I_{-}|\). Given \(\boldsymbol{\alpha}_{1}\), \(u_{i}\) and \(\epsilon_{i,j}\), we sampled \(w_{i,j}\)'s from the latent variable model \(w_{i,j}=\alpha_{1,j}u_{i}+\epsilon_{i,j}\) (\(j=1,\ldots,p-1\)) and set \(w_{i,p}=0\). The compositional predictor \(\boldsymbol{x}_{i}\in\mathbb{R}^{p}\) was obtained by applying the inverse alr transformation \(x_{i,j}=e^{w_{i,j}}/(\sum_{k=1}^{p-1}e^{w_{i,k}}+1)\) (\(i=1,\ldots,n;j=1,\ldots,p\)). We sampled continuous responses from the linear model \(y_{i}=0.5u_{i}+\varepsilon_{i}\), where the \(\varepsilon_{i}\)'s are independent normal random variables with mean \(0\) and variance \(0.01\). We sampled binary responses from the Bernoulli distribution \(y_{i}=\text{Bernoulli}(\pi_{i})\) with \(\pi_{i}=e^{6u_{i}}/(e^{6u_{i}}+1)\). Two types of active sets are considered: (i) \(I_{+}=\{1,2,3\}\) and \(I_{-}=\{4,5,6\}\), and (ii) \(I_{+}=\{1,2,3,4,5\}\) and \(I_{-}=\{6\}\). While the two subsets in case (i) have comparable sizes, the differing sizes of the active sets in case (ii) lead to disparate coefficients \(c_{1}\) and \(c_{2}\), which makes it harder to select the correct balance. The sample size was \(n=100\) and the number of predictors was \(p=30\).
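For reference, this generator can be reproduced in a few lines of numpy (the seed and variable names below are ours; case (i) active sets are shown).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 30
I_plus, I_minus = [0, 1, 2], [3, 4, 5]                 # active sets (0-indexed)

alpha1 = np.zeros(p - 1)
alpha1[I_plus] = 1.0 / len(I_plus)                     # c1 on the numerator group
alpha1[I_minus] = -1.0 / len(I_minus)                  # -c2 on the denominator group

u = rng.uniform(-0.5, 0.5, size=n)                     # latent variable
eps = rng.normal(0.0, 0.1, size=(n, p - 1))            # sd 0.1, i.e. variance 0.01
W = np.column_stack([u[:, None] * alpha1 + eps, np.zeros(n)])
X = np.exp(W) / np.exp(W).sum(axis=1, keepdims=True)   # inverse alr transform
y = 0.5 * u + rng.normal(0.0, 0.1, size=n)             # continuous response
```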
### 4.2 Results

Figure 1 compares different methods on data generated from the latent variable model with a continuous response using a log ratio formed with \(I_{+}=\{1,2,3\}\) and \(I_{-}=\{4,5,6\}\). SLR with either spectral clustering or hierarchical clustering has the smallest MSE, the smallest estimation error, and the highest F1 score. In terms of run time, SLR is comparable to CoDaCoRe, and faster than selbal and log ratio Lasso (lrlasso). The constrained Lasso (classo) is the fastest, the second best in terms of F1 score, but performs among the worst in terms of MSE. CoDaCoRe and lrlasso have moderate MSE, and CoDaCoRe has smaller estimation error and higher F1 score compared to lrlasso, although CoDaCoRe has the largest false positive rates among all methods. The greedy search algorithm selbal performs the worst in prediction, parameter estimation, and variable selection, and is also the slowest method in run time.

Figure 1: Results when data were simulated from the latent variable model with a continuous response using a balance formed with \(I_{+}=\{1,2,3\}\) and \(I_{-}=\{4,5,6\}\). MSE: mean squared prediction error on the test set; EA2: \(\ell_{2}\)-norm error \(\|\widehat{\mathbf{\beta}}-\mathbf{\beta}\|_{2}\). Red cross indicates the mean.

When the log ratio is formed with \(I_{+}=\{1,2,3,4,5\}\) and \(I_{-}=\{6\}\), Figure 2 suggests that it is generally harder to select the variables, as shown by the decreased F1 scores for all methods. The MSEs and estimation errors decrease slightly due to changes in the magnitude of the true log contrast coefficients. The relative performance of each method remains largely the same as in Figure 1 with two notable differences. First, lrlasso performs slightly better than CoDaCoRe in parameter estimation and variable selection, suggesting that CoDaCoRe may have a disadvantage when the true log contrast is formed with unbalanced subsets. Second, classo has a substantial decrease in F1 score, indicating poor variable selection in the presence of unbalanced subsets.

Figure 2: Results when data were simulated from the latent variable model with a continuous response using a balance formed with \(I_{+}=\{1,2,3,4,5\}\) and \(I_{-}=\{6\}\). MSE: mean squared prediction error on the test set; EA2: \(\ell_{2}\)-norm error \(\|\widehat{\boldsymbol{\beta}}-\boldsymbol{\beta}\|_{2}\). Red cross indicates the mean.

Results when data were simulated from a binary response and with the two types of active sets are shown, respectively, in Figures 3 and 4. In this case, we observed similar relative performance among the methods as in the continuous response case. Overall, SLR yields superior performance in prediction, parameter estimation and variable selection. The near constant performance of lrlasso in variable selection is due to the fact that lrlasso only selects a single pair of log ratio in most replications, though it may not be the same pair from one replication to the next. Unlike in the continuous response case, lrlasso as opposed to classo is the fastest in run time.

## 5 Analysis of Microbiome Data

We applied SLR to the analysis of a microbiome data set on HIV infection, which is publicly available in the selbal R package. This data set contains the counts of 60 microbial taxa at the genus taxonomy rank across \(n=155\) subjects and the proportion of zero counts is about 35%. We removed taxa that appear in less than 20% of all samples, which leaves \(p=57\) genera. Remaining zeros were imputed using the Geometric Bayesian multiplicative method (Martin-Fernandez et al., 2015) implemented in the zCompositions R package. The response variable is binary: 128 individuals are HIV positive and 27 are negative. We did not include the covariate, _Men who has sex with men (MSM) or not (nonMSM)_, because we wished to compare methods based on microbiome data alone.
A microbiome data set with a continuous response is also analyzed and presented in the supplement. To evaluate the out-of-sample prediction performance and stability in variable selection, we randomly partitioned the full data set into 70% training and 30% test data. In the case of a binary response, we stratified the data by case and control when performing the randomized split. The out-of-sample prediction performance was evaluated using AUC. Because we do not know the ground truth log ratio biomarkers, we instead report the proportion of variables selected, as in Gordon-Rodriguez et al. (2022). For each method, 10-fold cross-validation using the One Standard Error Rule was used to select the model fit to the training set. The train/test data split procedure was repeated 20 times, and we report the selection frequencies of each variable over the 20 train/test data splits. Lastly, we applied selbal, CoDaCoRe, lrlasso, and SLR to the full data set to identify the balance associated with HIV status.

Figure 4: Results when data were simulated from the latent variable model with a binary response using a balance formed with \(I_{+}=\{1,2,3,4,5\}\) and \(I_{-}=\{6\}\). AUC: area under the Receiver Operating Characteristic curve on the test set; EA2: \(\ell_{2}\)-norm error \(\|\widehat{\mathbf{\beta}}-\mathbf{\beta}\|_{2}\). Red cross indicates the mean.

Although classo does not return a balance biomarker, its variable selection results were also included for completeness. Model selection was done again by 10-fold cross-validation using the One Standard Error Rule. Since our simulation results show that SLR with spectral and hierarchical clustering have almost identical performance, in this section we only present SLR with spectral clustering.

Figure 5a shows that SLR outperforms CoDaCoRe based on AUC and selects a sparser model, although CoDaCoRe has the fastest run time. selbal has moderate AUC and selects a sparse model, but is the slowest method. classo has the worst AUC among all methods with a few extreme outliers in variable selection. lrlasso does not perform well in terms of AUC either and is almost as slow as selbal. Figure 5b shows the bar plot of selection proportions over the 20 train/test data splits. Variables are colored by whether they are included in the denominator (red) or numerator (blue) of the log ratio biomarker. While selbal only has one taxon with over 50% selection frequency, SLR has five, the first three of which coincide with those frequently selected by CoDaCoRe. The number of taxa with a selection proportion of less than 50% is 22 for selbal, 18 for CoDaCoRe, and 12 for SLR, suggesting that SLR tends to produce more stable variable selection than selbal and CoDaCoRe. As in the simulation studies, lrlasso again selects far fewer taxa than other methods. Overall, SLR achieves a good balance between robustness in variable selection, accuracy in prediction, and computational efficiency.

Table 1 shows the selected log ratio biomarker on the full data set. A '+' sign indicates a variable included in the numerator while a '\(-\)' sign indicates a variable included in the denominator.

Figure 5: (a) Results on HIV status classification over the 20 train/test splits. (b) Bar plot of selection proportions. Color represents if the variables were included in the numerator (blue) or the denominator (red) of the balance.
\begin{table} \begin{tabular}{c|c|c|c|c|c} Taxa & selbal & classo & CoDaCoRe & lrlasso & SLR \\ \hline \hline g\_Bacteroides & + & + & + & + & + \\ \hline f\_Erysipelotrichaceae\_g\_unclassified & & & & & + \\ \hline g\_RC9\_gut\_group & & - & - & - & - \\ \hline f\_vadinBB60\_g\_unclassified & & & & & - \\ \hline g\_Oribacterium & & & - & & - \\ \hline f\_Ruminococcaceae\_g\_Incertae\_Sedis & - & & & & \\ \end{tabular} \end{table} Table 1: Variables selected by different methods in the HIV classification data set. SLR refers to SLR with spectral clustering.

Since classo only selects two taxa, its result can also be interpreted as a log ratio biomarker thanks to the zero-sum constraint on the log contrast coefficients. The log ratio biomarkers identified by the five methods are different, but they all have _g_Bacteroides_ in the numerator of the balance. With the exception of selbal, the other methods identified _g_RC9_gut_group_ as being in the denominator of the balance. In addition, SLR and CoDaCoRe both have _g_Oribacterium_ in the denominator, although _g_Oribacterium_ is not one of the most frequently selected variables by CoDaCoRe in Figure 5b. Comparing the results in Table 1 and the selection proportions in Figure 5b, SLR is the only method that identified all three variables consistently both on the full data set and on subsampled data sets. These results further demonstrate that SLR is more robust than existing methods in selecting log ratio biomarkers. Of the taxa identified by SLR, _g_RC9_gut_group_ is a member of the Rikenellaceae family while _g_Oribacterium_ belongs to the Lachnospiraceae family. Enrichment of the Erysipelotrichaceae family and depletion of the Lachnospiraceae and Rikenellaceae families in HIV patients have been reported in several studies (Vujkovic-Cvijin and Somsouk, 2019), although we caution against any causal interpretation because the current analysis did not control for potential confounding factors of HIV such as sexual behaviors.

## 6 Discussion

We have introduced SLR, a new method for selecting interpretable log ratio biomarkers from high-dimensional compositional data. Unlike the greedy search algorithm selbal, SLR selects the log ratio predictor by clustering a subset of carefully screened active variables into the denominator and numerator groups. As a result, SLR achieves a balance between prediction accuracy and computational efficiency. The latent variable model underlying SLR can conveniently accommodate diverse types of response variables. Simulation studies and real data analyses also demonstrated that SLR provides more robust variable selection than existing methods, including the more computationally efficient alternative CoDaCoRe. Although our examples use microbiome data, SLR can be applied to other settings where the features are compositional, e.g. high-throughput sequencing data from liquid biopsies (Gordon-Rodriguez et al., 2022). The current formulation of SLR only allows one latent variable, which naturally leads to the selection of a single log ratio biomarker. It is possible to extend the current model to allow more than one latent variable provided that the latent variables are independent and they have different effects on the response variable.
For example, if there are two independent latent variables corresponding to two log ratio biomarkers, the clustering step of SLR needs to involve hierarchical spectral clustering so as to identify in total four denominator and numerator groups. The requirement of distinct effect sizes among latent variables is to ensure identifiability. It is worth contrasting our method with the supervised log ratio approach proposed by Quinn and Erb (2020). While SLR selects a single balance biomarker by using the response to screen active variables, Quinn and Erb (2020) use the response to aid the definition of a suitable dissimilarity measure on all variables. The 2- and 3-part balances defined using leaves of the dendrogram from clustering the selected dissimilarity are then selected as the biomarkers. Unlike SLR, the Quinn and Erb (2020) method is limited to classification problems. SLR also has some limitations. Like other log ratio based methods, SLR takes as input strictly positive compositional data. This requires replacing zeros in the raw data with suitable positive values prior to applying SLR. There is considerable heterogeneity in how observed zeros are handled in the literature (Silverman et al., 2020). Some zeros may be biological, due to the absence of a feature in a sample, while others may be sampling zeros arising from limited sequencing depth. Extension of SLR to zero-inflated compositional data is beyond the scope of this paper.

## 7 Data and Code Availability

All code needed to reproduce the results in the simulation studies and data analyses is available at [https://github.com/drjingma/LogRatioReg](https://github.com/drjingma/LogRatioReg). SLR is also available as an R package at [https://github.com/drjingma/slr](https://github.com/drjingma/slr).

## 8 Acknowledgement

This work is supported by NIH grant GM145772.

## 9 Appendix: Additional data analysis

### Microbiome and sCD14 inflammation

In this analysis, we used compositional microbiome data to predict soluble CD14 (sCD14) measurements, a continuous variable. sCD14 is a marker of microbial translocation and has been associated with mortality in HIV patients (Sandler et al., 2011). The number of samples was \(n=151\), and the number of genera was \(p=57\) after removing 3 rare taxa that appear in less than 20% of all samples. Remaining zeros were imputed using the Geometric Bayesian multiplicative method (Martin-Fernandez et al., 2015) implemented in the zCompositions R package. For consistency, we used the original scale of the sCD14 marker, as in selbal (Rivera-Pinto et al., 2018). Figure 6a shows the results of predicting sCD14 inflammation from genus abundances. Due to the large variance of the response variable, the MSEs from all methods are comparably large, although CoDaCoRe may have a slight advantage. In terms of variable selection, classo tends to select a very sparse model with several large outliers. CoDaCoRe and lrlasso also have large variability in variable selection. By contrast, selbal and SLR are more stable in terms of the percentage of variables selected. Computationally, CoDaCoRe is still the fastest while lrlasso is the slowest; the relatively slow run time of lrlasso is more pronounced in this analysis than in those already mentioned. A bar plot of selection proportions over the 20 train/test data splits is shown in Figure 6b. Once again, due to the large variability in the response variable, all the methods have unstable variable selection.
Nonetheless, the variable _g_Collinsella_ was identified as being negatively associated with sCD14 inflammation more than half of the time by all but lrlasso. It is also interesting to note that SLR identified a subset of six variables as being sometimes positively and sometimes negatively associated with sCD14 inflammation, although in almost all cases there is a dominant direction. This is because the active variables obtained from the screening step cannot be reliably clustered into two subsets, leading to inconsistencies in the definition of denominators and numerators across random data splits. We suspect that such behavior is due to the weaker association between the microbiome and the response variable in this data set, as evidenced by the unstable variable selection of the other methods. One possible remedy is to replace the clustering step of SLR with a greedy search for the best balance predictor. In other words, after obtaining the active variables, one can test all possible combinations of balances and select the one that yields the highest association with the response. Note that this greedy search is over a much smaller subset of combinations and hence will still be more computationally efficient than selbal.

Results on variable selection on the full data set are shown in Table 2. Interestingly, classo did not select any variable when using the One Standard Error Rule for model selection. The other four methods all identified _g_Collinsella_ as being in the denominator of the log ratio biomarker, and _g_Collinsella_ is also the most frequently selected variable in Figure 6b. SLR and CoDaCoRe both selected _f_Defluviitaleaceae_g_Incertae_Sedis_, which is their second most frequently selected variable in Figure 6b, as being in the numerator of the balance. SLR, lrlasso, and selbal identified _g_Subdoligranulum_ as being included in the numerator and _f_Lachnospiraceae_g_unclassified_ in the denominator of the log ratio biomarker. Another interesting observation is that lrlasso selected many more variables in this data set than in the HIV data set, although none has a high selection frequency in Figure 6b when subsampling the full data set. The genus _g_Subdoligranulum_ is classified into the Ruminococcaceae family. Overall, the SLR and selbal results suggest that taxa in the Ruminococcaceae and Lachnospiraceae families are implicated in mucosal inflammation as measured by sCD14. The association of these two bacterial families with HIV was also reported in Vujkovic-Cvijin and Somsouk (2019) and Vujkovic-Cvijin et al. (2020).

Figure 6: (a) Results on sCD14 prediction over the 20 train/test splits. (b) Bar plot of selection proportions in the sCD14 data set. Color represents if the variables were included in the numerator (blue) or the denominator (red) of the balance.
2309.07808
What Matters to Enhance Traffic Rule Compliance of Imitation Learning for End-to-End Autonomous Driving
End-to-end autonomous driving, where the entire driving pipeline is replaced with a single neural network, has recently gained research attention because of its simpler structure and faster inference time. Despite this appealing approach largely reducing the complexity in the driving pipeline, it also leads to safety issues because the trained policy is not always compliant with the traffic rules. In this paper, we proposed P-CSG, a penalty-based imitation learning approach with contrastive-based cross semantics generation sensor fusion technologies to increase the overall performance of end-to-end autonomous driving. In this method, we introduce three penalties - red light, stop sign, and curvature speed penalty to make the agent more sensitive to traffic rules. The proposed cross semantics generation helps to align the shared information of different input modalities. We assessed our model's performance using the CARLA Leaderboard - Town 05 Long Benchmark and Longest6 Benchmark, achieving 8.5% and 2.0% driving score improvement compared to the baselines. Furthermore, we conducted robustness evaluations against adversarial attacks like FGSM and Dot attacks, revealing a substantial increase in robustness compared to other baseline models. More detailed information can be found at https://hk-zh.github.io/p-csg-plus.
Hongkuan Zhou, Wei Cao, Aifen Sui, Zhenshan Bing
2023-09-14T15:54:56Z
http://arxiv.org/abs/2309.07808v3
# What Matters to Enhance Traffic Rule Compliance of Imitation Learning for Automated Driving

###### Abstract

More research attention has recently been given to end-to-end autonomous driving technologies, where the entire driving pipeline is replaced with a single neural network, because of their simpler structure and faster inference time. Despite this appealing approach largely reducing the components in the driving pipeline, its simplicity also leads to interpretability problems and safety issues [1]. The trained policy is not always compliant with the traffic rules, and it is also hard to discover the reason for misbehavior because of the lack of intermediate outputs. Meanwhile, sensors are critical to autonomous driving's security and feasibility for perceiving the surrounding environment under complex driving scenarios. In this paper, we propose P-CSG, a novel penalty-based imitation learning approach with cross semantics generation sensor fusion technologies to increase the overall performance of end-to-end autonomous driving. We conducted an assessment of our model's performance using the Town 05 Long benchmark, achieving an impressive driving score improvement of over 15%. Furthermore, we conducted robustness evaluations against adversarial attacks like FGSM and Dot attacks, revealing a substantial increase in robustness compared to baseline models. More detailed information, such as code-based resources, ablation studies and videos can be found at [https://hk-zh.github.io/p-csg-plus](https://hk-zh.github.io/p-csg-plus).

Imitation Learning, Sensor Fusion, Autonomous Driving

## I Introduction

End-to-end autonomous driving integrates the perception and decision layers into one deep neural network. Enhancing the policy learning algorithm and improving perception abilities are two crucial aspects that can significantly enhance the overall performance of autonomous driving. Imitation learning [2] and reinforcement learning [3] are the two main learning paradigms utilized in end-to-end autonomous driving systems for the decision-making part. Although both approaches have been successful in the field of autonomous driving, they each have their own drawbacks. Imitation learning only learns behaviors covered by the training data, which means that the model may struggle to generalize to new and unfamiliar situations. The quality of the expert policy also has a significant impact on the performance of imitation learning. The design of the reward function plays a critical role in the effectiveness of reinforcement learning for autonomous driving systems, and creating an appropriate reward function can be a complex task. Additionally, there is a risk that reinforcement learning models may learn unexpected and potentially dangerous behaviors in such safety-critical systems. In this article, we propose a penalty-based imitation learning paradigm that introduces the "penalty"/"reward" concept from reinforcement learning into imitation learning and aims to 1) achieve higher learning efficiency compared to reinforcement learning (by learning from expert demonstrations) and 2) make the agent more sensitive to traffic rules by introducing penalties designed in a simple manner. The perception component in the neural network is used to extract essential information about the surrounding environment.
Although some Lidar-based approaches [4][5] that leverage Lidar sensor input and HD maps demonstrate impressive performance, relying on HD maps is not a feasible option in general, as they require substantial resources to create and maintain and may not be universally available for all areas and regions. For this reason, an increasing amount of recent research [6][7][8][9] concentrates on multi-modality sensor fusion (e.g., Lidar and camera). Sensor fusion technologies are widely used in 2D and 3D object detection. Classified by the type of point cloud representation of the Lidar input, there exist three main approaches, namely point-based [10][11], voxel-based [12][13][14][15], and range-view-based [16][17][18][19] approaches. More recently, with the popularity of the attention mechanism, many researchers are trying to use transformer [20] models to integrate multimodal information [7][21][8]. Although these models exhibit strong fusion capabilities, their size necessitates greater computational power and also causes longer inference times. We take a different approach by abandoning the use of transformers and instead enabling the model to capture the global context across multiple modalities by extracting and aligning the shared information from diverse modalities. By combining the two improvements mentioned above, we propose **P**enalty-based Imitation Learning with **C**ross **S**emantics **G**eneration (**P-CSG**). We evaluate our model on the Town 05 Long Benchmark and it achieves state-of-the-art performance. Our main contributions in this paper can be summarized as follows:

* Our innovative multi-sensor fusion technique involves aligning shared information from various modalities, thus simplifying the process of extracting the global context across diverse modalities.
* We propose a penalty-based imitation learning approach that leverages constrained optimization to make the imitation learning model more sensitive to traffic rule violations.
* We analyse the performance and robustness of our autonomous driving model under two types of adversarial attacks: FGSM and Dot attacks. The results show that our model outperforms the other baselines from the perspective of safety.

## II Related Works

There are currently two primary methods used in autonomous driving technology: modular and end-to-end approaches. The modular approach involves a detailed series of software modules that work together to control the vehicle. By employing a modular approach, engineering teams can focus on specific and well-defined subtasks and make enhancements independently throughout the entire system. This allows the system to remain operational as long as the intermediate outputs remain functional. On the other hand, the end-to-end approach considers the entire driving process as one comprehensive learning task. It leverages the ability of the neural network to learn task-specific features and perform task-specific actions. Contrary to the modular approach, the agent learns without relying on explicit rules or hand-engineered features, namely the human-defined information bottlenecks. This means that the vehicle can adapt to different driving scenarios and environments and potentially achieve better performance than systems that rely on traditional rule-based approaches. End-to-end approaches have shown great success in computer vision, such as object detection, object tracking, and semantic segmentation. The success of these tasks builds a solid foundation for end-to-end autonomous driving.
It is reasonable to assume that in the near future, the end-to-end approach will also have the ability to autonomously operate a vehicle. Lately, there has been a growing trend in research to utilize behavior cloning for training agents to accomplish autonomous driving tasks. This involves the agent learning from demonstrations provided by experts in the field [7][8][6][9]. They collect the demonstrations in CARLA [22], a simulated urban environment in which researchers and developers can test and evaluate their algorithms and models for autonomous driving. They employ strategies like enhancing multi-modality sensor fusion [7][8] and introducing a safety module [8] to avoid dangerous behavior and improve the overall performance of autonomous driving.

## III Methodologies

In this section, we will introduce a novel multi-sensor fusion approach and a penalty-based imitation learning method for end-to-end autonomous driving.

### _Problem Setting_

The task we concentrate on is point-to-point navigation in an urban setting where the goal is to complete a route with safe reactions to dynamic agents such as moving vehicles and pedestrians. Traffic rules like red lights and stop signs should be followed. We consider the problem of learning a goal-conditioned policy \(\pi(a_{t}|s_{t},g)\) that outputs action \(a_{t}\in A\), conditioned on the current observed state \(s_{t}\in S\) and a high-level goal location \(g\) provided by GPS. The environment we used to train and test the performance of the trained agent is CARLA. The specific environment setting we used can be characterized by the following statements:

* A state space \(\mathcal{S}\in(\mathbb{R}^{H_{c}\times W_{c}\times 3},\mathbb{R}^{H_{l}\times W_{l}\times 2},\mathbb{R}^{4})\), which is the combination of the camera observation, the bird's eye view LiDAR input, and measurements of the current throttle, brake, steer, and velocity.
* A multidimensional action space \(\mathcal{A}\in\mathbb{R}^{3}\). The action space contains the parameters of steer, throttle, and brake to drive the agent to finish tasks.
* A goal space consisting of high-level goal locations provided as GPS coordinates.

#### III-1 Observation Space

Similar to prior works [23, 24][7][21], the LiDAR point cloud data is transformed into a 2-bin histogram by mapping it onto a 2D BEV grid of fixed resolution. To encompass a 32m x 32m BEV grid, we limit our analysis to points located within 32m in front of the ego vehicle and 16m on each side. We partition the grid into blocks of 0.125m \(\times\) 0.125m, which gives us a resolution of 256 \(\times\) 256 pixels. The height dimension is discretized into two bins, distinguishing points that are either on/below or above the ground plane. This results in a two-channel pseudo-image input \(\mathbb{R}^{256\times 256\times 2}\). In terms of RGB input, we leverage three cameras positioned to face forward, 60 degrees to the left, and 60 degrees to the right. These cameras have a horizontal field of view (FOV) of 120 degrees. The image from each camera has a resolution of \(400\times 300\) pixels. The forward image is cropped to \(300\times 160\) pixels and the left and right images are cropped to \(234\times 160\) pixels; we use cropping to eliminate radial distortion at the edges. We combine these three cropped images into a single image input with a resolution of \(768\times 160\) pixels.
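A sketch of this rasterization step is given below (illustrative only; we assume x points forward and y points sideways in the ego frame, and a ground-plane threshold at z = 0).

```python
import numpy as np

def lidar_to_bev(points, x_range=(0.0, 32.0), y_range=(-16.0, 16.0),
                 resolution=0.125, ground_z=0.0):
    """Rasterize a point cloud (N, 3) into the 2-bin, 256x256 BEV histogram."""
    H = W = int((x_range[1] - x_range[0]) / resolution)   # 256 cells per side
    bev = np.zeros((H, W, 2), dtype=np.float32)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[keep], y[keep], z[keep]
    rows = ((x - x_range[0]) / resolution).astype(int)
    cols = ((y - y_range[0]) / resolution).astype(int)
    bins = (z > ground_z).astype(int)                     # 0: on/below ground, 1: above
    np.add.at(bev, (rows, cols, bins), 1.0)               # accumulate point counts
    return bev

pts = np.random.default_rng(0).uniform([-5, -20, -1], [40, 20, 3], size=(1000, 3))
bev = lidar_to_bev(pts)                                   # (256, 256, 2) histogram
```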
In addition, the current throttle, brake, steering, and velocity measurements are included in the observation space to provide a point of reference for the agent.

#### III-2 Action Space

Instead of directly predicting the next action of throttle, steer, and brake, we estimate the future waypoints \(\mathcal{W}\) of the ego-vehicle in bird's eye view space, centered at the ego vehicle's current coordinate frame. The trajectory consists of a sequence of 2D waypoints \(\mathcal{W}=\{w_{t}=(x_{t},y_{t})\}_{t=1}^{T}\). In our settings, \(T=4\), which means the agent needs to predict 4 future waypoints for each frame. We then use two fine-tuned PID controllers for lateral and longitudinal control to obtain steer, throttle, and brake values from the predicted waypoints. These two PID controllers are not involved in the training process.

#### III-3 Goal Location

According to CARLA [22] 0.9.10's protocol, the high-level goal locations \(G\) are provided as GPS coordinates. These goal locations are sparse (hundreds of meters apart) and can only be used as guidance. We kindly remind the readers that the goal location is different from the predicted waypoints, which are dense and only a few meters away from each other.

### _Cross Semantics Generation_

The motivation for our approach is based on the fact that multi-modality inputs have common information and also their own unique information. For instance, the shapes and locations of vehicles and pedestrians are shared information of the LiDAR and camera inputs. The term unique information refers to information that is not present in other sources of input. Extracting and aligning the common information from both modalities helps the decision network to find the logical connection between modalities and capture their global context, since the extracted features are more reasonable and easier to understand. There are two questions we want to solve in this section:

* How to find the shared information of these two modalities?
* How to align the extracted shared information in the same space?

In order to address the first question, we propose the _cross semantics generation_ approach. Our plan is to utilize the LiDAR input to produce semantic information from the camera and also to use the camera input to generate semantic information from the LiDAR.

Fig. 1: An overview of our proposed Penalty-based Imitation Learning with Cross Semantics Generation. Starting with the feature extraction part, the top-down LiDAR pseudo image is fed into a ResNet18 architecture to extract the features from the LiDAR input, and the front camera image is fed into a ResNet50 architecture to extract the features from the camera input. Then, the cross semantics generation is conducted to cross-generate the front and top-down semantic segmentations in the frames of the front image and the LiDAR pseudo image. We apply the contrastive loss to align the shared features of the camera and LiDAR inputs. The resampled shared embedding from the latent space is used to generate the waypoints. The features extracted from the front camera image are also leveraged to perform the auxiliary tasks 'stop sign indication' and 'red light indication'. In terms of penalty-based imitation learning, the LiDAR features, camera features, shared features, and measurements are concatenated and fed into an MLP network to reduce the dimension. A GRU structure follows to recurrently generate the waypoints, also taking the goal location as input. The generated waypoints are subject to the three penalties, namely, the red light penalty, stop sign penalty, and speed penalty. The task of the PID controller is to map the waypoints into the actions of steer, throttle, and brake.
As Figure 1 demonstrates, the information flows for the LiDAR and camera semantic information generation are crossed, since we use the information from one modality to generate the semantic segmentation of the other modality. That is the reason we name our approach _cross semantics generation_. The RGB image and the bird's eye view LiDAR pseudo image are fed into two residual networks [25] to extract the features of these two modalities. We utilize two linear layers to extract the shared information of the camera and LiDAR. The features extracted from the camera input are used to generate the top-down semantic segmentation, which is aligned with the LiDAR input, and the features extracted from the LiDAR input are used to generate the semantic segmentation of the front image. The reason we use this information to generate semantic segmentations is that 1) semantic segmentation has less noise compared to the original data and 2) with the semantic segmentation we can filter out unique information such as traffic light colors and traffic sign patterns. In our setup, the semantic segmentation contains 4 channels, namely the drivable area, the non-drivable area, objects in the drivable area (vehicles and pedestrians), and others. We define \(y_{f}\) as the ground truth 3D tensor with the dimension \(160\times 768\times 4\) and \(\hat{y}_{f}\) as the output of the front view decoder with the same shape. We also define \(y_{td}\) as the ground truth 3D tensor with the dimension \(256\times 256\times 4\) and \(\hat{y}_{td}\) as the output of the top-down view decoder with the same shape. To align the extracted shared information in the same space, we leverage the contrastive loss. The primary goal of the contrastive loss is to learn an embedding space where similar examples are placed closer together, while dissimilar examples are placed farther apart. This is achieved by defining a distance metric between the embeddings and then minimizing the distance between similar examples while maximizing the distance between dissimilar examples. Note that the embeddings extracted from the LiDAR and camera inputs are mapped into two Gaussian distributions by generating means and variances using MLP networks. The contrastive loss can be defined by:

\[\mathcal{L}(x_{i}^{1},x_{j}^{2})=y_{ij}\mathcal{D}(x_{i}^{1},x_{j}^{2})+(1-y_{ij})\max(0,\epsilon-\mathcal{D}(x_{i}^{1},x_{j}^{2})), \tag{1}\]

where \(x_{i}^{1}\), \(x_{j}^{2}\) are the Gaussian distributions from the camera input and LiDAR input, and \(y_{ij}\) indicates whether \(x_{i}^{1}\) and \(x_{j}^{2}\) are from corresponding LiDAR and camera inputs. \(\mathcal{D}(x_{i}^{1},x_{j}^{2})\) measures the similarity of two features. In our setting, we assume the features extracted from the camera and LiDAR are correlated only if they are from the same frame, which means that for each batch, \(Y\), the matrix format of \(y_{ij}\), is exactly an identity matrix. For each batch input, the alignment loss can be formalized as:

\[\mathcal{L}_{a}(X_{1},X_{2})=E\circ D+(1-E)\circ\max(0,\epsilon-D), \tag{2}\]

where \(E\) is the identity matrix and \(D\) is the matrix consisting of \(d_{ij}=\mathcal{D}(x_{i}^{1},x_{j}^{2})\).
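A minimal sketch of the alignment loss (2) is given below; for readability it uses a squared Euclidean distance between point embeddings as a stand-in for \(\mathcal{D}\), whereas our model measures the distance between the two Gaussian distributions.

```python
import torch

def alignment_loss(D, eps=1.0):
    """Eq. (2): D[i, j] is the distance between the camera embedding of frame i
    and the LiDAR embedding of frame j; matching pairs (the diagonal) are pulled
    together, mismatched pairs are pushed at least eps apart."""
    E = torch.eye(D.size(0), device=D.device)
    return (E * D + (1 - E) * torch.clamp(eps - D, min=0)).mean()

# Stand-in distance: squared Euclidean between batch embeddings (B = 8, dim = 64).
cam = torch.randn(8, 64)
lid = torch.randn(8, 64)
D = torch.cdist(cam, lid) ** 2
loss = alignment_loss(D)
```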
In our setting, we use the Jensen-Shannon divergence [26], a symmetric metric based on the Kullback-Leibler divergence, to measure the distance between two Gaussian distributions:

\[\text{JSD}(P||Q)=\frac{1}{2}D_{KL}(P||M)+\frac{1}{2}D_{KL}(Q||M), \tag{3}\]

where \(P\) and \(Q\) are two distributions, \(M=\frac{1}{2}(P+Q)\) is their mixture, and \(D_{KL}\) is the KL-divergence. The two Gaussian distributions are merged and resampled to get the shared information embeddings.

### _Auxiliary Tasks_

Auxiliary tasks have been proven to be efficient in many learning approaches. They have two main advantages: 1) introducing auxiliary tasks guarantees that the important information flows remain in the network; this information flow is critical for the decision network to make decisions; 2) auxiliary tasks can guide the direction of gradient descent during training, which can lead the neural network to a more optimal location in the weight space. For this autonomous driving task, we introduce two extra auxiliary tasks, namely traffic light classification and stop sign classification.

#### III-C1 Traffic Light Classification

The output of the traffic light decoder is a vector of length 4 which indicates the four states red light, yellow light, green light, and none in the current frame. We define \(y_{l}\) as the ground truth traffic light vector of length 4 and \(\hat{y}_{l}\) as the output of the traffic light decoder with the same shape.

#### III-C2 Stop Sign Classification

The output of the stop sign decoder is a vector of length 2 which indicates whether a stop sign exists in the current frame. The ground truth stop sign vector of length 2 and the output of the stop sign decoder with the same shape are defined as \(y_{s}\) and \(\hat{y}_{s}\), respectively. Note that these two tasks are trained simultaneously with the policy generation networks.

### _Penalty-based Imitation Learning_

Our investigation revealed that the objective function used in imitation learning and the metric used for evaluating autonomous driving performance are not consistent with each other. Hence, achieving a low loss in the objective function does not necessarily guarantee a high driving score or successful route completion. Based on our in-depth analysis, we identified two possible factors contributing to this discrepancy:

* The expert agent still makes mistakes when generating the dataset. Sometimes, the expert agent runs a red light or violates the stop sign rule.
* The objective function is not sensitive to serious violations of the traffic rules, i.e. the violation of red lights and stop signs. The average objective function loss may not increase much when the agent violates traffic rules, even though such a violation may cause serious consequences that result in a huge drop in driving score and route completion.

Behavior Cloning (BC) is an imitation learning technique that seeks to replicate the behavior of an expert agent. As a result, the performance of the trained agent is inherently limited by the capabilities of the expert agent. In other words, if the expert agent makes a mistake, the trained agent will learn to replicate that mistake rather than correct it. Our goal is to modify the objective function used in imitation learning to incorporate traffic rules. Specifically, we introduce additional penalties (higher loss) when the agent generates short-term future waypoints that violate traffic rules during training. Traffic rules can be expressed as constraint functions that define conditions the optimization problem must meet.
In our approach, we focus on three specific aspects of driving behavior: running red lights, ignoring stop signs, and failing to slow down when turning. These are the primary issues we observed with our vanilla imitation learning method. To address these concerns, we propose three corresponding penalties that can be used to quantify and penalize these violations.

#### III-B1 Red Light Penalty

For the red light violation, we design a red light penalty as follows:

\[\mathcal{P}_{\mathrm{tl}}=\mathbb{E}_{\mathcal{X}\sim\mathcal{D}}[\mathbb{1}_{\mathrm{red}}\cdot\sum_{i=1}^{t}c_{i}\cdot\max\{0,w_{i}-\overline{p}\}]. \tag{4}\]

In the formula, \(w_{i}\) represents the \(i\)-th predicted waypoint generated by the trained agent, while \(\overline{p}\) represents the position of the stop line at the intersection. Both \(w_{i}\) and \(\overline{p}\) are in the ego car's coordinate system. The weight parameters are denoted by \(c_{i}\), and the sum of all weight parameters is equal to one. \(\mathbb{1}_{\mathrm{red}}\) indicates the presence of a red light that may affect the agent in the current frame. \(\mathcal{X}\) represents the input for the current frame, while \(\mathcal{D}\) refers to the entire dataset. When facing a red light, a red light penalty is added based on the distances by which the predicted waypoints exceed the stop line at the intersection. If the predicted waypoints fall within the stop line, the penalty remains zero. Conversely, if the predicted waypoints exceed the stop line, the total distance between those waypoints and the stop line is computed as the red light penalty. The information necessary for calculating the red light penalty, such as the traffic light state and the stop line location, is pre-processed and stored in each frame of our dataset.

#### III-B2 Stop Sign Penalty

Similar to the red light penalty, a stop sign penalty is given when the predicted waypoints violate the stop sign rule. The penalty is formalized as follows:

\[\mathcal{P}_{\mathrm{ss}}=\mathbb{E}_{\mathcal{X}\sim\mathcal{D}}[\mathbb{1}_{\mathrm{stopsign}}\cdot\max\{v-\epsilon,0\}], \tag{5}\]

where \(v\) is the desired speed calculated by

\[v=\frac{\|w_{0}-w_{1}\|}{\Delta t}. \tag{6}\]

The variables \(w_{0}\) and \(w_{1}\) represent the first and second predicted waypoints, respectively, while \(\Delta t\) indicates the time interval between frames during data collection. The function \(\mathbb{1}_{\mathrm{stopsign}}\) serves as an indicator for stop sign checking. The maximum speed required to pass stop sign tests is denoted by \(\epsilon\). An upper speed limit is established for an area affected by a stop sign, and only speeds lower than this limit are permitted for the agent to pass through. If the agent exceeds this speed limit, a penalty is imposed based on its speed. As the training network only generates predicted waypoints, the velocity estimated from the waypoints is used to compute the stop sign penalty.

#### III-B3 Speed Penalty

If the agent attempts to turn at excessive speed, a penalty is enforced. The rationale behind this penalty is based on human driving experience, as it is commonly known that turning at high speed can increase the risk of collisions with pedestrians or other objects due to reduced reaction time. The speed penalty is defined as follows:

\[\mathcal{P}_{\mathrm{sp}}=\mathbb{E}_{\mathcal{X}\sim\mathcal{D}}[\mathbb{1}(d\theta)\cdot\max\{v-v_{\mathrm{lb}},0\}], \tag{7}\]

where \(d\theta\) denotes the deviation in direction between the current frame and the next frame and \(\mathbb{1}(d\theta)\) indicates whether the agent is turning. As with the stop sign penalty, the desired speed \(v\) is defined by Equation (6), and \(v_{\mathrm{lb}}\) represents the lower speed limit; any speed below this limit is not subject to the speed penalty.
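The red light and stop sign penalties can be sketched as follows (illustrative PyTorch code; we assume the longitudinal coordinate of the ego frame is the first waypoint component, and the values of \(\Delta t\), \(\epsilon\), and the stop line position are hypothetical). The speed penalty (7) is analogous, with the turn indicator in place of the stop sign flag.

```python
import torch

def red_light_penalty(wps, stop_line_x, red, weights):
    """Eq. (4): weighted overshoot of predicted waypoints past the stop line,
    applied only when a relevant light is red. wps: (B, T, 2) in the ego frame."""
    overshoot = torch.clamp(wps[..., 0] - stop_line_x[:, None], min=0)
    return (red[:, None] * weights[None, :] * overshoot).sum(dim=1).mean()

def stop_sign_penalty(wps, stop, dt, eps):
    """Eq. (5): penalize the speed implied by the first two waypoints (Eq. 6)
    when it exceeds eps in a stop-sign-affected area."""
    v = torch.norm(wps[:, 1] - wps[:, 0], dim=-1) / dt
    return (stop * torch.clamp(v - eps, min=0)).mean()

wps = torch.randn(4, 4, 2)                 # batch of 4 frames, T = 4 waypoints
red = torch.tensor([1.0, 0.0, 1.0, 0.0])   # red-light indicator per frame
stop = torch.tensor([0.0, 1.0, 0.0, 0.0])  # stop-sign indicator per frame
weights = torch.full((4,), 0.25)           # weights c_i summing to one
p_tl = red_light_penalty(wps, torch.full((4,), 5.0), red, weights)
p_ss = stop_sign_penalty(wps, stop, dt=0.1, eps=0.5)
```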
#### III-B4 Waypoint Prediction

We train the network to minimize the difference between the predicted waypoints and the ground truth waypoints using an \(L_{1}\) loss. Let \(w_{t}^{gt}\) represent the ground truth waypoint for time step \(t\); then the loss function can be written as:

\[\mathcal{L}=\sum_{t=1}^{T}||w_{t}-w_{t}^{gt}||_{1}. \tag{8}\]

The total objective function of waypoint prediction over the whole dataset can be written as:

\[\mathcal{F}=\mathbb{E}_{(\mathcal{X},\mathcal{W})\sim\mathcal{D}}[\mathcal{L}(\mathcal{W},\pi(\mathcal{X}))]. \tag{9}\]

By applying the penalties, we formalize the constrained optimization:

\[\begin{split}\min&\mathcal{F}\\ \text{s.t.}&\mathcal{P}_{\mathrm{tl}},\mathcal{P}_{\mathrm{ss}},\mathcal{P}_{\mathrm{sp}}=0,\end{split} \tag{10}\]

where \(\mathcal{F}\) is the objective function defined in Equation (9). The Lagrange multiplier strategy can be applied here. We introduce three Lagrange multipliers \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\), and the Lagrange function is defined by:

\[\min\quad\mathcal{F}+\lambda_{1}\mathcal{P}_{\mathrm{tl}}+\lambda_{2}\mathcal{P}_{\mathrm{ss}}+\lambda_{3}\mathcal{P}_{\mathrm{sp}}. \tag{11}\]

This is the final objective function to optimize. For simplicity, the Lagrange multipliers \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) are treated as fixed hyper-parameters. Well-chosen values of \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) are important for optimization: according to our experiments, a \(\lambda\) that is too large influences the behaviors in other scenarios, while a \(\lambda\) that is too small is not powerful enough for the agent to obey the corresponding traffic rule. The right part of Figure 1 demonstrates the process of penalty-based imitation learning in detail.
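In code, the final objective (11) is simply a weighted sum of the waypoint loss and the penalty terms; a sketch with hypothetical multiplier values follows.

```python
import torch

def total_objective(wps, gt_wps, p_tl, p_ss, p_sp, lam=(0.5, 0.5, 0.5)):
    """Eq. (11): L1 waypoint loss of Eqs. (8)-(9) plus fixed-multiplier penalties.
    wps, gt_wps: (B, T, 2); p_tl, p_ss, p_sp: scalar penalty values."""
    l1 = torch.abs(wps - gt_wps).sum(dim=(-2, -1)).mean()
    return l1 + lam[0] * p_tl + lam[1] * p_ss + lam[2] * p_sp

wps, gt = torch.randn(4, 4, 2), torch.randn(4, 4, 2)
zero = torch.tensor(0.0)
loss = total_objective(wps, gt, zero, zero, zero)   # reduces to the pure L1 loss
```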
## IV Experiments This section will begin by explaining the experimental setting. Afterward, we will compare our proposed model to other baseline models. Additionally, we will investigate the robustness of our model by intentionally subjecting the camera to attacks. Furthermore, we will conduct an ablation study to observe the impact of the penalty weight on overall performance and specific traffic violations, which is shown on our website due to the page limit. ### _Task Description_ We are focusing on a navigation task where the vehicle navigates along predefined routes in various scenarios and areas. The vehicle is guided by GPS signals, and situations where GPS signals are low or not available are not being considered. The predefined routes include specific scenarios to test the agent's ability to respond to emergencies such as obstacle avoidance, other vehicles running red lights, and the sudden appearance of pedestrians on the road. The agent is required to complete the route within a specified time limit, and any time exceeding the limit is considered a failure in terms of route completion. ### _Test Results_ We use the Town05 Long benchmark to evaluate our model. The Town05 Long benchmark contains 10 routes, and all of these routes are over 2.5km. This benchmark is also used by InterFuser [8] and TransFuser [21]. **Training Data.** Due to the challenges in obtaining real-world data, we utilize the Carla simulator [22] for gathering training data, which is preprocessed using an expert policy. The expert policy selected aligns with the one employed in the InterFuser framework. Our data collection involves executing tiny, short, and long scenarios across all towns under clear weather settings, excluding Town05. **Baseline.** The other baselines we chose to compare with our model are TransFuser+, TransFuser, Geometric Fusion, and LateFusion. **TransFuser**[21] introduces the Transformer into the multi-sensor fusion architecture to achieve better end-to-end autonomous driving results. **TransFuser+**[7], as an extension of TransFuser, leverages several auxiliary losses to ensure important information flows, such as traffic light and road line information, in the network. **InterFuser**[8] developed a safety control module to regulate the behaviors of the agent, preventing the agent from violating traffic rules. **LAV**[9] utilizes data from both the ego vehicle and surrounding vehicles to enhance the dataset through data augmentation. This is achieved by learning a spatial intermediate representation that remains invariant to different viewpoints. The evaluation results in Table I indicate that our model achieves a substantial improvement in driving score and infraction penalty compared to other SOTA models. As the table demonstrates, our model adheres to traffic rules more effectively in terms of collisions, red light violations, and stop sign violations. ## V Robustness Study Given the life-critical nature of autonomous driving, it becomes essential to examine the robustness of neural network models against such vulnerabilities. In this regard, we conduct a comprehensive comparative analysis to evaluate the resilience of our model. We benchmark its performance against two state-of-the-art architectures--Transfuser+ [7] and Interfuser [8]. We focus on two specific types of white-box sensor attacks. These are _Fast Gradient Sign Method (FGSM) Attack_[27] and _Dot Attack_[28]. **FGSM Attack**[27]: FGSM Attacks are distinguished by their computational expediency and low perceptibility to the human eye. The method exploits the gradient of the model's loss function with respect to the input data: \[\mathbf{\eta}=\epsilon\operatorname{sign}(\nabla_{\mathbf{x}}J(\mathbf{\theta},\mathbf{x},y)). \tag{12}\] \[\tilde{\mathbf{x}}=\mathbf{x}+\mathbf{\eta} \tag{13}\] Here, \(\mathbf{\theta}\) denotes model parameters, \(\mathbf{x}\) is the original input, and \(y\) is the true label. The loss function is \(J(\mathbf{\theta},\mathbf{x},y)\), and its gradient with respect to \(\mathbf{x}\) is \(\nabla_{\mathbf{x}}J(\mathbf{\theta},\mathbf{x},y)\). The hyperparameter \(\epsilon\) controls the perturbation magnitude, leading to the adversarial input \(\tilde{\mathbf{x}}\). The method creates subtle but impactful perturbations in autonomous driving.
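A minimal sketch of the one-step attack in Equations 12 and 13; `loss_fn` is an assumed callable mapping an input image tensor to the scalar attack loss (e.g., a distance to adversarial target waypoints), not part of any particular library:

```python
import torch

def fgsm_attack(image, loss_fn, epsilon=0.01):
    """One-step FGSM, Eqs. (12)-(13): move the input by epsilon along the
    sign of the loss gradient with respect to the input."""
    x = image.clone().detach().requires_grad_(True)
    loss_fn(x).backward()                      # gradient of J w.r.t. the input
    return (x + epsilon * x.grad.sign()).detach()
```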
**Dot Attack**[28]: Dot Attacks target deep learning algorithms by introducing minor, physical disruptions to camera systems. Specifically, a sticker with a designed dot pattern is placed on the camera lens. For RGB cameras, semi-transparent dots are strategically placed. Their color, given by \(\pi(x;\theta)(i,j)\) at pixel \((i,j)\), is calculated as: \[\pi(x;\theta)(i,j)=(1-\alpha(i,j))\cdot x(i,j)+\alpha(i,j)\cdot\gamma \tag{14}\] Here, \(\alpha(i,j)\) weighs the contributions of the original color \(x(i,j)\) and a learnable color of a dot \(\gamma\). It attenuates with increasing distance \(d(i,j)\) from the center of the dot \((i_{(c)},j_{(c)})\), as formulated by: \[\alpha(i,j)=\alpha\cdot\exp\left(-d(i,j)^{\beta}\right) \tag{15}\] \[d(i,j)=\frac{(i-i_{(c)})^{2}+(j-j_{(c)})^{2}}{r^{2}} \tag{16}\] Each dot has eight learnable parameters: \[\theta=(\gamma,(i_{(c)},j_{(c)}),r,\alpha,\beta) \tag{17}\] In Fig. 2(b), the resulting dots mimic real-world lens stains, displaying a color gradient. Unlike computational attacks like FGSM, Dot Attacks are easily deployed in the real world, requiring only the application of a sticker to the camera lens. Fig. 2: **Qualitative Attack Results on P-CSG. (a) Original RGB input, (b) Dot Attack with nine trained dots, (c) FGSM Attack with \(\epsilon=0.01\). Due to the low magnitude of the adversarial perturbation, distinguishing (c) from (a) poses a challenge to human visual perception. This is also an advantage of FGSM attacks.**
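To make the dot parameterization of Equations 14-16 concrete, here is a sketch of the rendering step only (not the attack optimization itself); the argument names mirror the learnable parameters of Equation 17 but are otherwise our own illustrative choices:

```python
import numpy as np

def render_dot(image, center, r, gamma, alpha0, beta):
    """Composites one semi-transparent dot onto an (H, W, 3) RGB image.
    `gamma` is the dot colour; (center, r, alpha0, beta) follow Eq. (17)."""
    h, w, _ = image.shape
    ii, jj = np.mgrid[0:h, 0:w].astype(float)
    d = ((ii - center[0]) ** 2 + (jj - center[1]) ** 2) / r ** 2   # Eq. (16)
    alpha = (alpha0 * np.exp(-d ** beta))[..., None]               # Eq. (15)
    return (1.0 - alpha) * image + alpha * np.asarray(gamma)       # Eq. (14)
```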
### _Experimental Design_ We utilized CARLA as the simulation environment for our experiments and chose a predefined trajectory, denoted as Town05 Long. To guarantee a fair comparative analysis among three disparate deep learning models--Transfuser+, Interfuser, and P-CSG--a uniform PID controller was applied to Interfuser* and the other two models. This standardization strategy serves to isolate and rigorously evaluate the intrinsic robustness of each model when subjected to various forms of adversarial attacks. **FGSM Attacks**: We aim to induce immediate route deviations in the vehicle's path by setting target waypoints for the subsequent four timesteps to \((-1.0,-2.0)\), \((-2.0,-4.0)\), \((-3.0,-6.0)\), and \((-4.0,-8.0)\), which are relative to the vehicle's current position. These are used to formulate a dedicated loss function, from which adversarial perturbations are generated and applied to the input RGB images. We set the perturbation magnitude \(\epsilon\) to 0.01 for these experiments. **Dot Attacks**: Unlike FGSM Attacks, which necessitate per-frame computations, Dot Attacks employ a pre-trained, universally applicable Dot Mask. Our experimental procedure initiated with the Nine-dots Mask training phase, followed by its application within the CARLA simulation framework. This structured approach permitted us to evaluate the practicality and effectiveness of Dot Attacks under conditions that closely mimic real-world operational scenarios. ### _Results_ Our evaluation framework prominently features the infraction score as the principal metric, encapsulating a range of collision scenarios and traffic violations for comprehensive safety assessment. If the model is subjected to an adversarial attack, it is imperative to maximize the infraction score to enhance the safety of both passengers and pedestrians. Table II substantiates that our model, P-CSG, consistently excels under two contrasting adversarial conditions: high-intensity FGSM Attacks, evaluated on a per-frame basis, and low-intensity Dot Attacks, which are applied uniformly across all frames. Specifically, P-CSG demonstrates an unparalleled commitment to safety, achieving infraction scores that are, on average, 1.5 times higher than the nearest competing model. In the context of low-intensity Dot Attacks, P-CSG not only achieves higher route completion but also upholds elevated safety standards. Conversely, when subjected to high-intensity FGSM Attacks, P-CSG prioritizes safety over route completion, yet still outshines other models in overall performance. Most notably, across both types of attacks, P-CSG boasts the highest cumulative Driving Score, further validating its robustness and adaptability to diverse threat landscapes. ## VI Conclusion In this paper, we improve the fusion technologies and penalty-based policy learning approaches for autonomous driving. Additionally, we compare our proposed model with other baselines under _Dot Attacks_ and _FGSM Attacks_, demonstrating outstanding performance and robustness. We aspire for our proposed penalty-based imitation learning approach to introduce a fresh perspective into the domain of end-to-end autonomous driving, aimed at enhancing the compliance of autonomous agents with traffic rules. Furthermore, additional research is necessary to enhance the robustness of autonomous driving systems, particularly in countering attacks such as _Dot Attacks_.
2309.16905
Towards a Unified Framework for Adaptable Problematic Content Detection via Continual Learning
Detecting problematic content, such as hate speech, is a multifaceted and ever-changing task, influenced by social dynamics, user populations, diversity of sources, and evolving language. There have been significant efforts, both in academia and in industry, to develop annotated resources that capture various aspects of problematic content. Due to researchers' diverse objectives, the annotations are inconsistent and hence, reports of progress on detection of problematic content are fragmented. This pattern is expected to persist unless we consolidate resources considering the dynamic nature of the problem. We propose integrating the available resources, and leveraging their dynamic nature to break this pattern. In this paper, we introduce a continual learning benchmark and framework for problematic content detection comprising over 84 related tasks encompassing 15 annotation schemas from 8 sources. Our benchmark creates a novel measure of progress: prioritizing the adaptability of classifiers to evolving tasks over excelling in specific tasks. To ensure the continuous relevance of our framework, we designed it so that new tasks can easily be integrated into the benchmark. Our baseline results demonstrate the potential of continual learning in capturing the evolving content and adapting to novel manifestations of problematic content.
Ali Omrani, Alireza S. Ziabari, Preni Golazizian, Jeffrey Sorensen, Morteza Dehghani
2023-09-29T00:14:38Z
http://arxiv.org/abs/2309.16905v1
# Towards a Unified Framework for Adaptable Problematic Content Detection via Continual Learning ###### Abstract Detecting problematic content, such as hate speech, is a multifaceted and ever-changing task, influenced by social dynamics, user populations, diversity of sources, and evolving language. There have been significant efforts, both in academia and in industry, to develop annotated resources that capture various aspects of problematic content. Due to researchers' diverse objectives, the annotations are inconsistent and hence, reports of progress on detection of problematic content are fragmented. This pattern is expected to persist unless we consolidate resources considering the dynamic nature of the problem. We propose integrating the available resources, and leveraging their dynamic nature to break this pattern. In this paper, we introduce a continual learning benchmark and framework for problematic content detection comprising over 84 related tasks encompassing 15 annotation schemas from 8 sources. Our benchmark creates a novel measure of progress: prioritizing the adaptability of classifiers to evolving tasks over excelling in specific tasks. To ensure the continuous relevance of our framework, we designed it so that new tasks can easily be integrated into the benchmark. Our baseline results demonstrate the potential of continual learning in capturing the evolving content and adapting to novel manifestations of problematic content. _Warning: this paper contains data that some readers may find offensive._ ## 1 Introduction Our social contexts continuously evolve and adapt to new situations, a trait that has enabled us to navigate through various challenges such as wars or pandemics. People's expressions of hate, toxicity, and incivility, among other types of biases and prejudices, undergo adaptations in response to such changing circumstances. For instance, when there is a shift in the social or economic context, novel forms of hate speech emerge (Tahmasbi et al., 2021). In such scenarios, fear and uncertainty contribute to the proliferation of stereotypical beliefs and the attribution of blame to particular groups (Velasquez et al., 2020; Cinelli et al., 2020). Even in stable social situations, differences in countries, contexts, and perspectives shape the boundaries of what is considered problematic content (Klonick, 2017). The field of problematic content detection has produced an abundance of resources aiming to capture various aspects of this ever-changing phenomenon (Poletto et al., 2021; Vidgen and Derczynski, 2020). While the accumulation of such resources may appear to bring us closer to effectively addressing this problem, the static viewpoint adopted by each resource has resulted in heterogeneity among them, posing a significant challenge for integration of their knowledge into models. This heterogeneity has also caused fragmentation in progress reports on the automatic detection of problematic content. Therefore, it is crucial to establish a benchmark that integrates these annotated resources while capturing the dynamic nature of this problem. Such a benchmark would provide a more practical setting to test our models under stress and offer a new way to measure progress. In this paper, we introduce a continual learning benchmark and framework for problematic content detection comprising 84 related tasks encompassing 15 annotation schemas from 8 sources. By doing so, we present a novel perspective to address the problem of problematic content detection.
Instead of focusing solely on specific aspects, such as the toxicity or incivility of a snapshot of a platform, we advocate for a dynamic formulation that builds on the ever-changing nature of problematic content. Further, we propose a framework for identifying problematic content in a dynamic setting which satisfies the following two objectives: First, an optimal model should have the capability to acquire and retain knowledge about various types of problematic content. This capability is particularly crucial for effectively utilizing the diverse datasets that exist for detecting problematic content. We model this capability through a continual learning formulation, drawing inspiration from previous research (Robins, 1995; de Masson D'Autume et al., 2019; Sun et al., 2019). Our models are designed to learn and understand the intricacies of problematic content by performing a diverse set of related tasks. Second, an optimal model should also have the ability to quickly learn and recognize new instances of problematic content, regardless of whether they appear on new platforms, in different languages, or target new groups. To assess and reward models that can adapt rapidly to emerging problematic content, we employ a few-shot evaluation benchmark on a separate set of related tasks, as suggested by recent work (Jin et al., 2021). Through these objectives, we establish criteria for an ideal model that can effectively handle the dynamic nature of problematic content. We define metrics and evaluations that capture these criteria, and we create a benchmark that accurately reflects the complexities of the problem. In constructing this benchmark, we integrate existing resources in the field, leveraging their strengths to develop a comprehensive framework for studying problematic content detection. To validate the effectiveness of our proposed approach, we conduct extensive experimentation using a diverse set of models and algorithms. Our evaluation focuses on detecting various types of problematic content such as hate speech, toxicity, and incivility, among others. Through these evaluations, we aim to showcase the strengths and limitations of each approach. The insights gained from our findings contribute to ongoing efforts in mitigating the harmful impact of problematic content on online platforms. This is discussed further in section 6 of our work. In sum, by addressing the dynamic nature of problematic content and embracing its complexities, our benchmark and experiments offer valuable insights, resources, and practical solutions for combating problematic content 1. Footnote 1: Our benchmark and experiments are available at [https://github.com/USC-CSSL/Adaptable-Problematic-Content-Detection](https://github.com/USC-CSSL/Adaptable-Problematic-Content-Detection) Figure 1: Current static approaches (I) train and evaluate models on a fixed set of datasets. Our benchmark embraces the dynamic aspects of problematic content detection in two stages: the upstream training (II) and evaluation (III), where data is assumed to arrive as a stream, and the downstream fewshot evaluation (IV), which measures models' generalization to novel forms of problematic content. ## 2 Background ### Problematic Content Detection Social media platforms offer individuals means to freely express themselves. However, certain features of social media, such as partial anonymity, which may promote freedom of expression, can also result in dissemination of problematic content.
Researchers and social media companies recognize this issue and have developed various strategies to tackle it, including the use of automated systems to identify problematic content. Consequently, multiple definitions of problematic content have been proposed (Poletto et al., 2021), ranging from specific areas like misogyny detection (e.g., Fersini et al., 2018) and hate speech (e.g., Kennedy et al., 2022) to broader categories such as offensive language detection (e.g., Davidson et al., 2017). Ideally, in order to foster more healthy and constructive online environments, such content detection systems should possess the capability to identify undesirable content irrespective of factors such as timing, specific linguistic form, or the social media platform used. However, recent studies have revealed limited generalizability of such systems, particularly in the context of hate speech detection (Yin and Zubiaga, 2021; Ramponi and Tonelli, 2022). Yin and Zubiaga (2021) recognized that the scarcity of hate speech in sources poses a challenge to constructing datasets and models. They also acknowledged the difficulty in modeling implicit notions of problematic content. Integrating multiple datasets could potentially address both of these issues. By combining different datasets, the scarcity of problematic content would be reduced. Consequently, a model exposed to a greater variety of implicit notions would have a better ability to identify them. ### Multitask Learning for Problematic Content In recent years, multitask learning (Caruana, 1997) has gained considerable attention as a promising approach for problematic content detection (Kapil and Ekbal, 2021; Plaza-Del-Arco et al., 2021; Farha and Magdy, 2020; Kapil and Ekbal, 2020; Talat et al., 2018). Multitask learning leverages the inherent relationships and shared characteristics among related tasks (e.g., hate speech, racism, and sexism detection in the context of problematic content) to improve performance over a model that learns the tasks individually. By jointly training on multiple related tasks, the models can benefit from knowledge transfer and information sharing across different domains. This approach has shown potential in capturing the underlying nuances and contextual cues that are crucial for effectively detecting and addressing problematic content. Kapil and Ekbal (2020) conducted a comprehensive analysis on the potential benefits of simultaneously learning multiple problematic content tasks. They reported significant improvements, including a 14% and 12% enhancement in macro \(F_{1}\) over the state-of-the-art for offensive language detection (Zampieri et al., 2019) and racism and sexism detection (Waseem and Hovy, 2016), respectively. Furthermore, empirical evidence shows the advantage of multitask learning in enhancing generalization and robustness. This advantage could potentially be due to the model's ability to learn common patterns and effectively differentiate between various forms of harmful language across different tasks (Mao et al., 2020; Zhou et al., 2019; Kapil and Ekbal, 2020). Although multitask learning has demonstrated potential in the field of problematic content detection, it is not exempt from limitations. A significant drawback is the expense involved in retraining the model whenever a new task is introduced to the existing set. As the number of tasks grows, so do the complexity and computational resources needed for retraining.
This becomes particularly challenging in the context of a dynamic landscape of problematic content, where new types of hate speech or toxic behavior emerge constantly. Multitask learning encounters various other challenges apart from computational complexity. These challenges include task interference, a phenomenon wherein the acquisition of multiple tasks concurrently can exert a detrimental impact on each other's learning processes, and catastrophic forgetting, which entails the loss of previously acquired knowledge when learning new tasks (Robins, 1995; Kirkpatrick et al., 2017; Wu et al., 2023). ### Continual Learning and Few Shot Generalization Continual learning is an approach that has emerged to address challenges like task interference, computational complexity, and catastrophic forgetting faced by multitask learning; instead of simultaneously learning all the tasks, continual learning models learn new tasks over time while maintaining knowledge of previous tasks (Robins, 1995). This incremental approach allows for efficient adaptation to new tasks while preserving the knowledge acquired from the previous tasks (Parisi et al., 2019). By leveraging techniques such as parameter isolation, rehearsal, or regularization, continual learning mitigates catastrophic forgetting and ensures that the model retains its proficiency in previously learned tasks (Kirkpatrick et al., 2017; de Masson D'Autume et al., 2019; Wang et al., 2020; Schwarz et al., 2018). Moreover, the capability to incrementally update the model alleviates the computational burden associated with retraining the entire multitask model every time new tasks are added. As a result, continual learning presents a promising approach to tackle the scalability and adaptability issues inherent in multitask learning. This framework becomes particularly attractive for tasks like hate speech detection, toxicity detection, incivility detection, and similar endeavors within a rapidly changing environment of problematic content. The only work in this space is Qian et al. (2021) which applies continual learning to detect hate speech on Twitter. However, their focus is limited to a single definition of hate speech and they analyze a single snapshot of Twitter data. Consequently, their approach does not fully account for the dynamic nature of problematic content across the internet. ## 3 Continual Learning Benchmark for Problematic Content Detection ### Problem Formulation Our objective in creating this benchmark is to develop models that are not only agile in detecting new manifestations of problematic content but are also capable of accumulating knowledge from diverse instances across different time periods and platforms. Such models should possess the ability to rapidly learn and identify new manifestations of problematic content on novel platforms, even when only limited data is available. As time progresses, we anticipate a natural increase in the availability of resources for problematic content detection. Therefore, to encourage building models that leverage this increase in resources, we consider the existing resources as a continuous stream of incoming data. In this context, we make the assumption that there exists a problematic content detection model denoted as \(f\), which undergoes continual learning on a stream of problematic content detection tasks (\(T^{u}=[T^{u}_{1},\dots,T^{u}_{N_{u}}]\)) over time. We refer to this set of tasks as _upstream_ tasks. 
In addition to accumulating knowledge from the stream of tasks, this continual learning model should be able to rapidly generalize its knowledge to numerous related unseen tasks (Jin et al., 2021). We formulate this ability as a few-shot learning problem over a separate set of tasks \(T^{d}=[T^{d}_{1},\dots,T^{d}_{N_{d}}]\), referred to as _downstream_ tasks. ### Training and Evaluation During the continual learning stage, the model encounters a sequentially ordered list of \(N_{u}\) upstream tasks: \([T^{u}_{1},\dots,T^{u}_{N_{u}}]\), where each task has its own distinct training and test sets. To evaluate the few-shot learning capability of the sequentially trained model \(f\), we proceed to adapt it to a collection of \(N_{d}\) few-shot tasks individually represented as \(T^{d}_{i}\). In this scenario, each unseen task is associated with only a small number of training examples. For evaluation purposes, a task is considered "new" if the model hasn't been exposed to labels from that task. This applies to the \(i\)-th upstream task (\(T^{u}_{i}\)) in the upstream training process before the model's upstream training reaches \(T^{u}_{i}\), as well as to all downstream tasks (Figure 1). The paucity of problematic content online results in most datasets used in this work being quite unbalanced (see supplementary materials for details). In unbalanced datasets, AUC is often preferred over \(F_{1}\) score (Bradley, 1997). Hence, we chose AUC as our primary evaluation metric for both the upstream training and downstream adaptation processes. To enable fair comparisons, we used a fixed set of held-out test data for all models. Below we outline the specific measures we employ to characterize the desired attributes of each model. **Few-Shot Performance** To assess the model's few-shot generalization ability, we evaluate the continually trained model \(f\) on unseen tasks by individually fine-tuning it for each task \(T^{d}_{i}\) using a few annotated examples. The few-shot AUC for task \(T_{i}^{d}\) is denoted as \(AUC_{i}^{FS}\), and we report the average few-shot AUC (\(AUC^{FS}\)) across all downstream tasks. **Final Performance** To assess the accumulation of knowledge in upstream tasks, we evaluate the AUC of \(f\) at the end of the continual learning over upstream tasks. This evaluation allows us to determine the extent to which model \(f\) forgets the knowledge pertaining to a specific task once it acquires the ability to solve additional tasks. We report the average AUC over all upstream tasks. **Instant Performance** To assess the effect of learning a sequence of upstream tasks on learning a new upstream task, we evaluate the AUC of \(f\) on task \(T_{i}^{u}\) right after the model is trained on \(T_{i}^{u}\). We report the average of instant performance across all upstream tasks. ### Datasets We have selected datasets for our benchmark based on the following criteria: (1) must be related to problematic content detection, (2) must be in English, and (3) must include a classification task (or a task transformable into classification). We aimed to use datasets that span different sources and time periods, and rely on different definitions of problematic content. Even though we currently focus on one language, the dynamic nature of our formulation easily allows for expansion of this benchmark to other languages (see §8 for more details).
Our benchmark currently covers data from 8 different sources, namely, Twitter, Reddit, Wikipedia, Gab, Stormfront, chat dialogues, and synthetically generated text. These datasets cover a wide range of definitions of problematic content, from focused definitions such as sexism and misogyny to broader definitions such as toxicity. For all datasets, we use the original train/test/dev splits when available, otherwise split the data 80/10/10 randomly. We briefly discuss each dataset below; [U] denotes upstream datasets and [D] is used for datasets used in downstream. **Call Me Sexists, But**[CMSB; Liakhovets et al., 2022] [D] Consists of 6,325 tweets from two sources: 1) Twitter data that was previously annotated for sexism and racism [Waseem and Hovy, 2016], and 2) Twitter data collected between 2008 and 2019 using the phrase "call me sexist, but." Each tweet in the dataset is labeled for sexist content and sexist phrasing, with both being single-choice options, using a set of guidelines derived from psychological scales. **US-election**[Grimminger and Klinger, 2021] [D] Consists of 3000 tweets, covering hate speech and offensive language, which were collected during the six weeks prior to the 2020 presidential election, until one week after the election. Each tweet was annotated for being hateful/non-hateful without considering whether the target is a group or a single person. **Misogyny Detection**[misogyny; Guest et al., 2021] [D] Contains 6567 Reddit posts from 34 subreddits identified as misogynistic from Feb to May 2020, annotated with a three-level hierarchical taxonomy. We only use the top-level annotations, which are binary labels for misogynistic content. **Contextual Abuse Dataset**[CAD; Vidgen et al., 2021a] [U] Consists of 25k Reddit posts collected from 16 Subreddits more likely to contain a diverse range of abusive language, and focused on taking the context of the conversations into account. A hierarchical annotation schema is proposed which takes the context of the conversation into account; Level 1: abusive, non-abusive, and Level 2: for abusive (i) identity-directed, (ii) affiliation-directed and (iii) person-directed. In our benchmark, we use the three labels from the second level to stress test models' ability in learning variations of abuse. **Ex-Machina: Personal Attacks at Scale**[Personal attack; Wulczyn et al., 2017] [U] Includes 100k annotated comments from a public dump of Wikipedia from 2004-2015, where annotators were asked to label comments that contain personal attack or harassment in addition to some finer labels about the category of attack or harassment. We included detecting personal attacks, quoted personal attacks (QA), and personal attacks targeted at a third party (TPA) as separate tasks in our benchmark. **Unhealthy Comment Corpus**[UCC; Price et al., 2020] [U] Consists of 44,355 comments collected from the Globe and Mail news site. Every comment is annotated according to a two-level hierarchy; Level 1: healthy or unhealthy. Level 2: binary labels indicating the presence or absence of six specific unhealthy subattributes: (i) hostility, (ii) antagonism, insults, provocation or trolling, (iii) dismissiveness, (iv) condescension, (v) sarcasm, and (vi) generalization. **The Gab Hate Corpus**[GHC; Kennedy et al., 2022][U] Contains 27,665 posts from Gab.com, spanning January, 2018 to October, 2018, annotated based on a typology for hate speech derived from definitions across legal precedent.
Posts were annotated for Call for Violence (CV), Human degradation (HD), Vulgarity and/or Offensive language (VO) and whether a comment contains explicit or implicit language. **Stormfront**[De Gibert et al., 2018] [D] Includes 10,568 sentences collected from 22 sub-forums of Stormfront.org spanning from 2002 to 2017. Each sentence has been classified as containing hate or not depending on whether it meets the following three premises: "a) deliberate attack, b) directed towards a specific group of people, and c) motivated by aspects of the group's identity." **Dialogue Safety**[Miller et al., 2017, Xu et al., 2021] [D] The Dialogue Safety dataset includes five datasets in the domain of dialogue safety. Three datasets, namely ParlAI single standard, ParlAI single adversarial, and ParlAI multi, are sourced from ParlAI [Miller et al., 2017]. The other two datasets, BAD2 and BAD4, are from Bot-Adversarial Dialogue [Xu et al., 2021]. The ParlAI datasets consist of 30,000 samples, while the BAD datasets consist of 5,784 samples. Conversations in the BAD dataset can span up to 14 turns, and following Xu et al. [2021], we consider the last two and four utterances of the conversation (BAD2 and BAD4) in our benchmark. All dialogue safety datasets provide toxic or safe labels. **Dygen**[Vidgen et al., 2021b] [hate U, rest D] Consists of 41,255 samples dynamically generated using the human-and-model-in-the-loop setting to train more robust hate detection models. The authors collected four rounds of data using _Dynabench_[Kiela et al., 2021], and annotated each sample hierarchically; Level 1: binary hate/non-hate label, Level 2: subclasses of hate (i.e., derogation, animosity, threatening language, support for hateful entities and dehumanization) and 29 target identities (e.g., immigrant, muslim, woman, etc.). We use Level 1 for upstream training and Level 2 for downstream adaptation. **Hatecheck**[Rottger et al., 2021] [D] Contains 3,728 synthetically generated sentences motivated by 29 hate speech detection model functionalities; 18 of these functionalities test for hateful content and cover distinct expressions of hate, and the other 11 functionalities test for non-hateful content and cover contrastive non-hate. **Multitarget-CONAN**[CONAN; Fanton et al., 2021] [D] Consists of 5003 samples of hate speech and counter-narrative pairs targeting different target groups (LGBTQ+, Migrants, Muslims, etc.) created using a human-in-the-loop methodology, in which the generative language model generates new samples and, after confirmation by expert annotators, would get added to the dataset. In our benchmark we included detection of hate speech toward each target group as a separate task. **Civil-comments**[Dixon et al., 2018] [U] Includes two million comments from the Civil Comments platform which are annotated by human raters for various toxic conversational attributes. Each comment has a toxicity label and several additional toxicity subtype attributes, which are severe toxicity, obscene, threat, insult, identity attack, sexual explicit. **Twitter Abusive**[Abusive; Founta et al., 2018] [U] Contains 80k tweets from March to April 2017 annotated for multiple fine-grained aspects of abuse, namely, offensiveness, abusiveness, hateful speech, aggression, cyberbullying, and spam. **Large-Scale Hate Speech Detection with Cross-Domain Transfer**[hate; Toraman et al., 2022] [U] Includes 100k tweets from 2020 and 2021, each annotated by five annotators for hate speech.
Tweets are labeled as hate if "they target, incite violence against, threaten, or call for physical damage for an individual or a group of people because of some identifying trait or characteristic." ## 4 Models and Methods ### Models We represent all tasks in a consistent binary classification format and conduct our experiments using a pretrained language model, specifically **BART-Base**[Lewis et al., 2020]. In addition to fine-tuning all the model weights of BART-Base, we also explore two other variations: **BART-Adapter:** We experiment with Adapter training (Houlsby et al., 2019). In addition to the classification head, adapter training only trains parameters of Adapters, which are two-layer MultiLayer Perceptrons (MLPs) inserted after each layer of BART. **BART-HNet:** Following (Jin et al., 2021), we explore using hypernetworks (BART-HNet). The hypernetwork (\(h\)) accepts a task representation \(z\) as input and generates model parameters for a separate prediction model, denoted as \(f\), in order to address the specific task at hand. ### Upstream Training **Single Task Learning** We finetune a pretrained model on each of the upstream tasks \(T_{i}^{u}\) separately. Note that this model completely ignores the sequential order imposed on our upstream tasks and will, therefore, serve as a baseline for evaluating the performance of our base model on all upstream tasks without any knowledge transfer. **Sequential Finetuning (Vanilla)** We also finetune a pretrained model on the sequence of upstream tasks \([T_{1}^{u},\dots,T_{N_{u}}^{u}]\) without any continual learning algorithms. Previous research suggests that this model will suffer from catastrophic forgetting (Robins, 1995). Comparing the final performance of this model with a continual learning algorithm will give us a measure of the ability of these algorithms in knowledge accumulation. **Multitask Learning (MTL)** To assess the upper bound of knowledge accumulation on the set of upstream tasks we finetune a pretrained model with multitask learning on all upstream tasks. Multitask learning is implemented via hard parameter sharing (see supplementary materials for implementation details). **Continual Learning** Finally, we finetune a model continually on a sequence of upstream tasks \([T_{1}^{u},\dots,T_{N_{u}}^{u}]\). This model should ideally be able to 1. use knowledge from previous tasks to learn a new upstream task, and 2. retain knowledge of the seen upstream tasks. We experiment with two continual learning algorithms: **Bi-level Hypernetworks for Adapters with Regularization** (BiHNet-Reg; Jin et al., 2021): This model is specifically designed to enhance the generation of adapter weights by optimizing bi-level task representations. Its primary objective is to address two important challenges: mitigating catastrophic forgetting and enhancing the overall generalizability of the model. Towards the first challenge, regularization is imposed on the generated adapters. To improve generalization, this model learns two representations for each task: one for high-resource settings and one for few-shot cases. **Elastic Weight Consolidation** (EWC; Kirkpatrick et al., 2017): leverages the principles of Bayesian inference, suggesting a method that selectively slows down learning on the weights important for previous tasks. The model retains old knowledge by assigning a larger penalty to changes in crucial parameters, effectively making them "elastic".
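As a concrete illustration, here is a generic sketch of the EWC regularizer (not the benchmark's own implementation); `fisher` and `anchor` are assumed to be precomputed dictionaries of per-parameter Fisher importance estimates and parameter values saved after earlier tasks:

```python
import torch

def ewc_penalty(model, fisher, anchor, lam):
    """Quadratic EWC penalty (Kirkpatrick et al., 2017): parameters that were
    important for earlier tasks (large Fisher values) are kept 'elastic' by
    penalising their drift from the values learned on those tasks."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - anchor[name]) ** 2).sum()
    return 0.5 * lam * penalty
```

During upstream training, this term would simply be added to the loss of each new task.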
### Downstream Adaptation An ideal model for problematic content detection should be able to learn its new manifestations quickly. Therefore, we evaluate our models' ability to learn unseen datasets of problematic content using only a few examples. We report the performances using \(k=16\) shots (further analysis of the effect of the number of shots on model performances is provided in supplementary materials). ## 5 Experiments Most of the datasets in our benchmark include annotations for various aspects of problematic content (e.g., the UCC dataset includes labels for antagonism, insults, etc.). To ensure flexibility, we treated each label as a separate task. Our rationale behind this approach is rooted in the likely possibility that we will need to introduce additional labels to the existing set in the future. To accommodate potential future updates to the label taxonomy, it is preferable to have models that can quickly adapt and learn new labels. We include each label as a task if the training split includes at least 100 positive samples. In order to minimize the exchange of information between the upstream and downstream tasks, across all our datasets with the exception of Dygen, we categorized all tasks within the dataset as either upstream or downstream. Our selection of larger datasets for the upstream tasks was driven by both the data requirements of upstream training and the fact that larger datasets typically encompass a broader range of problematic content. This decision enables the model to accumulate knowledge on general notions of problematic content, which aligns with our objectives. Subsequently, we assigned tasks as downstream that 1. had limited labeled data, and 2. had minimal overlap (e.g., same domain or labels) with the upstream tasks. To assess the efficacy of our proposed framework in practical scenarios, we ran an additional experiment where we ordered the upstream tasks _chronologically_. Specifically, we used the earliest publication date of each dataset as the temporal reference point to order the upstream datasets. Note that each dataset consists of multiple labels (i.e., tasks). Since we don't have any information about the temporal order of tasks within datasets, we chose this order at random. This experiment allowed us to capture the evolution of the research landscape on problematic content detection, thereby providing a more nuanced understanding of the progress of model performance over time. Figure 2 shows the order of upstream tasks in this experiment.
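Before turning to the results, the evaluation protocol can be summarized in a schematic loop; `model.fit`, `model.auc`, and the task objects below are hypothetical stand-ins for the framework's components, not its actual API:

```python
from copy import deepcopy
from statistics import mean

def run_benchmark(model, upstream, downstream, k=16):
    """Schematic benchmark loop: sequential upstream training with instant
    evaluation, final (retention) evaluation after the whole stream, and
    k-shot downstream adaptation of a fresh copy per task."""
    instant = []
    for task in upstream:                       # tasks arrive as a stream
        model.fit(task.train)                   # continual update on task i
        instant.append(model.auc(task.test))    # instant AUC, right afterwards
    final = [model.auc(t.test) for t in upstream]   # knowledge retention
    fewshot = []
    for task in downstream:                     # unseen manifestations
        adapted = deepcopy(model)
        adapted.fit(task.sample(k))             # only k annotated examples
        fewshot.append(adapted.auc(task.test))
    return mean(instant), mean(final), mean(fewshot)
```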
\begin{table} \begin{tabular}{l c c c} \hline \hline model & Fewshot-F1 & Fewshot-AUC & \(\Delta\) Fewshot-AUC \\ \hline BART-Adapter-Vanilla & 0.256353 & 0.764620 & - \\ BART-BiHNet-Vanilla & 0.270319 & 0.771721 & - \\ BART-BiHNet-Reg & 0.298919 & 0.818512 & +0.046791 \\ BART-BiHNet-EWC & 0.257523 & 0.765808 & -0.005913 \\ BART-Adapter-Multitask & 0.288102 & 0.816277 & +0.051657 \\ BART-BiHNet-Multitask & 0.257100 & 0.795745 & +0.024024 \\ \hline \hline \end{tabular} \end{table} Table 1: Fewshot performance for the models on the chronological experiment \begin{table} \begin{tabular}{l c c} \hline \hline model & Final-F1 & Final-AUC \\ \hline BART-Adapter-Vanilla & 0.130313 & 0.517685 \\ BART-BiHNet-Vanilla & 0.098584 & 0.617468 \\ BART-BiHNet-Reg & 0.271746 & 0.791544 \\ BART-BiHNet-EWC & 0.074141 & 0.676287 \\ BART-Adapter-Multitask & 0.382802 & 0.872739 \\ BART-BiHNet-Multitask & 0.318443 & 0.833905 \\ \hline \hline \end{tabular} \end{table} Table 2: Final performance for the models on the chronological experiment Figure 2: Sequence of upstream tasks in the experiment with chronological task order. Note that datasets are ordered according to the earliest publication date of the data and tasks (i.e., labels) within each dataset are ordered randomly. \begin{table} \begin{tabular}{l c c c} \hline \hline model & Instant-F1 & Instant-AUC & \(\Delta\) Instant-AUC \\ \hline BART-Adapter-Vanilla & 0.402099 & 0.882196 & - \\ BART-BiHNet-Vanilla & 0.412636 & 0.878479 & - \\ BART-BiHNet-Reg & 0.399602 & 0.881692 & +0.003213 \\ BART-BiHNet-EWC & 0.403327 & 0.880995 & +0.002516 \\ \hline \hline \end{tabular} \end{table} Table 3: Instant performance for the models on the chronological experiment To show the efficacy of our proposed continual learning approach in adapting to any scenario, we have also experimented with randomly ordering all the upstream tasks. More details on this experiment can be found in the supplementary materials. ## 6 Results **Single Task Baseline:** To determine the learning capabilities of each model architecture for different tasks (§4.1), we finetune a classifier from each architecture on each task. The average fewshot, final, and instant performance of BART-Adapter-Vanilla and BART-BiHNet-Vanilla are presented in the first two rows of Tables 1, 2, and 3, respectively. We see the largest gap in performance for these models on the final performance metrics. This can be attributed to BiHNet's meta learning capabilities. The details of model performances on each task can be found in supplementary material. **Multitask Upperbound:** In scenarios where there are no adversarial tasks, multitask learning is often used as an empirical upper bound for continual learning algorithms. The last two rows of Tables 1 and 2 show the fewshot and final evaluation of the multitask models. Note that since these models see all tasks at the same time, instant performance is not defined for them. **Does the collection of problematic content tasks help with learning new upstream tasks?** In other words, do the models benefit from upstream training when learning a new task with a substantial amount of annotated data available? To answer this question, we compare the instant performance of a CL model on \(T_{i}^{u}\) with a pretrained model finetuned on just \(T_{i}^{u}\). Our results (\(\Delta\) Instant AUC) show evidence of slight positive transfer; however, the magnitude of this transfer is negligible.
**Does continual learning improve knowledge retention?** The final AUC values, as shown in Table 2, indicate the models' ability to retain knowledge from a sequence of tasks at the end of training. Our results suggest that all continual learning variations outperform naive training. Most notably, BiHNet-Reg outperforms BiHNet-Vanilla by almost 18% in AUC, indicating its potential to mitigate catastrophic forgetting, while falling only 4% short of the multitask counterpart. **Does upstream learning help with generalization to new manifestations of problematic content?** Comparing the single-task baselines with continual and multitask learning, our results (Table 1) demonstrate a noteworthy improvement in models' generalization ability as a result of upstream training. Specifically, BiHNet-Reg shows remarkable generalization ability in fewshot settings, outperforming BiHNet-Vanilla by nearly 5% in AUC. ## 7 Discussion and Conclusion In conclusion, we propose a continual learning benchmark and approach for detecting problematic content that reflects its dynamic and adaptable nature. We define essential characteristics of an ideal model and create a continual learning benchmark and evaluation metrics to capture the variability in problematic content. Our benchmark has two key components: First, an upstream sequence of problematic tasks over which we measure a model's ability to accumulate knowledge, and second, a separate set of downstream few-shot tasks on which we gauge a model's agility in learning new manifestations of problematic content. Our experiments clearly demonstrate the effectiveness of this formulation, particularly in its ability to adapt to new types of problematic content. To ensure the benchmark remains dynamic and up-to-date, we have designed it with continuous updates in mind; our benchmark's flexible implementation allows for seamless repositioning of tasks as either upstream or downstream. We encourage the community to actively contribute to and expand this benchmark, as it serves as a collaborative platform for advancements in the field. ## 8 Limitation and Negative Societal Impact The social science examination of the evolution of problematic content carries its own importance and follows a dedicated line of inquiry. Due to space constraints, we have not provided an exhaustive discussion of this subject. We recommend referring to Klonick (2017) and Atlantic-Council (2023) for a comprehensive overview of this area. The benchmark under discussion is currently designed only for English language content, neglecting the challenges posed by problematic content in other languages and cultures. Our design, however, allows for an easy expansion of the benchmark to include other languages. We have outlined the procedure to expand the benchmark on the accompanying repository and encourage the community to contribute to the benchmark. Though it presents a new measure of progress and baseline results, further investigations and extensive experimentation are needed to fully evaluate the potential of continual learning in detecting evolving problematic content. The study's approach, predominantly using majority label datasets, potentially leads to bias and overgeneralization in detecting problematic content, given the inherent subjectivity of such content influenced by cultural norms, individual sensitivities, and societal changes over time.
The effectiveness of this benchmark could significantly vary due to the diversity of sources and annotation schemas, potentially leading to cultural bias and an overreliance on AI for content detection, thereby neglecting the importance of nuanced human moderation. Future work can explore modeling this subjectivity under our continual learning framework. Moreover, the benchmark opens possibilities for misuse, including training models to generate problematic content or designing adversarial attacks, where malicious actors can exploit the understanding of detection systems to craft content that evades detection. Datasets used in this benchmark may have a high prevalence of problematic content targeting certain social groups. This, in turn, can lead subsequent models to produce unfair outcomes, such as higher false positive rates for the aforementioned groups (Dixon et al., 2018; Wiegand et al., 2019). Recently, various methods have been proposed to mitigate these biases, such as those by Mostafazadeh Davani et al. (2021) and Kennedy et al. (2020). Future research could examine the extent of biases' influence on the model within our framework and the effectiveness of the mentioned techniques in mitigating them. Moreover, some datasets may hold personally identifiable information or data from which individual details can be inferred. To address this concern, we suggest applying Google's DLP, a tool designed to scan and classify sensitive data, to the datasets. Another concern in research on problematic content detection is potential misuse for censorship. However, we emphasize that, in contrast to private methods concealed behind corporate doors, an open-access or academic approach to detecting problematic content fosters transparency. This allows the public to understand and critique the detection criteria. Such transparency ensures accountability, given that academic methods frequently undergo peer review and public scrutiny, thereby addressing biases and mistakes. This research was supported by NSF CAREER BCS-1846531 and DARPA INCAS HR001121C0165. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
2309.05979
Measure preservation and integrals for Lotka--Volterra tree-systems and their Kahan discretisation
We show that any Lotka--Volterra tree-system associated with an $n$-vertex tree, as introduced in Quispel et al., J. Phys. A 56 (2023) 315201, preserves a rational measure. We also prove that the Kahan discretisation of these tree-systems factorises and preserves the same measure. As a consequence, for the Kahan maps of Lotka--Volterra systems related to the subclass of tree-systems corresponding to graphs with more than one $n$-vertex subtree, we are able to construct rational integrals.
Peter H. van der Kamp, Robert I. McLachlan, David I. McLaren, G. R. W. Quispel
2023-09-12T06:13:49Z
http://arxiv.org/abs/2309.05979v2
# Measure preservation and integrals for Lotka-Volterra \(T\)-systems and their Kahan discretisation ###### Abstract We show that any Lotka-Volterra \(T\)-system associated with an \(n\)-vertex tree \(T\) as introduced in Quispel et al., J. Phys. A 56 (2023) 315201, preserves a rational measure. We also prove that the Kahan discretisation of these \(T\)-systems factorises and preserves the same measure. As a consequence, for the Kahan maps of Lotka-Volterra systems related to the subclass of \(T\)-systems corresponding to graphs with more than one \(n\)-vertex subtree, we are able to construct rational integrals. ## 1 Introduction An (autonomous) \(n\)-dimensional Lotka-Volterra (LV) system is a system of the form \[\dot{x_{i}}=x_{i}(b_{i}+\sum_{j=1}^{n}A_{i,j}x_{j}),\qquad i=1,\ldots,n, \tag{1}\] where the vector \(\mathbf{b}\) and the matrix \(\mathbf{A}\) do not depend on \(\mathbf{x}(t)\) or \(t\). A polynomial \(P(\mathbf{x})\) is called a Darboux polynomial (DP) [2, 4, 5] for an ODE \[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x}) \tag{2}\] if there is a function \(C(\mathbf{x})\), called the cofactor of \(P(\mathbf{x})\), such that \[\dot{P}(\mathbf{x})=C(\mathbf{x})P(\mathbf{x}).\] Thus, each \(n\)-dimensional LV-system has \(n\) DPs, namely the coordinates, \(x_{i}\), themselves. Moreover, LV-systems are normal forms for quadratic ODEs with \(n\) linearly independent linear DPs; by a linear transformation such a system can be written in LV form. Some LV-systems have additional Darboux polynomials. We recall a key lemma. **Lemma 1** ([12]).: _The LV-system (1) has Darboux polynomial \(P_{i,k}:=\alpha x_{i}+\beta x_{k}\) with \(\alpha\beta\neq 0\) if and only if, for some constant \(b\) and for all \(j\not\in\{i,k\}\),_ \[A_{i,j} =A_{k,j} \tag{3}\] \[b_{i} =b_{k}=b\] \[\alpha(A_{k,k}-A_{i,k}) =\beta(A_{k,i}-A_{i,i})\] _and \((A_{k,k}-A_{i,k})(A_{k,i}-A_{i,i})\neq 0\)._ We will take such a DP to be \[P_{i,k}=(A_{k,i}-A_{i,i})x_{i}+(A_{k,k}-A_{i,k})x_{k}.\] Thus, if (3) holds for several pairs \((i,k)\), the associated LV-system has several additional DPs. In this paper we view these pairs as edges on a graph with \(n\) vertices, and we shall see that the structure of the graph determines properties of the associated LV-system and of a certain birational map associated with it. The case that the graph is a tree on \(n\) vertices was considered in [11, 12], where \((3n-2)\)-parameter families of homogeneous \(n\)-dimensional LV-systems, in one-to-one correspondence with trees on \(n\) vertices, were shown to be superintegrable. These families will be referred to as \(T\)-systems. To each edge of the tree \(T\) corresponds a DP for the LV system, and, by using the \(n\) given DPs, \(x_{i}\), one can then construct \(n-1\) integrals. We illustrate the construction with an example in section 2. An ODE (2) is measure-preserving with measure \[\frac{\mathrm{d}x_{1}\mathrm{d}x_{2}\cdots\mathrm{d}x_{n}}{d(\mathbf{x})}\] if \(d(\mathbf{x})\), the density, is a DP with cofactor equal to the divergence of \(\mathbf{f}(\mathbf{x})\), i.e., it satisfies \[\dot{d}(\mathbf{x})=\left(\,\nabla\cdot\mathbf{f}(\mathbf{x})\right)\ d( \mathbf{x}).\] In section 3, we show that Lotka-Volterra \(T\)-systems are measure-preserving, with density \[d(\mathbf{x})=\prod_{i=1}^{n}x_{i}^{2-m_{i}}\prod_{j=1}^{n-1}P_{j}, \tag{4}\] where, cf. 
[12, Equation (6)], \[\begin{split} P_{j}&:=P_{u_{j},v_{j}}\\ &=(A_{v_{j},u_{j}}-A_{u_{j},u_{j}})x_{u_{j}}+(A_{v_{j},v_{j}}-A_{u_{j},v_{j}})x_{v_{j}}\\ &=(c_{j}-a_{u_{j}})x_{u_{j}}+(a_{v_{j}}-b_{j})x_{v_{j}}\end{split} \tag{5}\] is the DP obtained from the \(j\)th edge, \(e_{j}=(u_{j},v_{j})\), of the tree \(T\), and \(m_{i}\) is the number of edges connected to the vertex \(i\in T\). The Kahan discretisation [3, 6, 7, 8, 9, 10] with step size \(h\) of a homogeneous quadratic ODE \[\dot{x}_{i}=\sum_{j,k}c_{j,k}^{i}x_{j}x_{k}\] is the birational map \(\mathbf{x}\mapsto\mathbf{x}^{\prime}\) implicitly given (or defined) by \[\frac{x_{i}^{\prime}-x_{i}}{h}=\sum_{j,k}c_{j,k}^{i}\frac{x_{j}^{\prime}x_{k}+x_{j}x_{k}^{\prime}}{2}.\] In section 4, we show that the Kahan discretisation of a \(T\)-system is explicitly given by \[x_{i}^{\prime}=x_{i}\frac{\prod_{j\neq i}K_{i,j}}{|\mathbf{M}|}\] with \[K_{i,j}=1-\frac{h}{2}\left((\mathbf{A}.\mathbf{x})_{j}+(A_{j,j}-A_{i,j})x_{j}\right),\] \[\mathbf{M}=\overline{\mathbf{1}}-\frac{h}{2}\left(\overline{\mathbf{x}}.\mathbf{A}+\overline{\mathbf{A}.\overline{\mathbf{x}}.\mathbf{1}}\right),\] where \(\overline{\mathbf{x}}\) denotes the diagonal matrix with entries \(\overline{\mathbf{x}}_{ii}=x_{i}\). A _discrete Darboux polynomial_ for a rational map \(\mathbf{x}\mapsto\mathbf{x}^{\prime}:=\boldsymbol{\phi}(\mathbf{x})\), \(\mathbf{x}\in\mathbb{R}^{n}\), is a polynomial \(P\colon\mathbb{R}^{n}\to\mathbb{R}\) such that there exists a rational function \(C\colon\mathbb{R}^{n}\to\mathbb{R}\) (again called the cofactor of \(P\)) whose denominator does not have any common factors with \(P\), such that \(P^{\prime}=CP\) where \(P^{\prime}:=P\circ\boldsymbol{\phi}\). If \(P_{1},\ldots,P_{k}\) are Darboux polynomials with cofactors \(C_{1},\ldots,C_{k}\), respectively, then \(P:=\prod P_{i}^{\alpha_{i}}\) obeys \(P^{\prime}=CP\) where \(C=\prod C_{i}^{\alpha_{i}}\). (If the \(\alpha_{i}\) are not nonnegative integers, \(P\) is a Darboux 'function' rather than a Darboux polynomial.) If, in addition, \(C=1/\det D\boldsymbol{\phi}\), then \(\frac{1}{P}\mathrm{d}x_{1}\ldots\mathrm{d}x_{n}\) is an invariant measure of \(\boldsymbol{\phi}\), while if \(C=1\) then \(P\) is a first integral of \(\boldsymbol{\phi}\)[1, 13]. Linear Darboux polynomials are preserved under Kahan discretisation [2, Theorem 1]. In section 5, we show that the cofactors of the preserved discrete Darboux polynomials, \(P_{i}\), of the Kahan-discretised \(T\)-systems are given by \[L_{i}\frac{\prod_{j\neq u_{i},v_{i}}K_{u_{i},j}}{|\mathbf{M}|}\] where \[L_{i}=1-\frac{h}{2}\left((\mathbf{A}.\mathbf{x})_{u_{i}}-(A_{u_{i},u_{i}}-A_{v_{i},u_{i}})x_{u_{i}}\right).\] In section 6, we prove that the Jacobian of the Kahan map of a \(T\)-system is given by \[J=\frac{\left(\prod_{i=1}^{n-1}L_{i}\right)\left(\prod_{i=1}^{n}\left(\prod_{j=1}^{n}K_{i,j}\right)/K_{i,i}\right)}{|\mathbf{M}|^{n+1}}, \tag{6}\] and we prove that the expression \(d\), given by (4), is a rational Darboux function with cofactor \(J\), given by (6). Thus, Kahan-discretised \(T\)-systems are measure-preserving. In the final section, we consider \(n\)-dimensional LV-systems related to graphs \(G\) that contain more than one subgraph which is a tree on \(n\) vertices. The Kahan maps related to these so-called \(G\)-systems preserve more than one measure and this enables us to find integrals for these maps. We classify distinct classes of \(G\)-systems and explicitly provide all distinct graphs on \(4,5\) and \(6\) vertices. We show that if \(G\) contains a cycle of length \(\ell\), the Kahan map of the \(G\)-system has at least \(\ell-2\) integrals.
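To fix ideas, one step of the Kahan map is a single linear solve; the following minimal numerical sketch (an illustration under the notation above, not the authors' code) implements \(\mathbf{M}\mathbf{x}^{\prime}=\mathbf{x}\):

```python
import numpy as np

def kahan_step(x, A, h):
    """One Kahan step for the homogeneous LV system (7): solve M x' = x,
    where M = I - (h/2) (diag(x) A + diag(A x))."""
    M = np.eye(len(x)) - 0.5 * h * (np.diag(x) @ A + np.diag(A @ x))
    return np.linalg.solve(M, x)
```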
## 2 \(T\)-systems, an example For any tree \(T\) on \(n\) vertices, one can associate a homogeneous Lotka-Volterra system, i.e. a system of the form \[\dot{x}_{i}=x_{i}\sum_{j=1}^{n}A_{i,j}x_{j},\qquad i=1,\ldots,n \tag{7}\] with \(3n-2\) free parameters [11, 12]. The \(n\times n\) matrix \(\mathbf{A}\) is the adjacency matrix of the associated weighted complete digraph of \(T\), cf. [12, definition 3]. For the tree shown in Figure 1 the matrix \(\mathbf{A}\), with \(3\times 4-2=10\) free parameters, is \[\mathbf{A}=\begin{pmatrix}a_{1}&b_{1}&b_{2}&b_{3}\\ c_{1}&a_{2}&b_{2}&b_{3}\\ c_{1}&c_{2}&a_{3}&b_{3}\\ c_{1}&c_{3}&b_{2}&a_{4}\end{pmatrix}. \tag{8}\] Figure 1: The bushy tree on 4 vertices. We note that the cofactor of the DP \(x_{i}\) is given by \((\mathbf{A}\mathbf{x})_{i}\). The matrix \(\mathbf{A}\) has the property that for each pair of rows \(e_{i}=(u_{i},v_{i})\in\{(1,2),(2,3),(2,4)\}\) (that is, for each edge of \(T\)) we have \(A_{u_{i},k}=A_{v_{i},k}\) for all \(k\not\in e_{i}\). This property gives rise to \(n-1\) additional Darboux polynomials of the form (5), \[\begin{split} P_{1}&=\left(c_{1}-a_{1}\right)x_{1}+\left(a_{2}-b_{1}\right)x_{2}\\ P_{2}&=\left(c_{2}-a_{2}\right)x_{2}+\left(a_{3}-b_{2}\right)x_{3}\\ P_{3}&=\left(c_{3}-a_{2}\right)x_{2}+\left(a_{4}-b_{3}\right)x_{4}.\end{split} \tag{9}\] Their cofactors are given by \((\mathbf{B}\mathbf{x})_{i}\) where \[B=\begin{pmatrix}a_{1}&a_{2}&b_{2}&b_{3}\\ c_{1}&a_{2}&a_{3}&b_{3}\\ c_{1}&a_{2}&b_{2}&a_{4}\end{pmatrix}.\] Using the rather general method [11, section 2], these additional DPs give rise to \(n-1\) integrals [12, Equation (7)], for \(i=1,2,3\): \[I_{i}=P_{i}^{|\mathbf{A}|}\prod_{k=1}^{n}x_{k}^{Z_{i,k}},\qquad\mathbf{Z}=-\mathbf{B}\mathbf{A}^{-1}|\mathbf{A}|,\] cf. [11, section 4.3.2] for explicit expressions. ## 3 Measure preservation for \(T\)-systems **Proposition 2**.: _Let \(T\) be a tree on \(n\) vertices, and let \(m_{i}\) be the degree of (number of edges which meet at) vertex \(i\). The Lotka-Volterra \(T\)-system (7) is measure preserving with density_ \[d=\prod_{i=1}^{n}x_{i}^{2-m_{i}}\prod_{j=1}^{n-1}P_{j}, \tag{10}\] _where \(P_{j}\) is the DP associated with the \(j\)th edge, given by (5)._ Proof.: A product of DPs \(d=\prod_{i}p_{i}^{q_{i}}\) is a Darboux function, as \(\dot{d}=(\sum_{i}q_{i}C_{i})d\), where \(C_{i}\) is the cofactor of \(p_{i}\). Let \[p_{i}=\begin{cases}x_{i}&i=1,\ldots,n\\ P_{i-n}&i=n+1,\ldots,2n-1,\end{cases}\quad q_{i}=\begin{cases}2-m_{i}&i=1,\ldots,n\\ 1&i=n+1,\ldots,2n-1.\end{cases}\] Then the cofactors are \[C_{i}=\begin{cases}\sum_{j=1}^{n}A_{i,j}x_{j}&i=1,\ldots,n\\ \sum_{j=1}^{n}B_{i-n,j}x_{j}&i=n+1,\ldots,2n-1,\end{cases}\] where the matrix \(\mathbf{B}\) contains the coefficients of the cofactors of the additional DPs \(P_{j}\)[12, Definition 13]. We have \(\dot{x}_{i}=f_{i}=x_{i}C_{i}\) and \(\,\nabla\cdot\mathbf{f}=\sum_{i=1}^{n}\Big{(}C_{i}+A_{i,i}x_{i}\Big{)}\). Hence \[\dot{d}/d-\,\nabla\cdot\mathbf{f} =\sum_{i=1}^{2n-1}q_{i}C_{i}-\sum_{i=1}^{n}\Big{(}C_{i}+A_{i,i}x_{i}\Big{)}\] \[=\sum_{i=1}^{n}(1-m_{i})C_{i}+\sum_{i=1}^{n-1}C_{n+i}-\sum_{i=1}^{n}a_{i}x_{i}. \tag{11}\] We will show that the coefficient of \(x_{p}\) in the linear combination (11) vanishes for arbitrary \(p\in\{1,\ldots,n\}\), i.e., that \[\sum_{i=1}^{n}(1-m_{i})A_{i,p}+\sum_{i=1}^{n-1}B_{i,p}-A_{p,p}=0. \tag{12}\]
\tag{12}\] Recall that the edges of \(T\) are given by \(e_{i}=(u_{i},v_{i})\) for \(i=1,\ldots,n-1\). For \(p\in\{1,\ldots,n\}\), let \(J^{p},K^{p}\) be sets of indices such that \[j\in J^{p}\Leftrightarrow v_{j}=p,\qquad k\in K^{p}\Leftrightarrow u_{k}=p.\] Then the union \(I^{p}=J^{p}\cup K^{p}\) has \(m_{p}\) elements. We think of the tree \(T\) as a collection of \(m_{p}\) trees, \(T^{p}_{i}\) (\(i\in I^{p}\)), connected at the vertex \(p\). We define \(z(p,i)\) to be the number of edges contained in \(T^{p}_{i}\), so that, for each \(p\), \(\sum_{i\in I^{p}}z(p,i)=n-1\), which equals the number of edges in \(T\). For any tree \(T\) on \(n\) vertices, if \(m_{i}\) is the number of edges at vertex \(i\) and \(e=n-1\) is the number of edges, then \[\sum_{i}m_{i}=2e\implies\sum_{i}(1-m_{i})=n-2e=1-e.\] Now, consider the first sum in (12). We break up the sum into \(m_{p}\) sums plus a term. \[\sum_{i=1}^{n}(1-m_{i})A_{i,p} =\Big{(}\sum_{l=1}^{m_{p}}\sum_{i\in T^{p}_{l}}(1-m_{i})A_{i,p} \Big{)}+(1-m_{p})A_{p,p}\] \[=\Big{(}\sum_{j\in J^{p}}(1-z(p,j))b_{j}\Big{)}+\Big{(}\sum_{k \in K^{p}}(1-z(p,k))c_{k}\Big{)}+(1-m_{p})a_{p}, \tag{13}\] as for each vertex \(i\neq p\in T^{p}_{j},j\in J^{p}\implies A_{i,p}=b_{j}\) and for each vertex \(i\neq p\in T^{p}_{k},k\in K^{p}\implies A_{i,p}=c_{k}\) (note \(p\in T^{p}_{i}\) does not contribute to the sum as in \(T^{p}_{i}\) only 1 edge meets in \(p\)). Next, consider the second sum in (12). We have \[\sum_{i=1}^{n-1}B_{i,p} =\Big{(}\sum_{l=1}^{m_{p}}\sum_{e_{i}\in T^{p}_{l}}B_{i,p}\Big{)}\] \[=\Big{(}\sum_{j\in J^{p}}(z(p,j)-1)b_{j}\Big{)}+\Big{(}\sum_{k\in K ^{p}}(z(p,k)-1)c_{k}\Big{)}+m_{p}a_{p}, \tag{14}\] as \(i\in I^{p}\implies B_{i,p}=a_{p}\) and \(i\not\in I^{p},e_{i}=(v,w),B_{i,p}=A_{v,p}=A_{w,p}=b_{j}\) or \(c_{k}\), depending on whether there is a \(q\in T^{p}_{l}\) such that \((q,p)=e_{j}\) or \((p,q)=e_{k}\). By substitution of (13) and (14) into (12) the result follows. We note that, due to the existence of many integrals, the ODE preserves many other measures. The measure introduced in Proposition 2 is the one that is preserved by Kahan discretisation. **Example 3**.: _The 4-dimensional \(T\)-system given by equation (7) with (8), which is connected to the bushy tree displayed in Figure 1, has divergence_ \[\nabla\cdot\mathbf{f}=\left(2\,a_{1}+3\,c_{1}\right)x_{1}+\left(2\,a_{2}+b_{1}+c _{2}+c_{3}\right)x_{2}+\left(2\,a_{3}+3\,b_{2}\right)x_{3}+\left(2\,a_{4}+3\,b _{3}\right)x_{4}.\] _The vector whose \(i\)-th component equals the degree of vertex \(i\) is \(\mathbf{m}=\left(1,3,1,1\right)\), so that \(\mathbf{2}-\mathbf{m}=\left(1,-1,1,1\right)\). According to Proposition 2 the density_ \[d=x_{1}(x_{2})^{-1}x_{3}x_{4}P_{1}P_{2}P_{3},\] _with \(P_{1},P_{2},P_{3}\) given by (9), is a rational Darboux function with cofactor \(\,\nabla\cdot\mathbf{f}\). This can be verified by differentiation, or, alternatively, as follows. 
We write equation (12) as \(\mathbf{K}\mathbf{x}=0\), where_ \[\mathbf{K}=\left(\mathbf{1}_{n}-\mathbf{m}\right)\cdot\mathbf{A}+\mathbf{1}_{ n-1}\cdot\mathbf{B}-\mathbf{a}.\] _Then, in our case, we only have to compute_ \[\mathbf{K} =\begin{pmatrix}0&-2&0&0\end{pmatrix}\begin{pmatrix}a_{1}&b_{1}&b _{2}&b_{3}\\ c_{1}&a_{2}&b_{2}&b_{3}\\ c_{1}&c_{2}&a_{3}&b_{3}\\ c_{1}&c_{3}&b_{2}&a_{4}\end{pmatrix}+\begin{pmatrix}1&1&1\end{pmatrix}\begin{pmatrix} a_{1}&a_{2}&b_{2}&b_{3}\\ c_{1}&a_{2}&a_{3}&b_{3}\\ c_{1}&a_{2}&b_{2}&a_{4}\end{pmatrix}-\begin{pmatrix}a_{1}&a_{2}&a_{3}&a_{4} \end{pmatrix}\] \[=-2\begin{pmatrix}c_{1}&a_{2}&b_{2}&b_{3}\end{pmatrix}+\begin{pmatrix} a_{1}+2c_{1}&3a_{2}&a_{3}+2b_{2}&a_{4}+2b_{3}\end{pmatrix}-\begin{pmatrix} a_{1}&a_{2}&a_{3}&a_{4}\end{pmatrix}\] \[=\mathbf{0}.\] ## 4Kahan discretisation of \(T\)-systems Let \(\mathfrak{K}\) be the diagonal matrix with entries \(\overline{\mathbf{x}}_{ii}=x_{i}\). The Kahan discretisation of (7), \(\mathbf{x}\mapsto\mathbf{x}^{\prime}\), satisfies \(\mathbf{M}\mathbf{x}^{\prime}=\mathbf{x}\), where \[\mathbf{M}=\overline{\mathbf{I}}-\frac{h}{2}\left(\overline{\mathbf{x}}. \mathbf{A}+\overline{\mathbf{A}.\overline{\mathbf{x}}.\mathbf{1}}\right).\] **Proposition 4**.: _The Kahan discretisation is explicitly given by_ \[x_{i}^{\prime}=x_{i}\frac{\prod_{j\neq i}K_{i,j}}{|\mathbf{M}|} \tag{15}\] _where_ \[K_{i,j}=1-\frac{h}{2}\left((\mathbf{A}.\mathbf{x})_{j}+(A_{j,j}-A_{i,j})x_{j} \right). \tag{16}\] Proof.: Let \(\mathbf{M}^{(i)}\) be the matrix obtained from \(\mathbf{M}\) by replacing the \(i\)th column by \(\mathbf{x}\). By Cramer's rule we need to show \[|\mathbf{M}^{(i)}|=x_{i}\prod_{j\neq i}K_{i,j}.\] The off-diagonal entries in \(\mathbf{M}\) are linear in \(h\). The diagonal entries are affine, of the form \(1+h(\cdots)\). Therefore we have \(|\mathbf{M}^{(i)}|=x_{i}+h(\cdots)\). Hence, if \[|\mathbf{M}^{(i)}|=cx_{i}\prod_{j\neq i}K_{i,j},\] then \(c=1\), and it suffices to show that \(x_{i}\) and \(K_{i,j}\) (\(j\neq i\)) are divisors of \(|\mathbf{M}^{(i)}|\). It follows that \(x_{i}\) is a divisor by expanding \(|\mathbf{M}^{(i)}|\) in the \(i\)th row. We prove that \(K_{i,j}\) (\(j\neq i\)) is a divisor by establishing \[K_{i,j}=0\implies|\mathbf{M}^{(i)}|=0.\] Let \(\mathbf{k}=k_{1},k_{2},\ldots,k_{m}\) be the path from \(k_{1}=i\) to \(k_{m}=j\), i.e., for all \(l\) we have that either \((k_{l},k_{l+1})\) or \((k_{l+1},k_{l})\) is an edge in \(T\). Let us create a matrix \(\mathbf{M}^{[i]}\) by dividing the \(j\)th row of \(\mathbf{M}^{(i)}\) by \(x_{j}\). The \(i\)th column of \(\mathbf{M}^{[i]}\) is \(\mathbf{1}\), and the other elements, apart from the diagonal ones, are \(\mathbf{M}^{[i]}_{k,l}=-\frac{h}{2}A_{k,l}\). Modulo \(K_{i,j}\) we have \[M^{[i]}_{j,j}\mid_{K_{i,j}=0}=\frac{1-\frac{h}{2}\big{(}(\mathbf{A.x})_{j}+A_{j,j}x_{j}\big{)}}{x_{j}}\mid_{K_{i,j}=0}=-\frac{h}{2}A_{i,j}.\] From this, and from [12, Definition 3], as \(\mathbf{k}\) is the path from \(i\) to \(j\), it follows that \[M^{[i]}_{i,j}=M^{[i]}_{k_{l},j}\text{ for all }l=1,\ldots,m. \tag{17}\] Consider the \(k_{m-1}\)st and the \(k_{m}\)th row of \(\mathbf{M}^{[i]}\). As \((k_{m},k_{m-1})\) or \((k_{m-1},k_{m})\) is an edge in \(T\), due to [12, Corollary 8] and the fact that the \(i\)th column of \(\mathbf{M}^{[i]}\) is \(\mathbf{1}\), we have \[M^{[i]}_{k_{m-1},l}=M^{[i]}_{k_{m},l}\text{ for all }l\neq k_{m-1},k_{m}. \tag{18}\] Because of (17), equation (18) also holds for \(l=k_{m}\), and thus the rows differ only in the \(k_{m-1}\)st column. 
We can now add a (non-zero) multiple of row \(k_{m}\) to a (non-zero) multiple of row \(k_{m-1}\) to create a new row \(k_{m-1}\) where the element in the \(k_{m-1}\)st column is replaced by a scalar quantity of choice. We choose the scalar to be \[M^{[i]}_{k_{m-1},k_{m-1}}=-\frac{h}{2}A_{i,k_{m-1}}. \tag{19}\] We repeat the argument. From (19), and from [12, Definition 3], as there is a path from \(i\) to \(k_{m-1}\), it follows that \[M^{[i]}_{i,k_{m-1}}=M^{[i]}_{k_{l},k_{m-1}}\text{ for all }l=1,\ldots,m-1. \tag{20}\] Considering the \(k_{m-2}\)nd and the \(k_{m-1}\)st row of \(\mathbf{M}^{[i]}\). As \((k_{m-1},k_{m-2})\) or \((k_{m-2},k_{m-1})\) is an edge in \(T\), we have \[M^{[i]}_{k_{m-2},l}=M^{[i]}_{k_{m-1},l}\text{ for all }l\neq k_{m-2},k_{m-1}. \tag{21}\] Because of (20), equation (21) also holds for \(l=k_{m-1}\), and thus the rows differ only in the \(k_{m-2}\)st column. We can now add a (non-zero) multiple of row \(k_{m}\) to a (non-zero) multiple of row to create a new row \(k_{m-2}\) where the element in the \(k_{m-2}\)nd column is choosen to be \[M^{[i]}_{k_{m-2},k_{m-2}}=-\frac{h}{2}A_{i,k_{m-2}}.\] We continue making these elementary row-operations until we arrive at a matrix where \[M^{[i]}_{k_{2},k_{2}}=-\frac{h}{2}A_{i,k_{2}}. \tag{22}\] Now as \((i,k_{2})\) or \((k_{2},i)\) is an edge in \(T\), we have \[M^{[i]}_{i,l}=M^{[i]}_{k_{2},l}\text{ for all }l\neq i,k_{2}. \tag{23}\] Due to (22), equation (23) also holds for \(l=k_{2}\). But as the \(i\)th column of \(\mathbf{M}^{[i]}\) equals \(\mathbf{1}\), equation (23) also holds for \(l=i\). The rows are equal, and hence the determinant vanishes. **Example 5**.: _For the bushy tree on 4 vertices the Kahan discretisation satisfies \(\mathbf{M}\mathbf{x}^{\prime}=\mathbf{x}\) with_ \[\mathbf{M}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}-\frac{h}{2}\begin{pmatrix}C_{1}+a_{1}x_{1}&b_{1}x_{1}&b_ {2}x_{1}&b_{3}x_{1}\\ c_{1}x_{2}&C_{2}+a_{2}x_{2}&b_{2}x_{2}&b_{3}x_{2}\\ c_{1}x_{3}&c_{2}x_{3}&C_{3}+a_{3}x_{3}&b_{3}x_{3}\\ c_{1}x_{4}&c_{3}x_{4}&b_{2}x_{4}&C_{4}+a_{4}x_{4}\end{pmatrix}, \tag{24}\] _where \(C_{i}=(\mathbf{A}\mathbf{x})_{i}\) is the cofactor of \(x_{i}\). In terms of the functions_ \[\begin{split} K_{2,1}(=K_{3,1}=K_{4,1})&=1+\frac{h} {2}(C_{1}+x_{1}(a_{1}-c_{1})),\\ K_{1,2}&=1+\frac{h}{2}(C_{2}+x_{2}(a_{2}-b_{1})),\\ K_{3,2}&=1+\frac{h}{2}(C_{2}+x_{2}(a_{2}-c_{2})),\\ K_{4,2}&=1+\frac{h}{2}(C_{2}+x_{2}(a_{2}-c_{3})),\\ K_{1,3}(=K_{2,3}=K_{4,3})&=1+\frac{h}{2}(C_{3}+x_{3} (a_{3}-b_{2})),\\ K_{1,4}(=K_{2,4}=K_{3,4})&=1+\frac{h}{2}(C_{4}+x_{4} (a_{4}-b_{3})),\end{split} \tag{25}\] _the Kahan map is explicitly given by_ \[\begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{pmatrix}^{\prime}=\frac{1}{|\mathbf{M}|}\begin{pmatrix}x_{1}K_{1,2}K_ {1,3}K_{1,4}\\ x_{2}K_{2,1}K_{1,3}K_{1,4}\\ x_{3}K_{2,1}K_{3,2}K_{1,4}\\ x_{4}K_{2,1}K_{4,2}K_{1,3}\end{pmatrix}. \tag{26}\] ## 5 Cofactors for the additional DPs of the Kahan map **Proposition 6**.: _The cofactor of the Darboux polynomial \(P_{i}\), which corresponds to the \(i\)-th edge \(e_{i}=(u_{i},v_{i})\) (see Eq. (5)), is explicitly given by_ \[L_{i}\frac{\prod_{j\neq u_{i},v_{i}}K_{u_{i},j}}{|\mathbf{M}|}\] _where_ \[L_{i}=1-\frac{h}{2}\left((\mathbf{A}.\mathbf{x})_{u_{i}}-(A_{u_{i},u_{i}}-A_{v _{i},u_{i}})x_{u_{i}}\right), \tag{27}\] _which is symmetric under \(u_{i}\leftrightarrow v_{i}\)._ Proof.: In order to not have to carry many indices, we fix \(i\) and denote the \(i\)-th edge by \(e_{i}=(u,v)\). 
The \(i\)-th DP is then given by \[P_{i}=(c_{i}-a_{u})x_{u}+(a_{v}-b_{i})x_{v}.\] We note that due to the property of matrix \(\mathbf{A}\)[12, Corollary 8], that \(j\neq u,v\implies A_{u,j}=A_{v,j}\), we have \(K_{u,j}=K_{v,j}\). Using Proposition 4, we find \[P_{i}^{\prime} =(c_{i}-a_{u})x_{u}^{\prime}+(a_{v}-b_{i})x_{v}^{\prime}\] \[=\left((c_{i}-a_{u})x_{u}\prod_{j\neq u}K_{u,j}+(a_{v}-b_{i})x_{v} \prod_{j\neq v}K_{v,j}\right)|M|^{-1}\] \[=F\prod_{j\neq u,v}K_{u,j}|M|^{-1},\] with \(F=(c_{i}-a_{u})x_{u}K_{u,v}+(a_{v}-b_{i})x_{v}K_{v,u}\). The prefactor is, using the fact that \((\mathbf{A.x})_{u}\), equal to \[F =(c_{i}-a_{u})x_{u}\left(1-\frac{h}{2}\left((\mathbf{A.x})_{v}+(a_{v }-b_{i})x_{v}\right)+(a_{v}-b_{i})x_{v}\left(1-\frac{h}{2}\left((\mathbf{A.x})_{ u}+(a_{u}-c_{i})x_{u}\right)\right)\right.\] \[=(c_{i}-a_{u})x_{u}\left(1-\frac{h}{2}\left((\mathbf{A.x})_{u}+(c _{i}-a_{u})x_{u}+2(a_{v}-b_{i})x_{v}-Z(a_{v}-b_{i})x_{v}\right)\right)\] \[\quad+(a_{v}-b_{i})x_{v}\left(1-\frac{h}{2}\left((\mathbf{A.x})_{ u}+(a_{u}-c_{i})x_{u}+Z(c_{i}-a_{u})x_{u}\right)\right)\] \[=((c_{i}-a_{u})x_{u}+(a_{v}-b_{i})x_{v})\left(1-\frac{h}{2}\left( (\mathbf{A.x})_{u}-(a_{u}-c_{i})x_{u}\right)\right)\] \[=P_{i}L_{i}.\] which holds for all \(Z\) and in particular for \(Z=2\). **Example 7**.: _In terms of the matrix (24), the functions (25) and_ \[L_{1} =1+\frac{h}{2}(C_{1}-x_{1}(a_{1}-c_{1}))=1+\frac{h}{2}(C_{2}-x_{2 }(a_{2}-b_{1}))\] \[L_{2} =1+\frac{h}{2}(C_{2}-x_{2}(a_{2}-c_{2}))=1+\frac{h}{2}(C_{3}-x_{3 }(a_{3}-b_{2}))\] \[L_{3} =1+\frac{h}{2}(C_{2}-x_{2}(a_{2}-c_{3}))=1+\frac{h}{2}(C_{4}-x_{4 }(a_{4}-b_{3})),\] _the cofactors of the preserved DPs (9), with respect to the Kahan map (26), are given by_ \[\frac{L_{1}K_{1,3}K_{1,4}}{|\mathbf{M}|},\qquad\frac{L_{2}K_{2,1}K_{1,4}}{| \mathbf{M}|},\qquad\frac{L_{3}K_{2,1}K_{1,3}}{|\mathbf{M}|}.\] ## 6 Measure preservation for the Kahan map **Lemma 8**.: _The determinant of_ \[\mathbf{Q}=\overline{\mathbf{1}}+\frac{h}{2}\left(\overline{\mathbf{x}}. \mathbf{A}-\overline{\mathbf{A}.\overline{\mathbf{x}}.\mathbf{1}}\right).\] _is equal to_ \[|\mathbf{Q}|=\prod_{i=1}^{n-1}L_{i}.\] Proof.: By the same argument as in the proof of Proposition 4, if, for some constant \(c\), we have \[|\mathbf{Q}|=c\prod_{i=1}^{N-1}L_{i},\] then \(c=1\). Furthermore, as a linear combination of the columns \(\mathbf{q}_{i}\) of \(\mathbf{Q}-\overline{\mathbf{1}}\), \[\sum_{i}x_{i}\mathbf{q}_{i}=\mathbf{0},\] vanishes, the degree of \(|\mathbf{Q}|\) in \(h\) is \(3\). Therefore, it suffices to prove that \[L_{i}=0\implies|\mathbf{Q}|=0,\] for \(i=1,\ldots,n-1\). Let \(e_{i}=(u,v)\) again. We will show that the \(u\)th row and the \(v\)th row are dependent when \(L_{i}=0\). For all \(j\neq u,v\) we have \[x_{v}Q_{u,j}=x_{v}\frac{h}{2}x_{u}A_{u,j}=x_{u}\frac{h}{2}x_{v}A_{v,j}=x_{u}Q_{ v,j}.\] When \(j=u\) we have \[x_{v}Q_{u,u}\mid_{L_{i}=0}=x_{v}\left(1-\frac{h}{2}\left(\mathbf{A}.\mathbf{x}-A_ {u,u}x_{u}\right)\right)\mid_{L_{i}=0}=x_{v}\left(\frac{h}{2}x_{u}A_{v,u}\right) =x_{u}Q_{v,u}.\] and, by interchanging \(u,v\) in the above, when \(j=v\) we have \[x_{v}Q_{u,v}=x_{u}Q_{v,v}\mid_{L_{i}=0}.\] **Example 9**.: _The matrix \(\mathbf{Q}\) does not depend on the parameters \(a_{i}\). 
For our running example we have_ \[\mathbf{Q} =\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\] \[+\frac{h}{2}\begin{pmatrix}-b_{1}x_{2}-b_{2}x_{3}-b_{3}x_{4}&x_{1 }b_{1}&x_{1}b_{2}&x_{1}b_{3}\\ x_{2}c_{1}&-b_{2}x_{3}-b_{3}x_{4}-c_{1}x_{1}&x_{2}b_{2}&x_{2}b_{3}\\ x_{3}c_{1}&x_{3}c_{2}&-b_{3}x_{4}-c_{1}x_{1}-c_{2}x_{2}&x_{3}b_{3}\\ x_{4}c_{1}&x_{4}c_{3}&x_{4}b_{2}&-b_{2}x_{3}-c_{1}x_{1}-c_{3}x_{2}\end{pmatrix}.\] **Proposition 10**.: _The Jacobian determinant for the Kahan map (15) is_ \[J=\frac{\left(\prod_{i=1}^{n-1}L_{i}\right)\left(\prod_{i=1}^{n}\left(\prod_{ j=1}^{n}K_{i,j}\right)/K_{i,i}\right)}{|M|^{n+1}}. \tag{28}\] Proof.: Let us differentiate the equation \(\mathbf{M}\mathbf{x}^{\prime}=\mathbf{x}\). Denoting differentiation w.r.t. \(x_{k}\) by \({}_{;k}\), the components satisfy (using Kronecker's delta and summation convention) \[M_{i,j}x_{j;k}^{\prime}+M_{i,j;k}x_{j}^{\prime}=\delta_{i,k}.\] Rearranging and taking the determinant we find that the Jacobian determinant is given by \[J=\frac{|\overline{\mathbf{I}}-\mathbf{X}|}{|M|},\text{ where }X_{i,k}=M_{i,j;k}x_{j}^{\prime}.\] We create a matrix \(\mathbf{Y}\) by dividing, for \(i=1,\dots,n\), the \(i\)th row of \(\overline{\mathbf{I}}-\mathbf{X}\) by \(x_{i}^{\prime}/x_{i}\). Then, using (15), \[|\overline{\mathbf{I}}-\mathbf{X}|=\frac{\prod_{i=1}^{n}\left(\prod_{j=1}^{n} K_{i,j}\right)/K_{i,i}}{|M|^{n}}|\mathbf{Y}|.\] If we can show \(\mathbf{Y}=\mathbf{Q}\), then, by Lemma 15, equation (28) follows. We have \[M_{i,j;k}=\begin{cases}-\frac{h}{2}\delta_{i,k}A_{i,j}&i\neq j\\ -hA_{i,i}&i=j=k\\ -\frac{h}{2}A_{i,k}&i=j\neq k,\end{cases}\] and hence \[Y_{i,k}=\left(\delta_{i,k}-M_{i,j;k}x_{j}^{\prime}\right)x_{i}/x_{i}^{\prime} =\begin{cases}\frac{h}{2}A_{i,j}x_{i}&i\neq k\\ \left(1+\frac{h}{2}\left((\mathbf{A}.\mathbf{x}^{\prime})_{i}+A_{i,i}x_{i}^{ \prime}\right)\right)x_{i}/x_{i}^{\prime}&i=k.\end{cases}\] Now consider the \(i\)th component of \(\mathbf{M}\mathbf{x}^{\prime}=\mathbf{x}\). With \[M_{i,j}=\begin{cases}-\frac{h}{2}x_{i}A_{i,j}&i\neq j\\ 1-\frac{h}{2}\left((\mathbf{A}.\mathbf{x})_{i}+A_{i,i}x_{i}\right)&i=j\end{cases}\] we find \[x_{i}=M_{i,j}x_{j}^{\prime}=-\frac{h}{2}x_{i}\sum_{j\neq i}A_{i,j}x_{j}^{\prime}+ \left(1-\frac{h}{2}\left((\mathbf{A}.\mathbf{x})_{i}+A_{i,i}x_{i}\right)\right)x _{i}^{\prime}\] which implies \[\left(1+\frac{h}{2}\left((\mathbf{A}.\mathbf{x}^{\prime})_{i}+A_{i,i}x_{i}^{ \prime}\right)\right)x_{i}/x_{i}^{\prime}=1-\frac{h}{2}\left((\mathbf{A}. \mathbf{x})_{i}-A_{i,i}x_{i}\right),\] and hence \(\mathbf{Y}=\mathbf{Q}\). **Example 11**.: _The Jacobian determinant of the map (26) is, in terms of (16), (27), and (24),_ \[J=\frac{L_{1}L_{2}L_{3}K_{1,2}K_{3,2}K_{4,2}(K_{2,1}K_{1,3}K_{1,4})^{3}}{| \mathbf{M}|^{5}}. \tag{29}\] **Theorem 12**.: _The expression (10) is a rational Darboux function of the Kahan map (15) with cofactor \(J\) given by (28)._ Proof.: As the cofactor of a product is the product of the cofactors we find, due to \(\sum_{i=1}^{n}m_{i}=2(n-1)\), and using \(e_{i}=(u_{i},v_{i})\) for \(i=1,\ldots,n-1\), \[d^{\prime} =d\prod_{i=1}^{n}\left(\frac{\prod_{j\neq i}K_{i,j}}{|M|}\right) ^{2-m_{i}}\prod_{i=1}^{n-1}L_{i}\frac{\prod_{j\neq u_{i},v_{i}}K_{u_{i},j}}{|M|}\] \[=d\frac{H\prod_{i=1}^{n-1}L_{i}}{|M|^{n+1}},\] with \[H=\left(\prod_{i=1}^{n}\prod_{j\neq i}K_{i,j}^{2-m_{i}}\right)\left(\prod_{i=1 }^{n-1}\prod_{j\neq u_{i},v_{i}}K_{u_{i},j}\right). 
\tag{30}\] Let \(I_{i}\) be the index-set for which \[j\in I_{i}\Leftrightarrow\exists k\ e_{k}=(i,j)\text{ or }e_{k}=(j,i),\] so that the number of elements in \(I_{i}\) equals \(m_{i}\). For each vertex \(i\) we view \(T\) as a union of \(m_{i}\) trees \(T=\cup_{j\in I_{i}}T_{j}^{i}\) which are connected at \(i\) and we define \(z(i,j)\) to be the number of edges in \(T_{j}^{i}\). We claim that \[H =\left(\prod_{i=1}^{n-1}K_{u_{i},v_{i}}K_{v_{i},u_{i}}\right) \left(\prod_{i=1}^{n-1}K_{u_{i},v_{i}}^{z(u_{i},v_{i})-1}K_{v_{i},u_{i}}^{z(v_ {i},u_{i})-1}\right) \tag{31}\] \[=\prod_{i=1}^{n-1}K_{u_{i},v_{i}}^{z(u_{i},v_{i})}K_{v_{i},u_{i}} ^{z(v_{i},u_{i})}\] (32) \[=\prod_{i=1}^{n}\left(\prod_{j=1}^{n}K_{i,j}\right)/K_{i,i}, \tag{33}\] which would show \(J\) is the cofactor of \(d\). [12, Definition 3] states that the weight of edge \((i,j)\in T\) equals the weight of \((k,j)\) or \((j,k)\) with \(i\in T_{k}^{j}\), and [12, Proposition 7] states that if \((u,v)\) is an edge in \(T\), then for all \(w\neq u,v\) the edges \((u,w)\), \((v,w)\) carry the same weight. It is easy to see that the converse is also true, i.e., if the edges \((u,w)\), \((v,w)\) carry the same weight, for all edges \((u,v)\in T\) and \(w\neq u,v\), then the weight of edge \((i,j)\in T\) equals the weight of \((k,j)\) or \((j,k)\) with \(i\in T_{k}^{j}\). Those properties carry over to the entries of the adjacency matrix \(A_{i,j}\), cf. [12, Corollary 8], and to the functions \(K_{i,j}\) as defined by (16), cf. the proof of Proposition 6. Consider the expression (33). Let \((k,j)\) or \((j,k)\) be an edge in \(T\), then \(k\in I_{j}\). Since \(i\in T_{k}^{j}\setminus\{j\}\), iff \(K_{i,j}=K_{k,j}\) the degree of \(K_{k,j}\) in (33) is \(z(k,j)\) and hence (33) equals (32). Next, consider the first factor of (30). Because \[\Big{(}\sum_{\begin{subarray}{c}i\in T_{k}^{j}\\ i\neq j\end{subarray}}2-m_{i}\Big{)}=1,\] we have \[\prod_{i=1}^{n}\prod_{j\neq i}K_{i,j}^{2-m_{i}}=\prod_{j=1}^{n}\prod_{i\neq j }K_{i,j}^{2-m_{i}}=\prod_{j=1}^{n}\prod_{k\in I^{j}}\prod_{\begin{subarray}{c}i \in T_{k}^{j}\\ i\neq j\end{subarray}}K_{i,j}^{2-m_{i}}=\prod_{j=1}^{n}\prod_{k\in I^{j}}K_{k,j }=\prod_{i=1}^{n-1}K_{u_{i},v_{i}}K_{v_{i},u_{i}},\] which is the first factor of (31). Finally, the second factor of (30) is \[\prod_{i=1}^{n-1}\prod_{j\neq u_{i},v_{i}}K_{u_{i},j} =\prod_{j=1}^{n}\prod_{\begin{subarray}{c}i=1\\ j\neq u_{i},v_{i}\end{subarray}}^{n-1}K_{u_{i},j}=\prod_{j=1}^{n}\prod_{k\in I ^{j}}\prod_{\begin{subarray}{c}i\in T_{k}^{j}\\ i\neq j,k\end{subarray}}K_{i,j}=\prod_{j=1}^{n}\prod_{k\in I^{j}}K_{k,j}^{z(k,j )-1}\] \[=\prod_{i=1}^{n-1}K_{u_{i},v_{i}}^{z(u_{i},v_{i})-1}K_{v_{i},u_{i} }^{z(v_{i},u_{i})-1},\] which is the second factor of (31). **Example 13**.: _The expression (10) is a Darboux function of the map (26) which has cofactor \(J\), the Jacobian determinant (29)._ ## 7 Kahan discretisations of Lotka-Volterra systems on graphs A class of homogeneous Lotka-Volterra systems is associated with any graph \(G\) on \(n\) vertices; when \(G\) contains both a tree with \(n\) vertices and a cycle of length 3 or greater, we call such a system a _(Lotka-Volterra) \(G\)-system_. Each edge of the graph is associated with a DP (preserved under Kahan discretisation), and each subgraph of \(G\) that is a tree on \(n\) vertices is associated with an invariant measure (preserved under Kahan discretisation). A ratio of two invariant measures is a first integral, as illustrated in the following example. 
**Example 14**.: _Consider the 4D Lotka-Volterra system with matrix_ \[\mathbf{A}=\begin{pmatrix}a_{1}&b_{1}&b_{2}&b_{3}\\ c_{1}&a_{2}&b_{2}&b_{3}\\ c_{1}&c_{2}&a_{3}&b_{3}\\ c_{1}&c_{2}&b_{2}&a_{4}\end{pmatrix}.\] _obtained from (24) by taking \(c_{3}=c_{2}\). The system admits four additional DPs_ \[P_{1}=(c_{1}-a_{1})x_{1}+(a_{2}-b_{1})x_{2} P_{2}=(c_{2}-a_{2})x_{2}+(a_{3}-b_{2})x_{3}\] \[P_{3}=(c_{2}-a_{2})x_{2}+(a_{4}-b_{3})x_{4} P_{4}=(b_{2}-a_{3})x_{3}+(a_{4}-b_{3})x_{4},\] _one for each edge in the graph of Figure 2. We identify three subgraphs, as in Figure 3. Each tree in Figure 3 comes with a measure, and these have the following densities_ \[d_{1}=x_{1}x_{4}P_{1}P_{2}P_{4},\qquad d_{2}=\frac{x_{1}x_{3}x_{4}P_{1}P_{2}P_ {3}}{x_{2}},\qquad d_{3}=x_{1}x_{3}P_{1}P_{3}P_{4}.\] _Taking ratios \(K_{1}=d_{1}/d_{2}\) and \(K_{2}=d_{1}/d_{3}\) we find the following integrals_ \[K_{1}=\frac{x_{2}P_{4}}{x_{3}P_{3}},\qquad K_{2}=\frac{x_{4}P_{2}}{x_{3}P_{3}},\] _for the special case \(c_{3}=c_{2}\) of the Kahan map (26). We note that these integrals are not independent, they satisfy \((a_{2}-c_{2})K_{1}+(a_{4}-b_{3})K_{2}=a_{3}-b_{2}\)._ Not every graph gives rise to a unique class of \(G\)-systems. **Proposition 15**.: _For a graph \(G\) which contains a cycle, let \(G^{\prime}\) be the graph obtained from \(G\) by adding edges between any pair of vertices in the cycle. Every \(G\)-system is a \(G^{\prime}\)-system._ Proof.: Let the cycle have length \(\ell\) with edges (without loss of generality) \(\{(1,2),(2,3),\ldots,(\ell-1,\ell),(\ell,1)\}\). The following equations are satisfied: \[A_{t,k}=A_{t+1,k},\ \forall\ 1\leq t<\ell,\ k\neq t,t+1,\] \[A_{\ell,k}=A_{1,k},\ \forall\ k\neq\ell,1.\] We have to prove \[A_{i,k}=A_{j,k},\ \forall\ 1\leq i\neq j\leq\ell,k\neq i,j.\] For all \(k\neq i,j\), one of the paths \(i,i+1,\ldots,j\) or \(i,i-1,\ldots,j\) (where indices are taken modulo \(\ell\)) does not contain \(k\). In the first case we have \(A_{i,k}=A_{i+1,k}=\cdots=A_{j,k}\), and otherwise \(A_{i,k}=A_{i-1,k}=\cdots=A_{j,k}\). Therefore the graphs in which we are interested are those for which restriction to any cycle yields a complete graph. (See Figures 4, 5 and 6 for the graphs on \(n=4,5,6\) vertices.) The Kahan map of a \(G\)-system has an invariant measure corresponding to each subgraph which is an \(n\)-tree. There can be many such subgraphs, as many as \(\frac{1}{2}n!\) (for the complete graph on \(n\) nodes). The ratio of any two such invariant measures is an integral of the Kahan map. **Proposition 16**.: _Let \(G\) be a graph on \(n\) vertices containing a complete subgraph of size \(\ell\geq 3\). The Kahan map of a \(G\)-system has at least \(\ell-2\) functionally independent first integrals._ Proof.: Consider the subgraph of \(G\) consisting of a cycle contained in the complete subgraph together with a number of trees attached to its vertices, such that the subgraph contains \(n\) edges. Deleting any edge in the cycle yields a tree and an associated invariant measure of the Kahan map. Let \((i,j,k)\) be three adjacent vertices in the cycle. The integral given by the ratio of the densities (4) corresponding to the two trees given by (i) deleting edge \(\alpha:=(i,j)\) and (ii) deleting edge \(\beta:=(j,k)\), is \[\frac{x_{i}P_{\beta}}{x_{k}P_{\alpha}},\] the factors of (4) corresponding to all other edges and vertices cancelling. 
To show functional independence, consider without loss of generality the cycle with edges \(\{(1,2),\ldots,(\ell-1,\ell),(\ell,1)\}\) with edges labelled \(\{1,\ldots,\ell\}\) respectively. The \(\ell-2\) integrals \[\frac{x_{1}P_{2}}{x_{3}P_{1}},\frac{x_{2}P_{3}}{x_{4}P_{2}},\ldots,\frac{x_{ \ell-2}P_{\ell-1}}{x_{\ell}P_{\ell-2}},\] Figure 3: Three subgraphs that are trees. Figure 2: Graph on 4 vertices. here the \(i\)th integral is a rational function of \(x_{i}\), \(x_{i+1}\), and \(x_{i+2}\), are functionally independent because the Jacobian matrix of these functions is upper triangular. In dimension 4, 5, and 6 there are 2, 6, and 16 classes of \(G\)-systems, respectively. The graphs associated with each class are shown in Figures 4, 5 and 6, respectively. Numerical experiments (evaluating the rank of the Jacobian derivative matrix of the integrals associated with all ratios of the measures associated with each tree for each of these graphs with random values of the parameters and variables) indicate that the number of functionally independent first integrals of the Kahan map is \[\sum_{i}(\ell_{i}-2), \tag{34}\] where \(\ell_{1},\ell_{2},\dots\) are the sizes of the complete subgraphs. For each graph \(G\) in Figures 4, 5 and 6, the ODE of the \(G\)-system is superintegrable, while its Kahan discretisation is measure-preserving with at least one integral. The integer attached to each graph is the number of functionally independent integrals computed numerically by the method just given, consistent with formula (34). We have shown that the Kahan map for Lotka-Volterra \(G\)-systems in dimension \(n\), where \(G\) is the complete graph on \(n\) vertices, is measure-preserving and has at least \(n-2\) functionally independent integrals. If it had \(n-1\) functionally independent integrals, it would be superintegrable, and this could in principle be detected via the method of algebraic entropy or degree growth [14]. We implemented this method for \(n=4\) and observed an exponential rate of degree growth, indicating that the map does not have any integrals additional to those given above. **Acknowledgment.** This paper is dedicated to Hans Munthe-Kaas and Brynjulf Owren on the occasion of their 120th birthday. We are grateful for their friendship and collegiality over many years and in many countries. We have particularly fond memories of our very fruitful and enjoyable visits to Norway on multiple occasions. Figure 4: The 2 graphs on 4 vertices associated with distinct classes of Lotka–Volterra \(G\)-systems. Figure 5: The 6 graphs on 5 vertices associated with distinct classes of Lotka–Volterra \(G\)-systems. Figure 6: The 16 graphs on 6 vertices associated with distinct classes of Lotka–Volterra \(G\)-systems. The integer attached to each graph is the number of functionally independent integrals as computed numerically.
2309.08285
One-Class Knowledge Distillation for Spoofing Speech Detection
The detection of spoofing speech generated by unseen algorithms remains an unresolved challenge. One reason for the lack of generalization ability is traditional detecting systems follow the binary classification paradigm, which inherently assumes the possession of prior knowledge of spoofing speech. One-class methods attempt to learn the distribution of bonafide speech and are inherently suited to the task where spoofing speech exhibits significant differences. However, training a one-class system using only bonafide speech is challenging. In this paper, we introduce a teacher-student framework to provide guidance for the training of a one-class model. The proposed one-class knowledge distillation method outperforms other state-of-the-art methods on the ASVspoof 21DF dataset and InTheWild dataset, which demonstrates its superior generalization ability.
Jingze Lu, Yuxiang Zhang, Wenchao Wang, Zengqiang Shang, Pengyuan Zhang
2023-09-15T09:59:06Z
http://arxiv.org/abs/2309.08285v1
# One-Class Knowledge Distillation for Spoofing Speech Detection ###### Abstract The detection of spoofing speech generated by unseen algorithms remains an unresolved challenge. One reason for the lack of generalization ability is traditional detecting systems follow the binary classification paradigm, which inherently assumes the possession of prior knowledge of spoofing speech. One-class methods attempt to learn the distribution of bonafide speech and are inherently suited to the task where spoofing speech exhibits significant differences. However, training a one-class system using only bonafide speech is challenging. In this paper, we introduce a teacher-student framework to provide guidance for the training of a one-class model. The proposed one-class knowledge distillation method outperforms other state-of-the-art methods on the ASVspoof 21DF dataset and InTheWild dataset, which demonstrates its superior generalization ability. Jingze Lu\({}^{1,2}\), Yuxiang Zhang\({}^{1,2}\), Wenchao Wang\({}^{1}\), Zengqiang Shang\({}^{1}\), Pengyuan Zhang\({}^{1,2,*}\)+\({}^{1}\)Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China \({}^{2}\)University of Chinese Academy of Sciences, Beijing, China {lujingze, zhangyuxiang, wangwenchao, shangzengqiang, zhangpengyuan}@ncl.ioa.ac.cn Spoofing Detection, Knowledge Distillation, Generalization Ability, One-Class Classification Footnote †: This work is partially supported by the National Key Research and Development Program of China (No. 2021YFC33201033) + Footnote †: This work is partially supported by the National Key Research and Development Program of China (No. 2021YFC33201033) ## 1 Introduction With the development of various text-to-speech (TTS) and voice conversion (VC) algorithms, a large amount of incredibly realistic synthesized speech could be generated at low cost. In response to the potential threat posed by synthesized speech, many detection systems have been developed by the research community. The current mainstream detection scheme is a two-part paradigm, with a feature extractor front-end, and a back-end for classification. Researchers have explored different front-ends, such as STFT [1], CQCC [2], and Wav2Vec [3, 4], as well as different back-ends, such as RawNet [5] and AASIST [6], for detecting fake utterances. However, the lack of generalization ability remains an unsolved issue for current detection models. Some studies [7, 8] indicate that anti-spoofing countermeasures (CMs) suffer a significant performance decline when facing unseen spoofing attacks, channel coding, compression coding, and other situations. To improve the generalization ability, some data augmentation methods have been proposed, including RawBoost [9] and copy-synthesis method [10]. In addition, [11] introduces active learning (AL) to select useful training data to improve the generalization ability of the models. In [12], domain adversarial methods are used to eliminate the differences between the target domain and the source domain. In contrast to the above methods, in [13], the failure of anti-spoofing CMs on unseen spoofing attacks is attributed to the fact that they formulate the problem as a binary classification task. The traditional binary classification training paradigm inherently assumes that fake speech shares a similar distribution, which is not accurate. 
It is unrealistic to assume that prior knowledge of the spoofing attack is sufficient when potential attackers have a variety of synthesis methods. Unlike binary classification, some studies [13, 14] utilize one-class classification methods to improve the generalization ability of the anti-spoofing CMs. One-class methods are inherently suited to tasks where spoofing speech exhibits significant differences. The key point of one-class method is to learn the distribution of real utterances and set an appropriate boundary around it. Any speech outside the boundary could be considered fake. Another potential benefit of one-class method is that bonafide speech is much easier to collect compared to spoofing speech. However, training a one-class spoofing speech detection system using only bonafide speech is challenging. Speech is a coupling of multiple information. For some information, such as semantics, bonafide and spoofing speech may share the same feature space. Therefore, to obtain a distribution that adequately represents bonafide, we employ a teacher model with prior information to provide guidance for training the one-class model. In this work, inspired by one-class classification methods and teacher-student frameworks, a one-class knowledge distillation approach is proposed to improve the generalization ability of spoofing speech detection systems. Different from previous knowledge distillation frameworks for model compression, the proposed framework conforms to the paradigm of anomaly sample detection. In our framework, the teacher model is a traditional binary classification model that receives both bonafide and spoofing speech. The student model only has access to bonafide speech and focuses on learning the bonafide distribution from the teacher model. The similarity of the output features of the teacher model and the student model is used for spoofing speech detection. For real samples, the student has learned the representations from the teachers, resulting in a high similarity. When faced with a variety of synthesized utterances, the student model is ignorant of them while the teacher model has prior information, resulting in a low similarity. Experimental results on ASVspoof 21DF and InTheWild datasets have demonstrated the generalization capability of our method against unknown attack algorithms. In conclusion, in response to the generalization problem of current anti-spoofing CMs, we propose a one-class knowledge distillation (OCKD) method for detecting spoofing speech. The proposed method outperforms other state-of-the-art (SOTA) systems facing various unseen attacks by learning a distribution of bonafide speech. ## 2 Method Figure 1 shows the pipeline of the proposed method, where the teacher model and the student model have a similar structure, including a Wav2Vec 2.0 front-end and an AASIST back-end. In this section, we will introduce the motivation of designing the pipeline, and provide a detailed description of the modules and learning objectives. ### One-Class Knowledge Distillation Spoofing Speech Detection Method The key idea of one-class classification methods is to learn a dense feature space of the target class distribution and to distinguish samples far from this feature space as non-target samples. Inspired by one-class methods, we introduce a one-class knowledge distillation (OCKD) method to improve the generalization ability of speech anti-spoofing CMs. The proposed OCKD method can be mainly divided into two parts, a teacher model and a student model. 
The teacher model is trained with both bonafide and spoof speech, while the student model is trained using only bonafide speech. With such a design, on the one hand, the teacher model can learn the differences between bonafide and spoof samples, and quickly learn a feature space of bonafide samples. The student model, on the other hand, is not disturbed by the InDistribution (ID) spoofing samples and focuses on learning the feature space of the bonafide speech. The training of the teacher model and the student model are mutually independent. When training the student model, the parameters of the teacher model are not updated. For the teacher model, we follow the traditional structure of spoofing speech detection, a feature extractor front-end and a classification back-end. We adopt the same model architecture as in [4], with a Wav2Vec 2.0 front-end and an AASIST back-end. The motivation for choosing such model Figure 1: The pipeline of our proposed one-class knowledge distillation (OCKD) spoofing speech detection method. The teacher model contains a 24-layer Wav2vec feature extractor and an AASIST backend. The student model has a similar structure to the teacher model, but the Transformer layer of the feature extractor is compressed to 8 layers. During the training phase of the student model, the parameters of the teacher model are frozen. structure as the teacher model is twofold. Firstly, Wav2Vec 2.0 is a self-supervised-learning (SSL)-based feature extractor. During the training of Wav2Vec, only a large amount of bonafide speech is used, without the use of spoofing speech. Therefore, fine-tuning the pre-trained Wav2Vec on the task of spoofing speech detection can achieve a reliable feature space of bonafide speech. In addition, AASIST is an end-to-end, integrated spectrotemporal graph attention network. The same structure used in [4] achieves state-of-the-art performance on unseen attacks. We believe that a teacher model with greater generalization ability for unseen attacks is more helpful for the training of the student model. The student model has a similar structure as the teacher model, while the number of Transformer layers of Wav2Vec is compressed from 24 to 8. There are two purposes for compressing the student model. Firstly, compressing the model can accelerate the training of the student model. In addition, the student model receives less data compared to the teacher model. Compressing the model can prevent overfitting. ### Learning Objective The learning objective of the student model is to generate representations of bonafide speech similar to the teacher model. To achieve this objective, for a bonafide utterance, the embedding output by the student model should be close to that of the teacher model. In the training process of the student model, we construct the loss function whose target is the output embedding of the teacher model. We use the mean square error (MSE) loss \(\mathcal{L}_{mse}\) to measure the distance between embeddings. However, the MSE loss is very sensitive to outliers and is more difficult to train for models with a large number of parameters. Therefore, we also introduce a loss function \(\mathcal{L}_{cos}\) based on cosine similarity. 
For a training set \(\mathcal{B}\) of all bonafide speech, the total loss function could be expressed as, \[\mathcal{L}_{total}=\mathcal{L}_{cos}+\lambda\mathcal{L}_{mse} \tag{1}\] where \(\lambda\) is a constant value, which should be set according to the loss value of \(\mathcal{L}_{cos}\) and \(\mathcal{L}_{mse}\) to ensure they are in the same order of magnitude. \(\mathcal{L}_{mse}\) can be expressed as \(\mathcal{L}_{mse}=\frac{1}{N}\sum_{b_{i}\in\mathcal{B}}(T(b_{i})-S(b_{i}))^{2}\). \(\mathcal{L}_{cos}\) can be expressed as \(\mathcal{L}_{cos}=\frac{1}{N}\sum_{b_{i}\in\mathcal{B}}(1-\frac{T(b_{i})S(b_{i }))}{\|T(b_{i})\|_{2}\|S(b_{i})\|_{2}})\), where \(\langle\cdot\rangle\) donates the inner product, and \(\|\cdot\|_{2}\) denotes the computation of the 2-Norm. [15] indicates that for the task of spoofing speech detection, different Transformer layers of Wav2Vec play different roles. Therefore, the student model should learn from the teacher model at different levels. The student model of the proposed OCKD has 8 Transformer layers, which is only one-third of the teacher model. We use the hidden embedding \(\{s_{2},s_{4},s_{6},s_{8}\}\) to learn from \(\{t_{6},t_{12},t_{18},t_{24}\}\), where \(t_{i}\) and \(s_{i}\) are the output of the \(i\)th Transformer layer of teacher and student, respectively. The back-end of the teacher model is also important for spoofing speech detection, so \(t_{A}\) is also set to be the learning target of \(s_{A}\), where \(t_{A}\) and \(s_{A}\) are the hidden embedding output by AASTST back-ends of teacher and student model, respectively. Both the teacher model and the student model are used to develope the inference model. During the testing phase, for an unknown utterance \(x\), similarity between \(\{s_{2},s_{4},s_{6},s_{8},s_{A}\}\) and \(\{t_{6},t_{12},t_{18},t_{24},t_{A}\}\) is used for inference. For bonafide utterances, the student model has learned the representations from the teachers, resulting in a high similarity in the embeddings. When confronted with various unseen attack algorithms, the student model is ignorant of them. In contrast, the teacher model possesses prior knowledge of the spoofing speech, leading to a low similarity in the embeddings. ## 3 Experiments and Results ### Datasets and Metrics To investigate the generalization ability of the proposed method, experiments are conducted on several different datasets. For all models, we utilize the training set of the ASVspoof 2019 LA (19LA) [16] for training, which is an influential dataset in spoofing speech detection. For the teacher model, all 25380 samples in 19LA training set are used. While for student model, only the bonafide samples (num 2580) are used for training. The test sets include the evaluation sets of 19LA, ASVspoof 2021 LA (21LA) and ASVspoof 2021 DeepFake (21DF) [7]. Utterances of the 21LA dataset are transmitted over various channels. The 21DF dataset collects about 600K utterances processed with various lossy codecs typically used for media storage, which is an influential dataset for generalization validation. 21LA and 21DF have a hidden track, in which the non-speech segments are trimmed. In addition, the proposed method is validated on the InTheWild dataset [8], which is a more challenging dataset whose data is collected from the real world. The equal error rate (EER) is used as the evaluation metric, which is defined as the point where the false acceptance rate (FAR) and the false rejection rate (FRR) are equal. 
### Model Architecture and Details of Systems Implementation For the teacher model, we adopt the same structure as [4], with a Wav2Vec 2.0 front-end and an AASIST back-end. The pre-trained model used is Wav2Vec2-xlsr. During the training process of the teacher model, the pre-trained wav2vec 2.0 model is optimized jointly with the AASIST backend. The student model has a similar structure as the teacher, while the number of Transformer layers is compressed to 8. During the training process of the student model, parameters of the teacher model are frozen. All models are trained using Adam optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.98\), \(\epsilon=10^{-8}\) and weight decay \(10^{-4}\). The teacher model utilizes a CrossEntropy loss function with weight of \(\{0.1,0.9\}\) to balance the training set. The learning objective of the student model is expressed in section 2.2, where \(\lambda\) is set to a constant value \(10^{-5}\). The learning rate is fixed at \(10^{-6}\). Training is conducted over 100 epochs and a batchsize of 32. Rawboost [9] is used as data augmentation method. ### Reusits and Analysis Tabel 1 shows the comparison of the teacher model and the student model of proposed OCKD method on different datasets. For 19LA and 21LA datasets, the EER of the student has slightly decreased. This may be because the eval sets of 19LA and 21LA contain many ID samples as the training set. For samples that have similar distribution as the training set, binary classification method yields superior results. On other eval sets, the student model achieves lower EERs. Among them, the 21DF and the InTheWild dataset are two commonly used datasets for evaluating the generalization ability. 21DF dataset contains more than 100 different spoofing attack algorithms. InTheWild dataset is collected from the real world. Obtaining performance gains on these two datasets demonstrate that the proposed one-class knowledge distillation method enables the student model to learn the distribution of real speech and effectively improving the generalization ability on unseen attacks. 21LA and 21DF hidden sets are the official subsets of ASVspoof 2021 dataset. These two subsets trim the non-speech segments, which can lead to performance degradation of anti-spoofing CMs [17]. The reason for such performance degradation is that CMs may overfit to the length of non-speech segments. The propsed OCKD method also obtains performance improvements on both datasets. For all datasets, the overall pooled EER of the student model decreases from 6.36% to 5.88%. Tabel 1 also shows the ablation results of the learning objective. For the two learning objectives introduced in this work, the performance of the MSE loss is poor. This could be attributed to the fact that the hidden embeddings used to be the target of learning objective have large dimension. It is hard to enforce the embeddings of the student model to align with those of the teacher model. In contrast, the cosine similarity loss achieves good results. Using two loss functions simultaneously achieves slight improvements compared to using only cosine loss.
2306.17520
Flavor violating Higgs and $Z$ decays at FCC-ee
Recent advances in $b$, $c$, and $s$ quark tagging coupled with novel statistical analysis techniques will allow future high energy and high statistics electron-positron colliders, such as the FCC-ee, to place phenomenologically relevant bounds on flavor violating Higgs and $Z$ decays to quarks. We assess the FCC-ee reach for $Z/h\to bs, cu$ decays as a function of jet tagging performance. We also update the SM predictions for the corresponding branching ratios, as well as the indirect constraints on the flavor violating Higgs and $Z$ couplings to quarks. Using type III two Higgs doublet model as an example of beyond the standard model physics, we show that the searches for $h\to bs, cu$ decays at FCC-ee can probe new parameter space not excluded by indirect searches. We also reinterpret the FCC-ee reach for $Z\to bs , cu$ in terms of the constraints on models with vectorlike quarks.
Jernej F. Kamenik, Arman Korajac, Manuel Szewc, Michele Tammaro, Jure Zupan
2023-06-30T10:16:59Z
http://arxiv.org/abs/2306.17520v2
# Flavor violating Higgs and \(Z\) decays at FCC-ee ###### Abstract Recent advances in \(b\), \(c\), and \(s\) quark tagging coupled with novel statistical analysis techniques will allow future high energy and high statistics electron-positron colliders, such as the FCC-ee, to place phenomenologically relevant bounds on flavor violating Higgs and \(Z\) decays to quarks. We assess the FCC-ee reach for \(Z/h\to bs,cu\) decays as a function of jet tagging performance. We also update the SM predictions for the corresponding branching ratios, as well as the indirect constraints on the flavor violating Higgs and \(Z\) couplings to quarks. Using type III two Higgs doublet model as an example of beyond the standard model physics, we show that the searches for \(h\to bs,cu\) decays at FCC-ee can probe new parameter space not excluded by indirect searches. We also reinterpret the FCC-ee reach for \(Z\to bs,cu\) in terms of the constraints on models with vectorlike quarks. **Introduction.** Flavor Changing Neutral Currents (FCNCs) are forbidden at tree level in the Standard Model (SM), and are as such ideal to search for effects of beyond the SM (BSM) physics. Most of the FCNC observables are accessible at experiments that are done at relatively low energies, but with large statistics. The list of such observables is very long, and involves both quarks and leptons. The classic examples are \(\mathcal{B}(\mu\to e\gamma)\), \(\mu\to e\) conversion rate, \(B_{(s)}-\bar{B}_{(s)}\), or \(K-\bar{K}\) mixing, \(\mathcal{B}(B_{s}\to\mu^{+}\mu^{-})\), and many more (for reviews see, e.g., [1; 2; 3; 4; 5]). The situation is different for high energy FCNC observables, where the list is rather short and almost always involves leptons. Examples are \(\mathcal{B}(h\to\ell\ell^{\prime})\), \(\mathcal{B}(Z\to\ell\ell^{\prime})\) and \(\sigma(pp\to\ell\ell^{\prime})\). The exception to this rule are the decays of top quarks, where \(t\to ch,cg,\dots\), can also be probed in high energy collisions, see, e.g., [6; 7; 8; 9; 10; 11; 12; 13]. In this Letter we show that, somewhat surprisingly, the on-shell FCNC decays of the Higgs, \(\mathcal{B}(h\to bs)\equiv\mathcal{B}(h\to\bar{b}s+b\bar{s})\) and \(\mathcal{B}(h\to cu)\equiv\mathcal{B}(h\to\bar{c}u+c\bar{u})\), can be added to the list of high energy FCNC observables, since they can be probed at a phenomenologically interesting level at a future lepton collider, such as the FCC-ee [14]. Over the full running period of FCC-ee, the collider is expected to produce \(N_{h}=6.7\times 10^{5}\)\(h\)'s [15] and \(N_{Z}=5\times 10^{12}\)\(Z\)'s [16; 17]. As we show in the following, FCC-ee is projected to have a sensitivity to \(\mathcal{B}(h\to bs)\) and \(\mathcal{B}(h\to cu)\) below the indirect bounds from \(B_{s}-\bar{B}_{s}\) and \(D-\bar{D}\) mixing, of Table 1, and we expect similar sensitivities to apply also to CEPC [18] (for a recent analysis of the \(h\to bs\) reach at ILC, but using \(b\)- and \(c-\)taggers, see [19]). The main reasons for these significant improvements are: _i)_ the recent advances in \(b\)-, \(c-\) and \(s\)-jet tagging, _ii)_ the analysis technique that we advocate for below, which results in excellent sensitivity to these FCNC transitions, and _iii)_ the relatively clean environment of \(e^{+}e^{-}\) collisions. The same approach can also be applied to \(\mathcal{B}(Z\to bs)\) and \(\mathcal{B}(Z\to cu)\), however, the phenomenologically interesting branching ratios are still below the floor set by the systematic uncertainties of taggers. 
**Accessing flavor violating transitions.** An analysis strategy that has been successfully applied to \(h\to c\bar{c}\) decays [20], as well as to suppressed \(t\to(s,d)W\) transitions [21; 22], is to distribute events into different event types according to how many flavor tagged (and anti-tagged) jets they contain. In particular, the inclusion of information about events with light jets was shown in Ref. [21] to lead to significant improvement in sensitivity to \(V_{ts,td}\). Here, we modify the approach of Ref. [21] and apply it to the case of \(h\to bs,cu\) and \(Z\to bs,cu\) decays. For notational expediency we focus first on just the \(bs\) final state, and then extend these results to the analysis of \(cu\) decays. In both \(h\to bs\) and \(Z\to bs\) decays there are two jets in the final state; in \(e^{+}e^{-}\to hZ(h\to bs,Z\to ee,\mu\mu)\) there are also two isolated leptons, while the \(e^{+}e^{-}\to Z\to bs\) events only have two jets. Applying the \(b\)- and \(s\)-taggers to the two jets, the events are distributed in \((n_{b},n_{s})\in\{(0,0),(1,0),(0,1),(2,0),(1,1),(0,2)\}\) bins, where \(n_{b(s)}\) denotes the number of \(b(s)\)-tagged jets in the event. The \(b\)- and \(s\)-taggers need to be orthogonal to ensure to event populates two different \((n_{b},n_{s})\) bins and is double-counted 1. We denote the tagger efficiencies as \(\epsilon_{\beta}^{b}\) and \(\epsilon_{\beta}^{s}\), where \(\beta=\{l,s,c,b\}\) denotes the flavor of the initial parton (\(l=g\) for \(h\) and \(l=u,d\) for \(Z\)). The expected number of events in the bin \((n_{b},n_{s})\) is given by \[\bar{N}_{(n_{b},n_{s})}=\sum_{f}p(n_{b},n_{s}|f,\nu)\bar{N}_{f}(\nu)\,, \tag{1}\] where the summation is over the relevant (signal and background) decay channels, \(f=\{gg,s\bar{s},c\bar{c},b\bar{b},bs\}\) for the \(h\) and \(f=\{u\bar{u}+d\bar{d},s\bar{s},c\bar{c},b\bar{b},bs\}\) for the \(Z\). The expect number of events in each decay channel is given by \[\bar{N}_{f}=\mathcal{B}(Z/h\to f)N_{Z/h}\mathcal{A}\,, \tag{2}\] where \(\mathcal{B}(Z/h\to f)\) are the corresponding branching fractions, \(N_{Z/h}\) are the number of \(Z\) and \(h\) bosons expected to be produced during the FCC-ee run, while \(\mathcal{A}\) is the detector acceptance including reconstruction efficiency, which we assume for simplicity to be the same for all the relevant decay channels. In writing down Eq. (1) we have neglected the backgrounds: the \(\tau^{+}\tau^{-}\) for \(Z\to bs\) and the Drell-Yan, \(WW,ZZ\) for \(h\to bs\). We expect that the inclusion of these backgrounds will not qualitatively change our results, since for most part they are small enough to constitute only a subleading effect. Perhaps the most worrisome is the \(ZZ\) background for \(h\to bs\). Even this we expect in the actual experimental analysis to be either reduced enough through optimized selection to be ignored (e.g., through use of a multivariate classifier trained on other kinematic observables such as the invariant masses and angular correlations), or alternatively it can, in the proposed analysis strategy, be treated as an appropriate small re-scaling of the predicted \(\bar{N}_{f}\). The probability distribution \(p(n_{b},n_{s}|f,\nu)\) for a given event to end up in the \((n_{b},n_{s})\) bin depends on a number of nuisance parameters, \(\nu=\{\mathcal{B}(h\to f),B(Z\to f^{\prime}),\epsilon^{\alpha}_{\beta},N_{Z/h},\mathcal{A}\}\), which are varied within the uncertainties in the numerical analysis 2. 
We build a probabilistic model for \(p(n_{b},n_{s}|f,\nu)\), with a graphical representation given in Fig. 13. The probability \(p(n_{b},n_{s}|f,\nu)\) depends on the flavor of the initial \(Z/h\to f\) parton decay, where \(f=\{u\bar{u}+d\bar{d}(gg),s\bar{s},c\bar{c},b\bar{b},bs\}\) for \(Z(h)\), since the tagging efficiencies \(\epsilon^{\alpha}_{\beta}\), \(\alpha=b,s\), depend on the flavor of the initial parton. Footnote 2: The nominal values and uncertainties on the nuisance parameters used in our analysis are listed in the supplementary material, in Tables S1, S4. Experimentally, the value of \(\mathcal{B}(Z/h\to bs)\) would be determined by comparing the measured number of events in each \((n_{b},n_{s})\) bin, \(N_{(n_{b},n_{s})}\), with the expected value \(\bar{N}_{(n_{b},n_{s})}\). The highest sensitivity to \(\mathcal{B}(Z/h\to bs)\) is expected from the \((n_{b},n_{s})=(1,1)\) bin, however, keeping also the \((2,0)\) and \((0,2)\) bins increases the overall statistical power. In order to estimate the sensitivity of FCC-ee to \(\mathcal{B}(Z/h\to bs)\), as a proof of concept, we can bypass the need for Monte Carlo simulations and work within the Asimov approximation [23], both because of the simplicity of the study and especially due to the high statistics environment. That is, we consider an ideal dataset where the observed number of events equals \(N^{A}_{(n_{b},n_{s})}=\tilde{N}_{(n_{b},n_{s})}(\mathcal{B}(Z/h\to bs)_{0},\nu =\nu_{0})\), that is, it equals to the expected number of events for the nominal values of nuisance parameters and the input value of \(\mathcal{B}(Z/h\to bs)_{0}\). The expected upper bound on \(\mathcal{B}(Z/h\to bs)_{0}\) is then obtained from a maximum likelihood, allowing nuisance parameters to float 4. Footnote 4: See Sec. S2 in the supplementary material for further details. **Expected reach at FCC-ee.** We first focus on the simplified case where only the \(b\)-tagger is used, and obtain the expected exclusion limits on FCNC decays summed over light quark flavors, \(\mathcal{B}(h\to bq)=\mathcal{B}(h\to bd)+\mathcal{B}(h\to bs)\). The exclusions are derived from the observed yields in the \(n_{b}=0,1,2\) bins. For simplicity, we parameterize \begin{table} \begin{tabular}{c c c c} \hline \hline Decay & SM prediction & exp. bound & indir. constr. \\ \hline \(\mathcal{B}(h\to bs)\) & \((8.9\pm 1.5)\cdot 10^{-8}\) & \(0.16\) & \(2\times 10^{-3}\) \\ \(\mathcal{B}(h\to bd)\) & \((3.8\pm 0.6)\cdot 10^{-9}\) & \(0.16\) & \(10^{-3}\) \\ \(\mathcal{B}(h\to cu)\) & \((2.7\pm 0.5)\cdot 10^{-20}\) & \(0.16\) & \(2\times 10^{-2}\) \\ \(\mathcal{B}(Z\to bs)\) & \((4.2\pm 0.7)\cdot 10^{-8}\) & \(2.9\times 10^{-3}\) & \(6\times 10^{-8}\) \\ \(\mathcal{B}(Z\to bd)\) & \((1.8\pm 0.3)\cdot 10^{-9}\) & \(2.9\times 10^{-3}\) & \(6\times 10^{-8}\) \\ \(\mathcal{B}(Z\to cu)\) & \((1.4\pm 0.2)\cdot 10^{-18}\) & \(2.9\times 10^{-3}\) & \(4\times 10^{-7}\) \\ \hline \hline \end{tabular} \end{table} Table 1: The SM predictions and current experimental upper bounds on hadronic FCNC decays of \(h\) and \(Z\), either from direct searches (3rd column) or indirect constraints (4th column), where the indirect bounds on \(\mathcal{B}(h\to qq^{\prime})\) assume no large cancellations, see main text for details. For details on the SM calculations see supplementary material, Sec. S4. Figure 1: Graphical representation of the probabilistic model for determining \(\mathcal{B}(Z/h\to bs)\). 
Starting with the \(Z/h\to f\) partonic decay, where \(f=\{u\bar{u}+d\bar{d}(gg),s\bar{s},c\bar{c},b\bar{b},bs\}\) for \(Z(h)\), the tagged flavours of the two final state jets, \(Z/h\to j_{1}j_{2}\), are determined by the corresponding \(s-\) and \(b-\)tagger efficiencies, \(\epsilon^{\alpha}_{\beta}\). The arrows denote the probabilities for each event to end up in the \((n_{b},n_{s})\) bin. the \(b\)-dagger as a function of two parameters: the true positive rate (TPR) \(\epsilon^{b}_{b}\) and the overall effective false positive rate (FPR) for all the other initial parton flavors, \(\epsilon^{b}_{gsc}\). The expected 95% CL upper limits on \(\mathcal{B}(h\to bq)\), assuming only statistical uncertainties, are shown in Fig. 2 (top). We observe a saturation: for low enough FPR \(\epsilon^{b}_{gsc}\) the upper limits become independent of \(\epsilon^{b}_{gsc}\) and depend only on \(\epsilon^{b}_{b}\). With relatively modest TPR \(\epsilon^{b}_{b}\in[0.4,0.8]\) and easily achievable FPR \(\epsilon^{b}_{gsc}\lesssim 10^{-2}\) the projected bounds are \(\mathcal{B}(h\to bq)\lesssim(5-7)\times 10^{-3}\). This is already in the regime that is interesting for the BSM physics searches, cf. Fig. 4 (top). However, the inclusion of strangeness tagging can result in further appreciable improvements in the expected sensitivity. Fig. 2 (bottom) shows the expected 95% CL bounds on \(\mathcal{B}(h\to bs)\) obtained from the comparison of all possible \((n_{b},n_{s})\) bins with the predictions. Here, the possible bins are \((n_{b},n_{s})=\{(0,0),(0,1),(1,0),(1,1),(2,0),(0,2)\}\), where the signal mostly populates the \((n_{b},n_{s})=(1,1)\) bin, while the remaining bins constrain the backgrounds. To scan over possible taggers we assume in Fig. 2 (bottom) for the purpose of presentation a common TPR for \(b\)- and \(s-\)tagging, \(\epsilon^{b}_{b}=\epsilon^{s}_{s}\), and similarly a common FPR, \(\epsilon^{b}_{lsc}=\epsilon^{s}_{lcb}\). This assumption is not crucial, and is for instance relaxed in the analysis in Sec. S3 of the supplementary material. Nevertheless, we anticipate it to give a reasonable guidance on the expected reach at FCC-ee, if the common FPR is identified as FPR=max\((\epsilon^{b}_{s},\epsilon^{s}_{b})\), where \(\epsilon^{b}_{s},\epsilon^{s}_{b}\) are the actual tagger working point mis-identification rates. The reason is that the backgrounds with two misidentified jets are highly suppressed relative to the backgrounds with one misidentified jet, and this is more often than not dominated by the larger mis-identification rate. For instance, the performance of the common medium working point (TPR, FPR) \(=(0.80,0.004)\), denoted with a star in Fig. 2 (bottom), is very close to the expected 95% upper-limit \(\mathcal{B}(h\to bs)<9.6\times 10^{-4}\), obtained when considering all the different efficiencies in the medium working point of the \(b\)- and \(s\)-taggers introduced in Refs. [24; 25], and assuming a 1% systematic uncertainty (the taggers still need to be calibrated). This limit, which does not consider other backgrounds such as Drell-Yan, \(WW,ZZ,q\bar{q}\), which we expect to not affect significantly the projected reach, is competitive with indirect measurements and represents a complementary direct probe. We use this as a benchmark expected exclusion in our exploration of the impact on new physics (NP) searches. Note that the SM prediction is orders of magnitude smaller, see Table 1, so that any positive signal would mean discovery of NP. In Fig. 
2 (bottom) the relative uncertainties on the eight tagger parameters \(\epsilon^{\alpha}_{\beta}\) are taken to be 1% (the uncertainties are treated as independent, while the central values are the common TPR, FPR). The 1% uncertainty is below the current precision of the calibrated scale factors in the LHC analyses [26; 27]. However, given the high-statistics environment at the FCC-ee, it is reasonable to expect that a dedicated calibration for high-precision taggers could reach such relatively low uncertainties. For 1% systematic uncertainties the expected upper bounds on \(\mathcal{B}(h\to bs)\) are statistics-limited, except for very large FPR. Incidentally, this also justifies the neglect of systematics in Fig. 2 (top). A similar analysis can be performed to arrive at the expected FCC-ee sensitivity to \(\mathcal{B}(h\to cu)\). The main difference is that the sensitivity is determined just by the performance of the \(c\)-tagger (there is currently no efficient "\(u\)-tagger"). Using the loose (medium) working point for the \(c\)-tagger [24; 25] leads to the 95% CL expected bound \(\mathcal{B}(h\to cu)<2.9\,(2.5)\times 10^{-3}\). Footnote 5: Further details can be found in the supplementary material, Sec. S3.2.
Figure 2: **Top:** Expected 95% CL upper bounds on \(\mathcal{B}(h\to bq)\) as a function of the \(b\)-tagger efficiencies, neglecting systematic uncertainties. **Bottom:** Expected 95% CL upper bounds on \(\mathcal{B}(h\to bs)\) as a function of TPR and FPR. Solid (dashed) lines and colors are with default (no) systematic uncertainties. The Medium Working Point is based on the taggers introduced in Refs. [24; 25]. See main text for details.
We move next to the case of \(Z\to bs\) decays. As before, we perform a scan over tagger efficiencies, taking the same TPR for \(b\)- and \(s\)-taggers, \(\epsilon^{b}_{b}=\epsilon^{s}_{s}\), and similarly for the FPR, \(\epsilon^{b}_{udsc}=\epsilon^{s}_{udcb}\). The resulting expected 95% CL upper limits are shown in Fig. 3, where the solid (dashed, dotted) lines correspond to the default 1% (0.1%, no) systematic uncertainties. The FPR of \(10^{-4}\) for \(\epsilon^{b}_{s}\) and of a few\(\times 10^{-3}\) for \(\epsilon^{s}_{b}\) were estimated to be achievable at FCC-ee in Refs. [24; 25]. Obtaining \(\epsilon_{b}^{s}\) well below the \(10^{-3}\) level will be hard, since this is roughly the fraction of \(b\)-quarks that decay effectively promptly, within the projected vertexing resolution of the FCC-ee detectors [28]. To further improve on \(\epsilon_{b}^{s}\) one would thus need to rely on jet shape variables to distinguish between \(s\)- and \(b\)-jets. For a rather optimistic FPR of \(10^{-4}\) the expected reach on \(\mathcal{B}(Z\to bs)\) is \(\mathcal{O}(10^{-6})\) (\(\mathcal{O}(10^{-7})\)) when assuming systematics of 1% (a rather aggressive 0.1%), which is still well above the SM value (see Table 1). Given existing indirect constraints on the effective \(Zbs\) couplings coming from \(b\to s\ell^{+}\ell^{-}\) transitions, which have already been measured at rates consistent with the SM, we conclude that it will be challenging to reach bounds on \(\mathcal{B}(Z\to bs)\) that probe parameter space sensitive to NP. Similarly, the expected reach for \(Z\to cu\) is \(\mathcal{B}(Z\to cu)\sim 2\times 10^{-3}\), and thus well above the sensitivity of indirect probes, e.g., \(\mathcal{B}(D^{0}\to\mu^{+}\mu^{-})\). We further quantify these statements below. Footnote 6: See Sec. S3.4 for further details.
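To make the statistical procedure behind these expected limits concrete, the following stats-only, single-bin toy version of the Asimov construction can be sketched; all numerical values below are placeholders (not the actual FCC-ee inputs), and both the profiling over nuisance parameters and the combination of all \((n_{b},n_{s})\) bins are omitted:
```python
import numpy as np
from scipy.optimize import brentq

# Toy Asimov expected upper limit for one (n_b, n_s) bin:
# expected events N(B) = N_TOT * (B * EPS_SIG + P_BKG);
# the Asimov dataset is built with the input value B = 0.
N_TOT = 6e12     # hypothetical number of hadronic Z decays (placeholder)
EPS_SIG = 0.6    # prob. for a signal event to land in this bin (placeholder)
P_BKG = 1e-5     # prob. for a background event to land here (placeholder)

def q_asimov(br):
    """Poisson log-likelihood-ratio test statistic on the Asimov dataset."""
    n_obs = N_TOT * P_BKG                  # Asimov data: B = 0
    n_exp = N_TOT * (br * EPS_SIG + P_BKG)
    return 2.0 * (n_exp - n_obs + n_obs * np.log(n_obs / n_exp))

# One-sided 95% CL upper limit corresponds to q = 1.64^2 ~ 2.71.
ul = brentq(lambda br: q_asimov(br) - 2.71, 1e-12, 1e-2)
print(f"expected 95% CL upper limit on B: {ul:.1e}")
```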
**Sensitivity to NP.** We define the effective FCNC couplings of the \(h\) and \(Z\) bosons to \(b\) and \(s\) quarks as \[\begin{split}\mathcal{L}&\supset g_{sb}^{L}(\bar{s }_{L}\gamma_{\mu}b_{L})Z^{\mu}+g_{sb}^{R}(\bar{s}_{R}\gamma_{\mu}b_{R})Z^{\mu} \\ &+y_{sb}(\bar{s}_{L}b_{R})h+y_{bs}(\bar{b}_{L}s_{R})h+\text{h.c.} \,,\end{split} \tag{3}\] and similarly for couplings to \(c\) and \(u\) (or \(b\) and \(d\)) quarks, with obvious changes in the notation. Eq. (3) can be obtained as the effective low-energy realization of various extensions of the SM, e.g., the addition of vector-like quarks [29], or the Two-Higgs-Doublet Model (2HDM) [30]. We provide details on these models in Sec. S5 of the supplementary material, while here we focus on the relevant phenomenology. Existing direct limits on the non-standard hadronic decays of the \(Z\) follow from the agreement of the measurement and the SM prediction for the \(Z\) hadronic width [31], giving \(\mathcal{B}(Z\to qq^{\prime})<2.9\times 10^{-3}\) at 95% CL, cf. Table 1. Similarly, existing Higgs boson studies at the LHC already impose limits on its undetermined decays, \(\mathcal{B}(h\to\text{undet.})<0.16\) at 95% CL [32; 33]. Assuming this bound is saturated by \(h\to bs\) or \(h\to cu\) decays, we obtain \(|y_{ij},y_{ji}|\lesssim 7\times 10^{-3}\), where \(ij=\{cu,bs\}\) (shown as purple contours in Fig. 4). At energies below the \(h\) and \(Z\) masses, the effective couplings in Eq. (3) give rise to additional contributions to numerous observables, such as the \(B_{s}-\bar{B}_{s}\) mass splitting and the branching ratio for the leptonic decay \(B_{s}\to\mu^{+}\mu^{-}\). Starting from Eq. (3), we perform the matching to the Weak Effective Theory (WET) operators and employ the package wilson [34] to compute the RGE running down to the scale \(\mu\sim m_{b}\), where we use flavio [35] and smelli [36] to compute contributions to the relevant flavour observables and construct the resulting likelihoods. The \(Z-bs\) couplings generate the effective \(C^{(\prime)}_{9,\ell\ell},C^{(\prime)}_{10,\ell\ell}\) coefficients in WET. The most stringent constraints on \(g_{sb}^{L},g_{sb}^{R}\) therefore come from the \(b\to s\ell^{+}\ell^{-}\) transitions. From the global fit we obtain \(|g_{sb}^{L,R}|\lesssim 10^{-5}\), with negative values of \(g_{sb}^{L}\) slightly preferred by the current experimental results\({}^{7}\) (implying that \(\mathcal{B}(Z\to bs)\) is essentially constrained to the SM value, within uncertainties). The projected FCC-ee reach, \(\mathcal{B}(Z\to bs)\lesssim 10^{-6}\) (assuming 1% systematics), can probe couplings of \(\mathcal{O}(10^{-3})\) and is thus unable to put competitive constraints on NP.
Figure 3: Expected 95% CL upper bound on \(\mathcal{B}(Z\to bs)\) as a function of TPR and FPR. Solid (dashed, dotted) lines and colors are with default 1% (0.1%, no) systematic uncertainties.
The analogous cases of \(Z\to cu,bd\) are discussed in Sec. S5.1 with similar conclusions; the indirect bounds \(|g_{uc}^{L,R}|\lesssim 3\times 10^{-4}\) (\(|g_{bd}^{L,R}|\lesssim 1\times 10^{-4}\)) imply \(\mathcal{B}(Z\to cu)<4\times 10^{-7}\) (\(\mathcal{B}(Z\to bd)<6\times 10^{-8}\)), which are at least three orders of magnitude below the projected FCC-ee reach. The situation is very different for \(h\to bs,cu\). The \(h-bs\) effective couplings in Eq. (3) generate dominant contributions to scalar \((\bar{b}s)^{2}\) operators in WET, namely \(C_{2,bs}^{(\prime)}\) and \(C_{4,bs}\) [37], which are probed by the \(B_{s}\) meson mixing observables.
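As an illustration of the indirect-constraint pipeline just described, the minimal sketch below shows how a single WET coefficient can be propagated to a \(b\to s\ell^{+}\ell^{-}\) observable with wilson and flavio. The coefficient value is a placeholder, not the actual matching result of Eq. (3), and the observable and coefficient names assume flavio's WET conventions; in the full analysis, likelihoods over many such observables are then assembled with smelli:
```python
# Minimal sketch of the wilson + flavio workflow described in the text.
# The coefficient value is a placeholder, not the actual matching of the
# Z-bs couplings of Eq. (3).
import flavio
from wilson import Wilson

# A hypothetical NP contribution to C10 in the WET ('flavio' basis),
# defined at the scale mu = 4.8 GeV ~ m_b.
w = Wilson({'C10_bsmumu': 0.3}, scale=4.8, eft='WET', basis='flavio')

# Compare SM and NP predictions for a b -> s l+ l- observable.
sm = flavio.sm_prediction('BR(Bs->mumu)')
np_pred = flavio.np_prediction('BR(Bs->mumu)', w)
print(f'BR(Bs->mumu): SM = {sm:.2e}, with NP = {np_pred:.2e}')
```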
The resulting bounds on the flavor-changing couplings read \(|y_{bs},y_{sb}|\lesssim 10^{-3}\) (barring large cancellations), as shown by the red regions in the upper panel in Fig. 4. Similarly, the \(D-\bar{D}\) mixing constraints lead to the indirect bounds \(|y_{cu},y_{uc}|\lesssim\text{few}\times 10^{-3}\), shown in the lower panel in Fig. 4. Excluding the regions with large cancellations, this leads to the approximate indirect bounds on \(\mathcal{B}(h\to q_{i}q_{j})\) quoted in Table 1. This is to be compared with the projected upper limits of FCC-ee on \(\mathcal{B}(h\to bs)\) and \(\mathcal{B}(h\to cu)\), shown with black lines in Fig. 4. Taking the medium working point for the jet-flavor taggers, the expected reach \(\mathcal{B}(h\to bs)<9.6\times 10^{-4}\) translates to the bound \(|y_{bs},y_{sb}|\lesssim 5\times 10^{-4}\), whereas \(\mathcal{B}(h\to cu)<2.5\times 10^{-3}\) translates to \(|y_{cu},y_{uc}|\lesssim 8\times 10^{-4}\), as shown by the black solid lines. The latter thus improves on the strongest indirect constraints on flavor-changing Higgs couplings by a factor of a few. For completeness, we show with lighter lines the expected bounds obtained employing less performant taggers. Details about \(h\to bd\) can be found in Sec. S5, together with further examples of constraints on the 2HDM parameter space away from the limit in which the light Higgs gives the dominant contribution. **Conclusions.** The FCC-ee, running at center-of-mass energies between the \(Z\) boson mass and the \(t\bar{t}\) threshold, will make it possible to measure flavor, electroweak and Higgs processes with an unprecedented level of precision. In this Letter we demonstrated the potential of FCC-ee to explore flavor-changing decays of the Higgs and \(Z\) bosons (with similar expectations for CEPC). The projected sensitivities to \(\mathcal{B}(h\to bs,cu)\), in particular, go well beyond the current constraints from indirect probes, such as the \(B_{s}\) and \(D\) meson oscillations. The expected reach depends strongly on the performance of the flavor taggers, for which we explored a range of achievable efficiencies and uncertainties, based on existing measurements and ongoing studies. Auspiciously, even with rather conservative assumptions, where only the \(b\)-tagger is used in the analysis, the projected reach is already such that it will be able to probe significant portions of unconstrained NP parameter space, as demonstrated in Fig. 4 (and on the example of a type III 2HDM in Sec. S5.1). Finally, as a side result, we have also updated the SM predictions for the \(h\to bs,cu\) and \(Z\to bs,cu\) branching ratios. These are orders of magnitude smaller than the projected sensitivities, so that any signal in these channels would unambiguously imply the existence of New Physics. **Acknowledgments.** The authors would like to thank Jose Zurita for the updated references on the LHC upper limits on non-standard Higgs boson decays. AK thanks Aleks Smolkovic for clarifications regarding flavio. JZ and MS acknowledge support in part by the DOE grant DE-SC0011784 and NSF OAC-2103889. JFK, AK and MT acknowledge the financial support from the Slovenian Research Agency (grant No. J1-3013 and research core funding No. P1-0035). This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452.
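As an illustrative cross-check of the numbers quoted above, the translation between \(\mathcal{B}(h\to bs)\) and the couplings of Eq. (3) can be reproduced from the tree-level two-body width. The sketch below assumes massless final-state quarks, \(\Gamma(h\to bs)=N_{c}\,m_{h}\,(|y_{sb}|^{2}+|y_{bs}|^{2})/(16\pi)\), and an SM total Higgs width of about 4.1 MeV; these are our assumptions for illustration, not inputs taken from the paper's supplementary material:
```python
import math

# Tree-level width from the Eq. (3) couplings, massless quarks, no QCD corrections:
#   Gamma(h -> bs) = N_c * m_h * (|y_sb|^2 + |y_bs|^2) / (16*pi)
N_C = 3              # color factor
M_H = 125.25         # GeV
GAMMA_H_SM = 4.1e-3  # GeV, approximate SM total Higgs width

def br_from_couplings(y_sb, y_bs):
    gamma = N_C * M_H * (abs(y_sb) ** 2 + abs(y_bs) ** 2) / (16 * math.pi)
    return gamma / (GAMMA_H_SM + gamma)

def coupling_from_br(br):
    # Invert for |y_sb| = |y_bs| = y, in the small-branching-ratio regime.
    return math.sqrt(br * GAMMA_H_SM * 16 * math.pi / (2 * N_C * M_H))

# Projected limit B(h -> bs) < 9.6e-4 maps to |y| ~ 5e-4, as quoted above.
print(f"|y| < {coupling_from_br(9.6e-4):.1e}")
```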
2309.05789
Entropy current and entropy production in relativistic spin hydrodynamics
We use a first-principles quantum-statistical method to derive the expression of the entropy production rate in relativistic spin hydrodynamics. We show that the entropy current is not uniquely defined and can be changed by means of entropy-gauge transformations, much the same way as the stress-energy tensor and the spin tensor can be changed with pseudo-gauge transformations. We show that the local thermodynamic relations, which are admittedly educated guesses in relativistic spin hydrodynamics inspired by those at global thermodynamic equilibrium, do not hold in general and they are also non-invariant under entropy-gauge transformations. Notwithstanding, we show that the entropy production rate is independent of those transformations and we provide a universally applicable expression, extending that known in the literature, from which one can infer the dissipative parts of the energy-momentum and spin tensors.
Francesco Becattini, Asaad Daher, Xin-Li Sheng
2023-09-11T19:47:35Z
http://arxiv.org/abs/2309.05789v2
# Entropy current and entropy production in relativistic spin hydrodynamics ###### Abstract We use a first-principle quantum-statistical method to derive the expression of the entropy production rate in relativistic spin hydrodynamics. We show that the entropy current is not uniquely defined and can be changed by means of entropy-gauge transformations, much the same way as the stress-energy tensor and the spin tensor can be changed with pseudo-gauge transformations. We show that the local thermodynamic relations, which are admittedly educated guesses in relativistic spin hydrodynamics inspired by those at global thermodynamic equilibrium, do not hold in general and they are also non-invariant under entropy-gauge transformations. Notwithstanding, we show that the entropy production rate is independent of those transformations and we provide a universally applicable expression, extending that known in literature, from which one can infer the dissipative parts of the energy momentum and spin tensors. ## I Introduction Motivated by the evidence of spin polarization of particles produced in relativistic heavy ion collisions [1; 2], there is a growing interest in the so-called relativistic spin hydrodynamics [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19]. Relativistic spin hydrodynamics stipulates that the description of a relativistic fluid requires the addition of a _spin tensor_, that is the mean value of a rank 3 tensor operator \(\widehat{\mathcal{S}}^{\lambda\mu\nu}\) (the last two indices anti-symmetric) contributing to the overall angular momentum current: \[\widehat{\mathcal{J}}^{\lambda\mu\nu}=x^{\mu}\widehat{T}^{\lambda\nu}-x^{\nu} \widehat{T}^{\lambda\mu}+\widehat{\mathcal{S}}^{\lambda\mu\nu}\,,\] where \(\widehat{T}^{\mu\nu}\) is the stress-energy tensor operator. This current is conserved, which implies that the spin tensor fulfills the continuity equation: \[\partial_{\lambda}\widehat{\mathcal{S}}^{\lambda\mu\nu}=\widehat{T}^{\nu\mu}- \widehat{T}^{\mu\nu}\,. \tag{1}\] It is important to point out that the spin tensor - and the stress-energy tensor as well - are not uniquely defined. Indeed, they can be changed with a so-called pseudo-gauge transformation [20; 21] to a new couple of tensors fulfilling the same dynamical equations and providing the same integrated conserved charges. Since the spin tensor can be made vanishing with a suitable pseudo-gauge transformation, its dynamical meaning has been questioned, yet it was observed in ref. [22] that for a fluid _not_ in global thermodynamic equilibrium (such as the QGP throughout its lifetime) the quantum state of the system (i.e. the density operator describing initial local equilibrium) is not invariant under pseudo-gauge transformations. Thus, in principle, the physical measurements depend on the pseudo-gauge if the initial quantum state is not invariant, and particularly on the intensive quantity which is thermodynamically conjugated to the spin tensor, the spin potential. For instance, if the spin tensor does not vanish, the spin polarization of final particles depends on the difference between spin potential and thermal vorticity [23]. The microscopic conditions underpinning spin hydrodynamics have been studied and elucidated in ref. [11], where it was made clear that spin hydrodynamics regime occurs under a specific hierarchy of the interaction time scales in the system. 
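For concreteness, the pseudo-gauge transformations referred to above can be written explicitly in a common convention (see, e.g., refs. [20; 21]): given an arbitrary superpotential \(\widehat{\Phi}^{\lambda\mu\nu}=-\widehat{\Phi}^{\lambda\nu\mu}\), the pair \[\widehat{T}^{\prime\mu\nu}=\widehat{T}^{\mu\nu}+\frac{1}{2}\partial_{\lambda}\left(\widehat{\Phi}^{\lambda\mu\nu}-\widehat{\Phi}^{\mu\lambda\nu}-\widehat{\Phi}^{\nu\lambda\mu}\right)\,,\qquad\widehat{\mathcal{S}}^{\prime\lambda\mu\nu}=\widehat{\mathcal{S}}^{\lambda\mu\nu}-\widehat{\Phi}^{\lambda\mu\nu}\,,\] fulfills the same continuity equations and, with suitable boundary conditions, yields the same total energy, momentum and angular momentum; the choice \(\widehat{\Phi}=\widehat{\mathcal{S}}\) is the one making the new spin tensor vanish.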
A key problem in the relativistic hydrodynamics with spin is the derivation of the constitutive equations of the spin tensor as well as of the anti-symmetric part of the stress-energy tensor. This problem has drawn significant attention over the past few years, with several derivations of constitutive equations [7; 8; 9; 10; 11; 12; 13] based on the requirement of the positivity of local entropy production rate. However, like in the traditional approach to relativistic hydrodynamics, the entropy current is not really derived, but it is obtained from an educated guess of a thermodynamic relations between the proper densities of entropy, energy, charge and "spin density" \(S^{\mu\nu}\equiv u_{\lambda}\mathcal{S}^{\lambda,\mu\nu}\) as follows: \[\begin{split} Ts+\mu n=\rho+p-\frac{1}{2}\omega_{\mu\nu}S^{\mu \nu}\\ \mathrm{d}p=s\,\mathrm{d}T+n\,\mathrm{d}\mu+\frac{1}{2}S^{\mu\nu }\mathrm{d}\omega_{\mu\nu}\end{split} \tag{2}\] where \(T\) is temperature, \(\mu\) a chemical potential, \(\rho\) the proper energy density, \(p\) the pressure, \(n\) the charge density and \(\omega_{\mu\nu}\) is the spin potential1. Footnote 1: This is related to \(\Omega\) defined in ref. [22] by the relation \(\omega=T\Omega\). In this work, we apply the quantum-statistical approach to relativistic hydrodynamics [24; 25; 26] by including spin tensor. The quantum statistical method based on local equilibrium density operator has several advantages over other approaches in that it makes it possible to _derive_ from first principles a form of the entropy current and entropy production rate rather than constructing it assuming a particular form of the local thermodynamic relation such as the equation (2). We will use a recent result on the extensivity of the logarithm of the partition function to obtain an exact form of the entropy current [27]. We will be able to show that the relation (2) is incomplete and that the entropy density has, in general, additional terms involving the spin tensor. Furthermore, we will extend the derivation of ref. [24] of the entropy production rate to include the spin tensor. Such general relation is the starting point to derive the constitutive relations for the anti-symmetric part of the stress-energy tensor and the spin tensor. ## II Entropy current and local equilibrium In the quantum-statistical description of a relativistic fluid, the local equilibrium density operator denoted as \(\widehat{\rho}_{LE}\) is obtained by maximizing entropy \(S=-\operatorname{Tr}(\widehat{\rho}\log\widehat{\rho})\) over some preset space-like hypersurface by constraining the mean values of the energy, momentum, charge and spin densities to be equal to their actual values [22]: \[\widehat{\rho}_{\rm LE}=\frac{1}{Z_{\rm LE}}\exp\left[-\int_{\Sigma}{\rm d} \,\Sigma_{\mu}\,\left(\widehat{T}^{\mu\nu}\beta_{\nu}-\widehat{\zeta j}^{\mu} -\frac{1}{2}\Omega_{\lambda\nu}\widehat{\mathcal{S}}^{\mu\lambda\nu}\right) \right]\,, \tag{3}\] where \({\rm d}\Sigma_{\mu}\equiv{\rm d}\Sigma\,n_{\mu}\), \(n\) being the unit vector perpendicular to the hypersurface \(\Sigma\); the function \(Z_{\rm LE}\) is the partition function, and the operators \(\widehat{T}^{\mu\nu}\), \(\widehat{\mathcal{S}}^{\mu\lambda\nu}\) are the energy-momentum and spin tensor operators, a particular couple amongst all the possible couples connected by pseudo-gauge transformations. 
The constraints read: \[n_{\mu}T^{\mu\nu}=n_{\mu}T_{\rm LE}^{\mu\nu}\,,\qquad n_{\mu}j^{\mu}=n_{\mu}j _{\rm LE}^{\mu}\,,\qquad n_{\mu}\mathcal{S}^{\mu\lambda\nu}=n_{\mu}\mathcal{ S}^{\mu\lambda\nu}_{\rm LE}\,, \tag{4}\] where the local equilibrium values are defined as: \[X_{\rm LE}\equiv\operatorname{Tr}\!\left(\widehat{\rho}_{\rm LE}\widehat{X} \right)-\left\langle 0\right|\widehat{X}\left|0\right\rangle\,, \tag{5}\] with \(\left|0\right\rangle\) being the supposedly non-degenerate lowest lying eigenvector of the operator in the exponent of (3). In the equation (3), the fields \(\beta_{\nu}\), \(\zeta\) and \(\Omega_{\lambda\nu}\) are the Lagrange multipliers related to this problem, and they are the thermal velocity four-vector, the chemical potential to temperature ratio, and the spin potential to temperature ratio respectively, that is: \[\beta=\frac{u}{T}\,,\qquad\zeta=\frac{\mu}{T}\,,\qquad\Omega=\frac{\omega}{T}\,. \tag{6}\] It is worth pointing out that they can be obtained as solutions of the constraint equations (4) [26], if the exact values of the stress-energy tensor and other currents is known. In relativistic hydrodynamics, since they are not known _a priori_, they are solutions of the hydrodynamic partial differential equations with initial conditions expressed by the equations (4) over the initial Cauchy space-like hypersurface. It should also be stressed that \(\beta\) thereby defines a so-called hydrodynamic frame in its own (the so-called thermodynamic or thermometric or \(\beta\) frame), which does not coincide with the better known Landau or Eckart frames. At global equilibrium one has: \[\beta_{\mu}=b_{\mu}+\varpi_{\mu\nu}x^{\nu}\,,\,\,\,\text{with}\,\,\,b,\varpi =\text{const}\,,\qquad\Omega=\varpi\,,\qquad\zeta=\text{const}\,, \tag{7}\] where \(\varpi\) is a constant anti-symmetric tensor, the thermal vorticity. Starting from the equation (3), it is possible to prove [27] that if the operator: \[\widehat{\Upsilon}\equiv\int_{\Sigma}{\rm d}\,\Sigma_{\mu}\,\left(\widehat{T }^{\mu\nu}\beta_{\nu}-\widehat{\zeta j}^{\mu}-\frac{1}{2}\Omega_{\lambda\nu} \widehat{\mathcal{S}}^{\mu\lambda\nu}\right)\] is bounded from below and the lowest lying eigenvalue \(\left|0\right\rangle\) is non-degenerate, the logarithm of \(Z_{\rm LE}\) is _extensive_, namely it can be written as an integral over \(\Sigma\): \[\log Z_{\rm LE}=\int_{\Sigma}{\rm d}\Sigma_{\mu}\;\phi^{\mu}-\left\langle 0 \right|\widehat{\Upsilon}\left|0\right\rangle=\int_{\Sigma}{\rm d}\Sigma_{\mu} \;\left[\phi^{\mu}-\left\langle 0\right|\left(\widehat{T}^{\mu\nu}\beta_{\nu}- \widehat{\zeta}\widehat{j}^{\mu}-\frac{1}{2}\Omega_{\lambda\nu}\widehat{ \mathcal{S}}^{\mu\lambda\nu}\right)\left|0\right\rangle\right] \tag{8}\] where \[\phi^{\mu}=\int_{1}^{\infty}{\rm d}\lambda\;\left(T_{\rm LE}^{\mu\nu}(\lambda) \beta_{\nu}-\zeta j_{\rm LE}^{\mu}(\lambda)-\frac{1}{2}\Omega_{\lambda\nu} \mathcal{S}_{\rm LE}^{\mu\lambda\nu}(\lambda)\right) \tag{9}\] is defined as the thermodynamic potential current. 
In the equation (9), the integration variable \(\lambda\) is a dimensionless parameter which multiplies the exponent of the local equilibrium density operator (3), that is: \[\widehat{\rho}_{\rm LE}(\lambda)=\frac{1}{Z_{\rm LE}(\lambda)}\exp\left[- \lambda\int_{\Sigma}{\rm d}\,\Sigma_{\mu}\;\left(\widehat{T}^{\mu\nu}\beta_{ \nu}-\widehat{\zeta}\widehat{j}^{\mu}-\frac{1}{2}\Omega_{\lambda\nu}\widehat{ \mathcal{S}}^{\mu\lambda\nu}\right)\right]\,, \tag{10}\] and \(T_{\rm LE}^{\mu\nu}(\lambda),j_{\rm LE}^{\mu}(\lambda),\mathcal{S}_{\rm LE}^ {\mu\lambda\nu}(\lambda)\) are calculated with the equation (5) with the modified density operator just defined in the eq. (10). As \(\lambda\) multiplies \(\beta\), \(\zeta\) and \(\Omega\), this coefficient plays the role of a rescaled inverse temperature, so it possible to change the integration variable in (9) from \(\lambda\) to \(T^{\prime}(x)=T(x)/\lambda\) and rewrite the thermodynamic potential current: \[\phi^{\mu}(x)=\int_{0}^{T(x)}\frac{{\rm d}T^{\prime}}{T^{\prime 2}}\;\left(T_{ \rm LE}^{\mu\nu}(x)[T^{\prime},\mu,\omega]u_{\nu}(x)-\mu(x)j_{\rm LE}^{\mu}(x) [T^{\prime},\mu,\omega]-\frac{1}{2}\omega_{\lambda\nu}(x)\mathcal{S}_{\rm LE} ^{\mu\lambda\nu}(x)[T^{\prime},\mu,\omega]\right)\,, \tag{11}\] where we used the eq. (6). The equation (11) shows that the thermodynamic potential current can be calculated by integrating in temperature the mean values at local thermodynamic equilibrium of the various involved currents. It is important to stress the meaning of the square brackets, which denote a _functional_ dependence on the arguments. Indeed, the local equilibrium values of the currents at some point \(x\) depend not just on the value of \(T^{\prime},\mu,\omega\) at the same point \(x\), but on the whole functions \(T^{\prime}(y),\mu(y),\omega(y)\); tantamount, assuming analiticity, on the value of the functions and all their gradients at the point \(x\). Once the thermodynamic potential current \(\phi^{\mu}\) is determined, an entropy current can be defined. By using the definition (5) and the equations (3),(8) we have: \[\begin{split} S=&-{\rm Tr}(\widehat{\rho}_{\rm LE} \log\widehat{\rho}_{\rm LE})=\log Z_{\rm LE}+\int_{\Sigma}{\rm d}\Sigma_{\mu} \;\left({\rm Tr}\Big{(}\widehat{\rho}_{\rm LE}\widehat{T}^{\mu\nu}\Big{)} \beta_{\nu}-\zeta\,{\rm Tr}\Big{(}\widehat{\rho}_{\rm LE}\widehat{j}^{\mu} \Big{)}-\frac{1}{2}\Omega_{\lambda\nu}\,{\rm Tr}\Big{(}\widehat{\rho}_{\rm LE }\widehat{\mathcal{S}}^{\mu\lambda\nu}\Big{)}\right)\\ =&\int_{\Sigma}{\rm d}\Sigma_{\mu}\;\left(\phi^{ \mu}+T_{\rm LE}^{\mu\nu}\beta_{\nu}-\zeta j_{\rm LE}^{\mu}-\frac{1}{2}\Omega_{ \lambda\nu}\mathcal{S}_{\rm LE}^{\mu\lambda\nu}\right)\,,\end{split} \tag{12}\] which implies that we can define an entropy current as: \[s^{\mu}=\phi^{\mu}+T_{\rm LE}^{\mu\nu}\beta_{\nu}-\zeta j_{\rm LE}^{\mu}- \frac{1}{2}\Omega_{\lambda\nu}\mathcal{S}_{\rm LE}^{\mu\lambda\nu}\,. \tag{13}\] ## III Entropy current: quasi-objective form and entropy-gauge transformations The equations (11) and (13) define the fields \(\phi^{\mu}\) and \(s^{\mu}\). However, they depend not just on \(x\) but also on the space-like hypersurface employed to define the local equilibrium mean values of the currents through the density operator (3) \(\widehat{\rho}_{\rm LE}\). More specifically, to each point \(x\) there must be a corresponding hypersurface \(\Sigma\) needed to define local thermodynamic equilibrium through the constraints (4). 
Altogether, to define the thermodynamic potential and entropy current at each point \(x\) one needs to specify in advance a family of 3D space-like hypersurfaces, a so-called _foliation_ of the space-time. The dependence of the currents (11) and (13) on the foliation involves a problem in that, if we are to calculate the total entropy by integrating the entropy current (13) on some \(\Sigma\) which does not belong to the foliation (see figure 1), the result is in general different from the total entropy which would be obtained from the Von Neumann formula imposing the constraints of local equilibrium (4) over this particular \(\Sigma\). In symbols: \[\int_{\Sigma}{\rm d}\Sigma_{\mu}\;s^{\mu}\neq-{\rm Tr}(\widehat{\rho}_{\rm LE }(\Sigma)\log\widehat{\rho}_{\rm LE}(\Sigma))\,, \tag{14}\] with equality applying only if \(\Sigma\) belongs to the foliation. Such a situation is quite disturbing, as one of the requested features of the entropy current field is to provide the actual value of the total entropy. To settle the issue, one can define the entropy current more in general as: \[s^{\mu}=\phi^{\mu}+T^{\mu\nu}\beta_{\nu}-\zeta j^{\mu}-\frac{1}{2}\Omega_{\lambda \nu}\mathcal{S}^{\mu\lambda\nu}\,, \tag{15}\] with (omitting most arguments to make the expression more compact): \[\phi^{\mu}=\int_{0}^{T}\frac{\mathrm{d}T^{\prime}}{T^{\prime 2}}\ \left(T^{\mu\nu}[T^{ \prime}]u_{\nu}-\mu j^{\mu}[T^{\prime}]-\frac{1}{2}\omega_{\lambda\nu} \mathcal{S}^{\mu\lambda\nu}[T^{\prime}]\right)\,. \tag{16}\] Indeed, in the equations (15) and (16) we have used the actual values of the conserved currents. By doing so, whenever we integrate the currents over some hypersurface \(\Sigma\), not necessarily belonging to the original foliation, the result is the same that we would have obtained by enforcing the constraints (4) on \(\Sigma\) itself. Since: \[\int_{\Sigma}\mathrm{d}\Sigma_{\mu}s^{\mu} =\int_{\Sigma}\mathrm{d}\Sigma\;n_{\mu}\left(\phi^{\mu}+T^{\mu \nu}\beta_{\nu}-\zeta j^{\mu}-\frac{1}{2}\Omega_{\lambda\nu}\mathcal{S}^{\mu \lambda\nu}\right)\] \[=\int_{\Sigma}\mathrm{d}\Sigma\;n_{\mu}\left(\int_{0}^{T}\frac{ \mathrm{d}T^{\prime}}{T^{\prime 2}}\ \left(T^{\mu\nu}[T^{\prime}]u_{\nu}-\mu j^{\mu}[T^{ \prime}]-\frac{1}{2}\omega_{\lambda\nu}\mathcal{S}^{\mu\lambda\nu}[T^{\prime}] \right)+T^{\mu\nu}\beta_{\nu}-\zeta j^{\mu}-\frac{1}{2}\Omega_{\lambda\nu} \mathcal{S}^{\mu\lambda\nu}\right)\] a local equilibrium density operator like in equation (3) can be built on \(\Sigma\) by enforcing the constraints (4) therein. Therefore, the above expression becomes, by using the eqs. (9) and (12): \[\int_{\Sigma}\mathrm{d}\Sigma\;n_{\mu}\left(\int_{0}^{T}\frac{ \mathrm{d}T^{\prime}}{T^{\prime 2}}\ \left(T^{\mu\nu}_{\mathrm{LE}}[T^{\prime}]u_{\nu}-\mu j^{\mu}_{ \mathrm{LE}}[T^{\prime}]-\frac{1}{2}\omega_{\lambda\nu}\mathcal{S}^{\mu \lambda\nu}_{\mathrm{LE}}[T^{\prime}]\right)+T^{\mu\nu}_{\mathrm{LE}}\beta_{ \nu}-\zeta j^{\mu}_{\mathrm{LE}}-\frac{1}{2}\Omega_{\lambda\nu}\mathcal{S}^{ \mu\lambda\nu}_{\mathrm{LE}}\right)\] \[=-\operatorname{Tr}(\widehat{\rho}_{\mathrm{LE}}(\Sigma)\log \widehat{\rho}_{\mathrm{LE}}(\Sigma))\,.\] Similarly, the equation (8) can be extended to a relation which applies to any space-like hypersurface \(\Sigma\): \[\int\mathrm{d}\Sigma_{\mu}\phi^{\mu}=\log Z_{\mathrm{LE}}(\Sigma)+\left\langle 0 \right|\widehat{\Upsilon}\left|0\right\rangle \tag{17}\] The two equations (15) and (16) are the final expressions defining the entropy current for a system which is close to local thermodynamic equilibrium. 
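As a simple consistency check of the definitions (15) and (16), at homogeneous global equilibrium, where \(\varpi=0\) (hence \(\Omega=0\)), the stress-energy tensor takes the ideal form and \(\phi^{\mu}=p\beta^{\mu}\) (derived in Appendix A for vanishing charges); the equation (15) then reduces to the familiar equilibrium relation \[s=s^{\mu}u_{\mu}=\frac{p}{T}+\frac{\rho}{T}-\frac{\mu}{T}\,n=\frac{\rho+p-\mu n}{T}\,,\] that is, the first relation in (2) with vanishing spin potential.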
It is worth remarking that those equations imply that the entropy current depends on the actual mean values of the conserved currents ensuing from the quantum field Lagrangian. The question is whether, with the definitions (16) and (15), the thermodynamic potential and entropy current fields are _objective_, namely independent of a predefined foliation. For this purpose, in the first place, the Lagrange multiplier fields \(\beta,\zeta,\Omega\) which also appear in those definitions should be independent thereof, a condition which is achieved in relativistic hydrodynamics if they are obtained as solutions of partial differential equations from given initial conditions. Yet, a complete independence cannot be achieved. Looking carefully at the thermodynamic potential current in the eq. (16), it appears that its definition involves the knowledge of the conserved currents as functionals of the temperature. However, such functionals can be constructed only if the local equilibrium operator is introduced, hence a separation between the local equilibrium term and the dissipative term, which does require the introduction of a foliation. We can signify this limitation by saying that the thermodynamic potential current, and the entropy current as well, can be made _quasi-objective_. The quasi-objective nature of the entropy current also shows up in the entropy production rate (32), as will be discussed in Section V.
Figure 1: An example of a family of 3D space-like hypersurfaces (solid lines) defining a foliation parametrized by the real variable \(\tau\), which is necessary to define local thermodynamic equilibrium. The 3D space-like hypersurface \(\Sigma\) does not belong to the foliation.
A further issue is that the thermodynamic potential current and the entropy current fields are not unique. It is quite clear that a transformation of the thermodynamic potential current: \[\phi^{\prime\mu}=\phi^{\mu}+\nabla_{\lambda}A^{\lambda\mu}\,, \tag{18}\] where \(A^{\lambda\mu}\) is an arbitrary anti-symmetric tensor, implying: \[s^{\prime\mu}=s^{\mu}+\nabla_{\lambda}A^{\lambda\mu} \tag{19}\] will leave the total entropy \[S=\int_{\Sigma}\mathrm{d}\Sigma_{\mu}\;s^{\mu}\] invariant because of the relativistic Stokes theorem, provided that the tensor \(A\) fulfills suitable boundary conditions. Therefore, just like \(T^{\mu\nu}\) and \(\mathcal{S}^{\mu\lambda\nu}\), the entropy current \(s^{\mu}\) is not uniquely defined and can be changed with transformations (19), henceforth defined as _entropy-gauge transformations_. Such a freedom in defining the entropy current affects the local thermodynamic relations, as we will see. Nevertheless, the divergence of the entropy current is invariant under entropy-gauge transformations because: \[\nabla_{\mu}s^{\prime\mu}=\nabla_{\mu}s^{\mu}+\nabla_{\mu}\nabla_{\lambda}A^{\lambda\mu}=\nabla_{\mu}s^{\mu}\,.\] The entropy production rate will be discussed in Section V. ## IV Discussion on local thermodynamic relations The local thermodynamic relation between proper densities can be obtained by contracting the entropy current with a suitable four-velocity vector.
For instance, one can contract the (15) with the four-velocity defined by the direction of \(\beta\), that is \(u^{\mu}=\beta^{\mu}/\sqrt{\beta^{2}}=T\beta^{\mu}\):\({}^{2}\) \[s\equiv s^{\mu}u_{\mu}=\phi\cdot u+\frac{1}{T}\rho-\zeta n-\frac{1}{2}\Omega_{\lambda\nu}u_{\mu}\mathcal{S}^{\mu\lambda\nu}\equiv\phi\cdot u+\frac{1}{T}\rho-\frac{\mu}{T}n-\frac{1}{2}\Omega_{\lambda\nu}S^{\lambda\nu}\,, \tag{21}\] where \(\rho=u_{\mu}u_{\nu}T^{\mu\nu}\) and \(n=u_{\mu}j^{\mu}\). Footnote 2: Note that if one contracts the (15) with the normalized time-like eigenvector of the stress-energy tensor \(u_{L}\), which defines the Landau frame, the obtained LTR reads: \[s_{L}=s^{\mu}u_{L\mu}=\phi\cdot u_{L}+u_{L}\cdot\beta\rho_{L}-\zeta n_{L}-\frac{1}{2}\Omega_{\lambda\nu}u_{L\mu}\mathcal{S}^{\mu\lambda\nu}\equiv\phi\cdot u_{L}+u_{L}\cdot\beta\rho_{L}-\zeta n_{L}-\frac{1}{2}\Omega_{\lambda\nu}S^{\lambda\nu}_{L} \tag{20}\] Since \(u_{L}\cdot\beta\neq\sqrt{\beta^{2}}=1/T\), it turns out that, even if the entropy current was quasi-objective, the LTR is frame-dependent [26] and much care should be taken when using it to derive constitutive equations. Defining the pressure as: \[p\equiv T\phi\cdot u\,,\] the eq. (21) coincides with the first thermodynamic relation in eq. (2). It should be pointed out, though, that only at global equilibrium with \(\beta=\mathrm{const}\) this quantity coincides with the hydrostatic pressure, that is the diagonal spatial component of the mean value of the stress-energy tensor (see Appendix A); in all other cases, it does not need to. By contracting the eq. (16) with \(u=\beta/\sqrt{\beta^{2}}\) we obtain: \[p=T\phi\cdot u =T\int_{0}^{T}\frac{\mathrm{d}T^{\prime}}{T^{\prime 2}}\;\left(u_{ \mu}T^{\mu\nu}[T^{\prime}]u_{\nu}-\mu u_{\mu}j^{\mu}[T^{\prime}]-\frac{1}{2} \omega_{\lambda\nu}u_{\mu}\mathcal{S}^{\mu\lambda\nu}[T^{\prime}]\right) \tag{22}\] \[=T\int_{0}^{T}\frac{\mathrm{d}T^{\prime}}{T^{\prime 2}}\;\left(\rho[T^{ \prime}]-\mu n[T^{\prime}]-\frac{1}{2}\omega_{\lambda\nu}S^{\lambda\nu}[T^{ \prime}]\right)\,,\] whence, differentiating with respect to \(T\) at fixed \(\mu\) and \(\omega\) (which gives \(\partial p/\partial T=p/T+\left(\rho-\mu n-\frac{1}{2}\omega_{\lambda\nu}S^{\lambda\nu}\right)/T\)), the following relation can be readily obtained: \[\left.\frac{\partial p}{\partial T}\right|_{\mu,\omega}=s \tag{23}\] by using the (21). This equation is the first step in proving the second relation (2), but in fact the remaining two partial derivatives of the pressure function do not need to coincide with the charge density and the spin density, and in general: \[\left.\frac{\partial p}{\partial\mu}\right|_{T,\omega}\neq n\,,\qquad\qquad \left.\frac{\partial p}{\partial\omega_{\lambda\nu}}\right|_{T,\mu}\neq S^{ \lambda\nu}\,.\] Indeed, for the equality to apply, one would need the following thermodynamic relation to hold: \[T\mathrm{d}s=\mathrm{d}\rho-\mu\,\mathrm{d}n-\frac{1}{2}\omega_{\lambda\nu} \mathrm{d}S^{\lambda\nu}\,, \tag{24}\] and yet this cannot be obtained from the definitions (21) and (16). Furthermore, the relation (23) is not invariant under entropy-gauge transformations. The thermodynamic potential current can be redefined according to the (18) and, contracting with the four-velocity, we get: \[p^{\prime}=T\phi^{\prime}\cdot u=p+Tu_{\mu}\nabla_{\lambda}A^{\lambda\mu}\,,\] where the transformed quantities are denoted with a prime. It is then easy to show that: \[\left.\frac{\partial p^{\prime}}{\partial T}\right|_{\mu\omega}=s^{\prime}+u_{\mu}T\left.\frac{\partial}{\partial T}\nabla_{\lambda}A^{\lambda\mu}\right|_{\mu\omega}\,. \tag{25}\] If the second term on the right hand side is non-vanishing, even the relation (23) is broken.
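For clarity, the relation (25) is just the product rule applied to \(p^{\prime}=p+Tu_{\mu}\nabla_{\lambda}A^{\lambda\mu}\): treating the four-velocity as \(T\)-independent and using (23) together with \(s^{\prime}=s+u_{\mu}\nabla_{\lambda}A^{\lambda\mu}\), \[\left.\frac{\partial p^{\prime}}{\partial T}\right|_{\mu\omega}=s+u_{\mu}\nabla_{\lambda}A^{\lambda\mu}+u_{\mu}T\left.\frac{\partial}{\partial T}\nabla_{\lambda}A^{\lambda\mu}\right|_{\mu\omega}=s^{\prime}+u_{\mu}T\left.\frac{\partial}{\partial T}\nabla_{\lambda}A^{\lambda\mu}\right|_{\mu\omega}\,.\]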
An example of an entropy-gauge transformation which breaks the (23) is shown in Appendix C for the global equilibrium with rotation. In conclusion, the local thermodynamic relations (2) are not fully appropriate in the derivation of the divergence of the entropy current. On one hand, it turns out that the differential relation in (2) cannot be proved in general and on the other hand, perhaps most importantly, because they are both non-invariant under entropy-gauge transformations. ## V Entropy production rate The entropy production rate, which is important to obtain the constitutive equations of relativistic hydrodynamics, is determined by taking the divergence of the equation (15). By using the continuity equations of the stress-energy tensor, the number current and the spin tensor, that is: \[\nabla_{\mu}T^{\mu\nu}=0\,,\qquad\nabla_{\mu}j^{\mu}=0\,,\qquad\nabla_{\mu} \mathcal{S}^{\mu\lambda\nu}=T^{\nu\lambda}-T^{\lambda\nu}\,, \tag{26}\] we obtain: \[\begin{split}\nabla_{\mu}s^{\mu}&=\nabla_{\mu} \phi^{\mu}+T^{\mu\nu}\nabla_{\mu}\beta_{\nu}-j^{\mu}\nabla_{\mu}\zeta-\frac{ 1}{2}\mathcal{S}^{\mu\lambda\nu}\nabla_{\mu}\Omega_{\lambda\nu}-\frac{1}{2} \Omega_{\lambda\nu}\nabla_{\mu}\mathcal{S}^{\mu\lambda\nu}\\ &=\nabla_{\mu}\phi^{\mu}+T_{S}^{\mu\nu}\xi_{\mu\nu}-j^{\mu} \nabla_{\mu}\zeta+T_{A}^{\mu\nu}(\Omega_{\mu\nu}-\varpi_{\mu\nu})-\frac{1}{2} \mathcal{S}^{\mu\lambda\nu}\nabla_{\mu}\Omega_{\lambda\nu}\,,\end{split} \tag{27}\] where \(T_{S}\) and \(T_{A}\) are the symmetric and anti-symmetric parts of the stress-energy tensor and \[\xi_{\mu\nu}=\frac{1}{2}\left(\nabla_{\mu}\beta_{\nu}+\nabla_{\nu}\beta_{\mu} \right)\,,\qquad\varpi_{\mu\nu}=\frac{1}{2}\left(\nabla_{\nu}\beta_{\mu}- \nabla_{\mu}\beta_{\nu}\right)\] are the thermal shear and thermal vorticity tensor respectively. The next step, as it appears from the equation (27), is the calculation of the divergence of the thermodynamic potential current, \(\nabla_{\mu}\phi^{\mu}\). To derive it, it is convenient to study the change of \(\log Z_{\mathrm{LE}}\) under an infinitesimal change of the integration 3D hypersurface (see figure 2). An infinitesimal change of hypersurface may be seen, in simple terms, as the result of locally moving every point \(x\in\Sigma\) to a point \(x^{\prime}(x,\epsilon)\in\Sigma_{\epsilon}\), \(\epsilon\) being a finite real parameter. Setting \(x^{\prime}(x,0)=x\), we define: \[\left.\frac{\mathrm{d}x^{\prime\mu}(x,\epsilon)}{\mathrm{d}\epsilon}\right|_{ \epsilon=0}=\delta^{\mu}(x)\,.\] For a small \(\epsilon\), the vector field \(\delta\) loosely represents the direction in which the hypersurface is locally modified and the parameter \(\epsilon\) describes how far along the vector field \(\delta\) we move the hypersurface. Formally, these definitions are those of a one-parameter group of diffeomorphisms, which are a prerequisite to define the Lie derivative. For the special case of the integration of a vector field \(V^{\mu}\) over a 3D-hypersurface, one has (see Appendix B): \[\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left(\int_{\Sigma_{\epsilon}}\mathrm{d} \Sigma_{\mu}V^{\mu}-\int_{\Sigma}\mathrm{d}\Sigma_{\mu}\,V^{\mu}\right)=\int_{ \partial\Sigma}\mathrm{d}\tilde{S}_{\mu\nu}\,\delta^{\mu}V^{\nu}+\int_{\Sigma} \mathrm{d}\Sigma\cdot\delta\,\nabla_{\mu}V^{\mu}\,, \tag{28}\] where \(\partial\Sigma\) is the 2-D boundary surface. 
We can apply this equation to the (17) to obtain the infinitesimal change of \(\log Z_{\mathrm{LE}}\) by a change of the hypersurface: \[\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left[\log Z_{\mathrm{LE}}( \Sigma_{\epsilon})-\log Z_{\mathrm{LE}}(\Sigma)\right]=\int_{\Sigma}\mathrm{d }\Sigma\cdot\delta\,\nabla_{\mu}\left(\phi^{\mu}-\langle 0|\,\widehat{T}^{\mu\nu} \beta_{\nu}-\widehat{\zeta j}^{\mu}-\frac{1}{2}\Omega_{\lambda\nu}\widehat{ \mathcal{S}}^{\mu\lambda\nu}\,|0\rangle\right)\] \[=\int_{\Sigma}\mathrm{d}\Sigma\cdot\delta\,\nabla_{\mu}\phi^{\mu} -\int_{\Sigma}\mathrm{d}\Sigma\cdot\delta\,\,\langle 0|\,\widehat{T}^{\mu\nu}_{S} \xi_{\mu\nu}-\widehat{j}^{\mu}\nabla_{\mu}\zeta+\widehat{T}^{\mu\nu}_{A}( \Omega_{\mu\nu}-\varpi_{\mu\nu})-\frac{1}{2}\widehat{\mathcal{S}}^{\mu \lambda\nu}\nabla_{\mu}\Omega_{\lambda\nu}\,|0\rangle\, \tag{29}\] where, in the last step, we have used the continuity equations (26), holding at operator level. On the other hand, the logarithm of the partition function can be calculated by means of its definition as a trace. For an infinitesimal \(\epsilon\) one has: \[Z_{\mathrm{LE}}(\Sigma_{\epsilon})=\mathrm{Tr}\left(\exp\left[- \int_{\Sigma_{\epsilon}}\mathrm{d}\Sigma_{\mu}\left(\widehat{T}^{\mu\nu}\beta_ {\nu}-\widehat{\zeta j}^{\mu}-\frac{1}{2}\Omega_{\lambda\nu}\widehat{\mathcal{ S}}^{\mu\lambda\nu}\right)\right]\right)\] \[\simeq\mathrm{Tr}\left(\exp\left[-\int_{\Sigma}\mathrm{d}\Sigma_{ \mu}\left(\widehat{T}^{\mu\nu}\beta_{\nu}-\widehat{\zeta j}^{\mu}-\frac{1}{2} \Omega_{\lambda\nu}\widehat{\mathcal{S}}^{\mu\lambda\nu}\right)-\epsilon\int _{\Sigma}\mathrm{d}\Sigma\cdot\delta\nabla_{\mu}\left(\widehat{T}^{\mu\nu} \beta_{\nu}-\widehat{\zeta j}^{\mu}-\frac{1}{2}\Omega_{\lambda\nu}\widehat{ \mathcal{S}}^{\mu\lambda\nu}\right)\right]\right)\] \[=\mathrm{Tr}\left(\exp\left[-\int_{\Sigma}\mathrm{d}\Sigma_{ \mu}\left(\widehat{T}^{\mu\nu}\beta_{\nu}-\widehat{\zeta j}^{\mu}-\frac{1}{2} \Omega_{\lambda\nu}\widehat{\mathcal{S}}^{\mu\lambda\nu}\right)\right.\] \[-\left.\left.\epsilon\int_{\Sigma}\mathrm{d}\Sigma\cdot\delta\, \,\left(\widehat{T}^{\mu\nu}_{S}\xi_{\mu\nu}-\widehat{j}^{\mu}\nabla_{\mu} \zeta+\widehat{T}^{\mu\nu}_{A}(\Omega_{\mu\nu}-\varpi_{\mu\nu})-\frac{1}{2} \widehat{\mathcal{S}}^{\mu\lambda\nu}\nabla_{\mu}\Omega_{\lambda\nu}\right) \right]\right)\,,\] where we have used the equation (28) - assuming that the boundary term vanishes - and, again, the continuity equations (26) at operator level. By expanding the trace in the small parameter \(\epsilon\), and keeping in mind the equation (3), we obtain: \[Z_{\mathrm{LE}}(\Sigma_{\epsilon}) \simeq Z_{\mathrm{LE}}(\Sigma)-\epsilon Z_{\mathrm{LE}}(\Sigma)\] \[\times\int_{\Sigma}\mathrm{d}\Sigma\cdot\delta\,\,\left(\mathrm{ Tr}\!\left(\widehat{\rho}_{\mathrm{LE}}\widehat{T}^{\mu\nu}_{S}\right)\!\xi_{\mu \nu}-\mathrm{Tr}\!\left(\widehat{\rho}_{\mathrm{LE}}\widehat{j}^{\mu}\right) \!\nabla_{\mu}\zeta+\mathrm{Tr}\!\left(\widehat{\rho}_{\mathrm{LE}}\widehat{T }^{\mu\nu}_{A}\right)\!(\Omega_{\mu\nu}-\varpi_{\mu\nu})-\frac{1}{2}\,\mathrm{ Tr}\!\left(\widehat{\rho}_{\mathrm{LE}}\widehat{\mathcal{S}}^{\mu\lambda\nu}\right)\! 
\nabla_{\mu}\Omega_{\lambda\nu}\right)\,,\] whence: \[\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left[\log Z_{\rm LE}( \Sigma_{\epsilon})-\log Z_{\rm LE}(\Sigma)\right] \tag{30}\] \[=-\int_{\Sigma}\mathrm{d}\Sigma\cdot\delta\ \left(\mathrm{Tr}\Big{(}\widehat{\rho}_{\rm LE}\widehat{T}_{S}^{\mu\nu} \Big{)}\xi_{\mu\nu}-\mathrm{Tr}\Big{(}\widehat{\rho}_{\rm LE}\widehat{j}^{\mu} \Big{)}\nabla_{\mu}\zeta+\mathrm{Tr}\Big{(}\widehat{\rho}_{\rm LE}\widehat{T}_{ A}^{\mu\nu}\Big{)}(\Omega_{\mu\nu}-\varpi_{\mu\nu})-\frac{1}{2}\,\mathrm{Tr} \Big{(}\widehat{\rho}_{\rm LE}\widehat{\mathcal{S}}^{\mu\lambda\nu}\Big{)} \nabla_{\mu}\Omega_{\lambda\nu}\right)\,.\] Therefore, by comparing the equation (29) with the equation (30), taking into account that both \(\Sigma\) and the field \(\delta\) are arbitrary, we can infer that: \[\nabla_{\mu}\phi^{\mu} =-\left[\left(\mathrm{Tr}\Big{(}\widehat{\rho}_{\rm LE}\widehat{ T}_{S}^{\mu\nu}\Big{)}-\bra{0}\widehat{T}_{S}^{\mu\nu}\ket{0}\right)\xi_{\mu\nu}- \left(\mathrm{Tr}\Big{(}\widehat{\rho}_{\rm LE}\widehat{j}^{\mu}\Big{)}- \bra{0}\widehat{j}^{\mu}\ket{0}\right)\nabla_{\mu}\zeta\] \[+\left(\mathrm{Tr}\Big{(}\widehat{\rho}_{\rm LE}\widehat{T}_{A}^ {\mu\nu}\Big{)}-\bra{0}\widehat{T}_{A}^{\mu\nu}\ket{0}\right)(\Omega_{\mu\nu}- \varpi_{\mu\nu})-\frac{1}{2}\left(\mathrm{Tr}\Big{(}\widehat{\rho}_{\rm LE} \widehat{\mathcal{S}}^{\mu\lambda\nu}\Big{)}-\bra{0}\widehat{\mathcal{S}}^{ \mu\lambda\nu}\ket{0}\right)\nabla_{\mu}\Omega_{\lambda\nu}\right]\] \[=-\left(T_{S(\rm LE)}^{\mu\nu}\xi_{\mu\nu}-j_{\rm LE}^{\mu}\nabla _{\mu}\zeta+T_{A(\rm LE)}^{\mu\nu}(\Omega_{\mu\nu}-\varpi_{\mu\nu})-\frac{1}{ 2}\mathcal{S}_{\rm LE}^{\mu\lambda\nu}\nabla_{\mu}\Omega_{\lambda\nu}\right)\,, \tag{31}\] where, in the last step, we have used the definition of local equilibrium values. Now, substituting back the eq. (31) into the eq. (27), we obtain the evolution of the entropy current: \[\nabla_{\mu}s^{\mu}=\left(T_{S}^{\mu\nu}-T_{S(\rm LE)}^{\mu\nu} \right)\xi_{\mu\nu}-\left(j^{\mu}-j_{\rm LE}^{\mu}\right)\nabla_{\mu}\zeta+ \left(T_{A}^{\mu\nu}-T_{A(\rm LE)}^{\mu\nu}\right)(\Omega_{\mu\nu}-\varpi_{ \mu\nu})-\frac{1}{2}\left(\mathcal{S}^{\mu\lambda\nu}-\mathcal{S}_{\rm LE}^{ \mu\lambda\nu}\right)\nabla_{\mu}\Omega_{\lambda\nu}\,. \tag{32}\] The equation (32) is the main result of this work and it is the starting point to derive the constitutive equations of dissipative spin hydrodynamics, which relate the anti-symmetric part of the stress-energy tensor and the spin tensor to the gradients of the spin potential and the difference between spin potential and thermal vorticity, besides the (thermal) shear tensor and the gradient of \(\zeta=\mu/T\). In the above form, it is in fact a generalization of the one found by Van Weert and Zubarev [24; 25], with the addition of the last two terms involving the spin tensor and the spin potential. We stress that the formula (32) is exact and not an approximation at some order of a gradient expansion. Indeed, with respect to all previous assessments of dissipative spin hydrodynamics based on different approaches [7; 8; 9; 10; 11; 12; 13], a novel feature is apparently the simultaneous appearance of the last two terms of the right hand side. While the last term is neglected in almost all derivations, it was actually obtained in ref. [10]. However, it should be pointed out that some terms in previous derivations may have been omitted because of a gradient power counting method.
A complete analysis of the constitutive equations implied by the (32) will be presented in a forthcoming study. ## VI Discussion and Conclusions The formula (32) shows that entropy production rate, in general, is non-vanishing whenever there is a difference between the actual value of the conserved (or conserved-related) currents and the corresponding values at local thermodynamic equilibrium, such as \(T_{S(\rm LE)}\), \(j_{\rm LE}\), etc. As we have emphasized in this paper, local equilibrium depends on the choice of a family of 3D space-like hypersurfaces, i.e. a foliation. In relativistic hydrodynamics, this freedom ultimately corresponds to the choice of a four-velocity vector, so-called hydrodynamic frame. The dependence on the foliation shows up in the divergence of the entropy current (32), which is manifestly dependent on local equilibrium values (see the discussion at the end of Section III). We emphasize that the formula (32) is exact, not an approximation at some order of a gradient expansion. In other words, fixing the order in a gradient expansion of hydrodynamic quantities is not required to obtain it. However, for future work, once constitutive equations are determined, a gradient ordering can be made based on the involved scales in the physical problem. In conclusion, in this work we have employed a quantum-statistical approach to derive the entropy current and entropy production rate without assuming the traditional local thermodynamic relations (2). In fact, we have shown that the local thermodynamic relations do not hold in general and that they are also non-invariant under allowed transformations of the entropy current, that we have defined as entropy-gauge transformations. We have obtained an expression of the entropy production rate (32) which extends to spin hydrodynamics previous expression obtained in refs. [24; 25]. This form is especially well-suited to derive the constitutive equations of dissipative spin hydrodynamics, what will be the subject of a forthcoming work. ## Acknowledgements Part of this work was carried out in the workshop "The many faces of relativistic fluid dynamics" held in the Kavli institute in Santa Barbara (CA) USA, supported in part by the National Science Foundation under Grants No. NSF PHY-1748958 and PHY-2309135. F.B. gratefully acknowledges fruitful discussions with the participants in the workshop, especially J. Armas, G. Denicol, P. Kovtun and M. Hippert Teixeira. Interesting discussions with R. Ryblewski, E. Grossi and A. Giachino are also acknowledged. A.D. thanks the Department of Physics, University of Florence and INFN for the hospitality. A.D. acknowledges the financial support provided by the Polish National Agency for Academic Exchange NAWA under the Programme STER-Internationalisation of doctoral schools, Project no. PPI/STE/2020/1/00020 and the Polish National Science Centre Grants No.2018/30/E/ST2/00432. ## Appendix A Thermodynamic potential current at homogeneous global equilibrium Homogeneous global equilibrium is defined by the condition \(\beta=\text{const}\) i.e. vanishing thermal vorticity in the equation (7). Plugging this form in the equation (3), the density operator takes on the familiar form (for simplicity, we assume there are no charges in the system): \[\widehat{\rho}_{\text{GE}}=\frac{1}{Z}\exp\left[-\beta\cdot\widehat{P}\right]\,. 
\tag{10}\] Due to the symmetries of the above operator, the mean value of the stress-energy tensor operator has the ideal form: \[T^{\mu\nu}=\text{Tr}\Big{(}\widehat{\rho}_{\text{GE}}\widehat{T}^{\mu\nu} \Big{)}-\bra{0}\widehat{T}^{\mu\nu}\ket{0}=(\rho+p)u^{\mu}u^{\nu}-pg^{\mu\nu }\,, \tag{11}\] where \(u\equiv\beta/\sqrt{\beta^{2}}\). According to eq. (16), the thermodynamic potential current is: \[\phi^{\mu}=\int_{0}^{T}\frac{\text{d}T^{\prime}}{T^{\prime 2}}\ T^{\mu\nu}[T^{ \prime}]u_{\nu}=\int_{0}^{T}\frac{\text{d}T^{\prime}}{T^{\prime 2}}\ \rho[T^{ \prime}]u^{\mu}\,, \tag{12}\] where \(T=1/\sqrt{\beta^{2}}\). The above expression confirms the expectation that, at the homogeneous global equilibrium, any vector field should be parallel to \(\beta\) with a coefficient depending on \(\beta^{2}\) or, equivalently, the temperature \(T\). Therefore: \[\phi^{\mu}=\beta^{\mu}\phi(\beta^{2}) \tag{13}\] and the goal is now to show that such scalar coefficient \(\phi(\beta^{2})\) is just the pressure, as defined by the equation (11). By taking the derivative with respect to \(\beta\) of the partition function, we have: \[-\frac{\partial\log Z}{\partial\beta_{\nu}}=\text{Tr}\Big{(}\widehat{\rho}_{ \text{GE}}\widehat{P}^{\nu}\Big{)}=\int_{\Sigma}\text{d}\Sigma_{\mu}\ \text{Tr}\Big{(}\widehat{\rho}_{\text{GE}}\widehat{T}^{\mu\nu}\Big{)}\,. \tag{14}\] Since: \[\log Z=\int\text{d}\Sigma_{\mu}\ \phi^{\mu}-\beta_{\nu}\bra{0}\widehat{P}^{ \nu}\ket{0}\] from the (8), from the (14) we can obtain the following equality \[\int_{\Sigma}\text{d}\Sigma_{\mu}\left(\frac{\partial\phi^{\mu}}{\partial \beta_{\nu}}\right)=-\int_{\Sigma}\text{d}\Sigma_{\mu}T^{\mu\nu}\,, \tag{15}\] where we have used the (11). Since the integration hypersurface \(\Sigma\) is arbitrary, being at global equilibrium, we can infer the following relation: \[T^{\mu\nu}=-\frac{\partial\phi^{\mu}}{\partial\beta_{\nu}}+\partial_{\lambda}A ^{\lambda\mu\nu}\,,\] where the rank 3 tensor \(A^{\lambda\mu\nu}\) is anti-symmetric in the indices \(\lambda\mu\). Such a gradient term is allowed by the Stokes theorem in Minkowski space-time if suitable boundary conditions are fulfilled. Yet, since the equilibrium is homogeneous, it must vanish due to traslational invariance as all mean values ought to be constant and uniform. This implies that, by using the equation (10): \[T^{\mu\nu}=-\frac{\partial\phi^{\mu}}{\partial\beta_{\nu}}=-\frac{\partial}{ \partial\beta_{\nu}}\phi\beta^{\mu}=-\phi g^{\mu\nu}+T\frac{\partial\phi}{ \partial T}u^{\mu}u^{\nu}\,. \tag{12}\] We can now compare (11) with (12) and infer that \(\phi=p\) and consequently \(\phi^{\mu}=p\beta^{\mu}\). By plugging the latter equation in the (12) and taking the derivative with respect to \(T\) we obtain: \[T\frac{\partial p}{\partial T}=\rho+p\,,\] which makes also the second term on the right hand side of equation (12) consistent with the identification \(\phi=p\). ## Appendix B Lie derivatives and integration Suppose we have a one-parameter group of diffeomorphisms \(x^{\prime}(x,\epsilon)\) with \(\epsilon\) a real number. Let \(\omega\) be a rank 3 differential form which is to be integrated over a 3D hypersurface embedded in the 4D space-time. 
We denote with \(\omega^{\prime}_{\epsilon}\) the differential form which is obtained from \(\omega\) through the diffeomorphism, that is: \[\omega^{\prime}_{\epsilon}(x)_{\mu_{1}\mu_{2}\mu_{3}}=J^{\nu_{1}}_{\mu_{1}}J^ {\nu_{2}}_{\mu_{2}}J^{\nu_{3}}_{\mu_{3}}\,\omega(x^{\prime}(x,\epsilon))_{\nu _{1},\nu_{2},\nu_{3}}\] where \(J^{\nu}_{\mu}=\partial x^{\prime}(x,\epsilon)^{\nu}/\partial x^{\mu}\) is the jacobian matrix element of the diffeomorphism. Let \(\Sigma_{\epsilon}\) be the image of the hypersurface \(\Sigma\) through the diffeomorphism. Then we have: \[\int_{\Sigma_{\epsilon}}\omega=\int_{\Sigma}\omega^{\prime}_{\epsilon}\] whence: \[\lim_{\epsilon\to 0}\frac{1}{\epsilon}\left(\int_{\Sigma_{\epsilon}} \omega(x)-\int_{\Sigma}\omega(x)\right)=\lim_{\epsilon\to 0}\frac{1}{ \epsilon}\int_{\Sigma}\omega^{\prime}_{\epsilon}(x)-\omega(x)=\int_{\Sigma} \mathcal{L}_{\delta}(\omega(x))\,,\] where \(\mathcal{L}_{\delta}\) stands for the Lie derivative along the vector field \(\delta(x)=\mathrm{d}x^{\prime}(x,\epsilon)/\mathrm{d}\epsilon|_{\epsilon=0}\). The so-called Cartan magic formula can now be used in the last expression, leading to: \[\int_{\Sigma}\mathcal{L}_{\delta}(\omega)=\int_{\Sigma}i_{\delta}\mathrm{d} \omega+\mathrm{d}(i_{\delta}\omega)=\int_{\Sigma}i_{\delta}\mathrm{d}\omega+ \int_{\partial\Sigma}i_{\delta}\omega\,, \tag{13}\] where \(i_{\delta}\) stands for the interior product of the form with the vector field \(\delta\) and \(\mathrm{d}\) stands for the exterior derivative. The second term on the right hand side of (13) is an integral of an exterior derivative and it has been turned into a 2D boundary integral of \(i_{\delta}\omega\) by using the generalized Stokes theorem for differential forms. We can apply the above formulae to the differential form which is the dual of a vector field \(V\) in a 4D space-time, namely: \[\omega_{\mu\nu\rho}=\frac{1}{6}E_{\mu\nu\rho\sigma}V^{\sigma}=\frac{1}{6} \sqrt{|g|}\epsilon_{\mu\nu\rho\sigma}V^{\sigma}\,. \tag{14}\] With this form, it can be shown that: \[\int_{\Sigma}\omega(x)=\int_{\Sigma}\mathrm{d}\Sigma_{\mu}V^{\mu}\,. \tag{15}\] The exterior derivative can be readily worked out by using the definition: \[(\mathrm{d}\omega)_{\lambda\mu\nu\rho}=-\frac{1}{24}E_{\lambda\mu\nu\rho} \nabla\cdot V\,,\] which leads, by using the definition of interior product, to: \[(i_{\delta}\mathrm{d}\omega)_{\mu\nu\rho}=\frac{1}{6}E_{\mu\nu\rho\sigma} \delta^{\sigma}\nabla\cdot V\,.\] Therefore, by using the above expression and the (15) we get: \[\int_{\Sigma}i_{\delta}\mathrm{d}\omega=\int_{\Sigma}\mathrm{d}\Sigma_{\mu}\, \delta^{\mu}\nabla\cdot V\,.\] The second integral in the (13) can be similarly worked out and one eventually obtains the equation (28). ## Appendix C Non-invariance of the local thermodynamic relations: an example We are going to show that the local thermodynamic relation (23) is not invariant under entropy-gauge transformation (18), namely that the equation (25) applies with a non-trivial second term in the right hand side. We consider, as specific example, global equilibrium with non-vanishing thermal vorticity in the equation (7). Let: \[A^{\lambda\mu}=f(\kappa^{2})\varpi^{\lambda\mu}\,,\] where \(\kappa^{\mu}=\varpi^{\mu\nu}\beta_{\nu}\) and \(f(\kappa^{2})=g(\kappa^{2})/\kappa^{2}\) with \(g(\kappa^{2})\) an adimensional differentiable function (this form of \(f(\kappa^{2})\) ensures that \(A^{\lambda\mu}\) has the correct dimension for the entropy gauge transformation (18)). 
One has, in Cartesian coordinates: \[\nabla_{\lambda}A^{\lambda\mu}=\partial_{\lambda}A^{\lambda\mu}=f^{\prime}( \kappa^{2})\varpi^{\lambda\mu}\partial_{\lambda}\kappa^{2}=f^{\prime}(\kappa^ {2})\varpi^{\lambda\mu}2\kappa^{\nu}\partial_{\lambda}(\varpi_{\nu\rho}\beta^ {\rho})=f^{\prime}(\kappa^{2})\varpi^{\lambda\mu}2\kappa^{\nu}\varpi_{\nu\rho }\varpi^{\rho}_{\lambda}\,,\] where, in the last step, we have used the relation \(\varpi_{\mu\nu}=\partial_{\nu}\beta_{\mu}\) which applies at global equilibrium where \(\partial_{\mu}\beta_{\nu}+\partial_{\nu}\beta_{\mu}=0\). Now let \(\gamma^{\rho}=\varpi^{\rho\nu}\kappa_{\nu}\) so that: \[\partial_{\lambda}A^{\lambda\mu}=-2f^{\prime}(\kappa^{2})\gamma^{\rho}\varpi _{\rho\lambda}\varpi^{\lambda\mu}\,. \tag{109}\] Contracting the equation (109) with \(u_{\mu}\) we get: \[u_{\mu}\partial_{\lambda}A^{\lambda\mu}=T\beta_{\mu}\partial_{ \lambda}A^{\lambda\mu}=-2f^{\prime}(\kappa^{2})T\gamma^{\rho}\varpi_{\rho \lambda}\varpi^{\lambda\mu}\beta_{\mu}=-2f^{\prime}(\kappa^{2})T\gamma^{\rho} \varpi_{\rho\lambda}\kappa^{\lambda}=-2f^{\prime}(\kappa^{2})T\gamma^{2}\,. \tag{110}\] The derivative in (25) must be taken by keeping \(\omega=T\varpi\) constant. Therefore, being: \[\kappa^{\mu}=\varpi^{\mu\nu}\beta_{\nu}=\frac{1}{T^{2}}\omega^{ \mu\nu}u_{\nu}\,,\qquad\qquad\gamma^{\rho}=\varpi^{\rho\nu}\kappa_{\nu}=\frac {1}{T^{3}}\omega^{\rho\nu}\omega_{\nu\alpha}u^{\alpha}\,,\] and choosing \(g(\kappa^{2})=1\), we have that the expression in the equation (110) is proportional to \(T^{3}\), \[T\frac{\partial}{\partial T}\left(u_{\mu}\partial_{\lambda}A^{ \lambda\mu}\right)=T\frac{2}{(\kappa^{2})^{2}}\gamma^{2}\,, \tag{111}\] which is non-vanishing. Therefore, using the (111) in the equation (25) we get: \[\frac{\partial p^{\prime}}{\partial T}\Big{|}_{\mu\omega}=s^{\prime}+T\frac{2 }{(\kappa^{2})^{2}}\gamma^{2}\,,\] which proves the non-invariance of the local thermodynamic relation.
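For the reader's convenience, the \(T^{3}\) scaling invoked above can be verified directly: at fixed \(\mu\) and \(\omega\) one has \(\kappa^{\mu}=\omega^{\mu\nu}u_{\nu}/T^{2}\) and \(\gamma^{\mu}=\omega^{\mu\nu}\omega_{\nu\alpha}u^{\alpha}/T^{3}\), so that \(\kappa^{2}\propto T^{-4}\) and \(\gamma^{2}\propto T^{-6}\); with \(g(\kappa^{2})=1\), i.e. \(f^{\prime}(\kappa^{2})=-1/(\kappa^{2})^{2}\), the equation (110) then gives \[u_{\mu}\partial_{\lambda}A^{\lambda\mu}=\frac{2T\gamma^{2}}{(\kappa^{2})^{2}}\propto T\cdot\frac{T^{-6}}{T^{-8}}=T^{3}\,,\] whose derivative with respect to \(T\) at fixed \(\mu\) and \(\omega\) is indeed non-vanishing.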
2310.20442
Characteristics of gamma-ray burst afterglows in the context of non-axisymmetric structured jets
As the most energetic explosions in the Universe, gamma-ray bursts (GRBs) are commonly believed to be generated by relativistic jets. Recent observational evidence suggests that the jets producing GRBs are likely to have a structured nature. Some studies have suggested that non-axisymmetric structured jets may be formed through internal non-uniform magnetic dissipation processes or the precession of the central engine. In this study, we analyze the potential characteristics of GRB afterglows within the framework of non-axisymmetric structured jets. We simplify the profile of the asymmetric jet as a step function of the azimuth angle, dividing the entire jet into individual elements. By considering specific cases, we demonstrate that the velocity, energy, and line-of-sight direction of each jet element can greatly affect the behaviour of the overall light curve. The radiative contributions from multiple elements may lead to the appearance of multiple distinct peaks or plateaus in the light curve. Furthermore, fluctuations in the rising and declining segments of each peak can be observed. These findings establish a theoretical foundation for future investigations into the structural characteristics of GRBs by leveraging GRB afterglow data.
Jin-Da Li, He Gao, Shunke Ai, Wei-Hua Lei
2023-10-31T13:23:50Z
http://arxiv.org/abs/2310.20442v1
# Characteristics of gamma-ray burst afterglows in the context of non-axisymmetric structured jets ###### Abstract As the most energetic explosions in the universe, gamma-ray bursts (GRBs) are commonly believed to be generated by relativistic jets. Recent observational evidence suggests that the jets producing GRBs are likely to have a structured nature. Some studies have suggested that non-axisymmetric structured jets may be formed through internal non-uniform magnetic dissipation processes or the precession of the central engine. In this study, we analyze the potential characteristics of GRB afterglows within the framework of non-axisymmetric structured jets. We simplify the profile of the asymmetric jet as a step function of the azimuth angle, dividing the entire jet into individual elements. By considering specific cases, we demonstrate that the velocity, energy, and line-of-sight direction of each jet element can greatly affect the behavior of the overall light curve. The radiative contributions from multiple elements may lead to the appearance of multiple distinct peaks or plateaus in the light curve. Furthermore, fluctuations in the rising and declining segments of each peak can be observed. These findings establish a theoretical foundation for future investigations into the structural characteristics of GRBs by leveraging GRB afterglow data. keywords: Gamma-ray bursts (GRBs) ## 1 Introduction Gamma-ray bursts (GRBs) are astrophysical phenomena that exhibit a sudden, intense release of gamma-ray radiation from a precise location in the sky, followed by a rapid decline. The prompt emission of GRBs takes place over a span of 0.1-1000 seconds and is followed by a multi-wavelength afterglow that can last for months to years (see Zhang, 2018, for a review of GRBs). After extensive research spanning decades, two distinct types of progenitors have been identified for GRBs, namely core collapse of Wolf-Rayet stars for long GRBs (Woosley, 1993; Paczynski, 1998; MacFadyen & Woosley, 1999; Woosley & Bloom, 2006) and mergers of two compact stellar objects (neutron star-neutron star and neutron star-black hole systems) for short GRBs (Paczynski, 1986; Eichler et al., 1989; Paczynski, 1991b; Paczynski, 1991a; Narayan et al., 1992; Abbott et al., 2017). Following the catastrophic destruction of the progenitor system, a central engine is thought to form, which powers a relativistic jet. The prompt emission of GRBs is generally believed to originate from the dissipation of the magnetic or kinetic energy of the jet (Rees & Meszaros, 2005; Lazzati et al., 2009, 2013), whereas the subsequent afterglow emission is attributed to the interaction between the jet and the circumburst medium (Meszaros & Rees, 1997). Therefore, the characteristics of the jet predominantly govern the multi-band radiation properties of GRBs. In previous studies, several structured jet models have been proposed, including the power-law jet model (Meszaros et al., 1998; Dai & Gou, 2001; Rossi et al., 2002; Zhang & Meszaros, 2002; Granot & Kumar, 2003), the Gaussian jet model (Zhang & Meszaros, 2002; Granot & Kumar, 2003; Zhang et al., 2004a) and the two-component jet model (Ramirez-Ruiz et al., 2002; Zhang et al., 2004b; Peng et al., 2005). Recently, motivated by the potential of gravitational wave astronomy in relation to GRB sources, discussions of structured jets have revived (e.g. Lazzati et al., 2017; Lamb & Kobayashi, 2017).
It appears that the jets associated with GRBs probably have a structured nature. This assertion is supported by the results of multi-band observations of short GRB 170817A, which represents the first electromagnetic counterpart of a gravitational wave originating from the merger of binary neutron stars (Abbott et al., 2017; Gao, 2018; Zhang et al., 2018; Gottlieb et al., 2018; Kasliwal et al., 2017; Piro & Kollmeier, 2018; Xiao et al., 2017; Lazzati et al., 2018; Lyman et al., 2018; Troja et al., 2018). Based on the analysis of GRB 221009A, the brightest GRB ever detected, some studies suggest that the jets of long GRBs may also have a structured nature (An et al., 2023; O'Connor et al., 2023). A shared characteristic among these jet structures is their symmetric configuration relative to the axis of the jet, and the prompt and afterglow radiation characteristics of GRBs in such models have been extensively analyzed (e.g. Filgas, R. et al., 2011; Nicuesa Guelbenzu, A. et al., 2011; Lamb & Kobayashi, 2017; Gill & Granot, 2018; Lyman et al., 2018; Margutti et al., 2018; Resmi et al., 2018; Troja et al., 2018; Xie et al., 2018; Kann et al., 2018; Lamb et al., 2019; Meng et al., 2019; Beniamini et al., 2020; Oganesyan et al., 2020; Gottlieb et al., 2021). On the other hand, non-axisymmetric structures have also been studied in the literature. Meszaros et al. (1998) first pointed out that, due to the angular anisotropy of the fireball, the afterglow could differ significantly from the isotropic scenario. Later, several works investigated the observational features of possible asymmetric structures, such as jet hotspots, patchy shells and micro/sub jets (Nakamura, 2000; Yamazaki et al., 2004; Ioka et al., 2005). Recently, Lamb et al. (2022) used the results of 3-dimensional hydrodynamic jets in the neutron star merger environment to determine the degree of polar and rotational inhomogeneity (\(N\times N\) jet model). They found that these inhomogeneities in the jet's energy/Lorentz factor distribution produced some degree of rotational variation, although the changes in energy/Lorentz factor in these simulations were not large enough to produce significant temporal variability in the afterglow. It is worth noting that in some special cases, GRB jets may be heavily non-axisymmetric. For instance, the presence of significant non-uniformity in the internal magnetic dissipation of a jet can lead to the development of complex and asymmetric jet structures (Narayan & Kumar, 2009). A more recent study conducted by Huang et al. (2019) has demonstrated that non-uniformity of jets can exist in the circumferential direction due to the precession of the GRB central engine. Here we intend to conduct a first-step analysis of the potential characteristics of gamma-ray burst afterglows within the context of non-axisymmetric structured jets. To achieve this, we examine a basic jet structure consisting of \(N\) partitioned elements around its circumference, where the initial Lorentz factor \(\gamma_{0}\) and isotropic energy \(E_{\rm iso}\) are step functions. This structure can be extended to any arbitrary \(N\)-value, allowing for the construction of complex asymmetric jet structures. To calculate the afterglow properties for any \(N\)-value, we have developed a method that utilizes both semi-analytical and numerical estimations.
We present an overview of our findings for \(N=2\) and \(N=4\), and briefly discuss the expected results for an arbitrary \(N\)-value. ## 2 Model description A non-axisymmetric structured jet can be represented by a schematic image within a coordinate system that combines spherical and Cartesian coordinates. The \(z\)-axis of this coordinate system points towards the observer, while the jet axis resides in the \(x-z\) plane, with the angle between the \(z\)-axis and the jet axis denoted \(\theta_{\rm obs}\) (see Figure 1). The spherical coordinates are based on the jet axis, with the half-opening angle of a cone around the jet axis denoted as \(\theta\), ranging from \(\theta=0\) at the jet axis to the half-opening angle of the jet, \(\theta_{j}\). The azimuthal angle \(\phi\) forms a circumference around the jet axis, ranging from \(-\pi\) to \(\pi\). We define \(\phi=0\) at the projection of the \(x\)-axis onto the jet's cross-section. The jet is divided into \(N\) partitioned elements along the azimuthal direction, while we assume that the jet is uniform along \(\theta\). To avoid confusion about the signs of \(\theta\) and \(\phi\), we only consider the jet in the region \(z>0\), so that \(0<\theta_{\rm obs}<\pi/2\) and the element at \(\phi=0\) or \(\phi=\pi\) is always the furthermost part from the observer's line of sight (LOS). Here we treat each of the \(N\) elements as an independent "patchy", characterized by its own initial Lorentz factor \(\gamma_{0}\) and isotropic kinetic energy \(E_{\rm K,iso}\). The interaction between each "patchy" and the interstellar medium can produce a strong external shock. Electrons are accelerated in the external shocks and radiate synchrotron emission in the magnetic fields behind the shocks, which are believed to be generated in situ by plasma instabilities (see Gao et al., 2013, for a review). In most cases (except for \(\theta_{\rm obs}=0\)), the majority of "patchies" are off-axis with respect to the observer. For the \(i\)-th "patchy", we can first use the standard GRB afterglow model to calculate its on-axis flux evolution with time, \(F_{\nu,i}(t)\) (see Sections 2.1 and 2.2 for details), and then transform \(F_{\nu,i}(t)\) to the observer's direction through a Doppler correction. The cumulative effect of the individual contributions from the \(N\) "patchies" yields the comprehensive afterglow characteristics of a non-axisymmetric jet. ### Numerical Formalism For the \(i\)-th "patchy" (henceforth treated as a uniform jet), we can first follow the formulae derived in Huang et al. (2000) to calculate its dynamical evolution. In the frame of an on-axis observer, the evolution of the jet's radius \(R\) over time \(T\) reads as \[\frac{dR}{dT}=\beta c\gamma(\gamma+\sqrt{\gamma^{2}-1}), \tag{1}\] where \(\beta\) and \(\gamma\) represent the dimensionless velocity and Lorentz factor of the jet's bulk motion, respectively. The accumulation of the jet-swept mass \(m\) by each element from the interstellar medium with jet radius \(R\) can be described as \[\frac{dm}{dR}=\frac{1}{N}2\pi R^{2}(1-\cos\theta_{\rm j})nm_{\rm p}, \tag{2}\] where \(m_{\rm p}\) is the proton mass. \(n=AR^{-k}\) is the particle number density of the interstellar medium, where \(k\) is the wind profile variable: \(k=0\) is for a uniform interstellar medium and \(k=2\) is for a stellar wind environment.
Figure 1: The diagram illustrates the jet structure and coordinate system. The example shown on the left side represents an off-axis observation of an asymmetrically structured jet. The upper right section displays a cross-sectional view of the jet, while the lower right section depicts the projection of the jet on the \(x-y\) plane. The azimuthal angle \(\phi\), ranging from \(-\pi\) to \(\pi\), is used to denote orientation around the jet axis. This azimuthal range can be divided into \(N\) segments to effectively represent the asymmetric nature of the jet structures. For the wind model, \(A=(3.0\times 10^{35}{\rm cm}^{-1})A_{*}\), where \(A_{*}\) is a dimensionless free parameter depending on the wind environment (Zhang 2018). \(\theta_{\rm j}\) stands for the half-opening angle of the jet. Here we ignore the lateral spreading of the jet, so that \(\theta_{\rm j}\) is treated as a constant. Taking the radiative cooling effect into consideration, the evolution of the jet's bulk-motion Lorentz factor \(\gamma\) with respect to \(m\) can be written as \[\frac{d\gamma}{dm}=-\frac{\gamma^{2}-1}{M_{\rm ej}+\varepsilon m+2(1-\varepsilon)\gamma m}, \tag{3}\] where \(M_{\rm ej}=E_{0}/\left(\gamma_{0}c^{2}\right)\) is the ejecta mass and \(E_{0}\) is the initial kinetic energy of an element. The radiative efficiency \(\varepsilon\) is defined as the fraction of the shock-generated internal energy (in the jet's comoving frame) that is radiated, which can be expressed as (Dai & Lu 1999) \[\varepsilon=\epsilon_{e}\frac{t_{\rm syn}^{\prime-1}}{t_{\rm syn}^{\prime-1}+t_{\rm ex}^{\prime-1}}, \tag{4}\] where \(t_{\rm syn}^{\prime}=6\pi m_{e}c/\left(\sigma_{\rm T}B^{\prime 2}\gamma_{e,\rm min}\right)\) is the synchrotron cooling timescale and \(t_{\rm ex}^{\prime}=R/(\gamma c)\) is the expansion timescale in the jet's comoving frame. \(m_{e}\) represents the electron mass, and \(\sigma_{\rm T}\) represents the cross section for Thomson scattering. Assuming that a fraction \(\epsilon_{B}\) of the total shock-generated internal energy goes into the random magnetic field, the magnetic energy density in the jet's comoving frame can be estimated as \[\frac{B^{\prime 2}}{8\pi}=\epsilon_{B}^{2}\frac{\hat{\gamma}\gamma+1}{\hat{\gamma}-1}\left(\gamma-1\right)nm_{p}c^{2}, \tag{5}\] where \(\hat{\gamma}=\left(4\gamma+1\right)/(3\gamma)\) is the adiabatic index (Dai & Lu 1999). Assuming that a fraction \(\epsilon_{e}\) of the total shock-generated internal energy goes into the electrons and that the accelerated electrons follow a power-law distribution with index \(p\) (\(dN_{e}/d\gamma_{e}\propto\gamma_{e}^{-p}\)), the minimum Lorentz factor for the random motion of electrons in the jet's comoving frame can be derived as (Huang et al. 2000) \[\gamma_{e,\rm min}=\epsilon_{e}\left(\gamma-1\right)\frac{m_{p}\left(p-2\right)}{m_{e}\left(p-1\right)}+1. \tag{6}\] For synchrotron radiation, the observed radiation power and the characteristic frequency of an electron with Lorentz factor \(\gamma_{e}\) are given by (Sari et al. 1998) \[P\left(\gamma_{e}\right)=\frac{4}{3}\sigma_{T}c\gamma^{2}\gamma_{e}^{2}\frac{B^{2}}{8\pi}, \tag{7}\] \[\nu\left(\gamma_{e}\right)=\gamma\gamma_{e}^{2}\frac{q_{e}B}{2\pi m_{e}c}, \tag{8}\] where \(q_{e}\) is the charge of an electron. The peak power occurs at \(\nu(\gamma_{e})\), where it has the approximate value \[P_{\nu,\rm max}\simeq\frac{P\left(\gamma_{e}\right)}{\nu\left(\gamma_{e}\right)}=\frac{m_{e}c^{2}\sigma_{T}}{3q_{e}}\gamma B. \tag{9}\] Usually, a characteristic Lorentz factor \(\gamma_{c}\) is defined as (Sari et al.
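A minimal numerical sketch of the element dynamics, Eqs. (1)-(3), is given below; it integrates \(R(T)\), \(m(T)\) and \(\gamma(T)\) with scipy for a single element in a uniform medium (\(k=0\)), taking \(\varepsilon=0\) (fully adiabatic) for simplicity. All parameter values are illustrative choices, not the paper's fits.

```python
import numpy as np
from scipy.integrate import solve_ivp

c, m_p = 2.998e10, 1.673e-24                     # speed of light, proton mass (cgs)
gamma0, E_iso = 100.0, 1e52                      # initial Lorentz factor, isotropic energy (erg)
n0, theta_j, N = 0.1, 0.2, 2                     # ISM density (cm^-3), half-opening angle, elements
E0 = E_iso * (1.0 - np.cos(theta_j)) / (2 * N)   # true energy carried by one element
M_ej = E0 / (gamma0 * c**2)                      # ejecta mass of the element
eps = 0.0                                        # radiative efficiency (adiabatic limit)

def rhs(T, state):
    R, m, g = state
    beta = np.sqrt(1.0 - 1.0 / g**2)
    dRdT = beta * c * g * (g + np.sqrt(g**2 - 1.0))                       # Eq. (1)
    dmdR = 2.0 * np.pi * R**2 * (1.0 - np.cos(theta_j)) * n0 * m_p / N    # Eq. (2), k = 0
    dgdm = -(g**2 - 1.0) / (M_ej + eps * m + 2.0 * (1.0 - eps) * g * m)   # Eq. (3)
    return [dRdT, dmdR * dRdT, dgdm * dmdR * dRdT]

T0 = 10.0
sol = solve_ivp(rhs, [T0, 1e7], [2.0 * gamma0**2 * c * T0, 0.0, gamma0],
                method="LSODA", dense_output=True, rtol=1e-8)
print("Lorentz factor at 1 day:", sol.sol(86400.0)[2])
```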
1998) \[\gamma_{c}=\frac{6\pi m_{e}c}{\sigma_{T}\gamma B^{2}T}=\frac{3m_{e}}{16\epsilon_{B}\sigma_{T}m_{p}c}\frac{1}{T\gamma^{3}n}, \tag{10}\] beyond which the electrons might have significantly cooled. The electrons' Lorentz factors \(\gamma_{e,\rm min}\) and \(\gamma_{c}\) define two characteristic emission frequencies \(\nu_{m}\) and \(\nu_{c}\) in the synchrotron spectrum. For the fast cooling regime (\(\nu_{c}<\nu_{m}\)), the self-absorption frequency \(\nu_{a}\) is \[\nu_{a}=\begin{cases}\left[\frac{c_{1}enR}{(3-k)B\gamma_{c}^{5}}\right]^{3/5}\nu_{c}&\nu_{a}<\nu_{c},\\ \left[\frac{c_{1}enR}{(3-k)B\gamma_{c}^{5}}\right]^{1/3}\nu_{c}&\nu_{c}<\nu_{a}<\nu_{m},\\ \left[\frac{c_{2}enR}{(3-k)B\gamma_{c}^{5}}\right]^{2/(p+5)}\left(\frac{\nu_{m}}{\nu_{c}}\right)^{(p-1)/(p+5)}\nu_{c}&\nu_{m}<\nu_{a},\end{cases} \tag{11}\] where \(c_{1}\) and \(c_{2}\) are coefficients dependent on \(p\) (Wu et al. 2003). The observed flux density \(F_{\nu}\) is divided into the following three situations: (1) \(\nu_{a}<\nu_{c}<\nu_{m}\): \[F_{\nu}=F_{\nu,\rm max}\begin{cases}\left(\frac{\nu}{\nu_{a}}\right)^{2}\left(\frac{\nu_{a}}{\nu_{c}}\right)^{1/3}&\nu<\nu_{a},\\ \left(\frac{\nu}{\nu_{c}}\right)^{1/3}&\nu_{a}<\nu<\nu_{c},\\ \left(\frac{\nu}{\nu_{c}}\right)^{-1/2}&\nu_{c}<\nu<\nu_{m},\\ \left(\frac{\nu_{m}}{\nu_{c}}\right)^{-1/2}\left(\frac{\nu}{\nu_{m}}\right)^{-p/2}&\nu_{m}<\nu.\end{cases} \tag{12}\] (2) \(\nu_{c}<\nu_{a}<\nu_{m}\): \[F_{\nu}=F_{\nu,\rm max}\begin{cases}\left(\frac{\nu}{\nu_{c}}\right)^{2}\left(\frac{\nu_{c}}{\nu_{a}}\right)^{3}&\nu<\nu_{c},\\ \left(\frac{\nu}{\nu_{a}}\right)^{5/2}\left(\frac{\nu_{a}}{\nu_{c}}\right)^{-1/2}&\nu_{c}<\nu<\nu_{a},\\ \left(\frac{\nu}{\nu_{c}}\right)^{-1/2}&\nu_{a}<\nu<\nu_{m},\\ \left(\frac{\nu_{m}}{\nu_{c}}\right)^{-1/2}\left(\frac{\nu}{\nu_{m}}\right)^{-p/2}&\nu_{m}<\nu.\end{cases} \tag{13}\] (3) \(\nu_{c}<\nu_{m}<\nu_{a}\): \[F_{\nu}=F_{\nu,\rm max}\begin{cases}\left(\frac{\nu}{\nu_{c}}\right)^{2}\left(\frac{\nu_{c}}{\nu_{a}}\right)^{5/2}\left(\frac{\nu_{a}}{\nu_{m}}\right)^{-p/2}\left(\frac{\nu_{m}}{\nu_{c}}\right)^{-1/2}&\nu<\nu_{c},\\ \left(\frac{\nu}{\nu_{a}}\right)^{5/2}\left(\frac{\nu_{a}}{\nu_{m}}\right)^{-p/2}\left(\frac{\nu_{m}}{\nu_{c}}\right)^{-1/2}&\nu_{c}<\nu<\nu_{a},\\ \left(\frac{\nu}{\nu_{m}}\right)^{-p/2}\left(\frac{\nu_{m}}{\nu_{c}}\right)^{-1/2}&\nu_{a}<\nu.\end{cases} \tag{14}\] where \(F_{\nu,\rm max}\) represents the peak flux density, which can be estimated as \[F_{\nu,\rm max}=\frac{N_{e}P_{\nu,\rm max}}{4\pi D_{L}^{2}}, \tag{15}\] where \(N_{e}\) is the total number of swept-up electrons in the post-shock fluid (assuming a spherical geometry) and \(D_{L}\) is the luminosity distance from the source to the observer.
In the slow cooling regime (\(\nu_{c}>\nu_{m}\)), the self-absorption frequency \(\nu_{a}\) is: \[\nu_{a}=\begin{cases}\left[\frac{c_{1}enR}{(3-k)B\gamma_{m}^{5}}\right]^{3/5}\nu_{m}&\nu_{a}<\nu_{m},\\ \left[\frac{c_{2}enR}{(3-k)B\gamma_{m}^{5}}\right]^{2/(p+4)}\nu_{m}&\nu_{m}<\nu_{a}<\nu_{c},\\ \left[\frac{c_{2}enR}{(3-k)B\gamma_{m}^{5}}\right]^{2/(p+5)}\left(\frac{\nu_{c}}{\nu_{m}}\right)^{1/(p+5)}\nu_{m}&\nu_{c}<\nu_{a}.\end{cases} \tag{16}\] The flux in the slow cooling regime is: (1) \(\nu_{a}<\nu_{m}<\nu_{c}\): \[F_{\nu}=F_{\nu,\rm max}\begin{cases}\left(\frac{\nu}{\nu_{a}}\right)^{2}\left(\frac{\nu_{a}}{\nu_{m}}\right)^{1/3}&\nu<\nu_{a},\\ \left(\frac{\nu}{\nu_{m}}\right)^{1/3}&\nu_{a}<\nu<\nu_{m},\\ \left(\frac{\nu}{\nu_{m}}\right)^{-(p-1)/2}&\nu_{m}<\nu<\nu_{c},\\ \left(\frac{\nu_{c}}{\nu_{m}}\right)^{-(p-1)/2}\left(\frac{\nu}{\nu_{c}}\right)^{-p/2}&\nu_{c}<\nu.\end{cases} \tag{17}\] (2) \(\nu_{m}<\nu_{a}<\nu_{c}\): \[F_{\nu}=F_{\nu,\rm max}\begin{cases}\left(\frac{\nu}{\nu_{m}}\right)^{2}\left(\frac{\nu_{m}}{\nu_{a}}\right)^{(p+4)/2}&\nu<\nu_{m},\\ \left(\frac{\nu}{\nu_{a}}\right)^{5/2}\left(\frac{\nu_{a}}{\nu_{m}}\right)^{-(p-1)/2}&\nu_{m}<\nu<\nu_{a},\\ \left(\frac{\nu}{\nu_{m}}\right)^{-(p-1)/2}&\nu_{a}<\nu<\nu_{c},\\ \left(\frac{\nu_{c}}{\nu_{m}}\right)^{-(p-1)/2}\left(\frac{\nu}{\nu_{c}}\right)^{-p/2}&\nu_{c}<\nu.\end{cases} \tag{18}\] (3) \(\nu_{m}<\nu_{c}<\nu_{a}\): \[F_{\nu}=F_{\nu,\rm max}\begin{cases}\left(\frac{\nu}{\nu_{m}}\right)^{2}\left(\frac{\nu_{m}}{\nu_{a}}\right)^{(p+4)/2}\left(\frac{\nu_{a}}{\nu_{c}}\right)^{-1/2}&\nu<\nu_{m},\\ \left(\frac{\nu}{\nu_{a}}\right)^{5/2}\left(\frac{\nu_{a}}{\nu_{c}}\right)^{-p/2}\left(\frac{\nu_{c}}{\nu_{m}}\right)^{-(p-1)/2}&\nu_{m}<\nu<\nu_{a},\\ \left(\frac{\nu}{\nu_{c}}\right)^{-p/2}\left(\frac{\nu_{c}}{\nu_{m}}\right)^{-(p-1)/2}&\nu_{a}<\nu.\end{cases} \tag{19}\] For an off-axis observer, the observed flux needs to be corrected by (Granot et al., 2002) Footnote 1: For more precise results, it is better to calculate the flux for a given observation angle throughout the flux calculation (e.g., Lamb et al., 2018; Fraija et al., 2020; Ryan et al., 2020; Nedora et al., 2023). \[F_{\nu}=a^{3}F_{\nu/a}\left(at\right), \tag{20}\] with the factor \[a=\frac{1-\beta}{1-\beta\cos\theta_{\rm obs}}\approx\frac{1}{1+\gamma^{2}\theta_{\rm obs}^{2}}, \tag{21}\] where \(\theta_{\rm obs}\) is the angle between the LOS and the jet element. In this work, if the LOS passes through the \(i\)-th element of the jet, we take \(\theta_{\rm obs,i}=0\); otherwise, we take the angle between the LOS and the nearest edge of the \(i\)-th element as \(\theta_{\rm obs,i}\). Overall, the total radiation flux can be calculated as \[F_{\nu}\left(t\right)=\sum_{i=1}^{N}a_{i}^{3}F_{\nu/a_{i}}\left(a_{i}t\right). \tag{22}\] ### Semi-analytical Formalism In addition to the numerical approach, Granot (2005) presented a semi-analytical technique for characterizing the afterglow of GRBs with a structured jet, although that work only considered the dependence on the jet's polar angle (\(\theta\)). Considering that semi-analytical results can help us better understand the properties of the light curves, such as the peak time and the rising and decaying slopes, we also provide a semi-analytical formalism for the non-axisymmetric jet model.
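Equations (20)-(22) translate directly into code. The sketch below is our illustration; it maps a set of on-axis light-curve functions to an off-axis observer and superposes the \(N\) elements. For simplicity it uses a fixed Lorentz factor per element in Eq. (21), whereas in the full calculation \(\gamma\) evolves with time according to the dynamics above.

```python
import numpy as np

def doppler_a(gamma, theta_obs):
    """De-boosting factor a of Eq. (21)."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return (1.0 - beta) / (1.0 - beta * np.cos(theta_obs))

def total_flux(F_on_list, gamma_list, theta_obs_list, nu, t):
    """Superpose N elements via Eqs. (20) and (22).

    F_on_list[i](nu, t) is the on-axis flux density of the i-th element,
    and theta_obs_list[i] is its effective viewing angle theta_obs,i."""
    total = 0.0
    for F_on, g, th in zip(F_on_list, gamma_list, theta_obs_list):
        a = doppler_a(g, th)
        total += a**3 * F_on(nu / a, a * t)   # Eq. (20)
    return total
```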
To model the afterglow of a relativistic jet structured with azimuthal variation, akin to the methodology described in Section 2.1, we divide the entire jet into \(N\) segments along the azimuthal angle and analyze them independently using the methodology outlined in Granot (2005). For each independent element, the evolution of the bulk-motion Lorentz factor \(\gamma\) can be approximately expressed as a function of the ejecta's radius \(R\), which reads as (Blandford & McKee, 1976): \[\gamma\left(R\right)\approx\begin{cases}\gamma_{0}&R<R_{\rm dec},\\ \gamma_{0}\left(R/R_{\rm dec}\right)^{-(3-k)/2}&R>R_{\rm dec},\end{cases} \tag{23}\] where \(\gamma_{0}\) is the initial bulk-motion Lorentz factor and \(R_{\rm dec}\) is the deceleration radius. Before the deceleration time (\(T<T_{\rm dec}\)), the observed flux (for an off-axis observer) can be calculated as \[F_{\nu}\left(T\right)=\frac{2\gamma_{0}L^{\prime}_{\nu/2\gamma_{0}}\left[R_{L}\left(T\right)\right]}{4\pi D_{L}^{2}}\int_{0}^{1}dx\,x^{1+\alpha-\beta}\frac{\Delta\phi\left(x\right)}{2\pi}=\frac{2\gamma_{0}L^{\prime}_{\nu/2\gamma_{0}}\left(R_{\rm dec}\right)}{4\pi D_{L}^{2}}\left(\frac{T}{T_{\rm dec}}\right)^{\alpha}\int_{0}^{1}dx\,x^{1+\alpha-\beta}\frac{\Delta\phi\left(x\right)}{2\pi}, \tag{24}\] while after that (\(T>T_{\rm dec}\)) the flux reads as \[F_{\nu}\left(T\right)=\frac{2\gamma_{0}L^{\prime}_{\nu/2\gamma_{0}}\left(R_{\rm dec}\right)}{4\pi D_{L}^{2}}\left\{\left(\frac{T}{T_{\rm dec}}\right)^{\beta-2}\int_{0}^{1}dy\,y^{1+\alpha-\beta}\frac{\Delta\phi\left(y\right)}{2\pi}+x_{\rm dec}^{-\alpha+(1-\beta)\left(3-k\right)/2}\int_{x_{\rm dec}}^{1}dx\,x^{\alpha-2+(3-\beta)\left(5-k\right)/2}\left[\frac{1+3\left(3-k\right)x^{4-k}}{4-k}\right]^{\beta-2}\frac{\Delta\phi\left(x\right)}{2\pi}\right\}, \tag{25}\] where the power-law indices \(\alpha\) and \(\beta\) change between different power-law segments (PLSs) of the spectrum, which are listed in Table 1 (Granot & Sari, 2002). The integral variables are defined as \(x=R/R_{L}\) and \(y=R/R_{\rm dec}\), thus \(x_{\rm dec}=R_{\rm dec}/R_{L}\). \(\gamma_{L}\) and \(R_{L}\left(T\right)\) are the Lorentz factor and radius at which a photon is emitted that reaches the observer at time \(T\) in the observer's frame. We have \[R_{L}\left(T\right)=\frac{2cT}{1+z}\begin{cases}\gamma_{0}^{2}&T\leq T_{\rm dec},\\ \frac{(4-k)\gamma_{L}^{2}}{1+(4-k)x_{\rm dec}^{4-k}}&T>T_{\rm dec},\end{cases} \tag{26}\] where \(z\) is the redshift of the source. \(L^{\prime}_{\nu^{\prime}}\) represents the specific luminosity of the afterglow in the jet's comoving frame, where \(\nu^{\prime}\approx\nu/2\gamma_{0}\), while \(\nu\) is defined in the observer's frame. The coefficient in front of the time term in Equations 24 and 25 is approximately equal to the flux density at the deceleration time, \(F_{\nu}\left(T_{\rm dec}\right)\), where the deceleration time can be calculated as \[T_{\rm dec}=\begin{cases}90.5\left(1+z\right)n^{-1/3}E_{\rm iso,52}^{1/3}\gamma_{0,2}^{-8/3}\ {\rm s}&k=0,\\ 0.3\left(1+z\right)A_{*}^{-1}E_{\rm iso,52}\gamma_{0,2}^{-4}\ {\rm s}&k=2.\end{cases} \tag{27}\] After computing the on-axis radiation flux of each element individually, the total radiation flux in a specified LOS direction can be obtained through Eqs. (20)-(22), analogous to the numerical approach.
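For orientation, Eq. (27) is trivial to evaluate; the helper below assumes the deceleration time is in seconds, with \(E_{\rm iso,52}=E_{\rm iso}/10^{52}\,{\rm erg}\) and \(\gamma_{0,2}=\gamma_{0}/100\) (our reading of the scaled notation).

```python
def T_dec(E_iso_52, gamma0_2, z=0.0, n=0.1, A_star=1.0, k=0):
    """Deceleration time of Eq. (27), assumed to be in seconds."""
    if k == 0:   # uniform interstellar medium
        return 90.5 * (1 + z) * n**(-1.0/3) * E_iso_52**(1.0/3) * gamma0_2**(-8.0/3)
    return 0.3 * (1 + z) * E_iso_52 * gamma0_2**(-4.0) / A_star   # k = 2, stellar wind

print(T_dec(1.0, 1.0, n=0.1))   # ~195 s for the fiducial element used above
```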
### Two-element jet with interface in the plane containing the LOS The simplest non-axisymmetric jet structure is characterized by a sharp interface between two distinct elements, resulting from variations of the physical parameters \(\gamma_{0}\) and \(E_{\rm iso}\) at different azimuths \(\phi\). We assume the jet is uniform in the \(\theta\) direction and has a well-defined interface. In order to explore more clearly the effects of the physical parameters of the two elements on the light curve, we first consider the special case in which the interface plane contains the LOS. On the \(x-y\) plane, the projection of the interface is along the \(x\)-axis. This structure can be mathematically described as \[\gamma_{0}=\begin{cases}\gamma_{01}&-\pi<\phi<0,\\ \gamma_{02}&\text{others},\end{cases} \tag{28}\] and \[E_{\rm iso}=\begin{cases}E_{\rm iso,1}&-\pi<\phi<0,\\ E_{\rm iso,2}&\text{others}.\end{cases} \tag{29}\] Figure 2 shows the cross section of the two-element jet discussed in this paper. The light curves for different bulk-motion Lorentz factors of the jet are shown in Figure 3. Specifically, we fix the bulk-motion Lorentz factor of one element at \(\gamma_{01}=100\), while systematically increasing the value of the other element from 20 to 100. In the plot, we set \(E_{\rm iso,1}=10^{50}\)ergs and \(E_{\rm iso,2}=10^{51}\)ergs, a jet half-opening angle of \(\theta_{\rm j}=0.2\), an electron power-law spectral index of \(p=2.2\), an interstellar-medium particle number density of \(n=0.1\)cm\({}^{-3}\), and microphysical shock parameters, i.e., the electron and magnetic energy fraction parameters, \(\epsilon_{e}=0.1\) and \(\epsilon_{B}=0.001\). As an example, we consider an observation frequency of \(\nu_{\rm obs}=8.22\times 10^{14}\) Hz. The results indicate that the asymmetry of the Lorentz factors in the jet can significantly affect the shape of the afterglow light curve. For an on-axis observer, when the asymmetry reaches a certain level, the light curve exhibits two distinct peaks. As the asymmetry becomes stronger, the time interval between the two peaks gradually increases. For an off-axis observer, the asymmetry of the Lorentz factors in the jet usually results in wiggling during the rising phase, without a clear double-peak structure. Figure 4 illustrates the impact of the isotropic energy \(E_{\rm iso}\) of each element on the afterglow light curve. Similarly, we fix \(E_{\rm iso,1}=10^{50}\)ergs for one element and vary the other from \(10^{50}\)ergs to \(10^{52}\)ergs, and we fix the initial Lorentz factors of the two elements at \(\gamma_{01}=100\) and \(\gamma_{02}=50\), with all other parameters identical to those in Figure 3. The results indicate that the asymmetry of the isotropic energy in the jet can also alter the shape of the afterglow light curve. For a given asymmetry in the Lorentz factor of the jet, the larger the energy of the slower portion, the later and brighter the second peak appears in the light curve. For an off-axis observer, although the double-peak structure disappears, the wiggling of the rising segment of the light curve becomes more pronounced with increasing energy of the slower element. Figure 2: The schematic diagram depicts the cross-sectional view of a two-element jet with a half-opening angle \(\theta_{\rm j}=0.2\). The positions of the LOS, represented by \(\theta_{\rm obs}\), are indicated on the \(x\)-axis to illustrate the relative arrangements. Figure 3: The afterglow light curves for a two-element jet with the bulk-motion Lorentz factor of one element fixed at \(\gamma_{01}=100\), while the other varies from 20 to 100. Different colors represent different values of \(\gamma_{02}\); solid and dashed lines distinguish on-axis and off-axis observations. Figure 4: The afterglow light curves for a two-element jet with the isotropic energy of one element fixed at \(E_{\rm iso,1}=10^{50}\)ergs and the other varying from \(10^{50}\)ergs to \(10^{52}\)ergs. Different colors represent different values of \(E_{\rm iso,2}\); solid and dashed lines distinguish on-axis and off-axis observations.
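The step-function profile of Eqs. (28)-(29), together with its generalization by an interface azimuth \(\Phi\) introduced in the next subsection (Eqs. 30-31), can be encoded in a few lines. The sketch below is our own illustration; the default parameter values are taken from the cases above, and the function name is hypothetical.

```python
import numpy as np

def element_params(phi, Phi=0.0, gamma=(100.0, 50.0), E_iso=(1e50, 1e51)):
    """Two-element step profile, Eqs. (28)-(31): element 1 occupies
    -pi + Phi < phi < Phi, element 2 the remaining azimuth range."""
    psi = np.mod(np.asarray(phi) - Phi + np.pi, 2.0 * np.pi) - np.pi  # wrapped azimuth
    in_first = psi < 0.0
    return (np.where(in_first, gamma[0], gamma[1]),
            np.where(in_first, E_iso[0], E_iso[1]))

g0, E = element_params([-1.0, 1.0], Phi=np.pi / 4)   # sample both elements
```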
### Two-element jet with interface in a plane intersecting the LOS In most cases, the interface plane between the two elements of the jet will not contain the LOS. In such cases, we denote the azimuth of the interface on the jet's spherical coordinate system relative to the LOS as \(\Phi\) (see Figure 5). Equations 28 and 29 can then be generalized as \[\gamma_{0}=\begin{cases}\gamma_{01}&-\pi+\Phi<\phi<\Phi,\\ \gamma_{02}&\text{others},\end{cases} \tag{30}\] and \[E_{\rm iso}=\begin{cases}E_{\rm iso,1}&-\pi+\Phi<\phi<\Phi,\\ E_{\rm iso,2}&\text{others}.\end{cases} \tag{31}\] Figure 5 shows the schematic picture for the cases with \(\Phi\neq 0\). In this scenario, the shape of the light curve depends not only on the physical parameters of the two elements, but also on the values of \(\Phi\) and \(\theta_{\rm obs}\). Figure 6 shows the light curves of afterglows with varying \(\Phi\) and \(\theta_{\rm obs}\). We adopt a fixed value of \(\theta_{\rm j}=0.1\), whilst allowing \(\theta_{\rm obs}\) to vary between 0 and 0.3, and we compare the cases \(\Phi=\pi/4\) and \(\Phi=\pi/2\). We set the initial Lorentz factors to \(\gamma_{01}=100\) and \(\gamma_{02}=20\), and the isotropic energies to \(E_{\rm iso,1}=10^{50}\)ergs and \(E_{\rm iso,2}=10^{52}\)ergs. Other parameters are identical to those in Figure 3. In Figures 6(a) and 6(b), we assume the LOS is more inclined towards the element with the larger Lorentz factor. In this case, when \(\theta_{\rm obs}<\theta_{\rm j}\), the light curve contains two distinct peaks, with the first peak being less affected by changes in \(\Phi\) and \(\theta_{\rm obs}\); the second peak appears delayed and weakened as \(\Phi\) and \(\theta_{\rm obs}\) increase. On the other hand, when \(\theta_{\rm obs}>\theta_{\rm j}\), both peaks appear delayed and weakened as \(\theta_{\rm obs}\) increases. Moreover, the variation of \(\Phi\) can further affect the second peak. Figures 6(c) and 6(d) show the light curves when the LOS leans towards the element with the lower Lorentz factor. In this case, when \(\theta_{\rm obs}<\theta_{\rm j}\), the light curve often contains only one peak, which appears relatively late, and the original first peak becomes a small bump in the rising stage, which gradually disappears as \(\Phi\) increases further. On the other hand, when \(\theta_{\rm obs}>\theta_{\rm j}\), the light curve is dominated by the contribution from the element with the lower Lorentz factor. The other element may or may not produce a bump in the rising stage, depending on the specific energy and velocity ratios between the two elements. Figure 5: The diagram of a two-element jet with the interface at an arbitrary \(\Phi\). Figure 6: The light curves of afterglows with varying \(\Phi\) and \(\theta_{\rm obs}\). Different colors represent different values of \(\theta_{\rm obs}\), and different line styles represent different values of \(\Phi\). ### More than 2 elements in the jet For more complex asymmetric structures, the jet may be divided into multiple elements with \(N>2\). These individual elements exhibit differences in both \(\gamma_{0}\) and \(E_{\rm iso}\), which consequently leads to time-varying afterglow radiation observable by the observer. In principle, each element can produce a distinct peak, with the timing and magnitude of the peak dependent on the energy, velocity, and LOS of the corresponding element. The superposition of multiple radiation components can result in various intriguing types of light curves:
* when the LOS is aligned with the axis of the jet and there are significant differences in the physical parameters of the elements, the light curve may exhibit multiple distinct peaks; * when the LOS is aligned with the axis of the jet and there are significant differences in the velocities of the elements while the energy differences remain small, the light curve may exhibit a plateau; * when the LOS is inclined towards the faster-moving elements, the peak of the light curve appears earlier, and in the later stage a re-brightening may or may not appear, depending on the energy of the slower-moving elements; * when the LOS is inclined towards the slower-moving elements, the peak of the light curve appears later, and in the early rising phase some fluctuations may or may not be encountered, depending on the energy of the fast-moving elements; * when the viewing angle is significantly larger than the jet opening angle, the peak of the light curve appears later, and in the rising and falling phases some fluctuations may or may not be encountered, primarily determined by the elements closer to the LOS. As an example, we study a more complicated structure with four elements. The schematic picture is shown in Figure 7. The four elements are equally distributed over different azimuth ranges, with different \(\gamma_{0}\) and \(E_{\rm iso}\) values. The structure is mathematically defined as \[\gamma_{0}=\begin{cases}\gamma_{01}&-\pi/2<\phi<0,\\ \gamma_{02}&0<\phi<\pi/2,\\ \gamma_{03}&\pi/2<\phi<\pi,\\ \gamma_{04}&-\pi<\phi<-\pi/2,\end{cases} \tag{32}\] and \[E_{\rm iso}=\begin{cases}E_{\rm iso1}&-\pi/2<\phi<0,\\ E_{\rm iso2}&0<\phi<\pi/2,\\ E_{\rm iso3}&\pi/2<\phi<\pi,\\ E_{\rm iso4}&-\pi<\phi<-\pi/2.\end{cases} \tag{33}\] Figure 8 illustrates the afterglows from a four-element jet. For Case I (see Figures 8(a) and 8(b)), we set the initial Lorentz factors of the four elements to \(\gamma_{01}=20\), \(\gamma_{02}=50\), \(\gamma_{03}=75\), and \(\gamma_{04}=110\), and their isotropic energies to \(E_{\rm iso,1}=10^{53}\)ergs, \(E_{\rm iso,2}=10^{52}\)ergs, \(E_{\rm iso,3}=10^{51.3}\)ergs and \(E_{\rm iso,4}=10^{50}\)ergs, with \(\theta_{\rm j}=0.1\). Other parameters are consistent with those in Section 3.1. We have analyzed the situation for five different LOS: 1) when \(\theta_{\rm obs}=0\), the jet exhibits a characteristic light curve with four distinct peaks; 2) when \(\theta_{\rm obs}=0.09\), \(\phi=\pi\) (i.e. the LOS is within the jet and more inclined towards elements with higher Lorentz factors), the light curve exhibits two prominent main peaks, followed by a weaker re-brightening period with noticeable fluctuations in its slope; 3) when \(\theta_{\rm obs}=0.15\), \(\phi=\pi\) (i.e.
the LOS is outside the jet and more inclined towards elements with higher Lorentz factors), the light curve reveals two clear peaks, both exhibiting changes in slope during the rising phase; 4) when \(\theta_{\rm obs}=0.09\), \(\phi=0\) (i.e. the LOS is inside the jet and more inclined towards elements with lower Lorentz factors), the light curve displays two peaks at relatively late times; 5) when \(\theta_{\rm obs}=0.15\), \(\phi=0\) (i.e. the LOS is outside the jet and more inclined towards elements with lower Lorentz factors), there is only one peak, at late times, with a wiggling feature during the rising phase. For Case II (see Figures 8(c) and 8(d)), we selected four elements with Lorentz factors \(\gamma_{01}=20\), \(\gamma_{02}=27\), \(\gamma_{03}=40\), and \(\gamma_{04}=70\), and isotropic energies \(E_{\rm iso,1}=10^{51.8}\)ergs, \(E_{\rm iso,2}=10^{51.5}\)ergs, \(E_{\rm iso,3}=10^{51.2}\)ergs, and \(E_{\rm iso,4}=10^{50.9}\)ergs. The differences in Lorentz factor and isotropic energy between these elements are not significant. Compared to Case I, when \(\theta_{\rm obs}=0\), the light curve does not exhibit four clear peaks, but rather a flattened shape near the peak. When \(\theta_{\rm obs}=0.09\) and \(\theta_{\rm obs}=0.15\), the impact of the observation angle on the light curve resembles that of Case I. Nonetheless, due to the marginal disparity in the physical parameters of the components, the perturbations in the light curve are comparatively attenuated; consequently, certain peaks are reduced to minor fluctuations or do not appear at all. ## 4 Conclusions and Discussions In this study, we conduct a first-step analysis of the potential characteristics of gamma-ray burst afterglows within the framework of non-axisymmetric structured jets, where the physical parameters vary along the azimuthal direction. To accomplish this, we simplify the profile of the asymmetric jet as a step function of the azimuth \(\phi\), dividing the entire jet into \(N\) individual elements. Each element is considered to be uniform and independent. The total light curve of the afterglow, driven by the entire jet, is approximately estimated by superimposing the light curves associated with the individual elements. By considering specific cases with \(N=2\) and \(N=4\), we find that the velocity, energy, and line-of-sight direction of each element can significantly impact the behavior of the overall light curve. The radiative contributions from multiple elements may result in the appearance of multiple distinct peaks or plateaus in the light curve; in other cases there are only a few peaks, but there are clear signs of fluctuations in the rising and declining segments of each peak. It is worth noting that if some simple variations appear in the GRB afterglow light curve, such as a single re-brightening feature, they could also potentially be generated by an axisymmetric structured jet (e.g., the two-component jet model; Huang et al., 2004; Peng et al., 2005; Wu et al., 2005; Beniamini et al., 2020). However, if the light curve shows more intricate patterns, such as multiple peaks or plateaus, an explanation relying on axisymmetric structured jets becomes challenging, since it is generally difficult for axisymmetric structures to exhibit a variety of discrete energy/velocity distribution patterns. The accumulated dataset of GRB optical afterglows currently exhibits a significant number of sources displaying indications of multiple-peak and plateau configurations (Li et al., 2012).
In the future, detailed fitting of our model to these sources holds the potential to enhance our comprehension of the structural characteristics of GRB jets. On the other hand, conducting additional numerical simulations, similar to Lamb et al. (2022), that incorporate the effects of jet precession or non-uniform jet dissipation will contribute to verifying the physical origin of non-axisymmetric jets. ## Acknowledgements This work is supported by the National Natural Science Foundation of China (Project 12021003) and the National SKA Program of China (2022SKA0130100). SA acknowledges the China Postdoctoral Science Foundation (2023M732713). ## Data Availability No new data were generated or analysed in support of this research.
2306.17508
Research on Virus Cyberattack-Defense Based on Electromagnetic Radiation
Information technology and telecommunications have rapidly permeated various domains, resulting in a significant influx of data traversing the networks between computers. Consequently, research on cyberattacks against computer systems has become crucial for many organizations. Accordingly, recent cybersecurity incidents have underscored the rapidly evolving nature of future threats and attack methods, particularly those involving the wireless injection of computer viruses. This paper aims to study and demonstrate the feasibility of remote computer virus radiation injection. To achieve this objective, digital signal processing (DSP) plays a vital role. By studying the principles and models of radiation attacks and computer virus propagation, the modulation of the binary data stream of a simulated virus onto a terahertz radar carrier signal by Phase-Shift Keying (PSK) is simulated, enabling the implementation of an attack through the "field to line" coupling of electromagnetic signals. Finally, defenses and countermeasures based on signal recognition are discussed for such attacks. Additionally, the idea of establishing a virus library of cyberattack signals and employing artificial intelligence (AI) algorithms for automated intrusion detection is proposed as a means to achieve cybersecurity situational awareness.
Ruochen Wu
2023-06-30T09:39:47Z
http://arxiv.org/abs/2306.17508v1
# Research on Virus Cyberattack-Defense Based on Electromagnetic Radiation ###### Abstract Information technology and telecommunications have rapidly permeated various domains, resulting in a significant influx of data traversing the networks between computers. Consequently, research on cyberattacks against computer systems has become crucial for many organizations. Accordingly, recent cybersecurity incidents have underscored the rapidly evolving nature of future threats and attack methods, particularly those involving the wireless injection of computer viruses. This paper aims to study and demonstrate the feasibility of remote computer virus radiation injection. To achieve this objective, digital signal processing (DSP) plays a vital role. By studying the principles and models of radiation attacks and computer virus propagation, the modulation of the binary data stream of a simulated virus onto a terahertz radar carrier signal by Phase-Shift Keying (PSK) is simulated, enabling the implementation of an attack through the "field to line" coupling of electromagnetic signals. Finally, defenses and countermeasures based on signal recognition are discussed for such attacks. Additionally, the idea of establishing a virus library of cyberattack signals and employing artificial intelligence (AI) algorithms for automated intrusion detection is proposed as a means to achieve cybersecurity situational awareness. Cyberattack Cyberweapon Signal processing Radio signal Radiation injection ## 1 Introduction In the information age, cybersecurity is an increasingly serious issue. With the advancement of computer and communication technology, network attacks represented by computer viruses have become a serious threat to the majority of users. Computer viruses have posed a serious threat to computer security since the 1980s [12]. Currently, network devices such as computers are extensively utilized in various fields. As a result, information security and network attacks have become modern methods of warfare. Especially in high-tech warfare environments, cyber attack-defense combat capabilities will be the decisive factor in determining the outcome on the information battlefield [13, 14]. With the substantial increase in network transmission distance and speed, the effectiveness of electromagnetic attacks has also increased accordingly. Compared to wired attack and defense technologies, wireless attacks, such as electromagnetic radiation, are more covert and more challenging to prevent. The attack methods of computer viruses mainly include wireless injection [15], wired insertion [16], network intrusion [17], mail transmission [15], and node attack [2]. Among these methods, wireless injection is an important means of space information countermeasures. Additionally, in the field of information countermeasures, research on computer virus weapons is currently a trending topic. The goal is to irradiate the vulnerable areas of the enemy's information system with electromagnetic radiation carrying computer virus information, thereby injecting viruses that can paralyze the information systems [Kai, 2018]. Wireless injection involves converting a computer virus into a virus-code data stream, that is, a radio signal, which is then modulated onto electromagnetic waves and transmitted. This allows the virus code to be radiated into the enemy's radio receiver through a transmitter, exploiting any loopholes or weak links in their system to gain entry.
In the future, computer virus intrusion methods will focus on injecting viruses through electromagnetic waves transmitted by antennas and on satellite radiation injection. As early as the beginning of the 21st century, the U.S. Department of Defense began developing a virus cannon capable of injecting computer viruses over long distances [Xiongwei, 2005]. The virus is radiated to the enemy's host computers, sensors, and network bridges through electromagnetic waves, waiting for an opportunity to attack and destroy the enemy's weapon systems, command and control systems, and communication systems. This paper presents a comprehensive study of computer virus radiation attack-defense technology, which systematically explains the computer virus radiation attack mechanism, the radiation injection principle, and signal modulation and processing technology. The theoretical model demonstrates the feasibility of implementing virus electromagnetic radiation strikes, which lays the foundation for information security and cyber attack-defense countermeasures and related signal processing. In addition, the fundamental framework for modulating a binary data stream onto an electromagnetic signal is studied, and the operational mode that enables the coupling of the radiation signal to execute an attack is determined. ## 2 Theoretical Background and Methods Computer virus attack-defense is an important research topic in the field of cybersecurity. For the radiation injection method, the virus propagation model, the radiation injection framework, and the signal modulation principle are the foundations of information countermeasure missions. ### Mathematical model of computer virus propagation To initiate a computer virus infection, the virus must be injected into a specific computer within the local area network (LAN). Once injected, the virus can spread and infect multiple devices during the process of information communication, allowing it to carry out the attack. Researchers have established relevant mathematical models in order to study computer virus network attacks and their infection patterns [Xianghao and Yixin, 1999]. Suppose there is a computer group consisting of \(N\) computers that are threatened by virus \(V\), and there is data interaction within the network. \(N\) represents the maximum number of computers that the virus can infect. The set \(C=\{C_{1},C_{2},...,C_{N}\}\) represents these \(N\) computers. Define the binary relation \(D_{C}=\{\left\langle C_{i},C_{j}\right\rangle\mid C_{i},C_{j}\in C,i\neq j\}\) on \(C\) according to the data communication relationship between them, where there is data flow from \(C_{i}\) to \(C_{j}\); \(C_{i}\) is the source computer device and \(C_{j}\) is the target device. Define \(M_{V}=\{C_{i}\mid C_{i}\in C\}\) as the set of computers infected with the virus. It is stipulated that from the moment the virus enters the system (\(n=0\)), the infected computer is \(C_{1}\). The condition for a computer \(C_{j}\) (\(j\neq 1\)) to be infected with the virus is: \(\left(\left\langle C_{i},C_{j}\right\rangle\in D_{C}\right)\cap\left(C_{i}\in M_{V}\right)\cap\left(C_{j}\notin M_{V}\right)\). Let \(X_{n}\) represent the number of computers in \(C\) infected with the virus at the \(n\)th unit time; therefore, \(\left\{X_{n},n\geq 1\right\}\) constitutes a discrete random process.
At the \(n\)th unit time, the number of virus-infected computers is \(E\left(X_{n}\right)\), and the number of uninfected computers is \(N-E\left(X_{n}\right)\). In the \(M\) data communications during the interval between the \(n\)th and \((n+1)\)th unit times, the expected number of communications originating from infected computers \(C_{i}\) is \(\frac{M}{N}E\left(X_{n}\right)\), while the number originating from virus-free computers \(C_{j}\) is \(M-\frac{M}{N}E\left(X_{n}\right)\). According to the randomness of virus propagation, the mathematical expectation of newly infected computers after this replication is: \[E\left(X_{n+1}\right)-E\left(X_{n}\right)=\frac{M}{N}E\left(X_{n}\right)\left(1-\frac{E\left(X_{n}\right)}{N}\right) \tag{1}\] Let \(E\left(X_{n}\right)\) be regarded as the discretized value of a continuous function \(f\left(x\right)\) at the point \(x=n\). According to Eq. 1, the differential equation for \(f\left(x\right)\) can be obtained: \[\frac{df\left(x\right)}{dx}=\frac{M}{N}f\left(x\right)\left(1-\frac{f\left(x\right)}{N}\right) \tag{2}\] Because \(f\left(0\right)=1\), the number of computers infected with the virus at time \(n\) can be obtained by separating variables in Eq. 2 and discretizing the solution: \[E\left(X_{n}\right)=\frac{N}{1+\left(\frac{N}{X_{0}}-1\right)e^{-n\frac{M}{N}}}=\frac{N}{1+\left(N-1\right)e^{-n\frac{M}{N}}} \tag{3}\] Assume that there are 100 computers connected to each other, that the number of data communications between them per unit time is 15, and that one of these computers is infected with the virus. Figure 1 shows the computer virus propagation simulation curve based on Eq. 3. Up to around the 20th unit time, the virus spreads slowly; this is the incubation period. From approximately the 20th to the 60th unit time, the number of infections grows rapidly, as the virus spreads extensively through the interconnected communication in the LAN. After the 60th unit time, the spread of the virus levels off, because the increasing number of infected computers raises the network communication traffic and the virus significantly impacts the performance of the computers themselves. Figure 1: Computer virus propagation simulation curve. ### Computer virus radiation injection technology #### 2.2.1 Principle The process of computer virus radiation injection involves injecting virus information into the network cable through coupling. Once recognized, received, and executed by the computer system, the virus utilizes its rapid and widespread propagation characteristics to launch attacks and disrupt the computer network. In contrast, a wide area network (WAN) is primarily connected through fiber optic cables, which offer robust electromagnetic information security and anti-interference capabilities [Chen, 2007]. The connection mode adopted by a LAN makes its radiation coupling ability stronger, making it the primary target of radiation attacks. Taking Ethernet implementing the IEEE 802.3 protocol as an example [Archimbaud, 1992], virus information should be injected when the network communication is idle for the injection to succeed. If there are already ongoing or pending conversations in the network, it is necessary to use high-power radiation to force the injection [Xuewen, 2021]. Furthermore, the electromagnetic signal progressively weakens as the communication distance increases. Therefore, in order to ensure successful injection of the virus information, the equipment emitting the radiation signals must have sufficient power. This makes it possible to use high-power platforms such as satellites and radars as emission sources.
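Returning to the propagation model of Section 2.1, the logistic curve of Eq. (3) is straightforward to reproduce; the sketch below uses the same illustrative numbers as Figure 1 (N=100 computers, M=15 communications per unit time, one initially infected machine).

```python
import numpy as np
import matplotlib.pyplot as plt

N, M, X0 = 100, 15, 1                                # network size, communications per step, seed
n = np.arange(0, 101)                                # unit-time steps
E_Xn = N / (1 + (N / X0 - 1) * np.exp(-n * M / N))   # Eq. (3)

plt.plot(n, E_Xn)
plt.xlabel("unit time $n$")
plt.ylabel("expected infected computers $E(X_n)$")
plt.title("Computer virus propagation curve, Eq. (3)")
plt.show()
```

With these numbers the curve shows the incubation period up to about the 20th unit time, rapid growth to about the 60th, and saturation thereafter, matching the description of Figure 1.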
#### 2.2.2 Method Based on the fundamental principle of radiation injection, one implementation method involves using a high-power computer virus microwave launcher or a corresponding device to precisely control the peak value of its electromagnetic pulse. This allows for the injection of a virus into a specific part of the enemy's computer system, thereby infecting it. Additionally, the dual modulation technology of high-power microwaves and computer viruses can be combined directly to transmit a continuous stream of high-power microwaves modulated with a computer virus. This approach enables the virus to be injected into a computer that is currently receiving information. Figure 2 shows the computer virus radiation injection schema, including the process of computer virus signal processing. By analyzing the data format, encoding, and modulation mode of the target network, the virus source code is transformed to match the format of the enemy's network data. Finally, signal modulation is performed on the coded and modulated virus information source, which is emitted as an electromagnetic wave through the transmitting antenna to attack the enemy's computer system with radiation. Among them: * Modulation of emission: signal modulation is performed on the generated virus source, which is loaded into the original emission signal. * Power amplification: complete output power generation and preamplification. Figure 2: Computer virus radiation injection schema. This paper focuses on the relevant methods and technologies of virus signal processing to explore the feasibility of modulating virus information into the emitted signal. ### Signal modulation technology Signal modulation is the process of modifying one or more characteristics of a periodic waveform, known as the carrier signal, by incorporating a modulating signal that typically carries the information to be transmitted [Chan and Gadbois, 1989]. In layman's terms, modulation shifts the signal to be transmitted (the original information) onto the carrier signal. Signal modulation is divided into two categories: analog modulation and digital modulation. In addition, depending on the modulated quantity, modulation can be classified into four types: frequency modulation (FM), amplitude modulation (AM), phase modulation (PM), and quadrature amplitude modulation (QAM) [Azzouz and Nandi, 2013]. To modulate a computer virus signal, which is converted into a binary data stream, onto a radar emission, digital modulation is the preferred choice. Digital modulation involves converting discrete digital signals into continuous analog signals through modulation techniques [Xiong, 2006]. Various schemes can be chosen based on specific requirements, such as amplitude shift keying (ASK), frequency shift keying (FSK), and phase shift keying (PSK). Digital modulation allows the transformation of digital signal information into specific characteristics of an analog signal, such as frequency, phase, or amplitude, enabling transmission in a communication system. In ASK, the binary digital signal uses \(0\) and \(1\) to determine the presence or absence of the carrier amplitude, so that the carrier amplitude changes along with the signal: when the signal is \(1\), the carrier signal is transmitted, while it is not transmitted when the signal is \(0\). FSK modulates the carrier frequency based on the digital signal; it involves transmitting carrier signals of different frequencies for binary \(1\) and \(0\).
In this case, \(s_{0}\left(t\right)=A\cos\left(2\pi f_{0}t\right)\) is used to represent \(0\), and \(s_{1}\left(t\right)=A\cos\left(2\pi f_{1}t\right)\) to represent \(1\). PSK represents binary digital data using the phase of an analog carrier wave, which varies according to the binary input: when the input signal is \(1\), the output is a carrier wave with a phase of \(0\); when the input signal is \(0\), the output is a carrier wave with a phase of \(\pi\). Based on the aforementioned theories and technologies, it is important to select the modulation technique according to the specific application requirements and system design. A computer virus is transmitted over wireless channels as a signal encoded in the form of electromagnetic waves. In this case, the receiving end can become infected with the virus information through "field to line" coupling, thereby executing an attack on the target computer. This technique enables wireless signal transmission from a transmitting device to a target device through electromagnetic radiation without any physical connection. Within a certain range, it allows signal transmission and injection without direct physical contact with the target device. Transmitting antennas emit radio signals using electromagnetic radiation, creating an electromagnetic field that induces the signal information into the target device, enabling signal injection. ### Signal recognition for computer virus defense With the advancement of cyberspace countermeasures, the modulation and encoding techniques of computer virus injection signals are constantly evolving, and the variety of transmitting antennas is becoming increasingly intricate. In order to accurately detect and defend against virus intrusions, the utilization of signal recognition technology is of paramount importance. The underlying principle involves conducting time-frequency analysis on the received radiation signal, extracting its time-domain and frequency-domain features through the Fourier transform, and employing a recognition classifier for identification. Once the radiation spectrum of the signal exhibits a significant correlation with the established criteria, the presence of an intrusion signal is confirmed. For the radiation source signal, take FSK as an example. As mentioned above, it uses the frequency change between different pulses to realize signal modulation, that is: \[s\left(t\right)=Ae^{j\left(2\pi f\left(t\right)t+\theta_{0}\right)}+n\left(t\right),\quad\left(0<t\leq T\right) \tag{4}\] where \(A\) represents the signal amplitude, \(f\left(t\right)\) the frequency modulation function, \(\theta_{0}\) the initial phase, and \(T\) the pulse width. Radiation source signal identification can be conducted based on the aforementioned theory. The objective is to receive, process, and identify the radiation source signal, achieving optimal identification results while minimizing errors. A radiation source signal identification model is shown in Figure 3. Figure 3: Radiation source signal identification model. When employing time-frequency analysis to extract signal features, a higher parameter dimension leads to increased complexity in representing the information. Hence, it is crucial to extract parameters with robust explanatory power in order to effectively identify signal features, which can enhance computational efficiency and improve recognition accuracy during the classification process.
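As a toy version of the recognition pipeline in Figure 3, one can extract the dominant-frequency track from a spectrogram and look for the well-separated tones that characterize FSK. The sketch below is our own illustration on synthetic data (the sample rate, tone frequencies, and rounding step are all made up), not the classifier used in practice.

```python
import numpy as np
from scipy import signal

fs = 1.0e4                                   # sample rate (Hz), illustrative
t = np.arange(0, 1.0, 1.0 / fs)
f_inst = np.where(t < 0.5, 1000.0, 2000.0)   # toy FSK-like frequency hop
x = np.cos(2 * np.pi * f_inst * t) + 0.1 * np.random.randn(t.size)

f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256)
peak_track = f[np.argmax(Sxx, axis=0)]       # dominant frequency per time slice

tones = np.unique(np.round(peak_track, -2))  # round to the nearest 100 Hz
print("dominant tones (Hz):", tones)         # two separated tones -> FSK-like signature
```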
## 3 Simulation and Results

On the basis of the aforementioned theory and technology, the use of a radar antenna as a transmitter to spread radio signals carrying computer virus information is demonstrated in this chapter. Typically, a radar detects targets by emitting electromagnetic waves (radio waves) and receiving the reflected waves from the target. The emitted signal is modulated and adjusted before being transmitted into space. During transmission, the electromagnetic waves follow the laws of propagation; they are a form of energy propagation resulting from the interaction of electric and magnetic fields, and they can be transmitted wirelessly as radio waves. In a radar system, the emitted signal is appropriately modulated in terms of frequency and waveform. The modulated signal is then transmitted into space through an antenna and propagates as electromagnetic waves. When the electromagnetic waves reach the target or receiver, the receiving system can detect and interpret the electromagnetic signal, extracting the relevant information it carries. This process therefore establishes the theoretical foundation for transmitting radar signals through wireless channels in the form of electromagnetic waves.

Computer virus code is typically expressed in hexadecimal form, which allows for effective utilization of storage space. Moreover, the hexadecimal representation is easier to read and comprehend compared to binary data. By utilizing characters \(0\)–\(9\) and \(A\)–\(F\) to represent values \(0\)–\(15\), it facilitates recognizing the structure and patterns of the code. The utilization of hexadecimal to represent computer virus code serves the purpose of efficiently storing and processing binary data, while also enhancing readability and compatibility.

Figure 3: Radiation source signal identification model

Figure 4: Carrier signal. The right subfigure shows the signal waveform of the enlarged part of the signal in the left subfigure.

This paper is interested in the signal processing part. For this, the modulation of a computer virus, converted into a binary data stream, onto a radar carrier signal is simulated. The computer virus is simulated by generating a random binary data stream, and the emission signal is defined by radar parameters. In this case, a radar with a center frequency of 120 GHz is used as the transmitter. In addition, the signal is modulated using FSK to simulate the generation and modulation process of a computer virus signal. Figure 4 shows a carrier signal with a binary sequence. A cosine carrier signal is generated according to the radar center frequency. Subsequently, the binary data is modulated using FSK by superimposing cosine signals of different frequencies. The frequency of the modulating signal is determined by the value of the data bits. In this simulation, for each binary data bit, if its value is \(0\), the frequency of the carrier signal is set to \(f_{c}-bitRate/2\); otherwise, the frequency is set to \(f_{c}+bitRate/2\). That is, a low-frequency cosine signal represents a binary data bit of \(0\), while a high-frequency cosine signal represents a binary data bit of \(1\). Finally, the modulation signal is added to the carrier signal to generate the signal to be emitted, as shown in Figure 5. The carrier signal is typically a high-frequency signal, with a frequency much higher than that of the modulating signal.
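The described modulation step can be sketched directly in code. The snippet below follows the stated rule (bit \(0\) at \(f_{c}-bitRate/2\), bit \(1\) at \(f_{c}+bitRate/2\)) but with illustrative, scaled-down frequencies, since a 120 GHz carrier cannot be represented at a practical simulation sampling rate.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 48_000            # simulation sampling rate (Hz)
fc = 4_000             # carrier frequency (Hz); stands in for the 120 GHz radar carrier
bit_rate = 500         # bits per second
bits = rng.integers(0, 2, 32)          # random binary stream standing in for virus code

samples_per_bit = fs // bit_rate
t = np.arange(len(bits) * samples_per_bit) / fs
carrier = np.cos(2 * np.pi * fc * t)

# FSK rule from the text: bit 0 -> fc - bit_rate/2, bit 1 -> fc + bit_rate/2
f_inst = fc + (2 * np.repeat(bits, samples_per_bit) - 1) * bit_rate / 2
modulated = np.cos(2 * np.pi * f_inst * t)

emitted = carrier + modulated          # modulated signal added to the carrier

spectrum = np.abs(np.fft.rfft(emitted))
freqs = np.fft.rfftfreq(len(emitted), 1 / fs)
print("strongest spectral line at", freqs[spectrum.argmax()], "Hz")
```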
By combining the modulated signal with the carrier signal, the frequency characteristics of the modulating signal can be incorporated into the carrier signal, enabling data transmission and modulation effects. Ideally, the combined transmitted signal can be introduced into the enemy's computer system through radiation coupling, resulting in the release of the transmitted data information.

Figure 5: Modulated signal and the signal to be emitted. The right subfigures show the signal waveform of the enlarged part of the signals in the left subfigures.

Figure 6 shows the spectrum of the signal intended for transmission, revealing three distinct peaks. During the modulation of each data bit, different frequency components are employed to generate the modulated signal based on the data bit's value. As a result, the modulated signal contains multiple frequency components. In addition, when the carrier signal is combined with the modulated signal, its frequency aligns with a specific frequency component of the modulated signal. In the FFT spectrum, a vibrant color appears around the frequency of approximately 10 Hz, indicating a high signal strength in this frequency range due to the presence of a powerful signal. Moving from 10 Hz to 500 Hz, multiple frequency components exist within this range, and the signal energy gradually increases. Furthermore, at various time points, there are distinct, light-colored straight lines spanning from 0 to 500 Hz, representing instances of higher signal intensity. In this case, the signal power of each frequency component is satisfactory, and information attenuation may have little effect on the signal, which lays a solid foundation for the success of the attack mission. In the actual process, when emitting an electromagnetic wave signal carrying virus information, the radiated information experiences attenuation due to space transmission and "field to line" coupling. To ensure successful injection of the virus into the enemy's computer system, a sufficiently high power is required for the transmitting antenna.

## 4 Anti-Radiation Attacks

With the continuous advancement of attack methods in recent years, relying solely on physical isolation is insufficient to effectively defend against cyber threats. Attacks such as radiation injection have the capability to bypass physical isolation and infiltrate isolated networks to steal information and cause damage. Therefore, in order to effectively identify and respond to these attacks, it is crucial to have a comprehensive understanding of the characteristics of network signals and to establish a corresponding virus signal library based on them.

For a computer to successfully interpret information, it relies on encoding to convert the source information into binary code. This means that the characters or strings processed by the computer are represented by binary numbers, which can be represented by a rectangular waveform, as shown in Figure 7, which presents the time-domain waveform. In actual network communication transmission, the waveform of the network signal resembles a sawtooth wave. This is due to the reciprocal relationship between a sawtooth wave and a rectangular wave. By applying the Fourier transform, the Fourier series of the sawtooth wave can be obtained, which allows converting the signal from a time-domain representation to a frequency-domain representation for spectrum analysis.
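The spectral contrast between the two waveforms is easy to verify numerically. The short sketch below, with illustrative parameters, computes the FFT of a rectangular wave and a sawtooth wave and reports their strongest harmonics.

```python
import numpy as np

fs, f0 = 10_000, 100                    # sampling rate and fundamental (Hz)
t = np.arange(fs) / fs                  # one second of signal
rect = np.sign(np.sin(2 * np.pi * f0 * t))       # rectangular wave
saw = 2 * (t * f0 - np.floor(0.5 + t * f0))      # sawtooth wave

for name, x in (("rectangular", rect), ("sawtooth", saw)):
    mag = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    strongest = np.sort(freqs[np.argsort(mag)[-4:]])
    print(name, "strongest harmonics (Hz):", strongest)
# rectangular: odd harmonics only (100, 300, 500, 700);
# sawtooth: all harmonics (100, 200, 300, 400)
```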
In network communication, Manchester encoding is commonly used as a coding method to represent binary data; its principle is to alter the signal level to represent \(0\) and \(1\). Each bit time is divided into two equal periods. For a binary \(0\), the signal transitions from high to low in the middle of the period, while for a binary \(1\), the signal transitions from low to high in the middle of the period. This encoding method offers the advantage of having a level change in each clock cycle, facilitating clock synchronization and minimizing the impact of clock drift on signal demodulation.

In Section 2.4, it was mentioned that the spectrum information of the network signal can be used for feature extraction, serving as prior knowledge for the virus signal library. By comparing the spectral characteristics of signals injected into the network with predefined criteria, their correlation can be verified to identify potential attacks. Leveraging this extensive prior knowledge, the application of AI algorithms like support vector machines (SVM), artificial neural networks (ANN), and deep learning (DL) enables automatic identification and classification of attack signals. This approach can significantly enhance the efficiency of signal recognition, allowing for prompt countermeasures in cases where intrusion behavior is clearly detected.

Figure 6: Spectrum of emitted signal. The left subfigure shows the FFT of the emitted signal, and the right subfigure the STFT of the emitted signal.

## 5 Conclusion

This paper examines the feasibility of electromagnetic-based computer virus radiation injection attacks. The characteristics, injection methods, and attack modes of computer virus weaponry are studied. A computer virus propagation model is established to simulate the spread of computer viruses, whose impact is evaluated through simulation curves, providing compelling evidence for the viability of radiation attacks. The study further explores a virus attack workflow tailored to the characteristics of wireless computer networks and communication protocols. Additionally, the encoding of virus data and signal modulation through signal processing algorithms are analyzed, enabling the generation of virus attack signals, and the process of modulating virus binary code into radar emission signals is simulated. Finally, this paper discusses defense measures based on the radiation injection scheme and proposes a potential approach involving the establishment of a virus signal library and the integration of AI algorithms for detection and analysis, aiming to counteract radiation attacks.

Wireless injection is a promising yet challenging direction in the field of cybersecurity. In future research, a comprehensive exploration of network traffic patterns, computer virus data, and machine learning will enable researchers to address current issues. The advancement of intelligent information processing algorithms such as DL will facilitate the development of advanced signal processing for cyber attack and defense.
2309.06280
Coupled-mode theory for non-periodic structured waveguides
In this work a new generalization of the theory of coupled modes for non-periodic structured waveguides is presented. Based on a set of eigen waves of a homogeneous periodic waveguide, a new basis of vector functions is introduced that takes into account the non-periodicity of the waveguide. Representing the total field as the sum of these functions with unknown scalar coefficients, a system of coupled equations that determines the dependence of these coefficients on the longitudinal coordinate has been obtained. It was shown that the single-wave equation has an additional phase term. In the frame of the proposed approach, the z-dependent series impedance and local wave vector were introduced for structured waveguides.
M. I. Ayzatsky
2023-09-12T14:41:43Z
http://arxiv.org/abs/2309.06280v1
**Coupled-mode theory for non-periodic structured waveguides**

###### Abstract

In this work a new generalization of the theory of coupled modes for non-periodic structured waveguides is presented. Based on a set of eigen waves of a homogeneous periodic waveguide, a new basis of vector functions is introduced that takes into account the non-periodicity of the waveguide. Representing the total field as the sum of these functions with unknown scalar coefficients, a system of coupled equations that determines the dependence of these coefficients on the longitudinal coordinate has been obtained. It was shown that the single-wave equation has an additional "phase" term. In the frame of the proposed approach, the z-dependent series impedance and local wave vector were introduced for structured waveguides.

## 1 Introduction

Waveguides that consist of similar (but not always identical) cells are called structured. Structured waveguides based on coupled resonators play an important role in many applications. Their use in active devices, in comparison with passive ones, has a number of distinctive features. First, TH modes with a large longitudinal component of the electric field are used. Second, in accelerators and high-frequency electronic devices, the spatial distribution of the longitudinal electromagnetic fields (especially their phases) over the structure plays a decisive role. Third, the active elements of these devices are electron beams. There are powerful programs that can be used to calculate the RF characteristics of structured waveguides and the electromagnetic fields induced in them by electron beams, but they are difficult to use for characterization and preliminary design. For these purposes simpler approaches are needed.

Electromagnetic fields in periodic structured waveguides can be effectively described by the Floquet–Bloch theory and the electrodynamic approach based on the field expansion in forward and backward waves, which constitute a complete orthogonal set of vector functions and have a rigorous physical basis. In this case the problem is one-dimensional, since the wave amplitudes depend only on one coordinate and can be found by solving a set of ordinary differential equations that are not coupled. In the general case, when the coupled resonators are different, there are no physical concepts that could simplify the understanding of the electromagnetic process. If the parameters of the resonators change smoothly and slowly, we can expect that the electromagnetic fields will have some features of forward and backward waves. In the case of inhomogeneous smooth waveguides, approximate approaches are a powerful tool for studying their properties [1, 2]. The development of approximate approaches for structured waveguides is at an early stage. A method based on a generalization of the theory of coupled modes for the case when the structure and the non-periodicity can be described by differential operators was proposed [3, 4]. To use this approach for a waveguide whose structure is determined by the boundaries, it is necessary to transform the side walls into a smooth cylinder and obtain differential equations describing the fields in a smooth waveguide with inhomogeneous filling. This is a complex (and in some cases impossible) procedure. In this work we present a new generalization of the theory of coupled modes for non-periodic structured waveguides with ideal metal walls.

## 2 Basic equations

Let us first consider a periodic structured waveguide with metal walls.
We will consider axisymmetric waveguides. In most cases the boundary of a periodic structured waveguide is completely determined by a finite set of geometrical parameters \(g_{i},\ i=1,...,I\). For example, for a circular disk-loaded waveguide (see Figure 1) we have four geometrical parameters: \(g_{1}=b\), the radius of the waveguide; \(g_{2}=a\), the radius of the aperture; \(g_{3}=t\), the thickness of the disk; and \(g_{4}=d\), the distance between disks (waveguide period \(D=t+d\)). We can write the dependence of the radius of this waveguide on the longitudinal coordinate \(z\) as
\[R(z)=\begin{cases}g_{2},&(n-1)D<z<(n-1)D+t,\\ g_{1},&(n-1)D+t<z<nD.\end{cases}\tag{1}\]
For each fixed \(z\) there is a subset of \(g_{i}^{(l)}\) (let us call them local geometrical parameters) that determines the geometry of the cross section \(S_{\perp}(z,g_{i}^{(l)})\), which can have a complex shape and can even be multiply connected. The remaining parameters will be called global, \(g_{i}^{(g)}\). The division of the geometrical parameters \(g_{i}\) into local and global ones depends on \(z\). For example, in the case of a circular disk-loaded waveguide (see Figure 1) we have \(g_{2}^{(l)}=a\) for \((n-1)D<z<(n-1)D+t\) and \(g_{1}^{(l)}=b\) for \((n-1)D+t<z<nD\). There may be restrictions on the range of variation of each parameter \(g_{i}\), determined by the geometry of the waveguide.

We will assume that all quantities have a time variation given by \(\exp(-i\omega t)\). The behavior of the electromagnetic field is governed by Maxwell's equations
\[rot\,\vec{E}=i\omega\mu_{b}\vec{H},\tag{2}\]
\[rot\,\vec{H}=-i\omega\varepsilon_{b}\vec{E}+\vec{J}.\tag{3}\]
For a periodic waveguide without losses we have two sets of eigen waves \(\{\vec{E}_{s}(\vec{r}),\vec{H}_{s}(\vec{r})\}=\{\tilde{\vec{E}}_{s}(\vec{r}),\tilde{\vec{H}}_{s}(\vec{r})\}\exp(\gamma_{s}z)\) and \(\{\vec{E}_{-s}(\vec{r}),\vec{H}_{-s}(\vec{r})\}=\{\tilde{\vec{E}}_{-s}(\vec{r}),\tilde{\vec{H}}_{-s}(\vec{r})\}\exp(\gamma_{-s}z)\), \(\gamma_{-s}=-\gamma_{s}\), which are the solutions of the equations (\(s>0\) for forward waves, \(s<0\) for backward waves)
\[rot\,\vec{E}_{s}=i\omega\mu_{b}\vec{H}_{s},\tag{4}\]
\[rot\,\vec{H}_{s}=-i\omega\varepsilon_{b}\vec{E}_{s},\tag{5}\]
satisfy the orthogonality condition
\[N_{s',s}=\int\limits_{S_{\perp}(z)}\left\{\left[\vec{E}_{s}\vec{H}_{s'}\right]-\left[\vec{E}_{s'}\vec{H}_{s}\right]\right\}\vec{e}_{z}\,dS=\begin{cases}0,&s'\neq-s,\\ N_{s},&s'=-s,\end{cases}\tag{6}\]
and the boundary conditions on the side metallic surface of the waveguide
\[\vec{E}_{s,\tau}=0,\qquad\vec{H}_{s,\perp}=0.\]
The eigen fields depend on the geometrical parameters, so this dependence can be considered as functional and written \(\vec{E}_{s}=\vec{E}_{s}\left(\vec{r}_{\perp},z,g_{1},...,g_{I}\right)\), \(\vec{H}_{s}=\vec{H}_{s}\left(\vec{r}_{\perp},z,g_{1},...,g_{I}\right)\). The domain of these functions is determined by the geometry of the waveguide. For example, the longitudinal component of the electric field in a cylindrical resonator, \(E_{z}=J_{0}\!\left(\frac{\lambda_{01}}{b}\,r\right)\cos\!\left(\frac{\pi}{d}\,z\right)\), can be considered as a function of four variables, \(E_{z}=E_{z}\left(r,z,b,d\right)\), where \(0\leq r\leq b,\ 0\leq z\leq d\).
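This functional dependence is easy to probe numerically. In the minimal sketch below, the radial constant of the resonator example is taken to be the first zero \(\lambda_{01}\approx 2.405\) of \(J_{0}\) — an assumption consistent with \(E_{z}\) vanishing on the metal wall \(r=b\) — and the cavity dimensions are illustrative.

```python
import numpy as np
from scipy.special import j0, jn_zeros

LAMBDA_01 = jn_zeros(0, 1)[0]   # first root of J0, ~2.405 (assumed radial constant)

def E_z(r, z, b, d):
    """Longitudinal field of the cylindrical-resonator example, viewed as a
    function of the four variables (r, z, b, d)."""
    return j0(LAMBDA_01 * r / b) * np.cos(np.pi * z / d)

b, d = 0.04, 0.03                          # illustrative cavity radius and length (m)
print(E_z(0.0, 0.0, b, d))                 # 1.0 on the axis at z = 0
print(abs(E_z(b, 0.0, b, d)) < 1e-12)      # True: E_z vanishes on the metal wall r = b
```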
We can introduce new vector functions \(\vec{E}_{s}^{(z)}=\vec{E}_{s}\left(\vec{r}_{\perp},z,g_{1}^{(z)}(z),...,g_{I}^{(z)}(z)\right)\), \(\vec{H}_{s}^{(z)}=\vec{H}_{s}\left(\vec{r}_{\perp},z,g_{1}^{(z)}(z),...,g_{I}^{(z)}(z)\right)\), where the \(g_{i}^{(z)}(z)\) and their derivatives are continuous functions of \(z\). The set \(g_{i}^{(z)}(z)\) does not describe any real waveguide. But for each fixed \(z\) the vectors \(\vec{E}_{s}^{(z)}\), \(\vec{H}_{s}^{(z)}\) represent the fields of a periodic waveguide (an analogue of a virtual waveguide [3, 4]) in the cross section \(S_{\perp}^{(z)}(z,g_{i}^{(l,z)}(z))\), where \(g_{i}^{(l,z)}(z)\) is the subset of local geometrical parameters. The vector functions \(\vec{E}_{s}^{(z)}\), \(\vec{H}_{s}^{(z)}\) are no longer solutions of equations (4)–(5). Indeed, as
\[\frac{\partial\vec{E}_{s}^{(z)}}{\partial z}=\left.\frac{\partial\vec{E}_{s}^{(z)}}{\partial z}\right|_{g=\mathrm{const}}+\sum_{i}\frac{\partial\vec{E}_{s}^{(z)}}{\partial g_{i}^{(z)}}\frac{dg_{i}^{(z)}}{dz},\tag{7}\]
then
\[rot\,\vec{E}_{s}^{(z)}=rot\,\vec{E}_{s}^{(z)}\big|_{g=\mathrm{const}}+\vec{E}_{s}^{(r)}=i\omega\mu_{b}\vec{H}_{s}^{(z)}+\vec{E}_{s}^{(r)},\qquad rot\,\vec{H}_{s}^{(z)}=rot\,\vec{H}_{s}^{(z)}\big|_{g=\mathrm{const}}+\vec{H}_{s}^{(r)}=-i\omega\varepsilon_{b}\vec{E}_{s}^{(z)}+\vec{H}_{s}^{(r)},\tag{8}\]
where
\[\vec{E}_{s}^{(r)}=\sum_{i}\frac{dg_{i}^{(z)}}{dz}\left[\vec{e}_{z}\,\frac{\partial\vec{E}_{s}^{(z)}}{\partial g_{i}^{(z)}}\right],\qquad\vec{H}_{s}^{(r)}=\sum_{i}\frac{dg_{i}^{(z)}}{dz}\left[\vec{e}_{z}\,\frac{\partial\vec{H}_{s}^{(z)}}{\partial g_{i}^{(z)}}\right].\tag{9}\]
The new vector functions \(\vec{E}_{s}^{(z)}\), \(\vec{H}_{s}^{(z)}\) still obey the boundary conditions (6) on the contour \(\vec{r}_{\perp}=\vec{r}_{\perp}^{(z)}(z,g_{i}^{(z)}(z))\), which limits the cross section \(S_{\perp}^{(z)}(z,g_{i}^{(z)}(z))\) of some waveguide, and the orthogonality conditions
\[N_{s',s}^{(z)}=\int\limits_{S_{\perp}^{(z)}(z)}\left\{\left[\vec{E}_{s}^{(z)}\vec{H}_{s'}^{(z)}\right]-\left[\vec{E}_{s'}^{(z)}\vec{H}_{s}^{(z)}\right]\right\}\vec{e}_{z}\,dS=\begin{cases}0,&s'\neq-s,\\ N_{s}^{(z)}(z),&s'=-s.\end{cases}\]
These functions ensure the continuity of the fields and their derivatives. The values of the global parameters \(g_{3}^{(z)}(z),g_{4}^{(z)}(z)\) for the case under consideration are constant along \(z\). The sets of eigen solutions \(\vec{E}_{s},\vec{H}_{s}\) are complete.
We can expect that the sets \(\vec{E}_{s}^{(z)}\), \(\vec{H}_{s}^{(z)}\) are complete, too, and we can look for a solution of equations (2) and (3) in the form of the series
\[\vec{H}\left(\vec{r}\right)=\sum_{s>0}\left\{C_{s}\left(z\right)\vec{H}_{s}^{(z)}\left(\vec{r}\right)+C_{-s}\left(z\right)\vec{H}_{-s}^{(z)}\left(\vec{r}\right)\right\}.\tag{13}\]
Using the vector relation
\[rot\left(C_{s}\vec{H}_{s}^{(z)}\right)=\left[\nabla C_{s}\,\vec{H}_{s}^{(z)}\right]+C_{s}\,rot\,\vec{H}_{s}^{(z)}\tag{14}\]
and assuming that the series (13) can be differentiated term by term, we get from (3)
\[\vec{E}=-\frac{1}{i\omega\varepsilon_{b}}\,rot\,\vec{H}+\frac{\vec{J}}{i\omega\varepsilon_{b}}=\sum_{s}\left(C_{s}\vec{E}_{s}^{(z)}+C_{-s}\vec{E}_{-s}^{(z)}\right)-\frac{1}{i\omega\varepsilon_{b}}\sum_{s}\left(\frac{dC_{s}}{dz}\left[\vec{e}_{z}\vec{H}_{s}^{(z)}\right]+\frac{dC_{-s}}{dz}\left[\vec{e}_{z}\vec{H}_{-s}^{(z)}\right]+C_{s}\vec{H}_{s}^{(r)}+C_{-s}\vec{H}_{-s}^{(r)}\right)+\frac{\vec{J}}{i\omega\varepsilon_{b}}.\tag{15}\]
Suppose that
\[\sum_{s}\left(\frac{dC_{s}}{dz}\left[\vec{e}_{z}\vec{H}_{s}^{(z)}\right]+\frac{dC_{-s}}{dz}\left[\vec{e}_{z}\vec{H}_{-s}^{(z)}\right]\right)=\vec{J}_{\perp}-\sum_{s}\left(\vec{H}_{s}^{(r)}C_{s}+\vec{H}_{-s}^{(r)}C_{-s}\right),\tag{16}\]
where \(\vec{J}=\vec{J}_{\perp}+\vec{J}_{z}\). We cannot use in (16) the full current \(\vec{J}\), as the sums do not have longitudinal components. Then (15) takes the form
\[\vec{E}=\sum_{s}\left(C_{s}\vec{E}_{s}^{(z)}+C_{-s}\vec{E}_{-s}^{(z)}\right)+\frac{\vec{J}_{z}}{i\omega\varepsilon_{b}}.\tag{17}\]
Substitution of (17) and (13) into equation (2) gives
\[\sum_{s}\left(\frac{dC_{s}}{dz}\left[\vec{e}_{z}\vec{E}_{s}^{(z)}\right]+\frac{dC_{-s}}{dz}\left[\vec{e}_{z}\vec{E}_{-s}^{(z)}\right]\right)=-\frac{1}{i\omega\varepsilon_{b}}\,rot\,\vec{J}_{z}-\sum_{s}\left(C_{s}\vec{E}_{s}^{(r)}+C_{-s}\vec{E}_{-s}^{(r)}\right).\tag{18}\]
Multiplying (18) by \(\vec{H}_{-s'}^{(z)}\), (16) by \(\vec{E}_{-s'}^{(z)}\), adding the resulting equations and integrating the result over the cross section, we obtain
\[N_{s'}^{(z)}\frac{dC_{s'}}{dz}=\int\limits_{S_{\perp}^{(z)}(z)}\left\{\vec{J}_{\perp}\vec{E}_{-s'}^{(z)}-\frac{1}{i\omega\varepsilon_{b}}\vec{H}_{-s'}^{(z)}\,rot\,\vec{J}_{z}-\vec{E}_{-s'}^{(z)}\sum_{s}\left(\vec{H}_{s}^{(r)}C_{s}+\vec{H}_{-s}^{(r)}C_{-s}\right)-\vec{H}_{-s'}^{(z)}\sum_{s}\left(\vec{E}_{s}^{(r)}C_{s}+\vec{E}_{-s}^{(r)}C_{-s}\right)\right\}dS,\tag{19}\]
where
\[N_{s'}^{(z)}=\int\limits_{S_{\perp}^{(z)}}\left\{\left[\vec{E}_{s'}^{(z)}\vec{H}_{-s'}^{(z)}\right]-\left[\vec{E}_{-s'}^{(z)}\vec{H}_{s'}^{(z)}\right]\right\}\vec{e}_{z}\,dS.\tag{20}\]
Similarly, multiplying (18) by \(\vec{H}_{s'}^{(z)}\), (16) by \(\vec{E}_{s'}^{(z)}\), adding and integrating give
\[N_{s'}^{(z)}\frac{dC_{-s'}}{dz}=-\int\limits_{S_{\perp}^{(z)}(z)}\left\{\vec{J}_{\perp}\vec{E}_{s'}^{(z)}-\frac{1}{i\omega\varepsilon_{b}}\vec{H}_{s'}^{(z)}\,rot\,\vec{J}_{z}-\vec{E}_{s'}^{(z)}\sum_{s}\left(\vec{H}_{s}^{(r)}C_{s}+\vec{H}_{-s}^{(r)}C_{-s}\right)-\vec{H}_{s'}^{(z)}\sum_{s}\left(\vec{E}_{s}^{(r)}C_{s}+\vec{E}_{-s}^{(r)}C_{-s}\right)\right\}dS.\tag{21}\]
We transform the integrand in (19).
\[\left(\vec{H}_{-s'}^{(z)}\,rot\,\vec{J}_{z}\right)=\vec{H}_{-s'}^{(z)}\,rot\left(j_{z}\vec{e}_{z}\right)=\vec{H}_{-s'}^{(z)}\left(\left[\nabla j_{z}\,\vec{e}_{z}\right]-j_{z}\,rot\,\vec{e}_{z}\right)=\vec{H}_{-s'}^{(z)}\left[\nabla j_{z}\,\vec{e}_{z}\right]=-\vec{e}_{z}\left[\nabla j_{z}\,\vec{H}_{-s'}^{(z)}\right]=\]
\[=-\vec{e}_{z}\left(rot\left(j_{z}\vec{H}_{-s'}^{(z)}\right)-j_{z}\,rot\,\vec{H}_{-s'}^{(z)}\right)=-\vec{e}_{z}\,rot\left(j_{z}\vec{H}_{-s'}^{(z)}\right)-i\omega\varepsilon_{b}\,\vec{J}_{z}\vec{E}_{-s'}^{(z)},\tag{22}\]
where in the last step we used (8) together with \(\vec{e}_{z}\vec{H}_{-s'}^{(r)}=0\), which follows from (9). Taking into account that
\[\int\limits_{S_{\perp}^{(z)}(z)}rot\left(j_{z}\vec{H}_{-s'}^{(z)}\right)\vec{e}_{z}\,dS=\int\limits_{S_{\perp}^{(z)}(z)}rot\left(j_{z}\vec{H}_{-s'}^{(z)}\right)d\vec{S}=\oint j_{z}\vec{H}_{-s'}^{(z)}\,d\vec{l}=0,\tag{23}\]
we get
\[N_{s'}^{(z)}\frac{dC_{-s'}}{dz}=-\int\limits_{S_{\perp}^{(z)}(z)}\left\{\vec{J}\vec{E}_{s'}^{(z)}-\sum_{s}\left\{C_{s}\left(\vec{E}_{s'}^{(z)}\vec{H}_{s}^{(r)}+\vec{H}_{s'}^{(z)}\vec{E}_{s}^{(r)}\right)+C_{-s}\left(\vec{E}_{s'}^{(z)}\vec{H}_{-s}^{(r)}+\vec{H}_{s'}^{(z)}\vec{E}_{-s}^{(r)}\right)\right\}\right\}dS,\tag{24}\]
\[N_{s'}^{(z)}\frac{dC_{s'}}{dz}=\int\limits_{S_{\perp}^{(z)}(z)}\left\{\vec{J}\vec{E}_{-s'}^{(z)}-\sum_{s}\left\{C_{s}\left(\vec{E}_{-s'}^{(z)}\vec{H}_{s}^{(r)}+\vec{H}_{-s'}^{(z)}\vec{E}_{s}^{(r)}\right)+C_{-s}\left(\vec{E}_{-s'}^{(z)}\vec{H}_{-s}^{(r)}+\vec{H}_{-s'}^{(z)}\vec{E}_{-s}^{(r)}\right)\right\}\right\}dS.\tag{25}\]
Separating the exponential dependence on \(z\),
\[\vec{E}_{s}^{(z)},\vec{H}_{s}^{(z)}=\tilde{\vec{E}}_{s}^{(z)},\tilde{\vec{H}}_{s}^{(z)}\exp\left(\gamma_{s}^{(z)}z\right),\tag{26}\]
gives
\[\int\limits_{S_{\perp}^{(z)}(z)}\left(\vec{E}_{s'}^{(z)}\vec{H}_{s}^{(r)}+\vec{H}_{s'}^{(z)}\vec{E}_{s}^{(r)}\right)dS=\exp\left(\gamma_{s'}^{(z)}z+\gamma_{s}^{(z)}z\right)W_{s',s}^{(z)}+z\,\frac{d\gamma_{s'}^{(z)}}{dz}\,\tilde{N}_{s'}^{(z)}\left(z\right)\delta_{s,-s'},\tag{27}\]
where
\[W_{s',s}^{(z)}=\sum_{i}\frac{dg_{i}^{(z)}}{dz}\int\limits_{S_{\perp}^{(z)}(z)}\left\{\frac{\partial}{\partial g_{i}^{(z)}}\left[\tilde{\vec{E}}_{s'}\tilde{\vec{H}}_{s}\right]-\left[\tilde{\vec{E}}_{s}\,\frac{\partial\tilde{\vec{H}}_{s'}}{\partial g_{i}^{(z)}}\right]-\left[\tilde{\vec{E}}_{s'}\,\frac{\partial\tilde{\vec{H}}_{s}}{\partial g_{i}^{(z)}}\right]-\dots\right\}\vec{e}_{z}\,dS.\]
\(P_{s'}^{(z)}(z)\) is the complex power that the fields \(\tilde{\vec{E}}_{s'}^{(z)}\), \(\tilde{\vec{H}}_{s'}^{(z)}\) transfer through the cross section \(S_{\perp}^{(z)}(z)\). As for a periodic waveguide \(\mathrm{Re}\,P_{s'}\) is a constant value, the active power \(\mathrm{Re}\,P_{s'}^{(z)}(z)\) is a function of \(g_{1}^{(z)}(z),...,g_{I}^{(z)}(z)\) only, and \(\sum_{i}\frac{dg_{i}^{(z)}}{dz}\frac{\partial\,\mathrm{Re}\,P_{s'}^{(z)}}{\partial g_{i}^{(z)}}=\frac{d\,\mathrm{Re}\,P_{s'}^{(z)}}{dz}\). This is not true for the reactive power \(\mathrm{Im}\,P_{s'}^{(z)}\). It can be shown that it is a rapidly changing function of \(z\) and \(\sum_{i}\frac{dg_{i}^{(z)}}{dz}\frac{\partial\,\mathrm{Im}\,P_{s'}^{(z)}}{\partial g_{i}^{(z)}}\neq\frac{d\,\mathrm{Im}\,P_{s'}^{(z)}}{dz}\).

In the previous works (see, for example, [7, 8, 9, 10, 11, 12, 13, 14]), equations without the second term (the phase \(\Phi_{s'}^{(z)}\)) in (34) were used. In this approach Eq. (33) takes the form
\[\frac{d\tilde{C}_{s'}}{dz}+\frac{1}{2N_{s'}^{(z)}}\frac{dN_{s'}^{(z)}}{dz}\tilde{C}_{s'}=\exp\left\{-p_{s'}\left(z\right)\right\}\frac{1}{N_{s'}^{(z)}}\int\limits_{S_{\perp}^{(z)}(z)}\vec{J}\,\tilde{\vec{E}}_{-s'}^{(z)}\,dS.\tag{38}\]
The eigen vector \(\tilde{\vec{E}}_{s}\) of a periodic waveguide can be represented as
\[\tilde{\vec{E}}_{s}=\mathcal{C}_{0}\sum_{n}\tilde{\vec{E}}_{s,n}\left(\vec{r}_{\perp}\right)\exp\left(i\,\frac{2\pi n}{D}\,z\right).\tag{39}\]
For TH waves with azimuthal symmetry \(\tilde{E}_{s,n,z}\left(0\right)\neq 0\) and we can assume that \(\tilde{E}_{s,0,z}\left(0\right)=1\). Then for a non-periodic waveguide the series impedance that depends on \(z\) can be introduced:
\[R_{\varkappa\varkappa}^{(z)}\left(z\right)=\frac{\mathcal{C}_{0}^{2}}{\mathrm{Re}\,P_{s'}^{(z)}\left(z\right)}.\tag{40}\]
The equation (38) can be rewritten in a more usual form (\(\breve{C}_{s'}=\mathcal{C}_{0}\tilde{C}_{s'}\)):
\[\frac{d\breve{C}_{s'}}{dz}-\frac{1}{2R_{\varkappa\varkappa}^{(z)}}\frac{dR_{\varkappa\varkappa}^{(z)}}{dz}\breve{C}_{s'}=-\exp\left\{-p_{s'}\left(z\right)\right\}\frac{R_{\varkappa\varkappa}^{(z)}}{4}\sum_{n}\exp\left(i\,\frac{2\pi n}{D\left(z\right)}z\right)\int\limits_{S_{\perp}^{(z)}(z)}\vec{J}\,\tilde{\vec{E}}_{s',n}^{(z)}\left(z,\vec{r}_{\perp}\right)dS.\tag{41}\]
The vector of the electric field in this case is
\[\vec{E}\left(r,z\right)=\breve{C}_{s}\left(z\right)\exp\left\{p_{s}\left(z\right)\right\}\tilde{\vec{E}}_{s}^{(z)}\left(z,r\right)+\frac{\vec{J}_{z}}{i\omega\varepsilon_{b}}=\breve{C}_{s}\left(z\right)\exp\left\{p_{s}\left(z\right)\right\}\sum_{n}\tilde{\vec{E}}_{s,n}^{(z)}\left(z,r\right)\exp\left(i\,\frac{2\pi n}{D\left(z\right)}z\right)+\frac{\vec{J}_{z}}{i\omega\varepsilon_{b}}.\tag{42}\]
The power that the electromagnetic fields transfer through the cross section \(S_{\perp}^{(z)}(z)\) of a lossless waveguide without the electron beam (\(\vec{J}=0\)) is proportional to the product of the two factors \(\left|\breve{C}_{s}\left(z\right)\right|^{2}\) and \(\mathrm{Re}\,P_{s}^{(z)}\left(z\right)=\mathcal{C}_{0}^{2}/R_{\varkappa\varkappa}^{(z)}\left(z\right)\), and is a constant value (see (40)).

A single-wave approach without the additional "phase" term \(i\Phi_{s'}^{(z)}\) is widely used to calculate the characteristics of non-periodic accelerating and other slow-wave structures. A procedure for calculating \(\Phi_{s'}^{(z)}\) is not simple, especially its second part. The role of this term and the conditions under which it can be neglected will be studied in future work. It should be noted that in the approach proposed above the change in the phase of the electric field inside one cell \(P_{s}^{(z)}\left(z\right)=i\dots\) … waveguide there is a standing wave with a small amplitude. Can we get a uniform distribution of phases in this case? And if it is possible, what kind of amplitude distribution do we create? The coupled-mode approach needs to know the dependence of the characteristics of the eigen modes of a homogeneous waveguide on some geometrical parameters. Since the calculation of the modes of periodic waveguides is not a simple task, the matrix approach, which provides a procedure for calculating the distribution of the electric field, seems to be useful for simple geometries [15, 16].

## Acknowledgements

The author would like to thank David Reis and Valery Dolgashev for their support.
2305.19713
Red Teaming Language Model Detectors with Language Models
The prevalence and strong capability of large language models (LLMs) present significant safety and ethical risks if exploited by malicious users. To prevent the potentially deceptive usage of LLMs, recent works have proposed algorithms to detect LLM-generated text and protect LLMs. In this paper, we investigate the robustness and reliability of these LLM detectors under adversarial attacks. We study two types of attack strategies: 1) replacing certain words in an LLM's output with their synonyms given the context; 2) automatically searching for an instructional prompt to alter the writing style of the generation. In both strategies, we leverage an auxiliary LLM to generate the word replacements or the instructional prompt. Different from previous works, we consider a challenging setting where the auxiliary LLM can also be protected by a detector. Experiments reveal that our attacks effectively compromise the performance of all detectors in the study with plausible generations, underscoring the urgent need to improve the robustness of LLM-generated text detection systems.
Zhouxing Shi, Yihan Wang, Fan Yin, Xiangning Chen, Kai-Wei Chang, Cho-Jui Hsieh
2023-05-31T10:08:37Z
http://arxiv.org/abs/2305.19713v2
# Red Teaming Language Model Detectors with Language Models

###### Abstract

The prevalence and high capacity of large language models (LLMs) present significant safety and ethical risks when malicious users exploit them for automated content generation. To prevent the potentially deceptive usage of LLMs, recent works have proposed several algorithms to detect machine-generated text. In this paper, we systematically test the reliability of the existing detectors by designing two types of attack strategies to fool the detectors: 1) replacing words with their synonyms based on the context; 2) altering the writing style of generated text. These strategies are implemented by instructing LLMs to generate synonymous word substitutions or writing directives that modify the style without human involvement, and the LLMs leveraged in the attack can also be protected by detectors. Our research reveals that our attacks effectively compromise the performance of all tested detectors, thereby underscoring the urgent need for the development of more robust machine-generated text detection systems.

## 1 Introduction

Large language models (LLMs), such as ChatGPT [3] and PaLM [1], have demonstrated human-like capabilities to generate high-quality content, follow instructions, and respond to user queries. Although LLMs can improve the working efficiency of humans, they also pose several ethical and safety concerns when it becomes harder to differentiate between text written by a human and text generated by an LLM. For example, LLMs may be inappropriately used for academic plagiarism or for creating misinformation at large scale [15]. Therefore, it is important to develop reliable approaches to protecting LLMs and detecting the presence of AI-generated texts to mitigate the abuse of LLMs.

Towards this end, prior works have developed methods for automatically detecting text generated by LLMs. The existing methods mainly fall into three categories: 1) classifier-based detectors, which train a classifier, often a neural network, from supervised data with AI-generated/human-written labels [18, 3]; 2) watermarking [16], which injects patterns into the generation of LLMs such that the pattern can be statistically detected but is imperceptible to humans; 3) likelihood-based detectors, e.g., DetectGPT [17], which leverage the log-likelihood of the generated texts. However, as recent research demonstrates that text classifiers are vulnerable [19, 20, 21], we suspect that these detectors are not reliable under adversarial manipulations of AI-generated texts.

To stress-test the reliability of the detectors, we red team and attack the detectors by prompting an LLM-based generative model. We modify and generate texts that become more challenging for the detectors. We develop two methods. In the first method, we prompt an LLM to generate candidate substitutions of words in an LLM-generated text. We then substitute certain words and choose replacements either in a query-free way or through a query-based evolutionary search [1], in order to bypass the detection. Our second method is for instruction-tuned LLMs such as ChatGPT [3]. We search for an instructional prompt on a small subset of training data and fix the prompt at test time. The prompt instructs the LLM to write in a style such that the generated texts are hard to detect. There are concurrent works (Sadasivan et al., 2023; Krishna et al., 2023) that evade detectors by paraphrasing AI-generated texts.
However, they assume that the paraphrasing models are _not protected_ by a detector; therefore, it is natural that the paraphrased text cannot be recognized by the detector. In contrast, we consider a challenging setting, where we assume that the LLM leveraged for generating attacks is _protected_, meaning it also has a detection mechanism in place. This assumption imposes a realistic constraint on the attacker, as it is possible that all public LLMs are protected in the future. Furthermore, we also consider using the original LLM for text generation itself to jailbreak the detection mechanism, showing that malicious users can bypass the detectors.

We systematically test the three types of detectors, ranging from statistical approaches to commercial APIs. Our results reveal that all the detectors are vulnerable under the proposed attack mechanisms, and the detection performance drops significantly. These findings suggest the current detectors are not reliable and shed light on the discussion about how to build trustworthy detectors. We suggest possible defense strategies in the conclusion section and leave the exploration of defenses to future work.

## 2 Related Work

**Detectors for AI-generated text.** Recent detectors for AI-generated text mostly fall into three categories. First, classifier-based detectors are trained with supervised data to distinguish human-written text and AI-generated text. For example, the AI Text Classifier developed by OpenAI (OpenAI, 2023) is a fine-tuned language model. Second, watermarking methods introduce distinct patterns into AI-generated text, allowing for its identification. Among them, Kirchenbauer et al. (2023) randomly partition the vocabulary into a greenlist and a redlist during the generation, where the division is based on the hash of the previously generated tokens. The language model only uses words in the greenlists, and thereby the generated text has a different pattern compared to human-written text, which does not consider such greenlists and redlists. Third, DetectGPT (Mitchell et al., 2023) uses the likelihood of the generated text for the detection, as they find that text generated by language models tends to reside in the negative curvature region of the log probability function. Consequently, they define a curvature-based criterion for the detection.

**Methods for red-teaming detectors.** As the detectors emerge, there are also several works showing that the detectors may be evaded to some extent, typically by paraphrasing the text (Sadasivan et al., 2023; Krishna et al., 2023). However, they need additional paraphrasing models, which are typically unprotected models that are much weaker than the original LLM. Besides paraphrasing, Kirchenbauer et al. (2023) also discussed attacks against watermarking detectors with word substitutions generated by a masked language model such as T5 (Raffel et al., 2020), which is a relatively weaker language model and thus may generate attacks of lower quality.

**Adversarial Examples in NLP.** Red-teaming and attacking detectors for testing their reliability are also relevant to works on adversarial examples in NLP. Word substitution is a commonly used strategy in generating textual adversarial examples (Alzantot et al., 2018; Ren et al., 2019; Jin et al., 2020). Language models such as BERT (Devlin et al., 2019) have also been used for generating word substitutions (Shi and Huang, 2020; Li et al., 2020; Garg and Ramakrishnan, 2020).
In this work, we demonstrate the effectiveness of using the latest LLMs for generating high-quality word substitutions, and our query-based word substitutions are also inspired by the genetic algorithm of Alzantot et al. (2018) and Yin et al. (2020). Our instructional prompt is related to recent works that prompt LLMs to red team LLMs themselves (Perez et al., 2022), rather than detectors as in this work. In addition, we fix a single instructional prompt at test time, which is partly similar to universal triggers in adversarial attacks (Wallace et al., 2019; Behjati et al., 2019); but unlike those triggers, which are unnatural sequences of tokens, our prompt is natural, and it is added to the input of the generative model rather than to the detector directly.

## 3 Settings and Overview

We consider a large language model \(G\) that conditions on an input context or prompt \(\mathbf{X}\) and generates an output text \(\mathbf{Y}=G(\mathbf{X})\). In this work, we use upper-case characters such as \(\mathbf{X}\) to denote a sequence of tokens \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{m}]\), where \(m\) is the sequence length. The model may be protected by a detector \(f(\mathbf{Y})\in[0,1]\) that predicts whether \(\mathbf{Y}\) is generated by the language model \(G\), where a higher \(f(\mathbf{Y})\) score means the text is more likely to be generated by a language model. We use \(\tau\) to denote a detection threshold such that \(\mathbf{Y}\) is considered AI-generated if \(f(\mathbf{Y})\geq\tau\).

In this work, we consider three categories of detectors: (1) classifier-based detectors, (2) watermarking detectors, and (3) likelihood-based detectors. For classifier-based detectors, a text classifier \(f(\mathbf{Y})\) is trained on a labeled dataset with \(G\)-generated and human-written texts. For watermarking detectors, \(G\) is modified from a base generator \(G_{0}\) with a watermarking mechanism \(W\), denoted as \(G=W(G_{0})\), and a watermark detector \(f(\mathbf{Y})\) is constructed to predict whether \(\mathbf{Y}\) is generated by an LLM watermarked with \(W\). Specifically, we consider the watermarking mechanism in Kirchenbauer et al. (2023). Likelihood-based detectors estimate the LLM-generated score \(f(\mathbf{Y})\) based on the output logits of \(G\). Specifically, we consider DetectGPT (Mitchell et al., 2023).

We consider a model \(G\) as _protected_ if there is a detector \(f(\mathbf{Y})\) in place to protect the model from inappropriate usage. To stress-test the reliability of those detectors in this setting, we develop red-teaming techniques to generate texts that can bypass a detector using an LLM that is also protected by this detector. This differs from previous methods that require a separate unprotected paraphrasing model (Sadasivan et al., 2023; Krishna et al., 2023). We use \(G^{\prime}\) to denote the protected LLM used for generating attacks, where \(G^{\prime}\) may also be the same as \(G\) if applicable, and we consider the attack from two aspects:

* **Output perturbation** that directly perturbs the original output \(\mathbf{Y}\) and generates a perturbed output \(\mathbf{Y}^{\prime}\).
* **Input perturbation** that perturbs the input \(\mathbf{X}\) into \(\mathbf{X}^{\prime}\) as the new input, leading to a new output \(\mathbf{Y}^{\prime}=G(\mathbf{X}^{\prime})\).

In both cases, we aim to minimize \(f(\mathbf{Y}^{\prime})\) so that the new output \(\mathbf{Y}^{\prime}\) is wrongly considered as human-written by the detector \(f\).
Meanwhile, we require that \(\mathbf{Y}^{\prime}\) has a quality similar to \(\mathbf{Y}\) and remains a plausible output to the original input \(\mathbf{X}\). For our attack algorithms, we also assume that the detector \(f\) is black-box: only the output scores are visible, but not the internal parameters.

We propose to attack the detectors in two different ways. In Section 4, we construct an output perturbation by replacing some words in \(\mathbf{Y}\), where we prompt a protected LLM \(G^{\prime}\) to obtain the new substitution words, and we then build query-based and query-free attacks respectively with these word substitutions. In Section 5, if \(G\) is able to follow instructions, we search for an instructional prompt from the generation by \(G\) and append the prompt to \(\mathbf{X}\) as an input perturbation, where the instructional prompt instructs \(G\) to generate texts in a style making it hard for the detector to detect. Table 1 summarizes our methods and their applicability to different detectors. At test time, instructional prompts are fixed and thus totally query-free. Word substitutions require querying \(G^{\prime}\) multiple times to generate word substitutions on each test example; the query-free version does not repeatedly query \(f\), while the query-based version also requires querying \(f\) multiple times. In practice, we may choose between these methods depending on the query budget and their applicability to the detectors.

\begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{Attack} & \multirow{2}{*}{Perturbation type} & \multicolumn{2}{c|}{Test-time Queries} & \multicolumn{3}{c}{Applicability} \\ & & \(G^{\prime}\) & \(f\) & Classifier & Watermarking & Likelihood \\ \hline Query-free word substitutions & Output & ✓ & - & ✓ & ✓ & ✓ \\ Query-based word substitutions & Output & ✓ & ✓ & ✓ & - & ✓ \\ Instructional Prompts & Input & - & - & ✓ & - & - \\ \hline \hline \end{tabular} \end{table} Table 1: Properties of various attack methods proposed in this paper and their applicability to various detectors. “Test-time queries” indicates whether each method requires querying \(G^{\prime}\) or \(f\) multiple times at test time.

## 4 Attack with Word Substitutions

To attack the detectors with output perturbations, we aim to find a perturbed output \(\mathbf{Y}^{\prime}\) that is out of the original detectable distribution. This is achieved by substituting certain words in \(\mathbf{Y}\). To obtain suitable substitution words for the tokens in \(\mathbf{Y}^{\prime}\) that preserve the naturalness and semantic meaning, we utilize a protected LLM denoted as \(G^{\prime}\). For each token in \(\mathbf{Y}\) denoted as \(\mathbf{y}_{k}\), we use \(s(\mathbf{y}_{k},\mathbf{Y},G^{\prime},n)\) to denote the process of generating at most \(n\) word substitution candidates for \(\mathbf{y}_{k}\) given the context in \(\mathbf{Y}\) by prompting \(G^{\prime}\), and \(s(\mathbf{y}_{k},\mathbf{Y},G^{\prime},n)\) outputs a set of at most \(n\) words. Note that not every word can be substituted, and \(s(\mathbf{y}_{k},\mathbf{Y},G^{\prime},n)\) can be an empty set if it is not suitable to replace \(\mathbf{y}_{k}\). We will discuss how to generate the word substitution candidates using \(G^{\prime}\) in Section 4.1.
**General attack objective.** The objective of attacking \(f\) with word substitutions can be formulated as a minimization problem given a substitution budget \(\epsilon\):
\[\begin{split}\mathbf{Y}^{\prime}=\operatorname*{arg\,min}_{\mathbf{Y}^{\prime}}&\;f(\mathbf{Y}^{\prime}),\\ \text{s.t.}&\;\mathbf{y}_{k}^{\prime}\in\{\mathbf{y}_{k}\}\cup s(\mathbf{y}_{k},\mathbf{Y},G^{\prime},n),\\ &\;\sum_{k=1}^{m}\mathbbm{1}(\mathbf{y}_{k}\neq\mathbf{y}_{k}^{\prime})\leq\epsilon m.\end{split}\tag{1}\]
Here we aim to find an optimally perturbed output \(\mathbf{Y}^{\prime}\) that minimizes the predicted score \(f(\mathbf{Y}^{\prime})\) among all possible \(\mathbf{Y}^{\prime}\). Each word in the perturbed output, \(\mathbf{y}_{k}^{\prime}\), is either the unperturbed word \(\mathbf{y}_{k}\) or selected from the word substitution candidates \(s(\mathbf{y}_{k},\mathbf{Y},G^{\prime},n)\), and the total number of perturbed words is at most \(\epsilon m\). To solve the minimization problem in Eq. (1), we consider both query-free and query-based substitutions. For query-based substitutions, we use the evolutionary search algorithm [1, 16] originally proposed for generating adversarial examples in NLP, with details in Appendix A. We also design query-free substitution methods in Section 4.2. We may choose between query-based substitutions and query-free substitutions depending on whether we may query \(f\) multiple times.

### 4.1 Generating Word Substitution Candidates

Table 2 shows the prompts we use when \(G^{\prime}\) is ChatGPT and LLaMA respectively, together with the outputs by the LLMs. ChatGPT is able to follow instructions, and thus our prompt is an instruction asking the model to generate substitution words; multiple words can be substituted simultaneously. For LLaMA, which cannot directly follow instructions, we expect it to generate a text completion following our prompt, where the prompt is designed such that a plausible text completion consists of suggested substitution words, and we replace one word at a time.

\begin{table} \begin{tabular}{l} \hline \hline ChatGPT \\ \hline **Given this sentence: ”The scientists said the findings could help improve treatments and lead to new ways of preventing breast cancer spreading. Experts have described the findings as ”exciting”. ”, for each word in ”findings, exciting”, give 10 substitution words that do not change the meaning of the sentence. Return each word and its substitutions in one line, in the format of ”word:substitutions”** \\ \hline findings: results, discoveries, outcomes, conclusions, observations, data, evidence, findings, research, studies \\ exciting: thrilling, exhilarating, stimulating, electrifying, gripping, captivating, invigorating, inspiring, breathtaking, exciting \\ \hline \hline LLaMA \\ \hline **”_The scientists said the findings could help improve treatments and lead to new ways of preventing breast cancer spreading. Experts have described the findings as ”exciting”.”_ The word ”exciting” in the above sentence is equivalent to: a) ”interesting”** \\ b) ”surprising” \\ c) ”unusual” \\ \hline \hline \end{tabular} \end{table} Table 2: Prompts for generating word substitution candidates using ChatGPT and LLaMA respectively and the outputs by the LLMs. Text in bold denotes templates in the prompts. Text in italic denotes a text to be perturbed or words to be replaced for a given example. Text in blue denotes the generated word substitutions.
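Because the ChatGPT prompt in Table 2 fixes a "word:substitutions" line format, the candidate sets \(s(\mathbf{y}_{k},\mathbf{Y},G^{\prime},n)\) can be recovered mechanically from the reply. The following is a minimal sketch of this step, not the paper's released code; `query_llm` is an assumed placeholder for whatever client wraps the protected LLM \(G^{\prime}\).

```python
from typing import Callable, Dict, List

PROMPT = ('Given this sentence: "{sentence}", for each word in "{words}", '
          'give {n} substitution words that do not change the meaning of the '
          'sentence. Return each word and its substitutions in one line, '
          'in the format of "word:substitutions"')

def substitution_candidates(sentence: str, words: List[str], n: int,
                            query_llm: Callable[[str], str]) -> Dict[str, List[str]]:
    """Build the Table 2 prompt, send it to the protected LLM G', and parse
    the 'word: w1, w2, ...' reply into candidate sets s(y_k, Y, G', n)."""
    reply = query_llm(PROMPT.format(sentence=sentence, words=", ".join(words), n=n))
    candidates: Dict[str, List[str]] = {}
    for line in reply.splitlines():
        if ":" not in line:
            continue                      # skip lines that do not follow the format
        word, subs = line.split(":", 1)
        candidates[word.strip()] = [w.strip() for w in subs.split(",") if w.strip()][:n]
    return candidates
```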
### 4.2 Query-free Substitutions

For the query-free attack, we apply word substitutions on random tokens in \(\mathbf{Y}\) to attack DetectGPT and classifier-based detectors. For watermarking detectors, we further design an effective query-free attack utilizing the properties of the detection method. Specifically, we consider the watermarking mechanism introduced in Kirchenbauer et al. (2023). A watermarked LLM generates a token with modified predicted logits at position \(i+1\): \(g(\mathbf{y}_{i+1}|[\mathbf{y}_{1},...,\mathbf{y}_{i}])=g_{0}(\mathbf{y}_{i+1}|[\mathbf{y}_{1},...,\mathbf{y}_{i}])+\delta\) if the candidate token \(\mathbf{y}_{i+1}\) is in the greenlist decided by a hash function \(h([\mathbf{y}_{1},...,\mathbf{y}_{i}])\). Here we use the lower-case \(g_{0}\) to denote the logits outputs of a generative model \(G_{0}\), and \(g\) for the watermarked version \(G\). \(\delta\) is an offset value pre-defined by the watermarking. Therefore, a text generated by a watermarked LLM tends to have more greenlist tokens, and \(f(\mathbf{Y})\) calculates the score from the count of greenlist tokens in \(\mathbf{Y}\). Given a fixed substitution budget \(\epsilon\), we thus aim to identify and substitute more greenlist tokens to reduce the total count of greenlist tokens. We achieve this with a two-stage algorithm. At the first stage, we sort all tokens in \(\mathbf{Y}\) by the prediction entropy estimated by a language model \(M\), which can be a weaker model than \(G\), as we only use the entropy as a heuristic score. The prediction entropy is calculated from the output probability over the whole vocabulary. As the watermarking offset \(\delta\) is applied in the decoding process, a token with higher entropy is more easily affected by the watermarking. At the second stage, we pick the \(\epsilon m\) tokens with the highest entropy and use a watermarked LLM \(G^{\prime}\) to generate word substitutions as introduced in Section 4.1.

## 5 Attack by Instructional Prompts

In this section, we build attacks by perturbing the input prompt to encourage LLMs to generate texts that are difficult to detect. In particular, we focus on LLM-based generative models that can follow instructions and on classifier-based detectors. We consider ChatGPT (OpenAI, 2023b) as the generative model \(G\) and the OpenAI AI Text Classifier (OpenAI, 2023a) as the detector \(f\). The OpenAI AI Text Classifier is a fine-tuned neural network, while neural networks have been shown to be vulnerable to distribution shifts in the NLP literature (Miller et al., 2020; Awadalla et al., 2022). Therefore, we aim to shift the generated text to a different distribution where the detector is more likely to fail, while keeping the generated text a plausible output to the input. We achieve this by searching for an additional prompt \(\mathbf{X}_{p}\) appended to the original input \(\mathbf{X}\), which forms a new input \(\mathbf{X}^{\prime}=[\mathbf{X},\mathbf{X}_{p}]\) to \(G\). In particular, \(\mathbf{X}_{p}\) consists of \(\mathbf{X}_{\text{ins}}\) and \(\mathbf{X}_{\text{ref}}\), where \(\mathbf{X}_{\text{ins}}\) is an instruction asking the model to follow the writing style of the reference \(\mathbf{X}_{\text{ref}}\).
**Searching for \(\mathbf{X}_{p}\).** We search for \(\mathbf{X}_{p}\) on a small subset of training examples with \(n\) examples \(\mathbf{X}_{1},\mathbf{X}_{2},\cdots,\mathbf{X}_{n}\). We assume that we can query the detector \(f\) multiple times during search time. After an effective \(\mathbf{X}_{p}\) is found, it can be applied universally on all inputs from this dataset at test time. The objective of the search is:
\[\operatorname*{arg\,min}_{\mathbf{X}_{p}}\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}\big{(}f(G([\mathbf{X}_{i},\mathbf{X}_{p}]))\geq\tau\big{)},\tag{2}\]
which aims to minimize the average detection rate for the new outputs generated with \(\mathbf{X}_{p}\) appended to the input.

```
Require: Training data \(\mathbf{X}_{1},\cdots,\mathbf{X}_{n}\); generative model \(G\) and detector \(f\); initial instruction \(\mathbf{X}_{\text{ins},0}\).
Ensure: An attacking instructional prompt \(\mathbf{X}_{p}\).
  \(\mathbf{X}_{\text{ins}}\leftarrow\mathbf{X}_{\text{ins},0}\)
  \(\mathcal{O}\leftarrow\text{PriorityQueue}()\)
  GenerateAndDetect("", "")
  for \(t=1,\cdots,T\) do
    \(\mathbf{X}_{\text{ref}}\leftarrow\text{SearchForReference}(\mathbf{X}_{\text{ins}})\)
    \(\mathbf{X}_{\text{ins}}\leftarrow\text{SearchForInstruction}(\mathbf{X}_{\text{ins}},\mathbf{X}_{\text{ref}})\)
  \(\mathbf{X}_{p}\leftarrow[\mathbf{X}_{\text{ins}},\mathbf{X}_{\text{ref}}]\)
  return \(\mathbf{X}_{p}\)

  function GenerateAndDetect(\(\mathbf{X}_{\text{ins}},\mathbf{X}_{\text{ref}}\))
    for \(i=1,\cdots,n\) do
      \(\mathbf{Y}_{i}\gets G([\mathbf{X}_{i},\mathbf{X}_{\text{ins}},\mathbf{X}_{\text{ref}}])\)
      \(\mathcal{O}.\text{push}(\text{key}=f(\mathbf{Y}_{i}),\text{value}=\mathbf{Y}_{i})\)
    return Detection rate and average score

  function SearchForReference(\(\mathbf{X}_{\text{ins}}\))
    for \(i=1,\cdots,K\) do
      \(\mathbf{C}_{\text{ref}}^{(i)}\leftarrow\mathcal{O}.\text{pop}()\)
    \(Best\leftarrow\operatorname*{arg\,min}\{\text{GenerateAndDetect}(\mathbf{X}_{\text{ins}},\mathbf{C}_{\text{ref}}^{(i)})\}_{i=1}^{K}\)
    return \(\mathbf{C}_{\text{ref}}^{(Best)}\)

  function SearchForInstruction(\(\mathbf{X}_{\text{ins}},\mathbf{X}_{\text{ref}}\))
    for \(i=1,\cdots,K\) do
      \(\mathbf{C}_{\text{ins}}^{(i)}\leftarrow\text{Paraphrase}(\mathbf{X}_{\text{ins}})\)
    \(Best\leftarrow\operatorname*{arg\,min}\{\text{GenerateAndDetect}(\mathbf{C}_{\text{ins}}^{(i)},\mathbf{X}_{\text{ref}})\}_{i=1}^{K}\)
    return \(\mathbf{C}_{\text{ins}}^{(Best)}\)
```
**Algorithm 1** Search for the attacking prompt

We illustrate our algorithm for searching \(\mathbf{X}_{p}\) in Algorithm 1. We use \(G\) to generate various \(\mathbf{X}_{\text{ins}}\) and \(\mathbf{X}_{\text{ref}}\) in each iteration and try to search for an optimal \(\mathbf{X}_{p}=[\mathbf{X}_{\text{ins}},\mathbf{X}_{\text{ref}}]\) following the objective in Eq. (2). Initially, we set \(\mathbf{X}_{\text{ins}}\) as a manually written instruction, "Meanwhile please imitate the writing style and wording of the following passage:". An initial value for \(\mathbf{X}_{\text{ref}}\) is not necessary. We also create and initialize a priority queue \(\mathcal{O}\) with \(n\) initial outputs generated from the \(n\) training examples without \(\mathbf{X}_{p}\). \(\mathcal{O}\) sorts its elements according to the detection scores from \(f\) and prioritizes those with lower scores. In each iteration of the search, we have two steps:

* Updating \(\mathbf{X}_{\text{ref}}\): We pop the top-\(K\) candidates from \(\mathcal{O}\).
For each candidate, we combine it with the current \(\mathbf{X}_{\text{ins}}\) to form a potential candidate for \(\mathbf{X}_{p}\) in the current iteration.
* Updating \(\mathbf{X}_{\text{ins}}\): We instruct model \(G\) to generate \(K\) variations of the current \(\mathbf{X}_{\text{ins}}\), inspired by Zhou et al. (2022) on automatic prompt engineering, and combine each variation with the current \(\mathbf{X}_{\text{ref}}\) to form the potential candidates for \(\mathbf{X}_{p}\).

For both steps, we take the best candidate \(\mathbf{X}_{p}\) according to Eq. (2). When generating \(G([\mathbf{X}_{i},\mathbf{X}_{p}])\) in Eq. (2), we push all the generated outputs to \(\mathcal{O}\) as candidates for \(\mathbf{X}_{\text{ref}}\) in later rounds. We take \(T\) iterations and return the final \(\mathbf{X}_{p}=[\mathbf{X}_{\text{ins}},\mathbf{X}_{\text{ref}}]\) to be used at test time.

## 6 Experiments

### Experimental Settings

**Generative Models and Detectors.** We consider a wide range of LLM-based generative models with detectors protecting them. For the generative model \(G\), we consider GPT-2-XL (Radford et al., 2019), LLaMA-65B (Touvron et al., 2023), and ChatGPT (OpenAI, 2023b). For the detectors, watermarking and DetectGPT are applied to both GPT-2-XL and LLaMA-65B but not ChatGPT; classifier-based detectors include a fine-tuned RoBERTa-Large detector (Solaiman et al., 2019) for GPT-2 texts and the OpenAI AI Text Classifier (OpenAI, 2023a) for ChatGPT texts. We use either LLaMA-65B or ChatGPT as \(G^{\prime}\) for generating perturbations in the attack. When \(G\) is LLaMA-65B or ChatGPT, we simply set \(G^{\prime}=G\). When \(G\) is GPT-2-XL, we use ChatGPT as \(G^{\prime}\) when the classifier-based detector or DetectGPT is used, and LLaMA-65B as \(G^{\prime}\) when watermarking is used, since we cannot add watermarking to ChatGPT, which is not open-source. Table 3 summarizes the protected LLM \(G^{\prime}\) used for all the generative models and detectors in the experiments.

**Datasets.** We use two types of datasets in our experiments: text completion and instructional datasets. For text completion, we use XSum (Narayan et al., 2018) and WikiText (Merity et al., 2016), taking the first sentence for XSum and the first 20 tokens for WikiText as the input prompt to the generative models. We also use an instructional dataset, ELI5 (Fan et al., 2019), a long-form question-answering dataset collected from Reddit. To test the RoBERTa-Large detector specifically for detecting GPT-2 texts, we also adopt the GPT-2 output dataset (Solaiman et al., 2019). Since the OpenAI AI Text Classifier requires the text to contain at least 1000 characters, we filter all the datasets and only retain examples whose human reference contains at least 1000 characters. We use the first 100 examples in the shuffled test set for each dataset.

**Metrics.** We use several metrics for the detectors under attack. Area Under the Receiver Operating Characteristic Curve (AUROC) scores summarize the performance of detectors under various thresholds. The detection rate (DR) is the true positive rate under a fixed threshold (positive examples are LLM-generated texts), where we either tune the threshold to meet a particular false positive rate or follow the original thresholds of the detectors.
For query-based word substitutions, we also use the Attack Success Rate (ASR), which computes the rate at which the attack successfully flips the detector's prediction, out of all the positive examples on which the detector originally predicts correctly.

### Attack with Word Substitutions

We apply the word substitution-based attack to all three detection methods: DetectGPT, classifier-based detectors, and watermarking. In each setting, we assume that both \(G\) and \(G^{\prime}\) are protected by the same detector \(f\).

**Attack against DetectGPT.** For experiments on attacking DetectGPT, we follow Mitchell et al. (2023) to prompt GPT-2-XL with the first 30 tokens from the samples and let the LLM generate the rest. We set the maximum number of changes to 10% of the sequence, excluding stop words, which leads to around 10 substituted tokens. The evolutionary search requires 100 queries per instance with a population size of 10. DetectGPT uses an external T5-3B model for the mask infilling that generates the perturbations, and we fix the mask rate at 15%. We adopt a more realistic cross-model setting, where we use GPT-Neo (Black et al., 2021) as the detection model to estimate the log-likelihood. The results are shown in Table 4. On XSum and WikiText, DetectGPT's AUROC drops below random guessing to 25.9% and 31.2%, respectively, after query-free substitutions, which randomly select substitutions from the candidate pool. The AUROC scores further drop to only 3.9% and 6.1%, respectively, after the query-based evolutionary search. Note that both methods change only around 10 tokens in a 1000-character paragraph. This demonstrates that detection based on the likelihood of machine-generated texts may not be robust against malicious word changes.

**Attack against Classifier-based Detectors.** We experiment with the two public classifier-based detectors: a RoBERTa-Large model fine-tuned for detecting GPT-2 texts (Mitchell et al., 2023), and the OpenAI AI text detector (OpenAI, 2023b). For all the experiments, we keep the maximum number of substitutions at 20% of the total paragraph length, excluding stop words. Results for attacking the GPT-2 text detector are shown in Table 5. We find that the attack success rate (ASR) against the GPT-2 detector is close to 0 for both paraphrasing and query-free substitutions. We hypothesize that, because the detector is specifically trained to detect GPT-2 texts, it is hard to remove the patterns it leverages by randomly selecting word substitutions or by paraphrasing without querying the detector. Our evolutionary search-based substitutions achieve a much higher ASR than the query-free methods. For the OpenAI AI Text Classifier, shown in Table 7, query-free attacks decrease the detection AUROC by 18.9 and 28.1 points on XSum and ELI5, respectively, while query-based ones further decrease them by 45.4 and 55.6 points to below random. A comparison with the instructional-prompt attack and more details are given in Section 6.3.

**Attack against Watermarking.** We implement the watermarking mechanism introduced by Kirchenbauer et al. (2023) on two language models, GPT-2-XL and LLaMA-65B. We use a T5-Large model to estimate the prediction entropy for each token and select 20% of the tokens in the initial output \(\mathbf{Y}\). We use a watermarked LLaMA to suggest word substitutions with the prompt introduced in Table 2. We use \(\delta=1.0\) and \(\gamma=0.5\) in the watermarking implementation.
We only keep word substitutions with fewer than 4 tokens to avoid invalid substitutions. We report AUROC and detection rates in Table 6. For detection rates, we set the threshold value so that the false positive rate on human texts equals 1%. As we can observe from the table, the detection rate drops significantly after the query-free word substitution attack. We show an example of the query-free attack against the watermarking detector in Table 9.

| Attack | XSum | Wiki |
| --- | --- | --- |
| Unattacked | 92.5 | 69.4 |
| Paraphrasing | 45.7 | 37.5 |
| Query-free Substitutions | 25.9 | 31.2 |
| Query-based Substitutions | **3.9** | **6.1** |

Table 4: AUROC (%) for DetectGPT (Mitchell et al., 2023). We compare DetectGPT before and after various attacks under the cross-model setting where the base model is GPT-2-XL and the detection model that estimates the likelihood is GPT-Neo.

| Generative Model | Classifier-based Detector | Watermarking | DetectGPT |
| --- | --- | --- | --- |
| GPT-2-XL | ChatGPT | LLaMA-65B | ChatGPT |
| LLaMA-65B | - | LLaMA-65B | LLaMA-65B |
| ChatGPT | ChatGPT | - | - |

Table 3: The protected LLM \(G^{\prime}\) used in generating perturbations for each generative model \(G\) and the detectors. "-" indicates that a combination of the generative model and the detector is not applicable.

| Attack | ASR |
| --- | --- |
| Paraphrasing | 4% |
| Query-free Substitutions | 2% |
| Query-based Substitutions | **68%** |

Table 5: Attack Success Rate (ASR) for the OpenAI RoBERTa-Large detector for GPT-2 texts (Mitchell et al., 2023).

### Attack with Instructional Prompts

We conduct experiments for our attack using instructional prompts with ChatGPT as the generative model and the OpenAI AI Text Classifier as the detector. The detector is model-detect-v2, accessible via the OpenAI APIs. Its output contains five classes, "likely", "possibly", "unclear if it is", "unlikely", and "very unlikely", with thresholds 0.98, 0.90, 0.45, and 0.10, respectively. We follow these thresholds and use a threshold of 0.9 to determine detection rates. In Algorithm 1, we search for the instructional prompt using \(n=50\) training examples, \(T=5\) iterations, and \(K=5\) candidates for references and instructions, respectively, in each iteration. Table 7 shows the results. We also show our prompts and an example on ELI5 with various attacks in Appendix B. Our instructional prompts significantly reduce the AUROC scores and detection rates compared to the unattacked version. We also find that paraphrasing with ChatGPT itself can somewhat degrade the detection, but it is much less effective than our instructional prompts. Compared to word substitutions, the attack with instructional prompts achieves a lower AUROC and a 0 detection rate on XSum; on ELI5, it has a lower AUROC score than query-free word substitution but not query-based word substitution, and it has higher detection rates. Nevertheless, unlike word substitutions, the attack with instructional prompts does not query \(G^{\prime}\) or \(f\) multiple times, and thus it is more efficient while also effective.
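For reference, the AUROC and DR numbers reported in these tables can be computed as in the minimal sketch below, assuming scikit-learn; the calibration of the threshold to a 1% false positive rate on human texts matches the watermarking experiments above.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_and_dr(machine_scores, human_scores, target_fpr=0.01):
    """AUROC over machine vs. human texts, plus the detection rate (DR)
    at the threshold where the FPR on human texts equals target_fpr."""
    machine_scores = np.asarray(machine_scores, dtype=float)
    human_scores = np.asarray(human_scores, dtype=float)
    y_true = np.r_[np.ones(len(machine_scores)), np.zeros(len(human_scores))]
    y_score = np.r_[machine_scores, human_scores]
    auroc = roc_auc_score(y_true, y_score)
    # threshold = (1 - target_fpr) quantile of the human-text scores
    tau = np.quantile(human_scores, 1.0 - target_fpr)
    dr = float(np.mean(machine_scores >= tau))
    return auroc, dr
```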
## 7 Conclusion and Discussion

In this work, we studied the reliability of representative AI text detectors from three different categories: classifier-based, likelihood-based, and watermarking. We proposed two methods to prompt LLMs to modify texts and make them harder to detect. The limitations revealed in our experiments urge the design of a more reliable detection mechanism. As an initial discussion of defenses against the word substitution attacks proposed in this paper, we outline a few possible strategies. First, fine-tuning a more specific classifier-based detector for the target model. As our experiments show, if a classifier-based detector is specifically fine-tuned for detecting a target model (RoBERTa-Large for GPT-2-XL in this paper), it is much more robust against query-free modifications. Second, combining lexical watermarking with text likelihood estimation. This is based on the intuition that for a word substitution attack to successfully evade a watermarking detector, it needs to change around 20% of the tokens from greenlist tokens to redlist tokens, which might not be of large probability. Thus, one may apply a watermarked LLM (Kirchenbauer et al., 2023) to a suspected text and then check the perplexity or the likelihood of all redlist tokens to detect machine-generated texts. More work on defense strategies is deferred to future endeavors.

| Generative Model | Dataset | Unattacked (AUROC / DR) | Query-free Substitution (AUROC / DR) |
| --- | --- | --- | --- |
| GPT-2-XL | XSum | 97.9 / 81.0 | 87.7 / 36.0 |
| GPT-2-XL | WikiText | 97.4 / 81.0 | 89.7 / 54.0 |
| LLaMA-65B | XSum | 88.9 / 22.0 | 70.2 / 9.0 |
| LLaMA-65B | WikiText | 92.6 / 73.2 | 81.3 / 43.3 |

Table 6: Attack against the watermarking detector. We report both AUROC scores (%) and detection rates (DR) (%) under the threshold value at which the false positive rate on human texts is 1%.

| Method | XSum (AUROC / DR) | ELI5 (AUROC / DR) |
| --- | --- | --- |
| Unattacked | 88.8 / 30.0 | 87.1 / 54.0 |
| ChatGPT Paraphrasing | 82.4 / 12.0 | 73.6 / 27.0 |
| Query-free Substitution | 69.9 / 2.0 | 59.0 / 2.0 |
| Query-based Substitution | 43.4 / **0.0** | **31.5** / **0.0** |
| Instructional Prompts | **11.2** / **0.0** | 43.6 / 18.0 |

Table 7: AUROC scores (%) and detection rates (DR) (%) of the OpenAI AI Text Classifier on the original outputs by ChatGPT and on outputs with various attacks, respectively.

## Acknowledgement

We thank UCLA-NLP for their invaluable feedback. The work is supported in part by CISCO.
2309.13841
On the Effectiveness of Adversarial Samples against Ensemble Learning-based Windows PE Malware Detectors
Recently, there has been a growing focus and interest in applying machine learning (ML) to the field of cybersecurity, particularly in malware detection and prevention. Several research works on malware analysis have been proposed, offering promising results for both academic and practical applications. In these works, the use of Generative Adversarial Networks (GANs) or Reinforcement Learning (RL) can aid malware creators in crafting metamorphic malware that evades antivirus software. In this study, we propose a mutation system to counteract ensemble learning-based detectors by combining GANs and an RL model, overcoming the limitations of the MalGAN model. Our proposed FeaGAN model is built based on MalGAN by incorporating an RL model called the Deep Q-network anti-malware Engines Attacking Framework (DQEAF). The RL model addresses three key challenges in performing adversarial attacks on Windows Portable Executable malware, including format preservation, executability preservation, and maliciousness preservation. In the FeaGAN model, ensemble learning is utilized to enhance the malware detector's evasion ability, with the generated adversarial patterns. The experimental results demonstrate that 100\% of the selected mutant samples preserve the format of executable files, while certain successes in both executability preservation and maliciousness preservation are achieved, reaching a stable success rate.
Trong-Nghia To, Danh Le Kim, Do Thi Thu Hien, Nghi Hoang Khoa, Hien Do Hoang, Phan The Duy, Van-Hau Pham
2023-09-25T02:57:27Z
http://arxiv.org/abs/2309.13841v1
On the Effectiveness of Adversarial Samples against Ensemble Learning-based Windows PE Malware Detectors ###### Abstract Recently, there has been a growing focus and interest in applying machine learning (ML) to the field of cybersecurity, particularly in malware detection and prevention. Several research works on malware analysis have been proposed, offering promising results for both academic and practical applications. In these works, the use of Generative Adversarial Networks (GANs) or Reinforcement Learning (RL) can aid malware creators in crafting metamorphic malware that evades antivirus software. In this study, we propose a mutation system to counteract ensemble learning-based detectors by combining GANs and an RL model, overcoming the limitations of the MalGAN model. Our proposed FeaGAN model is built based on MalGAN by incorporating an RL model called the Deep Q-network anti-malware Engines Attacking Framework (DQEAF). The RL model addresses three key challenges in performing adversarial attacks on Windows Portable Executable malware, including format preservation, executability preservation, and maliciousness preservation. In the FeaGAN model, ensemble learning is utilized to enhance the malware detector's evasion ability, with the generated adversarial patterns. The experimental results demonstrate that 100% of the selected mutant samples preserve the format of executable files, while certain successes in both executability preservation and maliciousness preservation are achieved, reaching a stable success rate. Evasion attack, adversarial attack, malware mutation, Generative Adversarial Networks, Reinforcement Learning, Ensemble Learning. ## I Introduction With the advancement and rapid development of information technology, computer systems and networks have become crucial and widespread in our daily lives. Alongside this development, cyberattacks remain a constant threat to cybersecurity. Malicious software (malware) is one of the most effective attack tools, used to perform malicious behaviors such as stealing sensitive information without permission, damaging information systems, and demanding massive ransoms. In 2018, Symantec reported that 246,002,762 new malware variants emerged [1]. Besides, among operating systems, Windows is the most widely used compared to counterparts such as macOS, Android, and Linux. Hence, it has become the favorite target of attackers, with malware in the form of its PE (Portable Executable) files. According to Kaspersky Lab statistics at the end of 2020, an average of 360,000 malware samples were detected by Kaspersky every day, and more than 90% of them were Windows PE malware [2]. To provide effective protection against the malware threat, many researchers have applied ML and deep learning (DL) to malware detection. Those cutting-edge techniques have achieved success in various fields, as well as in feature extraction and classification of malware [3]. However, ML and DL models have been found to be vulnerable to adversarial attacks [2], generated by slightly perturbing legitimate inputs so as to mislead the targeted models. Hence, considering the capability of dealing with adversarial samples when evaluating ML/DL-based solutions is a rising research trend. Many works have focused on generating such adversarial samples by modifying the original ones via various promising methods.
Some of them intend to make modifications and create a complete adversarial sample, while others may only produce the adversarial form of data representing the malware, such as a feature vector. A possible approach is the Generative Adversarial Networks (GANs) model introduced by Goodfellow et al. [4]. GANs have shown their potential in creating images, sounds, and text, and even in the field of information security for crafting adversarial malware [5, 6]. However, GANs are still limited in generating adversarial samples, in that they only enable crafting adversarial features rather than executable malware samples. Meanwhile, Reinforcement Learning (RL) is another potential solution for creating mutants of metamorphic malware [7, 8, 9]. RL is a type of machine learning that involves an agent learning to interact with an environment through trial and error, receiving rewards for successful actions and punishments for unsuccessful ones. By using RL, it is possible to create malware that can adapt and evolve, making it more challenging for anti-malware systems to detect. On the other hand, GANs are primarily used for generating new data that resembles the training data, which may not be as effective in creating metamorphic malware. Though RL is still in the early stages of research in this area, it shows promise as a potential approach for creating more advanced and evasive malware. Moreover, while GAN-based approaches must know which features to operate on, RL can act without this information and still produce modified samples. Meanwhile, to deal with these adversarial attacks, more general and powerful methods are constantly emerging and evolving, one of which is ensemble learning, a technique that combines several learning algorithms to increase overall prediction performance [10]. Beyond malware defense, this technique is also gaining attention for generating effective adversarial malware samples. According to Deqiang and Qianm [1], there are two ensemble-based approaches to improve the effectiveness of adversarial samples: using multiple attack methods and attacking multiple classifiers. In the first approach, using multiple attack methods disturbs the classifiers and increases the probability of misclassification, as in the work of Tramer et al. [11]. In the second approach, multiple classifiers are used so that adversarial samples interact with as many models as possible, increasing their evasiveness against them. Liu et al. [12] proposed to improve the transferability of samples by attacking a group of combined DL models rather than a single model. Motivated by the above promising solutions, our work aims to build a system that enhances the evasive effectiveness of Windows malware using a combination of GANs and RL. Our proposed FeaGAN, which builds on the work of Hu and Tan [13], is trained with the ensemble learning method to take advantage of multiple models when generating adversarial features. Besides, we use RL to merge the mutant vectors from FeaGAN into the original malicious PE files. This improves the evasion capability and enables verifying the executability and maliciousness of the malware. The remainder of this work is organized as follows. In **Section II**, we give the background of PE malware, ensemble learning, RL, and generative approaches for mutating malicious software.
**Section III** presents our method for crafting mutated malware against ensemble learning-based detectors by leveraging Reinforcement Learning (RL) and Generative Adversarial Networks (GAN). The experimental settings and results are given in **Section IV**. **Section V** discusses related work on creating adversarial malware samples. Finally, in **Section VI**, we conclude the paper and discuss future directions for this work.

## II Background

### _ML-based Malware Detection_

Inspired by the success of ML/DL models in Computer Vision and Natural Language Processing (NLP), ML/DL-based methods have been proposed for PE malware detection. Due to their self-learning ability, ML/DL-based detectors generalize well to unseen data. **Fig 1** illustrates the required steps to create an ML/DL-based malware detector, including data collection, feature extraction, model learning from data, and predictions.

Fig. 1: The general working flow of Malware Detection using ML/DL.

#### Ii-A1 Data Collection

In fact, the quality of the data has a noticeable effect on the outcome of the ML/DL model [14]. However, PE malware data lacks the specific data standards found in computer vision or NLP, while most cybersecurity companies consider PE samples their private property and rarely release them to the public [2]. Some researchers publicly provide their PE files [15, 16, 17], but not in any specific standard. Moreover, the malware labeling task is mainly based on VirusTotal, which uses a variety of antivirus tools to detect malware. However, in some cases, this variety of tools can result in inconsistent results for the same sample. Hence, several methods have been proposed [18, 19, 20, 21] to unify the labeling based on antivirus reliability or voting.

#### Ii-A2 Feature Extraction

Once PE samples have the appropriate labels, it is necessary to extract useful features from those files and transform them into a suitable format to use as input for ML/DL models. Many ML/DL models accept only numeric input, so those features are often converted into numbers. Useful features help models gain the knowledge needed to distinguish malware from benign software. Many features can be obtained from a PE sample, and they can be divided into three main categories: static, dynamic, and hybrid [22, 23, 24], as shown in **Table I**. While static features are obtained directly from PE samples without running them, for dynamic features the malware samples are executed in an isolated environment (sandbox, virtual machine) and all behaviors affecting the environment are recorded and extracted. Meanwhile, hybrid features are extracted from the PE file by either static or dynamic approaches.

TABLE I: Some common types of features in PE files (byte sequence, readable strings, header information, system resource information, file information, registry information, network information, opcode, system calls/API, control flow graph, and function call graph), each categorized as static, dynamic, or hybrid.

#### Ii-A3 Model training and Predictions

After extracting features from the samples and converting them into numeric values, it is necessary to choose a suitable ML/DL model for malware and benign classification. Many ML/DL models have been proposed, such as Decision Tree (DT), Random Forest (RF), MultiLayer Perceptron (MLP), Naive Bayes, Support Vector Machine (SVM), Convolutional Neural Networks (CNN), Graph Neural Networks (GNN), Recurrent Neural Networks (RNN), and Long Short-Term Memory (LSTM). They have achieved great success in Computer Vision, NLP, and even vulnerability discovery [25, 26].
When it comes to malware detection, those models are also considered potential solutions regardless of the diversity of features obtained from PE files, as long as the input matches the requirements of the model. The models learn to recognize malware from numerous training samples by seeking relationships between the features and their binary labels. The knowledge obtained from training can then be used to predict the malware or benign label of unseen samples. Detection effectiveness differs from model to model; hence, many ML/DL models need to be considered and evaluated experimentally to find the most suitable one for PE malware detection.

### _Ensemble learning_

Ensemble learning, also known as multiple classifier systems or committee-based learning, aims to combine several base models to produce one optimal predictive model. The key idea of an ensemble learning method is to benefit from different models by learning in an ensemble way. This can be a solution when individual models are weak or when multiple models give inconsistent results. Putting them together in a suitable manner may result in significantly better performance than using a single model. A simple form of ensemble is to combine the results with majority voting.

Ensemble methods can be classified into categories using different criteria. In the scope of this paper, we refer to the classification depicted in **Fig. 2**, where ensemble methods are categorized based on the combination mechanism and the complexity, resulting in two groups: single models and ensemble models.

Fig. 2: ML-models division diagram based on how it combines weak algorithms and complexity (excluding Single Models).

Many weaknesses of single learning algorithms have motivated the development of ensemble methods. Most ensemble learning systems use learning models of the same type, which are called homogeneous ensembles; using different learning algorithms, on the other hand, yields heterogeneous ensembles [27]. There are three main reasons for using ensembles: statistical, computational, and representational [28]. Learning algorithms search for the best hypothesis in a hypothesis space. The statistical problem arises because the amount of training data is limited compared to the size of the hypothesis space. This can lead a learning algorithm to find several different hypotheses with the same accuracy on the training data. An ensemble helps in this situation by averaging their votes, reducing the risk of choosing an incorrect classifier and thus achieving good accuracy on the training data. Besides that, learning algorithms sometimes get stuck in local optima even when there is enough training data. Using an ensemble and running a local search from various starting points can lead
to a better approximation of the true unknown function than a single base learner. Furthermore, in many ML problems it is hard to represent the true function within the hypothesis space. By forming weighted combinations of several hypotheses, the space of representable functions can be expanded. It has also been noted that ensemble methods work well both when there is very little data and when there is too much [29].

In an ensemble model, many factors must be considered to create a reasonable model with good output. We focus on the following three approaches [30].

#### Ii-B1 Model training

An ensemble model must consider two principles: diversity and predictive performance. Diversity requires the participating inducers to be diverse enough to achieve the desired prediction performance, obtained through various "inductive biases", while the predictive performance of each inducer should be as high as possible and at least as good as a random model. A model must have some inductive bias to be more useful when used with more data; the model's purpose is to fit most data, not just the sample data. As a result, inductive bias is critical. Furthermore, ensemble models with a variety of inducers may not always increase predictive performance [31].

_Input manipulation:_ In this scenario, each base model is trained with a separate training subset, resulting in different inputs for the several base models. It is useful when tiny changes in the training set result in a different model.

_Manipulated learning algorithm:_ In this approach, the use of each base model is changed. We can do this by modifying the way in which the base model traverses the hypothesis space.

_Partitioning:_ Diversity can be obtained by dividing a large dataset into smaller subsets and then using each subset to train a different inducer.

_Output manipulation:_ This approach covers techniques that combine many binary classifiers into a single multi-class classifier. Error-correcting output codes (ECOC) are a successful example of this approach.

_Ensemble hybridization:_ The idea is to combine at least two strategies when building the ensemble. The RF algorithm is probably the most well-known manifestation of the hybridization strategy: it not only manipulates the learning algorithm by randomly selecting a subset of features at each node, but also manipulates the instances when building each tree.

#### Ii-B2 Output fusion

Output fusion concerns the process of merging the base model outputs into a single result. There are two main types:

_Weighting methods:_ We can combine the base model outputs by assigning a weight to each base model. Weighting is most reasonable when the performances of the base models are comparable. For classification problems, majority voting is the simplest weighting method.
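As a minimal illustration of this simplest weighting method, a hard majority vote over the binary predictions of several base classifiers can be written as follows:

```python
import numpy as np

def majority_vote(base_predictions):
    """Combine 0/1 predictions from several base classifiers.

    base_predictions: array of shape (n_models, n_samples).
    Returns the label predicted by more than half of the models.
    """
    base_predictions = np.asarray(base_predictions)
    votes = base_predictions.sum(axis=0)
    return (votes > base_predictions.shape[0] / 2).astype(int)
```

scikit-learn's `VotingClassifier` with `voting="hard"` implements the same idea for fitted estimators.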
Another weighting strategy is to assign a weight proportional to each inducer's strength.

_Meta-learning methods:_ Meta-learning models differ from standard ML models in that they involve more than one learning stage. In the meta-learning model, the individual inducer outputs are used as input to the meta-learner, which creates the final output. Meta-learning methods work well in cases where certain base models perform differently on various sub-spaces. Stacking is probably the most popular meta-learning method.

#### Ii-B3 Framework

We can divide ensemble frameworks into two main types: dependent and independent. In the dependent framework, the output of each inducer influences the construction of the next inducer; information from the previous iteration guides learning in the next iteration. In the independent framework, on the other hand, each inducer is built independently of the others. **Table II** shows the types of ensemble methods based on the above approaches.

| Method name | Fusion method | Dependency | Training approach |
| --- | --- | --- | --- |
| Stacking | Meta-learning | Independent | Manipulated learning |
| AdaBoost | Weighting | Dependent | Input manipulation |
| Gradient Boosting (GB) machines | Weighting | Dependent | Output manipulation |
| Random Forest | Weighting | Independent | Ensemble hybridization |
| Bagging | Weighting | Independent | Input manipulation |

TABLE II: Ensemble method categories

### _Generative Adversarial Networks and Reinforcement Learning_

Generative Adversarial Networks (GAN) is a technique for both semi-supervised and unsupervised learning, proposed by Goodfellow [4]. The GAN architecture consists of two main parts: a generative model \(G\) and a discriminative model \(D\). While \(G\) tries to generate fake data resembling the real data, \(D\) is responsible for distinguishing between real samples and samples generated by \(G\). Both networks are trained concurrently and compete in a two-player minimax game.

Reinforcement Learning (RL) is another branch of ML, besides supervised and unsupervised learning. This learning method relies on an _agent_ that decides to perform a suitable _action_ and then interacts with the _environment_ to obtain the best _reward_ in a specific state, receiving a new _state_ in return. More specifically, RL is distinct from supervised learning in that the training data is not pre-labeled; instead, behavior is learned through a process of trial and error. The RL model repeatedly attempts to solve the problem at hand and learns from the outcome of each attempt to gradually develop a suitable strategy. By contrast, supervised learning relies on pre-labeled training data to train the model.

## III Methodology

### _Threat models_

Based on the knowledge of the target available to the attacker when performing the attack, attack scenarios can be classified as white-box, gray-box, and black-box. In white-box scenarios, it is assumed that the attacker has complete knowledge of the target, including its training data, algorithms, and ML models along with training hyperparameters.
In gray-box scenarios, by contrast, the attacker can only obtain limited or partial information. Opposite to the white-box case, the black-box attacker has no knowledge of the target ML system at all. In fact, many argue that it is impossible to perform a truly black-box attack, because the attacker must gather at least some information, such as the location of the target ML model or its output for specific provided data. In the malware domain, a black-box attack typically refers to an attack on a target model where the attacker only has access to the target's input and output interfaces. Our threat model is defined by the following five aspects.

* _Knowledge of attacker:_ the information about the target model available to the attacker. Our proposed method performs attacks in a black-box scenario. This means that the parameters and the ensemble learning architecture of the malware detectors are not available to the attacker. Moreover, the attacker does not have access to any confidence score from the detectors. The only accessible information is whether the mutant samples can evade detection.
* _Manipulation Space:_ the space in which adversarial samples are created, which can be the problem space or the feature space. Our method works in both spaces. We apply GAN to create adversarial feature vectors based on the original ones (feature space). Then, those vectors are leveraged during the modification process to produce actual malware samples by RL (problem space).
* _Attack Strategy:_ the type of evasion attack. In our work, we utilize the mimicry attack [32], an evasion technique that seeks to move the attack point into a benign region, or to mimic a specified benign point.
* _Target model:_ the detection model we aim to evade in the scope of this paper, namely Ensemble Learning-based detectors.
* _Attack Goal:_ breaking the CIA (Confidentiality - Integrity - Availability) model by creating new feature vectors as well as malware samples that can fool the Ensemble Learning-based detectors.

Besides, we also consider the transferability of adversarial attacks, evaluating the ability of mutant patterns generated during interaction with a given model to fool other ML models. The models in our study are single models and ensemble learning models.

### _Our strategy_

Our proposed method to generate adversarial samples consists of two main parts, FeaGAN and the RL model, as shown in **Fig. 3**. In this system, FeaGAN is responsible for generating adversarial feature vectors. FeaGAN is inspired by MalGAN [13], with some improvements on the malware detector side. This component operates only on feature vectors, i.e., in feature space, without directly generating a complete malware sample; the RL model is used to overcome this limitation when utilizing FeaGAN. The RL model decides the sequence of modifications to perform: if the agent chooses a modifying action such as adding a new section, adding an import function, or changing a section name, the adversarial feature vectors generated by FeaGAN are used. The actions of the RL agent are all operations that transform a malicious software file without breaking its format.

#### Iii-B1 FeaGAN model

FeaGAN incorporates three essential components: the Generator, the malware detector, and the Discriminator. Our proposed model takes advantage of the API call list to generate deceptive PE malware samples.
Fig. 3: Overview of adversarial malware generation system.

The underlying assumption of FeaGAN is that attackers have complete knowledge of the feature space of the target malware detector. Consequently, the model seeks to construct a Discriminator that closely emulates the target malware detector while concurrently training the Generator. The training process strengthens the generated adversarial malware by introducing spurious API calls into the original malware sample, which helps to deceive the Discriminator more efficiently.

* **Generator:** In general, the Generator, which is a multi-layer feed-forward neural network, is used to transform a malware feature vector \(m\) into its adversarial version by adding noise \(z\). Each element of \(m\) represents the presence or absence of a feature, while the noise \(z\) is a 10-dimensional vector randomized in the range [0, 1]. The Generator is designed with two hidden layers of 256 nodes and Leaky ReLU as the activation function (**Eq. (1)**). The output layer has 11,041 nodes, corresponding to the number of features in the malware dataset. All these nodes have their output constrained to the range (0, 1) by the Sigmoid activation function. Moreover, we resolve the _exploding gradient_ problem by bounding the output of the Generator within the range \([\epsilon,1-\epsilon]\) with \(\epsilon=10^{-7}\). \[g(x)=\begin{cases}x,&x\geq 0\\ \alpha x,&\text{otherwise}\end{cases}\tag{1}\]
* **Malware detector:** It plays the role of a third-party malware detector, labeling each input sample as either malware or benign. To implement this element, we use both single algorithms and ensemble learning ones, enabling adversarial samples to interact with multiple classifiers to improve their evasiveness [1]. In the case of ensemble learning, the Stacking algorithm is chosen due to its popularity and the benefits gained from each base model. Its pseudocode is given in **Algorithm 1**.
* **Discriminator:** The Discriminator is utilized to mimic the operations of the malware detector and supplies gradient information to train the Generator. It is also a multi-layer feed-forward neural network with the same structure as the Generator. The main difference is the output layer, where the Discriminator has only one node. It also uses the Sigmoid activation function, outputting the _probability_ that the input vector is malware.

#### Iii-B2 RL model

We utilize DQEAF [8] as the RL model to assist FeaGAN in generating complete adversarial malware samples instead of only adversarial vectors. Given the reward and the state of the environment as input, an action is chosen based on a rational strategy for the agent to perform in the next round. The problem is described as a Markov Decision Process (MDP) to apply DQEAF:

* The state \(s_{t}\) is a feature vector of the malware PE file.
* Given a state \(s_{t}\), the agent observes the state of the environment and chooses an action \(a_{t}\) from the action space \(A\) of available actions. In our RL model, we implement 10 actions, defined in **Table III**.
* The reward \(r_{t}\) is determined for each training \(TURN\) based on the label from the detector and the number of performed actions. Moreover, \(MAXTURN\) indicates that the agent should claim failure once \(MAXTURN\) modification steps have been taken. The reward \(r_{t}\) is 0 if the label is malware, and is calculated using **Eq. (2)** when the label is benign.
\[r_{t}=20^{-(TURN-1)/MAXTURN}\times 100\tag{2}\]

The agent is trained to maximize the expectation of the cumulative discounted reward given by **Eq. (3)**,

\[R=\sum_{t=1}^{T}\gamma^{t-1}r_{t}\tag{3}\]

where \(\gamma\in[0,1]\) is a factor discounting future rewards.

* The optimal action-value function \(\hat{Q}\), shown in **Eq. (4)**, estimates the expected reward of taking action \(a_{t}\) in state \(s_{t}\): \[\hat{Q}(s_{t},a_{t})=E\{r_{t}+\gamma\max_{a_{t+1}}\hat{Q}(s_{t+1},a_{t+1})\}\tag{4}\]

The agent optimizes the weights \(\theta\) to minimize the error estimated by the loss function (**Eq. (5)**) during the learning process:

\[loss_{t}(\theta_{t})=\Big[\big(r_{t}+\gamma\max_{a_{t+1}}Q(s_{t+1},a_{t+1};\theta_{t-1})\big)-Q(s_{t},a_{t};\theta_{t})\Big]^{2}\tag{5}\]

| Action | Description |
| --- | --- |
| overlay_append | Randomly add some bytes to the end of the file |
| imports_append | Randomly add an import function that is not in the file |
| section_rename | Rename a section contained in the file |
| section_add | Randomly add a section to the file |
| section_append | Randomly add some bytes to a section |
| upx_pack | Pack the file using UPX |
| upx_unpack | Unpack the packed executable using UPX |
| remove_signature | Remove the signed certificate of the executable |
| remove_debug | Remove debugging information in the file |
| break_header_checksum | Modify (break) the header checksum |

TABLE III: Actions in the action space of the RL model

#### Iii-C3 Cuckoo-based functional testing environment

The functional testing environment is isolated from the host system to capture and verify suspected malware, preventing these files from executing undesired behaviors that affect the real system. We use the Cuckoo Sandbox, an open-source dynamic analysis system, as the testing environment for functional validation, evaluating the executability and maliciousness of the created malware.

## IV Experiments and Analysis

### _Environment implementation_

Our system is implemented on an Ubuntu 18.04 LTS virtual machine with a 16-core Intel Xeon E5-2660 CPU clocked at 2.0 GHz, 16 GB of RAM, and a 300 GB disk. The source code is written in Python using PyTorch, Scikit-learn, LIEF, and some other supporting libraries. It should be mentioned that the Stacking approach requires Scikit-learn version 0.22 or higher, which we install to utilize the ensemble strategies.

#### Iv-A1 FeaGAN module

The FeaGAN model is trained to generate new features for actions in the RL model, such as imports_append, section_add, or section_rename. The Generator and the Discriminator are both multi-layer feed-forward neural networks consisting of 2 hidden layers with 256 nodes per layer. The activation function is Leaky ReLU, defined in **Eq. (1)**, with \(\alpha=0.01\). The output layer has 11,041 nodes, including 9,890 imports and 1,151 sections, and uses a sigmoid activation function to ensure the output lies in the range (0, 1). The FeaGAN model is trained for 100 epochs with a batch size of 32.

#### Iv-A2 RL module

We implement the same techniques as described in Fang's DQEAF framework [8], but with 2,350 dimensions as in gym-malware [7], for a more comprehensive view of malware.
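As a reference for **Eq. (5)**, the squared temporal-difference error can be sketched in PyTorch as below. This is a minimal sketch only; the separate, periodically updated target network is standard DQN practice and an assumption here rather than a detail taken from DQEAF.

```python
import torch
import torch.nn.functional as F

def td_loss(q_net, target_net, batch, gamma=0.99):
    """Squared TD error of Eq. (5) for a batch of transitions.

    batch: (states, actions, rewards, next_states, dones) tensors;
    target_net holds the older parameters theta_{t-1} (assumed setup).
    """
    states, actions, rewards, next_states, dones = batch
    # Q(s_t, a_t; theta_t) for the actions actually taken
    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # r + gamma * max_a' Q(s_{t+1}, a'; theta_{t-1})
        q_next = target_net(next_states).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * q_next
    return F.mse_loss(q, target)
```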
The RL agent has 10 possible actions, as listed in **Table III**, and is trained with a Deep Convolutional Q-network with two hidden layers of 256 and 64 nodes, respectively. The activation function used in both layers is ReLU. The agent is trained over 600 episodes with a discount factor \(\gamma\) of 0.99. In each episode, the agent is allowed to perform up to 80 actions on each PE file. If it receives a reward of 10 before reaching that limit, it proceeds to the next episode to learn in a new state.

#### Iv-A3 FeaGAN's malware detectors

Note that, due to the high dimensionality of the dataset [33] described in **Section IV-B**, we carefully choose algorithms that can handle this requirement. To implement single learning-based detectors, we utilize five algorithms: Decision Tree (DT), Logistic Regression (LR), K-Nearest Neighbors (KNN), Naive Bayes, and Bernoulli Naive Bayes. Meanwhile, ensemble algorithms are deployed in both homogeneous and heterogeneous manners. More specifically, Random Forest, Bagging, AdaBoost, and Gradient Boosting are used as homogeneous algorithms. Besides, in the Voting technique, all five single algorithms above work as estimators with soft voting for prediction. In the Stacking technique, there are two different implementations: the first one uses the three best single algorithms as base estimators, while the second one employs all five single algorithms as base estimators. The final estimator is chosen from the group of the top three single methods indicated above.

#### Iv-A4 Target RL models

To deploy target RL models, most other works employ a GB-based model and the gym-malware framework [7, 8]. In this paper, we also implement our own GB-based target model and compare it with the GB-based one in gym-malware, as well as with other homogeneous ensemble algorithms, as in FeaGAN. Moreover, five algorithms, LR, DT, Naive Bayes, KNN, and MLP, are used as single models. Besides, in the heterogeneous approach, Voting also uses all five of these single algorithms. In the Stacking method, we again have two cases of base estimators: the first one takes all five single algorithms as base estimators, while the other has DT, RF, AdaBoost, Bagging, and GB as base estimators.

### _Dataset_

Our dataset has 115,000 PE files extracted from [34], including 55,000 benign files collected from a Windows 7 virtual machine (VM) and 60,000 malware files from VirusTotal, with labels of Adware, Trojan, Virus, Ransomware, and Backdoor. From the given dataset, features including the import functions and the sections are extracted. More specifically, import functions are stored in a set in the format \(<Function>:<Name\ of\ DLL\ library>\), for example, \(ReadFile:kernel32.dll\), and then concatenated with the section names (.text, .data, .bss, etc.). All features are binary: a feature is 1 when it exists in the sample and 0 otherwise. This dataset is divided into different parts for training or testing the various components of our model, as described in **Table IV**. In more detail, 5,000 benign files and 8,000 malware files are used for training and testing FeaGAN with different ratios: 80% of the benign samples take part in training the detector, while the remaining ones are used for testing it. Meanwhile, the malware samples are split into 60% and 20% for training and testing the detector, respectively, while the remaining 20% is used for training the GAN. We reuse the detector's testing data for testing the GAN.
In the RL model, we reuse the GAN dataset to train and test the RL agent (i.e., 3,200 malicious samples in a 50:50 ratio for training and testing). We also utilize 50,000 benign files and 50,000 malware files to initially train and test the target models with a ratio of 8:2.

| Dataset | Component | Split | Benign | Malware |
| --- | --- | --- | --- | --- |
| Dataset for FeaGAN | Malware detector | Training | 4,000 | 4,800 |
| | | Testing | 1,000 | 1,600 |
| | GAN | Training | 0 | 1,600 |
| | | Testing | 0 | 1,600 |
| Dataset for RL | Target model | Training | 40,000 | 40,000 |
| | | Testing | 10,000 | 10,000 |
| | RL agent | Training | 0 | 1,600 |
| | | Testing | 0 | 1,600 |
| Testset for the proposed system | | Testing | 0 | 2,000 |

TABLE IV: Overview of data distribution
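A minimal sketch of the binary feature encoding described above, using the LIEF library mentioned in the implementation; the vocabulary construction and the helper name are illustrative assumptions, not the paper's exact code.

```python
import lief

def encode_pe(path, vocab):
    """Encode a PE file as a 0/1 vector over a fixed feature vocabulary.

    vocab maps a feature name, either "<Function>:<dll>"
    (e.g. "ReadFile:kernel32.dll") or a section name (e.g. ".text"),
    to its index in the feature vector (11,041 features in this paper).
    """
    binary = lief.parse(path)
    present = set()
    for lib in binary.imports:              # imported DLLs and functions
        for entry in lib.entries:
            if entry.name:
                present.add(f"{entry.name}:{lib.name.lower()}")
    for section in binary.sections:         # section names
        present.add(section.name)
    vec = [0] * len(vocab)
    for feat in present:
        if feat in vocab:
            vec[vocab[feat]] = 1            # 1 if the feature exists
    return vec
```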
#### Iv-D3 Scenario 3 - Transfer attack This is the case where FeaGAN and Target Model have different underlying algorithms. Our goal is to evaluate whether the malicious patterns generated by an ensemble learning algorithm remain evading ability against target detectors based on single algorithms and other ensemble algorithms. #### Iv-D4 Scenario 4 - Evaluation on the capability of malware preservation Preserving the integrity of mutant samples is a critical criterion to evaluate the performance of the generation method, which can be categorized into 3 aspects. * _Format-preserving_: All mutant malware samples need to be in the same PE format as the original ones. To verify this, we use a tool called _file_ installed in Cuckoo VM. * _Executability-preserving_: We aim to check where mutants are executable by running them in Cuckoo Sandbox. Then, we save information about the signatures of samples for evaluating executability-preserving, as well as later maliciousness-preserving. The mutant sample is considered executability-preserving when generating at least one dynamic signature (generating network traffic, accessing REGISTRY or files on the system, etc.). * _Maliciousness-preserving_: This evaluation is indicated by the number of similar signatures in the original malware and its mutant sample. So, the maliciousness is claimed to be preserved if the difference is less than a predefined threshold, which is 6 signatures in our study. ### _Experimental results and analysis_ #### Iv-E1 Scenario 1: Performance of malware detectorsThe performance of malware detectors in our proposed FeaGAN as well as target models are presented in **Table V**. As we can see, among single algorithms, LR achieves the best performance in most metrics. Besides, LR, KNN and DT are the best three ones with all metrics above 89.5%. In the case of homogeneous ensemble learning, RF seems to be better in performance than other algorithms. However, GB or Bagging still have their strength in some other metrics. In the heterogeneous ensemble learning, besides voting, we have stacking cases indicated by _Stacking(n, Algo)_, where \(n\) and _Algo_ are the number of base estimators and the algorithm of final estimator, respectively. For example, _Stacking(3, LR)_ represents Stacking ensemble learning using 3 base estimators and LR as the final estimator. Note that, algorithms in the 3-estimator stacking case are the 3 best single algorithms, which are LR, KNN and DT as mentioned before. Via the results in **Table V**, it seems unable to figure out the best model in the heterogeneous approach. Moreover, a change in the number of estimators does not result in a clear increase or decrease trend in performance of all models. The only notable case is that KNN has most of its metrics dropped significantly when having more base estimators, except Recall reaching the best value among all heterogeneous algorithms. This can be caused by the mismatch between the algorithm and the data set, as well as the lack of data to provide reliable statistics. However, it can be noted that the Stack technique outperforms the Voting one. Besides, in general, ensemble algorithms achieve a little better performance compared to their single ones. Performance of target models**Table VI offers the malware-detecting capability of target models in the RL model. Note that, due to the slight difference when changing the number of base estimators, stacking-based models are designed with 5 base estimators. 
Moreover, the two cases of stacking differ in the used base and final estimators. The first case utilizes single algorithms DT, LR, Naive Bayes, MLP, and KNN for base estimators, and the best of them - DT as the final. In the second case, underperforming single algorithms are substituted by ensemble ones to have DT, RF, Adaboost, Bagging, and GB as base estimators and the final estimator of RF. Clearly, DT and RF outperform other single and homogeneous algorithm-based models with the best values in most metrics. Moreover, the experimental results indicate that using ensemble algorithms in the stacking method yields more favorable outcomes. More specifics, the metrics of the second stacking case surpass those of the first stacking approach by 5.8% to 7.2%. Besides, based on the above results, to simplify the comparison in later scenarios, we chose the best algorithms with the highest performance to represent each case of the target models. Specifically, DT and KNN are used as the representatives for single learning detectors. Meanwhile, RF and GB (both ours and gym-malware-based) are examples of Homogeneous Ensemble Learning and Stacking(5, RF) is for Heterogeneous Ensemble Learning. After training, we also consider the performance of the target models made on testing 2,000 malware samples, as in **Table VII**. The results indicate that the target models can identify malware effectively, with low evasion rates of mostly under 10% and a high average score in VirusTotal. #### Iv-C2 Scenario 2 Initially, we used FeaGAN to create adversarial features. As shown in **Table VIII**, most algorithms have Recall significantly decreased to near 0 when dealing \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multicolumn{2}{c}{**Algorithms**} & \multicolumn{2}{c}{**AUC Accuracy**} & \multicolumn{1}{c}{**Precision**} & \multicolumn{1}{c}{**Recall**} & \multicolumn{1}{c}{**F1-score**} \\ \hline \multirow{4}{*}{**Single**} & Bernoulli & 0.84906 & 0.73115 & 0.85839 & 0.67438 & 0.75534 \\ \cline{2-6} & Naive Bayes & 0.61137 & 0.69615 & 0.67442 & **0.97875** & 0.79857 \\ \cline{2-6} & DT & 0.91580 & 0.92721 & 0.95155 & 0.92063 & 0.93583 \\ \cline{2-6} & LR & **0.96459** & **0.92769** & **0.95315** & 0.92813 & **0.94047** \\ \cline{2-6} & KNN & 0.94977 & 0.90231 & 0.94335 & 0.89500 & 0.91854 \\ \hline \multirow{4}{*}{**Homogeneous**} & RF & **0.99278** & **0.93654** & 0.96320 & 0.93250 & **0.94746** \\ \cline{2-6} & Bagging & 0.97268 & 0.93115 & **0.96648** & 0.92188 & 0.94279 \\ \cline{2-6} & AdaBoost & 0.95289 & 0.88500 & 0.91249 & 0.89938 & 0.90589 \\ \cline{2-6} & GB & 0.96610 & 0.92423 & 0.93816 & **0.93878** & 0.93846 \\ \hline \multirow{4}{*}{**Heterogeneous**} & Voting & 0.99699 & 0.93000 & 0.94479 & 0.94125 & 0.94302 \\ \cline{2-6} & Stacking(3, LR) & **0.97138** & 0.93615 & 0.96021 & 0.93500 & 0.94594 \\ \cline{2-6} & Stacking(3, KNN) & 0.95943 & 0.93192 & **0.96231** & 0.92563 & 0.94361 \\ \cline{2-6} & Stacking(3, DT) & 0.90274 & 0.91192 & 0.94454 & 0.91000 & 0.92710 \\ \cline{2-6} & Stacking(5, LR) & 0.97576 & **0.93846** & 0.96038 & 0.93875 & **0.94943** \\ \cline{2-6} & Stacking(5, KNN) & 0.74160 & 0.78846 & 0.76382 & **0.95000** & 0.84680 \\ \hline \multirow{4}{*}{**Homogeneous**} & Stacking(5, DT) & 0.90568 & 0.91538 & 0.94344 & 0.91750 & 0.93029 \\ \cline{2-6} & \multicolumn{1}{c}{} & & & & \\ \end{tabular} \end{table} TABLE V: Performance of Malware Detector in FeaGAN \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Target model**} & \multicolumn{3}{c}{**Before 
the mutation**} \\ \cline{2-4} & **Detected** & **Exusion Rate (ER)** & **Average** \\ \cline{2-4} & DT & 0.97195 & **0.97195** & **0.96705** & **0.97220** & **0.97210** \\ \hline \multirow{2}{*}{**Single**} & Naive Bayes & 0.86779 & 0.75250 & 0.94236 & 0.53790 & 0.68487 \\ \cline{2-4} & MLP & 0.82203 & 0.82185 & 0.93641 & 0.69060 & 0.749494 \\ \cline{2-4} & KNN & 0.81440 & 0.76705 & 0.71321 & 0.89330 & 0.79316 \\ \hline \multirow{4}{*}{**Homogeneous**} & RF & **0.99899** & **0.9205** & **0.98897** & 0.95520 & **0.99297** \\ \cline{2-4} & AdaBoost & 0.99522 & 0.97330 & 0.95721 & 0.99060 & 0.97376 \\ \cline{2-4} & GB & 0.99877 & 0.98450 & 0.97259 & **0.99210** & 0.98469 \\ \cline{2-4} & Bagging & 0.99357 & 0.97870 & 0.97349 & 0.98420 & 0.97882 \\ \cline{2-4} & GB (gym-malware) & 0.99079 & 0.95200 & 0.96217 & 0.94100 & 0.95147 \\ \hline \multirow{4}{*}{**Heterogeneous**} & Voting & 0.96844 & 0.84550 & 0.96401 & 0.71780 & 0.82288 \\ \cline{2-4} & Stacking(5, DT) & 0.92471 & 0.92355 & 0.91890 & 0.92910 & 0.92397 \\ \cline{1-1} \cline{2-4} & Stacking(3, RF) & **0.99157** & **0.99275** & **0.97730** & **0.99870** & **0.98788** \\ \hline \hline \end{tabular} \end{table} TABLE VI: Performance of Target Model in DQEAF \begin{table} \begin{tabular}{c c c} \hline \hline \multicolumn{2}{c}{**Algorithms**} & \multicolumn{2}{c}{**Recall on adversarial features**} \\ \hline \multirow{4}{*}{**Single**} & Bernoulli & 0 \\ \cline{2-3} & Naive Bayes & 0 \\ \cline{2-3} & DT & 0 \\ \cline{2-3} & LR & 0 \\ \cline{2-3} & KNN & 0.215 \\ \hline \multirow{4}{*}{**Homogeneous**} & RF & 0 \\ \cline{2-3} & Bagging & 0.49625 \\ \cline{2-3} & AdaBoost & 0.40968 \\ \cline{2-3} & GB & 0 \\ \hline \multirow{4}{*}{**Heterogeneous**} & Voting & 0 \\ \cline{2-3} & Stacking(5, LR) & 0 \\ \cline{2-3} & Stacking(5, KNN) & 0 \\ \hline \hline \multirow{4}{*}{**Heterogeneous**} & Stacking(5, DT) & 0 \\ \cline{2-3} & Stacking(5, LR) & 0 \\ \cline{2-3} & Stacking(5, KNN) & 0.00625 \\ \cline{2-3} & Stacking(3, DT) & 0 \\ \hline \hline \end{tabular} \end{table} TABLE VIII: Recall metric on Adversarial features of Malware Detector in FeaGAN with these crafted features. Bagging and KNN seem to retain a capability to detect adversarial features compared to other algorithms, in which Bagging is the best one. Subsequently, the result adversarial features are used as raw information for training DQEAF to build complete adversarial samples. Then, those samples are used to perform a targeted attack on target models which are similar to the model in the training phase. The evasion rate of crafted malware on 5 representative models is given in **Table IX**. The higher the evasion rate is, the more effective adversarial samples are. Clearly, DT has obtained the highest ER with 64.45%. Moreover, compared to the performance in the first scenario of evaluating the target model, our system succeeds in fooling those models with the increased evasion rate. The most effective case is also the DT model, where the evasion rate climbs 61.7%. Meanwhile, using GB creates adversarial samples with the least effect, which only causes an increase of 0.7% in the evasion rate. Moreover, in most cases, regardless of the target model in FeaGAN, the average score in VirusTotal drops in classifying created samples, which indicates many classifier engines fail to detect these malware mutants. A more visualized view of those results is given in **Fig. 4**. #### Iv-B3 Scenario 3 The results of the transfer attack are shown in **Table X**. 
In general, regardless of the employed algorithm, whether a single or an ensemble model, all target models have their evasion rate increased on the mutant samples. This indicates that the created samples are still effective in transfer attacks. Among the models used in generating mutants, the stacking-based ensemble algorithm produces the best results, causing three of the six target models (using a single or ensemble algorithm) to reach their highest evasion rates, of which two are transfer-attack cases. Moreover, to clearly evaluate the effect of the mutants on the target models, we also consider the difference in ER before and after the transfer attack in terms of increased ER, as in **Fig. 5**. Note that the models used in making the mutant samples are distinguished from the target models by the suffix "M". For example, the mutated samples generated with the DT algorithm in the GAN architecture are denoted by DT-M. Such DT-M samples can be tested in attacks on all ML algorithms, including DT, KNN, RF, GB, Stacking(5, RF), and GB (gym-malware). Overall, the evasion rate of all malware samples increased after modification. The tallest bars, in the Stacking(5, RF)-M group, prove its effectiveness in crafting mutants to fool detectors. #### Iv-B4 Scenario 4 In this scenario, we randomly select 100 resulting samples from each model used in generating mutants to verify the preservation requirements. The results, presented in **Table XI**, indicate that 100% of the mutant samples guarantee format preservation. This is due to the fact that during mutation, only components deemed unrelated are affected, leaving the malicious file intact and thereby ensuring the preservation of the file format. Moreover, according to our definitions of executability-preserving and maliciousness-preserving, the number of mutants meeting those requirements is acceptable. Clearly, it is more problematic to retain the malicious actions than to keep the executability, as indicated by the low ratio of maliciousness-preserved mutants. In particular, DT and Stacking(5, RF), which are considered the best algorithms \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Target model**} & \multicolumn{4}{c}{**After the mutation**} \\ \cline{2-5} & **Detected** & **ER** & **VirusTotal Score** & **Increased ER** \\ \hline **DT** & 711 & 64.45\% & 40/72 & 61.7\% \\ \hline **KNN** & 1,694 & 15.3\% & 52/71 & 1.5\% \\ \hline **RF** & 1,902 & 4.9\% & 48/72 & 2.6\% \\ \hline **GB** & 1,953 & 2.35\% & 46/71 & 0.7\% \\ \hline **Stacking(5, RF)** & 1,520 & 24\% & 53/71 & 23.15\% \\ \hline **GB (gym-malware)** & 1,837 & 8.15\% & 56/73 & 8\% \\ \hline \hline \end{tabular} \end{table} TABLE IX: The evasion result of 2,000 mutant samples on target models and VirusTotal in targeted attack Fig. 4: The evasion rate of mutant samples. to create the most evasive mutants, fail to ensure the proper operation of the malware, with only 7% and 1% of mutants remaining truly malicious. ## V Related work Practical and realistic adversarial attacks against PE malware detection must address three challenges in maintaining the semantics of adversarial PE malware: format preservation, executability preservation, and maliciousness preservation [2]. Unlike images, sounds, or even text, PE malware must follow the strict formatting rules of a PE file.
Therefore, transformations of PE files in the problem space must be defined within the required format. However, even if the format of the file is guaranteed (format-preserving), the executability of the PE file (executability-preserving) and the original maliciousness of the PE malware (maliciousness-preserving) do not automatically follow. To start with, ATMPA [35], proposed by Liu et al., is a white-box adversarial attack strategy in the domain of image-based malware classification. In particular, ATMPA converted the malware sample to a binary-texture grayscale image before modifying the corresponding adversarial sample with tiny perturbations provided by two existing adversarial attack techniques, FGSM and C&W. Experimental results indicated that the adversarial noise can reach a 100% attack success rate against CNN-, SVM-, and RF-based malware detectors. Furthermore, the transferability rate of the adversarial samples when attacking various malware detectors might reach 88.7%. However, the created adversarial grayscale image of the malware sample broke the structure of the original malware and hence could not be executed properly, making ATMPA unsuitable against real-world PE malware detection. In other work, Lucas et al. [37] presented a novel category of adversarial attacks based on binary diversification techniques that change binary instructions at the fine-grained function level using two types of functionality-preserving modifications (i.e., in-place randomization and code displacement). To guide the transformations applied to the PE malware in the white-box setting, they used gradient ascent optimization to select a transformation only if it shifts the embeddings \begin{table} \begin{tabular}{c c c c} \hline \hline **Algorithm** & **Format** & **Executability** & **Maliciousness** \\ **for mutant** & **-preserving** & **-preserving** & **-preserving** \\ \hline **DT** & 100\% & 60\% & 7\% \\ \hline **KNN** & 100\% & 94\% & 62\% \\ \hline **RF** & 100\% & 99\% & 63\% \\ \hline **GB** & 100\% & 54\% & 29\% \\ \hline **Stacking(5, RF)** & 100\% & 81\% & 1\% \\ \hline **GB (gym-malware)** & 100\% & 94\% & 62\% \\ \hline \hline \end{tabular} \end{table} TABLE XI: Capability to guarantee the challenges of 100 mutant samples \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Mutant sample by**} & \multicolumn{12}{c}{**Target model**} \\ \cline{2-13} & \multicolumn{2}{c}{**DT**} & \multicolumn{2}{c}{**KNN**} & \multicolumn{2}{c}{**RF**} & \multicolumn{2}{c}{**GB**} & \multicolumn{2}{c}{**Stacking(5, RF)**} & \multicolumn{2}{c}{**GB (gym-malware)**} \\ \cline{2-13} & **Detected** & **ER** & **Detected** & **ER** & **Detected** & **ER** & **Detected** & **ER** & **Detected** & **ER** & **Detected** & **ER** \\ \hline **DT-M** & **711** & **64.45** & 1,614 & 19.3 & 1,391 & 30.45 & **1,915** & **4.25** & 1,915 & 4.25 & 1,965 & 1.75 \\ \hline **KNN-M** & 1,889 & 5.55 & 1,694 & 15.3 & 1,902 & 4.9 & 1,957 & 2.15 & 1,965 & 1.75 & **1,836** & **8.2** \\ \hline **RF-M** & 1,905 & 4.75 & 1,692 & 15.4 & 1,902 & 4.9 & 1,957 & 2.15 & 1,964 & 1.8 & 1,837 & 8.15 \\ \hline **GB-M** & 1,910 & 4.5 & 1,697 & 15.15 & 1,908 & 4.6 & 1,953 & 2.35 & 1,961 & 1.95 & 1,842 & 7.9 \\ \hline **Stacking(5, RF)-M** & 912 & 54.4 & **1,559** & **22.05** & **1,041** & **47.95** & 1,961 & 1.95 & **1,520** & **24** & 1,954 & 2.3 \\ \hline **GB (gym-malware)-M** & 1,905 & 4.75 & 1,692 & 15.4 & 1,902 & 4.9 & 1,957 & 2.15 & 1,964 & 1.8 & 1,837 & 8.15 \\ \hline \hline \end{tabular}
\end{table} TABLE X: Ability to detect mutant malware samples when performing a transfer attack (**total:** 2,000 samples) Fig. 5: The increased evasion rate of the malware samples after mutation. in a direction similar to the gradient of the attack loss function with respect to its embeddings. It is evident that practically almost all white-box attacks against PE malware detection, such as the ones described above, use gradient-based optimization approaches as their attack methodologies, regardless of whether the adversary mutates the original samples in the feature space or the problem space. However, due to the problem-feature space dilemma, it is infeasible and impractical to directly utilize gradient-based optimization algorithms to build realistic adversarial PE malware patterns [39]. In comparison to white-box attacks, black-box attacks are more practicable and realistic in the wild because they rely less on the attacker's knowledge of the target malware detector. For instance, Rosenberg et al. [36] introduced BADGER, an end-to-end adversarial attack system comprising a set of query-efficient black-box attacks designed to make API-call-sequence-based malware detectors misclassify while minimizing the number of queries. To preserve the original functionality, their attacks were restricted to inserting only API calls that have no impact or an insignificant one. The authors presented various attacks with and without knowledge of the output likelihood scores to tackle the problem of whether and where the API calls should be added, and they conducted two types of adversarial attacks: score-based and decision-based. The score-based attack employed a pre-trained SeqGAN, initialized to replicate the API call sequences of benign samples, to generate the API calls to insert, and it also applied a self-adaptive uniform mixing evolutionary method to optimize the insertion site. The decision-based attack, in contrast, relied on randomness to choose the insertion position. To improve the query efficiency of the attacks, they first injected content up to a maximum budget and then used a logarithmic backtracking mechanism to remove part of the added API calls while maintaining evasion. Regarding GAN-based attacks, Hu and Tan [13] introduced the MalGAN model to generate PE malware samples able to bypass classification by malware detectors based on API call lists. Specifically, MalGAN assumed that the attacker knows the entire feature space of the target malware detector. The authors built an alternative detector with the same characteristics as the black-box target model. Then, MalGAN trained a generation module to minimize the malicious probability of adversarial patterns predicted by the alternative detector, adding some non-functional API calls to the original injected object. In another study, Kawai et al. [5] identified some issues from a realistic viewpoint and proposed Improved-MalGAN, an improved model derived from MalGAN. For instance, Improved-MalGAN used different API call lists during the training of MalGAN and of the black-box detectors, while the original MalGAN trained them with the same dataset. They also mentioned that the generation of adversarial patterns should not be carried out on diverse types of malware, as it may degrade the evasion performance. Besides, Yuan et al. presented GAPGAN [6], a byte-level black-box adversarial attack system based on GAN against DL-based malware detection.
Their system was built to keep its original functionality and had a high success rate with only short-inserted payloads. For RL-based attacks, according to Ebrahimi et al. [38], the actor-critic or DQN is typically used in RL-based adversarial attack methods against PE malware detection and has limitations when handling situations with a large combinatorial state space. By using the variational actor-critic, which has been \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Year**} & \multirow{2}{*}{**Authors**} & \multirow{2}{*}{\begin{tabular}{c} **Knowledge** \\ **of attacker** \\ \end{tabular} } & \multirow{2}{*}{\begin{tabular}{c} **Manipulation** \\ **Space** \\ \end{tabular} } & \multirow{2}{*}{**Attack Strategy**} & \multirow{2}{*}{\begin{tabular}{c} **PE Malware** \\ **Detection** \\ \end{tabular} } & \multirow{2}{*}{**Datasets**} & \multirow{2}{*}{ \begin{tabular}{c} **Preeservation** \\ **Detection** \\ \end{tabular} } \\ \cline{5-10} & & & & & & & & & & \\ \hline 2017 & Hu and Tan [13] & BB & FS & GAN & \begin{tabular}{c} API call list based \\ malware detectors \\ \end{tabular} & Self-collected dataset & x & x & \\ \hline 2019 & Liu et al. [35] & WB & FS & FGSM, C\&W & \begin{tabular}{c} Visualization-based \\ malware detectors \\ \end{tabular} & \begin{tabular}{c} BIG 2015 \\ VirusTotal \\ Self-collected benign dataset \\ \end{tabular} & & \\ \hline 2019 & Kawai et al. [5] & BB & FS & GAN & \begin{tabular}{c} API call list based \\ malware detectors \\ \end{tabular} & \begin{tabular}{c} FFRI Dataset 2018 \\ \end{tabular} & x & x & \\ \hline 2019 & Fang et al. [8] & BB & FS & RL & \begin{tabular}{c} GB-based \\ malware detector \\ \end{tabular} & VirusTotal & x & x & x \\ \hline 2020 & Rosenberg et al. [36] & BB & FS & Evolutionary Algorithm & \begin{tabular}{c} API call sequence based \\ malware detectors \\ \end{tabular} & Self-collected dataset & x & x & \\ \hline 2020 & Yuan et al. [6] & BB & PS & GAN & MalConv & \begin{tabular}{c} VirusTotal \\ BIG 2015 \\ Checkorley \\ \end{tabular} & x & x & \\ \hline 2021 & Lucas et al. [37] & WB & PS & Gradient-based & \begin{tabular}{c} VirusTotal \\ VirusBase \\ AttackNet \\ Self-collected benign dataset \\ \end{tabular} & x & x & x \\ \hline 2021 & Ebrahimi et al. [38] & BB & PS & RL & \begin{tabular}{c} EMBER \\ MalConv \\ \end{tabular} & VirusTotal & x & x & \\ \hline 2021 & Labaca-Castro et al. [9] & BB & PS & RL & \begin{tabular}{c} LightG-based \\ malware detector \\ \end{tabular} & Self-collected dataset & x & x & \\ \hline & Ours & BB & PS & GAN + RL & \begin{tabular}{c} Single algorithm-based \\ malware detectors \\ Ensemble algorithm-based \\ malware detectors \\ \end{tabular} & VirusTotal & & \\ \hline \hline \end{tabular} \end{table} TABLE XII: The summary of related works on malware mutation shown to perform at the cutting edge in managing situations with combinatorial large state space, they offered an improved RL-based adversarial attack framework of AMG-VAC based on gym-malware [7, 40]. In addition, DQEAF, a different framework presented by Fang et al. [8] that employed DQN to evade PE malware detection, was nearly identical to gym-malware in strategy with a few implementations to increase the effectiveness of the model. Labaca-Castro et al. [9] also offered AIMED-RL, an RL-based adversarial attack framework. 
The primary distinction between AIMED-RL and other RL-based adversarial attacks is that AIMED-RL introduces a novel penalization of the reward function to increase the diversity of the mutated transformation sequences while minimizing their lengths. In terms of property preservation, including format, executability, and maliciousness, most adversarial attacks against PE malware detectors can only retain the format, not the executability or maliciousness [2]. Several adversarial attack methods, like ATMPA [35], may damage the fixed layout and structure of the PE format, which is required to load and execute the PE file. Furthermore, some approaches [35, 36, 13, 5] interact only with the feature space rather than with the problem space as in [37, 6, 38, 8, 9], and are therefore highly likely to run into trouble due to the problem-feature space dilemma. It is also worth noting that several studies on adversarial attacks, such as [13, 35, 5, 36, 6, 38, 9], have yet to empirically demonstrate whether the generated adversarial PE malware retains the same level of maliciousness as the original PE malware. These studies are summarized in **Table XII**. Furthermore, several authors have proposed techniques to improve the effectiveness of adversarial malware variants. Two techniques improve the efficiency of adversarial samples: using a variety of attack methods, and attacking many classifiers [1]. In the first strategy, a variety of attack methods is employed to perturb the input and increase the probability that classifiers misclassify. Tramer et al. [11] suggested employing several mutants to attack a classifier, which proved efficient at evading it. In the second strategy, a variety of classifiers is attacked so that adversarial patterns interact with as many of them as possible, maximizing the likelihood of evading the detectors directed at them. Along these lines, Liu et al. [12] proposed improving pattern transferability by attacking a group of mixed DL models rather than a single model. Also, Luca et al. [41] offered a framework that takes a step towards a more systematic and scalable attack approach against ML algorithms. Their framework is composed of two major blocks: first, practical manipulations are defined within the provided application-specific constraints; second, an optimizer is utilized to fine-tune them. It thereby addresses four requirements that otherwise hinder the application of such attacks: being application-specific, semantics-preserving, automatable, and fine-tunable. The previously mentioned studies have provided the motivation and inspiration for developing an approach to create mutant malware samples that preserve key properties such as format, executability, and maliciousness. In this work, we propose a framework that combines FeaGAN and DQEAF to evade anti-malware engines, with a focus on the transferability of the adversarial malware samples. Fang's research team has demonstrated the effectiveness of the DQEAF framework, which effectively addresses all three challenges [8]. To further improve the evasion capability, we employ an ensemble method for the substitute black-box detector; this enhances the interaction of the adversarial malware samples with the detector and increases their ability to evade ensemble-method-based detectors. To ensure the executability and maliciousness of the samples, we test them using the Cuckoo sandbox.
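For concreteness, a heterogeneous substitute detector of the kind used in our framework (the Stacking(5, RF) configuration from Section IV) could be assembled with scikit-learn roughly as follows. This is a minimal sketch under our own assumptions: the hyperparameters, the synthetic data, and the variable names are illustrative and not the exact configuration used in the experiments.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.tree import DecisionTreeClassifier

# Five base estimators of the Stacking(5, RF) detector, with a Random Forest
# as the final meta-estimator trained on their out-of-fold predictions.
base_estimators = [
    ("dt", DecisionTreeClassifier()),
    ("rf", RandomForestClassifier(n_estimators=100)),
    ("ada", AdaBoostClassifier()),
    ("bag", BaggingClassifier()),
    ("gb", GradientBoostingClassifier()),
]
detector = StackingClassifier(estimators=base_estimators,
                              final_estimator=RandomForestClassifier(),
                              cv=5)

# Stand-in for PE feature vectors (0 = benign, 1 = malware).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
detector.fit(X, y)
print(detector.predict(X[:5]))
```

Such a stacked detector is what the generated mutants must fool; in the black-box setting, the attacker only queries its prediction output.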
The proposed framework thus aims to create mutant malware samples that maintain their maliciousness while evading detection by anti-malware engines. Combining FeaGAN and DQEAF yields adversarial samples that are transferable across different anti-malware engines, the ensemble method for the substitute black-box detector improves the evasion capability of the samples, and Cuckoo sandbox testing provides a reliable means of verifying that the generated samples remain both executable and malicious. ## VI Conclusion The arms race between malware detection and evasion never comes to an end, owing to their opposing purposes. Malware can be mutated to create adversarial samples that bypass detectors, and investigating malware generation methods can motivate better prevention approaches to deal with such mutants. In fact, there is a lack of adversarial sample generation methods operating in the problem space to create a real malware entity instead of a malicious feature vector. In this work, the combination of three ML methods, GANs, RL, and ensemble learning, to generate adversarial malware samples against ensemble-learning-based detectors has achieved a degree of success. Our proposed FeaGAN and DQEAF have been shown experimentally to be effective in crafting malware mutants while preserving the properties of Windows malware. We also evaluate the effect of the chosen single or ensemble learning algorithms on the overall performance in targeting ensemble-learning-based detectors. In the future, we intend to extend the action space for mutating malware samples to diversify the effectiveness of the adversarial samples. Other RL algorithms will also be investigated, so as to apply the smallest number of actions to the original samples when crafting the malware mutants. All collected samples will help malware analysts gain in-depth insight into metamorphic malware and mitigate its spread in critical infrastructures.
2302.14284
Rethink Long-tailed Recognition with Vision Transformers
In the real world, data tends to follow long-tailed distributions w.r.t. class or attribution, motivating the challenging Long-Tailed Recognition (LTR) problem. In this paper, we revisit recent LTR methods with promising Vision Transformers (ViT). We figure out that 1) ViT is hard to train with long-tailed data. 2) ViT learns generalized features in an unsupervised manner, like mask generative training, either on long-tailed or balanced datasets. Hence, we propose to adopt unsupervised learning to utilize long-tailed data. Furthermore, we propose the Predictive Distribution Calibration (PDC) as a novel metric for LTR, where the model tends to simply classify inputs into common classes. Our PDC can measure the model calibration of predictive preferences quantitatively. On this basis, we find many LTR approaches alleviate it slightly, despite the accuracy improvement. Extensive experiments on benchmark datasets validate that PDC reflects the model's predictive preference precisely, which is consistent with the visualization.
Zhengzhuo Xu, Shuo Yang, Xingjun Wang, Chun Yuan
2023-02-28T03:36:48Z
http://arxiv.org/abs/2302.14284v2
# Rethink Long-Tailed Recognition with Vision Transformers ###### Abstract In the real world, data tends to follow long-tailed distributions w.r.t. class or attribution, motivating the challenging Long-Tailed Recognition (LTR) problem. In this paper, we revisit recent LTR methods with promising Vision Transformers (ViT). We figure out that 1) ViT is hard to train with long-tailed data. 2) ViT learns generalized features in an unsupervised manner, like mask generative training, either on long-tailed or balanced datasets. Hence, we propose to adopt unsupervised learning to utilize long-tailed data. Furthermore, we propose the Predictive Distribution Calibration (PDC) as a novel metric for LTR, where the model tends to simply classify inputs into common classes. Our PDC can measure the model calibration of predictive preferences quantitatively. On this basis, we find many LTR approaches alleviate it slightly, despite the accuracy improvement. Extensive experiments on benchmark datasets validate that PDC reflects the model's predictive preference precisely, which is consistent with the visualization. Zhengzhuo Xu\({}^{1*}\), Shuo Yang\({}^{1*}\), Xingjun Wang\({}^{1}\), Chun Yuan\({}^{1,2}\)+\({}^{1}\)Tsinghua Shenzhen International Graduate School, Tsinghua University, \({}^{2}\)Shenzhen Peng Cheng Lab _Index terms_: metric, long-tailed learning, vision transformers, representation learning, imbalanced data. Footnote †: *: Equal Contribution, \(\dagger\): Corresponding Author. This work was supported by the National Key R&D Program of China (2022YFB4701400/4701402), SZSTC Grant (JCYJ20190809172201639, WDZC20200820200655001), Shenzhen Key Laboratory (ZDSYS20210623092001004). ## 1 Introduction With rapid advances in visual classification, deep models have come to depend ever more heavily on balanced large-scale datasets [1, 2]. However, the number of instances in real-world data usually follows a Long-Tailed (LT) distribution w.r.t. class: many tail classes are associated with limited samples, while a few head categories occupy most of the instances [3, 4, 5, 6]. A model supervised by long-tailed data tends to be biased toward the head classes and to ignore the tail ones, and the paucity of tail data makes the model hard to train with satisfying generalization. It remains a challenging task to overcome Long-Tailed Recognition (LTR) and utilize real-world data effectively. The recent literature mainly adopts two approaches to tackle LT data, i.e., feature re-sampling and class-wise re-weighting. Re-sampling methods select the training data in a balanced way by over-sampling the tail or under-sampling the head; some effective proposals replenish the tail samples via generation or optimization with the help of head instances [7, 8]. Re-weighting methods penalize different categories with weights or logit biases related to the class instance numbers [9, 10]. Although the aforementioned methods have greatly mitigated the LT problem, these conclusions were established on ResNet-based backbones [11, 12]. In recent years, many transformer-based backbones [13] have surpassed the performance of CNNs. DeiT [14] proposes an effective recipe to train ViT with limited data, and MAE [15] adopts a masked autoencoder to pre-train the ViT. However, there is limited research on how ViTs perform on LTR. Motivated by this, we rethink the previous LT works with ViT. We figure out that it is hard to train ViTs with long-tailed data, while unsupervised pretraining ameliorates this by a large margin. Unsupervised pretrained ViTs learn meaningful features (c.f. 
Figure 2) and generalize well on downstream tasks (c.f. Table 2), either on long-tailed or balanced datasets. Numerous studies have demonstrated that a model supervised by an LT dataset will inevitably exhibit prediction bias toward the head [5, 16, 17, 18]: the predictor simply classifies the query image into head classes to attain a low misclassification error. Previous metrics, like accuracy on the validation dataset, can hardly evaluate the model's predictive preference directly, since the same accuracy may come at the cost of a very different number of predictions (c.f. Figure 1). Although some works show the models' prediction distributions qualitatively by visualization [5, 11, 19], a metric is required to evaluate them quantitatively. In this paper, we propose Prediction Distribution Calibration (PDC) to fill this gap. Specifically, if we view the prediction number and the target instance number of each class as probability distributions, we can measure the distance between these two distributions. Considering the imbalance degree of the training samples, we take the training label distribution into account as well. To summarize, our main contributions are: **1)** We figure out that it is difficult to train ViT with long-tailed data, which can be tackled with unsupervised pretraining. **2)** We propose PDC to provide a quantitative view of how much a proposal ameliorates the model's predictive preference. **3)** We conduct extensive experiments to analyze the performance of LTR proposals on ViT with our proposed PDC, which accurately indicates the model's predictive bias and is consistent with the visualization results. ## 2 The Proposed Approach ### Long Tail Recognition Given a \(C\)-class labeled dataset containing \(N\) training instances, \(\mathbf{D}=\{\left(x_{1},y_{1}\right),\left(x_{2},y_{2}\right),\ldots,\left(x_{n},y_{n}\right)\}\), where \(y_{i}\in\mathcal{C}=\{1,...,C\}\) and the data distribution is \(\mathbb{P}(\mathbf{x},\mathbf{y})\). In this paper, we define a base classification model as \(\mathcal{M}_{\theta}\), parameterized by \(\theta\). For each input image \(x\), the output logits are \(\mathbf{z}_{\theta}(x)=\mathcal{M}(x|\theta)=\{z_{1},...,z_{C}\}\). The goal is to optimize the parameters \(\theta\) to get the best estimation of \(\mathbb{P}(\mathbf{x},\mathbf{y})\). Generally, one adopts the _softmax_ function to map the output \(\mathcal{M}(x|\theta)\) to the conditional probability: \[p\left(\mathbf{y}\mid\mathbf{x};\theta\right)=\frac{e^{\mathcal{M}(\mathbf{x}|\theta)_{\mathbf{y}}}}{\sum_{y_{i}\in\mathcal{C}}e^{\mathcal{M}(\mathbf{x}|\theta)_{y_{i}}}} \tag{1}\] We get the posterior estimate \(\mathbb{P}(y|x):=p\left(\mathbf{y}|\mathbf{x};\theta\right)\) by maximum likelihood estimation of \(\mathbb{P}(x|y)\), as represented by the model parameters \(\theta\). In LTR, we train the model with long-tailed distributed training data \(\mathbb{P}_{s}(x,y)\) while evaluating it with uniform data \(\mathbb{P}_{t}(x,y)\). The label prior distribution \(\mathbb{P}_{s}(y)\) differs across classes, while it is constant on the test dataset, i.e., \(\mathbb{P}_{t}(y):=1/C\). For a tail class \(i\), \(\mathbb{P}_{s}(y_{i})\ll\mathbb{P}_{t}(y_{i})\). According to Bayesian theory, the posterior is proportional to the prior times the likelihood. Figure 1: Visualization of ViT-B on CIFAR100-LT (IF=100). **Acc trap**: accuracy cannot reflect predictive bias; a class can obtain on-par accuracy with another one while receiving many more predictions. Take classes 0 and 99 for an illustration.
Considering the same likelihood, i.e., \(\mathbb{P}_{s}(x|y)=\mathbb{P}_{t}(x|y)\), we have the posterior on the target dataset: \[\mathbb{P}_{t}(y_{i}|x)=\mathbb{P}(x|y_{i})\cdot\mathbb{P}_{s}(y_{i})/\mathbb{P}_{t}(y_{i}) \tag{2}\] With Eq. (2) and the balanced target distribution \(\mathbb{P}_{t}(y):=1/C\), we have \(\mathbb{P}_{t}(y_{i}|x)\propto\mathbb{P}_{s}(y_{i})\). Therefore, models tend to _predict a query image into head classes_ to satisfy the training label distribution \(\mathbb{P}_{s}(y_{i})\), which is called **predictive bias**. Such a mismatch makes generalization in LTR extremely challenging, and traditional metrics, e.g., mean accuracy, exacerbate the biased estimation when evaluating models on the balanced test set. ### Vision Transformers ViT reshapes an image \(x\in\mathbb{R}^{H\times W\times C}\) into a sequence (of length \(L=H\times W/P^{2}\)) of flattened 2D patches \(x_{P}\in\mathbb{R}^{L\times(P^{2}\cdot C)}\), where \(H\times W\) is the resolution of \(x\), \(C\) is the number of channels, and \(P\) is the patch resolution. Although ViTs perform well on numerous visual tasks, we figure out that _it is hard to train ViTs with long-tailed data, and the performance is unsatisfactory_. Recent work trains ViTs without label supervision using an encoder (\(\mathcal{E}\)) and decoder (\(\mathcal{D}\)) architecture and a random mask \(\mathbf{M}\): \[\hat{\mathbf{x}}=\mathcal{D}\left(\mathcal{E}(\mathbf{M}\odot\mathbf{x})\right) \tag{3}\] We pinpoint that _ViTs learn generalized feature extraction by Eq. (3), either on long-tailed or balanced datasets_. This observation inspires us to adopt it as a strong baseline to evaluate the performance with ViTs. ### Predictive Distribution Calibration In LTR, recent works try to compensate for the mismatch between \(\mathbb{P}_{s}(y)\) and \(\mathbb{P}_{t}(y)\) described in Section 2.1. However, they all adopt Top-1 accuracy to evaluate their proposals, which fails to show whether the mismatch is fixed. To fill this gap and measure it intuitively, we propose the Predictive Distribution Calibration (PDC) to quantitatively analyze the model's predictive bias. **Step 1**: We view the prediction number w.r.t. class as the predictive distribution \(\hat{\mathbb{P}}_{t}(y)\). Together with the balanced label distribution \(\mathbb{P}_{t}(y)\), we can calculate the _distance_ between these two distributions. Choosing to measure this _distance_ via the Kullback-Leibler (KL) divergence, we have: \[D(\mathbb{P}_{t},\hat{\mathbb{P}}_{t})=\frac{1}{C}\sum_{y_{i}\in\mathcal{C}}\mathbb{P}_{t}(y_{i})\cdot\left[\log\mathbb{P}_{t}(y_{i})-\log\hat{\mathbb{P}}_{t}(y_{i})\right] \tag{4}\] **Step 2**: Generally, the larger the gap between \(\mathbb{P}_{s}(y)\) and \(\mathbb{P}_{t}(y)\), the more difficult it is to overcome the model's predictive bias. To account for this, we take the training label distribution \(\mathbb{P}_{s}(y)\) into
consideration, which can be written as \(D(\mathbb{P}_{t},\mathbb{P}_{s})\): \[\begin{split}& PDC(\mathcal{M}_{\theta},\mathbf{D})=D(\mathbb{P}_{t },\hat{\mathbb{P}}_{t})/D(\mathbb{P}_{t},\mathbb{P}_{s})\\ &=\frac{\sum_{y_{i}\in\mathcal{C}}\mathbb{P}_{t}(y_{i})\cdot \log\mathbb{P}_{t}(y_{i})-\mathbb{P}_{t}(y_{i})\cdot\log\hat{\mathbb{P}}_{t}(y _{i})}{\sum_{y_{i}\in\mathcal{C}}\mathbb{P}_{t}(y_{i})\cdot\log\mathbb{P}_{t}(y _{i})-\mathbb{P}_{t}(y_{i})\cdot\log\mathbb{P}_{s}(y_{i})}\end{split} \tag{5}\] **Step 3**: Notice that \(D(\mathbb{P}_{t},\mathbb{P}_{s})\) will be zero when the target label distribution is consistent with the training label distribution. Hence, we add an extra \(\varepsilon=1e-6\) to \(D(\mathbb{P}_{t},\mathbb{P}_{s})\) for numerical stability. ### Further Analysis Previous work evaluates the model predictive bias in the following manners: **Group Acc** divides \(\mathcal{C}\) into several groups \(\{\mathcal{G}_{1},\mathcal{G}_{2},...,\mathcal{G}_{n}\}\) according to the \(\mathbb{P}_{s}(y)\), where \(\forall i,\mathcal{G}_{i}\subseteq\mathcal{C}\). A widely adopted group type is \(\{\textit{Many}\), _Medium_, _Few_} and the accuracy of each group can be calculated by: \[Acc(\mathcal{G})=\frac{1}{N_{\mathcal{G}}}\sum_{y\in\mathcal{G}}\mathbb{I} \left(y=\operatorname*{argmax}_{y_{i}\in\mathcal{G}}\mathcal{M}(\mathbf{x}| \theta)_{y_{i}}\right), \tag{6}\] where \(N_{\mathcal{G}}\) is the sum instance number in \(\mathcal{G}\) and \(\mathbb{I}\left(\cdot\right)\) is indicator function. However, the weakness is obvious: 1) \(Acc(\mathcal{G})\) heavily depends on \(\mathbb{P}_{s}(y)\) and the definition of group \(\mathcal{G}\). 2) The _Few_ accuracy can not avoid the acc trap (see Figure 1). **Confusion matrix** is used to visualize the classification situation for each class. However, 1) it can not quantitatively measure how much the predictive bias the methods alleviate. 2) It will be unintuitive when the class number \(C\) gets larger. As a comparison, our PDC is plug-and-play with negligible computation operation. With fixed model structure and datasets, we can compare proposed methods quantitatively. ## 3 Experiments ### Datasets **CIFAR100-LT**[22] is created from the original CIFAR datasets that have 100 classes with 60K images. The skewness of the dataset is controlled by an Imbalance Factor (IF), which is the ratio between the most and the least frequent classes. We follow previous work[21, 3] to utilize the dataset with \(IF=[10,50,100]\) for comprehensive comparisons. **iNaturalist 2018**[23] is the large-scale real-world dataset for LTR with 437.5K images from 8,142 classes. It is extremely imbalanced, with an imbalance factor of 500. We use the official training and validation split in our experiments. ### Implement Details We use a pre-trained ViT-Base model from MAE and fine-tune it with \(32\) (CIFAR-LT) and \(128\) (iNat18) resolution. We use AdamW optimizer with momentum \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\). We train the model for 100 epochs with an effective \begin{table} \begin{tabular}{c|c c c|c c c|c c c|c} \hline Imbalance & \multicolumn{3}{c|}{10} & \multicolumn{3}{c|}{50} & \multicolumn{3}{c|}{100} & \multicolumn{3}{c|}{var.} \\ \hline Method & Acc@R & Acc@V & PDC\(\downarrow\) & Acc@R & Acc@V & PDC\(\downarrow\) & Acc@R & Acc@V & PDC\(\downarrow\) & var. 
\\ \hline CE & 55.70 & 66.02 & 0.34 & 43.90 & 54.78 & 0.62 & 38.30 & 50.59 & 0.64 & 0.03 \\ BCE [20] & - & 64.63 & 0.46 & - & 50.65 & 1.31 & - & 45.50 & 2.25 & 0.80 \\ CB [3] & 57.99 & 66.30 & 0.06 & 45.32 & 56.18 & 0.08 & 45.32 & 50.63 & 0.13 & 0.00 \\ LDAM [21] & 56.91 & 63.99 & 0.47 & 45.00 & 54.53 & 0.61 & 39.60 & 50.39 & 0.82 & 0.03 \\ MiSLAS [11] & **63.20** & 66.65 & 0.26 & **52.30** & 55.90 & 0.56 & 47.00 & 50.62 & 0.94 & 0.12 \\ LADE [10] & 61.70 & **68.32** & 0.07 & 50.50 & 60.03 & 0.10 & 45.40 & **57.25** & 0.10 & 0.00 \\ IB [12] & 57.13 & 65.12 & 0.06 & 46.22 & 43.78 & 1.09 & 42.14 & 42.30 & 0.46 & 0.27 \\ BalCE [9] & 63.00 & 68.11 & **0.04** & 49.76 & **60.67** & **0.04** & **50.80** & 56.86 & **0.05** & **0.00** \\ \hline \end{tabular} \end{table} Table 1: Performance on CIFAR100-LT. Acc@R: Top-1 accuracy with ResNet32. Acc@V: Top-1 accuracy with ViT-B. Figure 2: Reconstruction visualization of MAE. LT: pretrain with long-tailed data. BAL: pretrain with balanced data. LT and BAL have the same total number of instances. U: add unmasked patches. ViTs pretrained on both LT and BAL learn meaningful features. batch size of 1024 and a weight decay of 0.1. The base learning rate is \(1e-3\), which follows cosine decay with 5 warmup epochs. We use Mixup (0.8) and Cutmix (1.0) as augmentation and set the drop path of ViT to 0.1. ### Compared Methods We adopt the recipes of vanilla ViT [13], DeiT III [14], and MAE [15] to train ViTs. In view of MAE's excellent performance and low computational cost, we adopt MAE for the following evaluation. We adopt the vanilla CE loss, Binary CE [14], BalCE [9], CB [24], LDAM [21], MiSLAS [11], LADE [17], and the IB loss [12] for comprehensive comparisons. We ignore multi-expert methods (heavy GPU memory) and contrastive learning methods (contradictory to MAE). Table 2 shows the results of different training manners. With the same number of training images, LT is lower than BAL for all recipes. MAE achieves the best results and learns meaningful features on both datasets (Figure 2). Hence, we select MAE for the following experiments. ### LTR Performance with ViT It is challenging to train ViTs directly on LTR datasets (Table 4), because it is difficult to learn the inductive bias of ViTs and the statistical bias of LTR (Eq. (2)) simultaneously (Table 2). In Table 1, we mainly re-rank different LTR losses on ViT-Base initialized with weights pre-trained on ImageNet. The results in Table 3 are obtained by training _from scratch_ in the MAE manner, without pre-trained weights, to show the performance gap between ResNet and ViT. We only conduct the architecture comparisons on iNat18, because ViTs are hard to train from scratch with limited data and resolution, as on CIFAR. As Tables 1 & 3 show, BalCE achieves satisfying performance on both datasets, which indicates its effectiveness and generalization. Compared to its performance on ResNet, MiSLAS shows poor Acc and PDC, which means its special design is hard to generalize to ViT. In addition, IB is difficult to train owing to its numerical instability and thus results in worse performance (7%\(\downarrow\)). For most proposals, PDC is consistent with Top-1 Acc and Few Acc. However, LDAM has better accuracy and worse PDC compared to CB, which means it alleviates the predictive bias only slightly. We additionally calculate the variance of PDC over the different imbalance degrees, as shown in Table 1. From this point of view, BCE has the maximum variance together with decreasing performance, which suggests its weak adaptability.
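To make the metric concrete, the following minimal NumPy sketch computes PDC as in Eqs. (4)-(5), including the \(1/C\) factor of Eq. (4) and the \(\varepsilon\) of Step 3; the clipping of the predicted distribution is our own safeguard against classes that receive no predictions, and the toy data are purely illustrative.

```python
import numpy as np

def pdc(pred_labels, train_counts, eps=1e-6):
    """Predictive Distribution Calibration (Eqs. 4-5); lower means less bias."""
    C = len(train_counts)
    p_t = np.full(C, 1.0 / C)                                # balanced test label dist.
    counts = np.bincount(np.asarray(pred_labels), minlength=C)
    p_hat = np.maximum(counts / counts.sum(), 1e-12)         # prediction dist. (clipped)
    p_s = np.asarray(train_counts, dtype=float)
    p_s = p_s / p_s.sum()                                    # long-tailed training dist.
    kl = lambda p, q: np.mean(p * (np.log(p) - np.log(q)))   # Eq. (4), incl. 1/C factor
    return kl(p_t, p_hat) / (kl(p_t, p_s) + eps)             # Eq. (5) with Step-3 epsilon

# Toy check: a head-biased predictor gets a larger PDC than a balanced one.
train_counts = np.array([500, 100, 20, 5])        # imbalanced training label counts
balanced_preds = np.repeat(np.arange(4), 25)      # 25 predictions for each class
biased_preds = np.repeat([0, 0, 0, 1], 25)        # predictions piled on head classes
print(pdc(balanced_preds, train_counts), pdc(biased_preds, train_counts))
```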
Figure 3 presents the visualization with confusion matrices. A larger PDC indicates more concentrated off-diagonal elements (e.g., BCE). BalCE makes more balanced predictions with a smaller PDC, which demonstrates that PDC is a precise quantitative metric for measuring prediction distributions. ## 4 Conclusion In this paper, we rethink the performance of LTR methods with Vision Transformers and propose a baseline based on unsupervised pre-training to learn from imbalanced data. We re-analyze the reasons for the performance variation of LTR methods on the ViT backbone. Furthermore, we propose PDC to quantitatively measure the model's predictive bias, i.e., the tendency of predictors to classify images into common classes. Extensive experiments demonstrate the effectiveness of PDC, which provides consistent and more intuitive evaluation. \begin{table} \begin{tabular}{l|c c|c c|c c} \hline \multirow{3}{*}{Method} & \multicolumn{4}{c|}{CIFAR 100-LT} & \multicolumn{2}{c}{iNat18} \\ \cline{2-7} & \multicolumn{2}{c|}{IF=10} & \multicolumn{2}{c|}{IF=100} & \multicolumn{2}{c}{} \\ \cline{2-7} & Acc & PDC & Acc & PDC & Acc & PDC \\ \hline CE & 18.69 & 12.20 & 11.34 & 6.74 & 39.02 & 1.30 \\ BCE[14] & 17.26 & 13.76 & 9.86 & 7.24 & 42.01 & 1.25 \\ MiSLAS[11] & 18.70 & 12.40 & 11.40 & 9.06 & 40.23 & 1.19 \\ BalCE[9] & 20.93 & 1.95 & 15.89 & 1.26 & 42.01 & 0.43 \\ \hline \end{tabular} \end{table} Table 4: Performance of ViT-B w/o pretrained weights (CIFAR) or MAE pretraining (iNat18, 128 resolution). Figure 3: Confusion matrix for each model on CIFAR-100-LT (IF=100). x-axis: predicted label. y-axis: ground truth. \begin{table} \begin{tabular}{l|c c c c|c|c} \hline Method & Many & Med. & Few & Acc & Acc* & PDC \\ \hline CE & 72.45 & 62.16 & 56.62 & 61.03 & 57.30 & 0.76 \\ BCE[20] & **73.67** & 65.23 & 60.41 & 64.19 & 59.80 & 0.66 \\ CB[3] & 55.90 & 62.77 & 59.07 & 60.60 & 61.12 & 0.42 \\ LDAM[21] & 72.61 & 67.29 & 63.78 & 66.45 & 64.58 & 0.49 \\ MiSLAS[11] & 72.53 & 64.70 & 60.45 & 63.83 & **71.60** & 0.64 \\ LADE[10] & 64.77 & 63.49 & 62.20 & 63.11 & 70.00 & 0.39 \\ IB[12] & 54.51 & 61.91 & 60.75 & 60.69 & 65.39 & 0.35 \\ BalCE[9] & 67.82 & **68.36** & **67.34** & **67.90** & 69.80 & **0.27** \\ \hline CE\(\uparrow\) & 81.35 & 72.37 & 67.45 & 71.35 & - & 0.50 \\ BCE[20]\(\uparrow\) & 82.54 & 74.85 & 70.42 & 73.89 & - & 0.42 \\ BalCE[9]\(\uparrow\) & **77.83** & **77.73** & **76.95** & **77.43** & - & **0.18** \\ \hline \end{tabular} \end{table} Table 3: ViT-B Performance on iNaturalist 2018. Bold indicates the best. \(\uparrow\): 224 resolution. *: ResNet50 performance.
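As a rough illustration of the unsupervised masked pre-training adopted above (the random masking behind Eq. (3) in Section 2.2), the patch-masking step could be written as follows. This is a schematic sketch, not the MAE implementation; the patch size, mask ratio, and function names are assumptions for illustration.

```python
import numpy as np

def mask_patches(x, patch=16, mask_ratio=0.75, rng=np.random.default_rng(0)):
    """Split an image into flattened patches and keep a random visible subset.

    x: (H, W, C) array. Returns the visible patches (fed to the encoder E)
    and the indices of the masked patches (reconstructed by the decoder D).
    """
    H, W, C = x.shape
    h, w = H // patch, W // patch
    # (H, W, C) -> (L, P*P*C) with L = H*W / P**2, matching Section 2.2
    patches = x.reshape(h, patch, w, patch, C).swapaxes(1, 2).reshape(h * w, -1)
    perm = rng.permutation(len(patches))
    n_keep = max(1, int(round(len(patches) * (1 - mask_ratio))))
    visible_idx, masked_idx = perm[:n_keep], perm[n_keep:]
    return patches[visible_idx], visible_idx, masked_idx
```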
2309.10390
A density-fitting implementation of the density-based basis-set correction method
This work reports an efficient density-fitting implementation of the density-based basis-set correction (DBBSC) method in the MOLPRO software. This method consists in correcting the energy calculated by a wave-function method with a given basis set by an adapted basis-set correction density functional incorporating the short-range electron correlation effects missing in the basis set, resulting in an accelerated convergence to the complete-basis-set limit. Different basis-set correction density-functional approximations are explored and the complementary-auxiliary-basis-set single-excitation correction is added. The method is tested on a benchmark set of reaction energies at the second-order M{\o}ller-Plesset (MP2) level and a comparison with the explicitly correlated MP2-F12 method is provided. The results show that the DBBSC method greatly accelerates the basis convergence of MP2 reaction energies, without reaching the accuracy of the MP2-F12 method but with a lower computational cost.
Andreas Heßelmann, Emmanuel Giner, Peter Reinhardt, Peter J. Knowles, Hans-Joachim Werner, Julien Toulouse
2023-09-19T07:41:28Z
http://arxiv.org/abs/2309.10390v2
# A density-fitting implementation of the density-based basis-set correction method ###### Abstract This work reports an efficient density-fitting implementation of the density-based basis-set correction (DBBSC) method in the MOLPRO software. This method consists in correcting the energy calculated by a wave-function method with a given basis set by an adapted basis-set correction density functional incorporating the short-range electron correlation effects missing in the basis set, resulting in an accelerated convergence to the complete-basis-set limit. Different basis-set correction density-functional approximations are explored and the complementary-auxiliary-basis-set single-excitation correction is added. The method is tested on a benchmark set of reaction energies at the second-order Moller-Plesset (MP2) level and a comparison with the explicitly correlated MP2-F12 method is provided. The results show that the DBBSC method greatly accelerates the basis convergence of MP2 reaction energies, without reaching the accuracy of the MP2-F12 method but with a lower computational cost. basis-set convergence; Moller-Plesset perturbation theory; density-functional theory; reaction energies ## I Introduction One of the main goals of quantum chemistry is the accurate prediction of molecular properties, which requires tackling the electron correlation problem. For this, there are two main families of computational electronic-structure methods: wave-function theory (WFT) [1], which targets the complicated \(N\)-electron wave function, and density-functional theory (DFT) [2], which uses the simpler one-electron density. While DFT has become the workhorse of quantum chemistry thanks to its appealing balance between computational cost and accuracy, the lack of a systematic scheme to improve the quality of the density-functional approximations has renewed the interest in the development of WFT methods in the last few decades. A serious limitation of WFT methods is the slow convergence of the correlation energy with the size of the one-electron basis set. This slow convergence originates from the short-range singularity of the Coulomb electron-electron repulsion, which induces a derivative discontinuity in the exact eigenstate wave functions, known as the electron-electron cusp condition [3]. There are two main approaches for dealing with this problem. The first approach consists in extrapolating the results to the complete-basis-set (CBS) limit by using increasingly large basis sets [4; 5]. The second approach consists in using explicitly correlated R12 or F12 methods which incorporate in the wave function a correlation factor reproducing the electron-electron cusp (see, e.g., Refs. [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]). An alternative approach to accelerate the basis-set convergence was recently proposed, which we will refer to as the density-based basis-set correction (DBBSC) method [17]. It consists in correcting the energy calculated by a WFT method with a given basis set by an adapted basis-set correction density functional incorporating the short-range electron correlation effects missing in the basis set, resulting in an accelerated convergence to the CBS limit. In practice, this basis-set correction density functional is constructed from range-separated DFT [18] by defining a basis-dependent local range-separation parameter which provides a local measure of the incompleteness of the basis set.
This DBBSC method was validated for configuration-interaction and coupled-cluster calculations of atomization energies [19; 20; 21], excitation energies [22], dissociation energy curves [23], and dipole moments [24; 25]. It was also extended to GW calculations [26] and to linear-response theory [27], and some mathematical aspects of the method were studied in detail on a one-dimensional model system [28]. In this work, we report an efficient implementation of the DBBSC method in the MOLPRO software [29; 30; 31] in which density fitting [32] is used to alleviate the computational bottleneck of the method, namely the calculation of the local range-separation parameter. This allows us to use the DBBSC method on larger molecular systems than what was previously possible. We thus apply the DBBSC method for correcting the basis-set errors in the molecular reaction energies of the FH51 benchmark set [33; 34] at the second-order Moller-Plesset (MP2) level. We also test different basis-set correction density-functional approximations, as well as the addition of a single-excitation correction for one-electron basis-set errors. Finally, we compare the performance of the DBBSC method with the explicitly correlated MP2-F12 method [11]. The paper is organized as follows. In Section II, we explain the theory of the present implementation of the DBBSC method. Section III provides computational details for the calculations on the FH51 benchmark set. In Section IV, we give and discuss our results on the reaction energies. Finally, Section V contains our conclusions. ## II Theory For simplicity, we give the equations for closed-shell states and we assume real-valued HF spatial orbitals \(\{\varphi_{p}\}\). ### The DBBSC method at the MP2 level Given the MP2 total energy \(E^{\mathcal{B}}_{\text{MP2}}\) in a basis set \(\mathcal{B}\), we apply the non-self-consistent basis-set correction [17; 19] as \[E^{\mathcal{B}}_{\text{MP2+DFT}}=E^{\mathcal{B}}_{\text{MP2}}+\bar{E}^{\mathcal{B}}[n^{\mathcal{B}}_{\text{HF}}], \tag{1}\] where \(\bar{E}^{\mathcal{B}}[n^{\mathcal{B}}_{\text{HF}}]\) is the basis-set correction density functional evaluated at the active HF density \(n^{\mathcal{B}}_{\text{HF}}\) (i.e., excluding core orbitals in case of frozen-core calculations). In order not to affect the CBS limit, this functional \(\bar{E}^{\mathcal{B}}[n]\) must be such that it vanishes when the basis set \(\mathcal{B}\) is complete. Moreover, provided a good enough approximation is used for \(\bar{E}^{\mathcal{B}}[n]\), the basis-set corrected MP2 energy, referred to as "MP2+DFT", is expected to converge faster to the MP2 CBS limit. ### Local range-separation parameter The dependence on the basis set of the basis-set correction density functional \(\bar{E}^{\mathcal{B}}[n]\) comes from the local range-separation parameter \(\mu^{\mathcal{B}}(\mathbf{r})\). It is defined as [17; 19] \[\mu^{\mathcal{B}}(\mathbf{r})=\frac{\sqrt{\pi}}{2}W^{\mathcal{B}}(\mathbf{r}), \tag{2}\] where \(W^{\mathcal{B}}(\mathbf{r})\) is the on-top value of the effective interaction associated with the HF wave function, \[W^{\mathcal{B}}(\mathbf{r})=\begin{cases}\dfrac{f^{\mathcal{B}}_{\text{HF}}(\mathbf{r})}{n^{\mathcal{B}}_{2,\text{HF}}(\mathbf{r})},&\text{if }n^{\mathcal{B}}_{2,\text{HF}}(\mathbf{r})\neq 0,\\ \infty,&\text{otherwise}.\end{cases} \tag{3}\] In Eq. 
(3), \(n^{\mathcal{B}}_{2,\text{HF}}(\mathbf{r})\) is the HF on-top pair density \[n^{\mathcal{B}}_{2,\text{HF}}(\mathbf{r})=\frac{n^{\mathcal{B}}_{\text{HF}}(\mathbf{r})^{2}}{2}, \tag{4}\] with the active HF density \(n^{\mathcal{B}}_{\text{HF}}(\mathbf{r})=2\sum_{i}^{\text{act}}\varphi_{i}(\mathbf{r})^{2}\), and \(f^{\mathcal{B}}_{\text{HF}}(\mathbf{r})\) has the expression \[f^{\mathcal{B}}_{\text{HF}}(\mathbf{r})=2\sum_{p,q}^{\text{all}}\sum_{i,j}^{\text{act}}\varphi_{p}(\mathbf{r})\varphi_{i}(\mathbf{r})(\varphi_{p}\varphi_{i}|\varphi_{q}\varphi_{j})\varphi_{q}(\mathbf{r})\varphi_{j}(\mathbf{r}), \tag{5}\] where \(p\) and \(q\) run over all (occupied + virtual) HF spatial orbitals, \(i\) and \(j\) run over active HF spatial orbitals, and \((\varphi_{p}\varphi_{i}|\varphi_{q}\varphi_{j})\) are the two-electron Coulomb integrals in chemists' notation. We recall that by active orbitals we mean the occupied orbitals without the frozen-core orbitals, in case of frozen-core calculations. The local range-separation parameter \(\mu^{\mathcal{B}}(\mathbf{r})\) provides a local measure of the incompleteness of the basis set. A straightforward calculation of \(f^{\mathcal{B}}_{\text{HF}}(\mathbf{r})\) in Eq. (5) requires first calculating the molecular-orbital two-electron integrals \((\varphi_{p}\varphi_{i}|\varphi_{q}\varphi_{j})\) with a dominant scaling of \(O(N_{\text{act}}N_{\text{all}}^{4})\), and then performing the sums at each grid point, which scales as \(O(N_{\text{act}}^{2}N_{\text{all}}^{2}N_{\text{grid}})\), where \(N_{\text{act}}\) is the number of active orbitals, \(N_{\text{all}}\) is the total number of orbitals in the basis, and \(N_{\text{grid}}\) is the number of spatial grid points. This is the computational bottleneck of the basis-set correction calculation. This scaling can be reduced by density fitting [32]. Introducing an auxiliary fitting basis set \(\{\chi_{A}\}\), the orbital product is approximated as \[\varphi_{p}(\mathbf{r})\varphi_{i}(\mathbf{r})\approx\sum_{A}^{\text{fit}}d^{pi}_{A}\chi_{A}(\mathbf{r}), \tag{6}\] where \(d^{pi}_{A}\) are the Coulomb-fitting coefficients \[d^{pi}_{B}=\sum_{A}^{\text{fit}}(\varphi_{p}\varphi_{i}|\chi_{A})[\mathbf{J}^{-1}]_{AB}, \tag{7}\] with \[J_{AB}=\iint\frac{\chi_{A}(\mathbf{r}_{1})\chi_{B}(\mathbf{r}_{2})}{\|\mathbf{r}_{2}-\mathbf{r}_{1}\|}\text{d}\mathbf{r}_{1}\text{d}\mathbf{r}_{2}, \tag{8}\] and \[(\varphi_{p}\varphi_{i}|\chi_{A})=\iint\frac{\varphi_{p}(\mathbf{r}_{1})\varphi_{i}(\mathbf{r}_{1})\chi_{A}(\mathbf{r}_{2})}{\|\mathbf{r}_{2}-\mathbf{r}_{1}\|}\text{d}\mathbf{r}_{1}\text{d}\mathbf{r}_{2}. \tag{9}\] Orthonormalizing the auxiliary fitting basis functions with respect to the metric \(\mathbf{J}\), \[\tilde{\chi}_{A}=\sum_{B}^{\text{fit}}[\mathbf{J}^{-1/2}]_{AB}\,\chi_{B}, \tag{10}\] we can approximate the two-electron integrals as \[(\varphi_{p}\varphi_{i}|\varphi_{q}\varphi_{j})\approx\sum_{A}^{\text{fit}}(\varphi_{p}\varphi_{i}|\tilde{\chi}_{A})(\tilde{\chi}_{A}|\varphi_{q}\varphi_{j}), \tag{11}\] and the quantity \(f^{\mathcal{B}}_{\text{HF}}(\mathbf{r})\) in Eq. (5) as \[f^{\mathcal{B}}_{\text{HF}}(\mathbf{r})\approx 2\sum_{A}^{\text{fit}}\left[\sum_{p}^{\text{all}}\sum_{i}^{\text{act}}\varphi_{p}(\mathbf{r})\varphi_{i}(\mathbf{r})(\varphi_{p}\varphi_{i}|\tilde{\chi}_{A})\right]^{2}. \tag{12}\] Thus, with density fitting, there is no need to build explicitly the two-electron integrals anymore, and the calculation of \(f^{\mathcal{B}}_{\text{HF}}(\mathbf{r})\) in Eq. 
(12) now scales as \(O(N_{\text{act}}N_{\text{all}}N_{\text{fit}}N_{\text{grid}})\), where \(N_{\text{fit}}\) is the number of auxiliary fitting basis functions. In practice, the same auxiliary fitting basis sets optimized for density fitting in MP2 can be used here. ### Approximate basis-set correction density functional We approximate the basis-set correction density functional with the local form [19] \[\bar{E}^{\mathcal{B}}[n]\approx\int\bar{e}^{\mathrm{sr}}_{\mathrm{c,md}}(n(\mathbf{r}),\nabla n(\mathbf{r}),\mu^{\mathcal{B}}(\mathbf{r}))\mathrm{d}\mathbf{r}, \tag{13}\] where \(\bar{e}^{\mathrm{sr}}_{\mathrm{c,md}}(n,\nabla n,\mu)\) is the complementary multi-determinant short-range correlation energy density [19; 35] \[\bar{e}^{\mathrm{sr}}_{\mathrm{c,md}}(n,\nabla n,\mu)=\frac{e_{\mathrm{c}}(n,\nabla n)}{1+\dfrac{e_{\mathrm{c}}(n,\nabla n)}{c\,n_{2}(n)}\,\mu^{3}}, \tag{14}\] where \(c=(2\sqrt{\pi}(1-\sqrt{2}))/3\) and \(n_{2}(n)\) is a model of the on-top pair density. In Eq. (14), \(e_{\mathrm{c}}(n,\nabla n)\) is a standard Kohn-Sham correlation energy density. As in previous works, the default choice is the PBE correlation functional [36]. In this work, we also test using the LDA [37], LYP [38], TPSS [39], and SCAN [40] correlation functionals. Note that the TPSS and SCAN functionals are meta-GGA functionals, i.e., they also depend on the non-interacting positive kinetic energy density \(\tau(\mathbf{r})=(1/2)\sum_{i}^{\mathrm{act}}|\nabla\varphi_{i}(\mathbf{r})|^{2}\), and thus constitute a slight extension of Eqs. (13) and (14). The default choice [19] for \(n_{2}(n)\) is to use the on-top pair density of the uniform-electron gas (UEG), \[n_{2}^{\mathrm{UEG}}(n)=n^{2}g_{0}(n), \tag{15}\] where the on-top pair-distribution function \(g_{0}(n)\) is parametrized in Eq. (46) of Ref. [41]. In this work, we also explore two other on-top pair-density models. The first one is the CS model [42; 43; 44] \[n_{2}^{\mathrm{CS}}(n)=\frac{n^{2}}{2}\Phi_{\mathrm{CS}}(n)^{2}, \tag{16}\] where \[\Phi_{\mathrm{CS}}(n)=\frac{\sqrt{\pi}\;\beta(n)}{1+\sqrt{\pi}\;\beta(n)}, \tag{17}\] and \[\beta(n)=q\;n^{1/3}, \tag{18}\] where \(q\) is an empirical parameter. The second one is the Hollett-Pegoretti (HP) model [45] \[n_{2}^{\mathrm{HP}}(n)=\frac{n^{2}}{2}\Phi_{\mathrm{HP}}(n), \tag{19}\] where \[\Phi_{\mathrm{HP}}(n)=\frac{2\sqrt{\pi}\;\beta(n)^{2}}{2\beta(n)e^{-\frac{1}{4\beta(n)^{2}}}+\sqrt{\pi}\left(1+2\beta(n)^{2}\right)\left[1+\mathrm{erf}\left(\frac{1}{2\beta(n)}\right)\right]}. \tag{20}\] We may choose the value of the parameter \(q\), e.g., by imposing that the integral of the model on-top pair density equals the integral of the exact on-top pair density, \(\int n_{2}^{\mathrm{model}}(n(\mathbf{r}))\mathrm{d}\mathbf{r}=\int n_{2}^{\mathrm{exact}}(\mathbf{r})\mathrm{d}\mathbf{r}\), in the helium atom. Estimating \(n_{2}^{\mathrm{exact}}(\mathbf{r})\) with a highly accurate 418-term Hylleraas-type wave function [46; 47; 48], we find \(q=1.88\) for the CS model and \(q=2.05\) for the HP model. When these on-top pair-density models are used with the PBE correlation functional in Eq. (14), we call the resulting basis-set correction functionals PBE-CS and PBE-HP, respectively. As a first test, we compare in Fig. 
As a first test, we compare in Fig. 1 these different basis-set correction density-functional approximations for the basis-set convergence of the total MP2 ground-state energy of the He atom with cc-pV\(n\)Z basis sets [49] (abbreviated as _vnz_). We see that all these density-functional approximations lead to a large acceleration of the convergence of the MP2 total energy toward its CBS limit. We thus conclude at this point that all the proposed density-functional approximations provide a reasonable basis-set correction, at least for the total energy.

Figure 1: Basis-set convergence of the total MP2 ground-state energy of the He atom with different basis-set correction density-functional approximations (evaluated at the HF density) using _vnz_ basis sets.

### CABS single-excitation correction

For small basis sets \(\mathcal{B}\), the HF energy can have a substantial basis-set error. This HF basis-set error is not corrected by the approximate basis-set correction functionals in Section II.3 since they only correct for missing short-range correlation. The HF basis-set error can however be easily corrected by using the complementary auxiliary basis set (CABS) [10] used in explicitly correlated R12/F12 methods. In this approach, a large orthonormal basis set is formed by the occupied+virtual HF orbitals obtained in the normal basis set \(\mathcal{B}\) and an additional set of virtual orbitals obtained from the CABS. The HF energy correction due to the addition of the CABS is estimated by second-order perturbation theory, leading to the expression, in a closed-shell formalism, [50; 51; 12] \[\Delta E^{\mathcal{B},\mathrm{CABS}}_{\mathrm{HF}}=2\sum_{i}^{\mathrm{act}}\sum_{\alpha}^{\mathrm{vir}}t_{\alpha}^{i}f_{i}^{\alpha}, \tag{21}\] where \(i\) runs over active HF orbitals and \(\alpha\) runs over all virtual orbitals (obtained in the normal basis set \(\mathcal{B}\) and from the CABS). In Eq. (21), \(f_{i}^{\alpha}\) are Fock matrix elements and \(t_{\alpha}^{i}\) are single-excitation coefficients found by solving the first-order perturbation equations \[f_{\alpha}^{i}=\sum_{j}^{\rm act}t_{\alpha}^{j}f_{j}^{i}-\sum_{\beta}^{\rm vir}f_{\alpha}^{\beta}t_{\beta}^{i}. \tag{22}\] This correction is often referred to as the CABS single-excitation correction. The total basis-set corrected MP2 energy is thus \[E_{\rm MP2+CABS+DFT}^{\mathcal{B}}=E_{\rm MP2}^{\mathcal{B}}+\Delta E_{\rm HF}^{\mathcal{B},\rm CABS}+\bar{E}^{\mathcal{B}}[n_{\rm HF}^{\mathcal{B}}], \tag{23}\] and will be referred to as "MP2+CABS+DFT". For comparison, we will also present MP2 results only corrected by the CABS single-excitation correction, which will be referred to as "MP2+CABS".
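Equation (22) is linear in the coefficients \(t_{\alpha}^{i}\) and, written in matrix form, is a Sylvester equation. A minimal sketch of solving it and evaluating Eq. (21), assuming real orbitals so that \(f_{i}^{\alpha}=f_{\alpha}^{i}\) and assuming the Fock blocks are available as dense arrays, is:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def cabs_singles_correction(f_oo, f_vv, f_vo):
    """CABS single-excitation correction, Eqs. (21)-(22).

    f_oo : (N_act, N_act) occupied-occupied Fock block
    f_vv : (N_vir, N_vir) virtual-virtual Fock block (normal basis + CABS)
    f_vo : (N_vir, N_act) virtual-occupied Fock block
    """
    # Eq. (22) in matrix form: f_vv @ t - t @ f_oo = -f_vo
    t = solve_sylvester(f_vv, -f_oo, -f_vo)
    # Eq. (21), using f_i^alpha = f_alpha^i for real orbitals
    return 2.0 * np.sum(t * f_vo)
```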
## III Computational Details

The DBBSC method with density fitting has been implemented in the MOLPRO software [29; 30; 31]. We have performed tests on the FH51 benchmark set. The FH51 set [33; 34] is a set of 51 reaction energies for various organic molecules. It is included in the GMTKN55 database [52]. As regards the basis set \(\mathcal{B}\), we use the aug-cc-pV\(n\)Z basis sets [53] for first-row atoms and the aug-cc-pV(\(n\)+d)Z basis sets [54] for second-row atoms, which we jointly abbreviate as avnz, for \(n=2\) (d), 3 (t), 4 (q), and 5. We perform canonical-orbital density-fitting HF [55] and density-fitting MP2 [32] calculations with the frozen-core approximation. The MP2/CBS reference values are estimated from the two largest basis sets (\(n=4\) and \(n=5\)) by using the two-point extrapolation formula of Ref. [56] for the HF energy and the standard two-point extrapolation formula of Refs. [4; 5] for the MP2 correlation energy. We calculate the basis-set correction with different functionals evaluated at the active HF density, and we include the CABS single-excitation correction [50; 12; 51]. The basis-set correction is consistently calculated in the frozen-core approximation, corresponding to using only active orbitals in Eq. (5) and in the HF density used in Eq. (1). For \(n=2\) and \(3\), we also perform, for comparison, canonical-orbital density-fitting MP2-F12 calculations (in the default 3C(F) variant) [11], which implicitly include the CABS single-excitation correction. For a given basis set \(\mathcal{B}\), the density-fitting basis sets used are the corresponding \(\mathcal{B}\)/JKFIT and \(\mathcal{B}\)/MP2FIT basis sets of Weigend _et al._ [57; 58] (and their extensions [51]) for the HF and MP2 calculations, respectively. The \(\mathcal{B}\)/JKFIT basis set is also used as CABS for the CABS single-excitation correction. We have checked the density-fitting errors and found them to be insignificant. For large systems, density-fitting calculations of the basis-set correction can be orders of magnitude faster than non-density-fitting calculations.
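For reference, the standard two-point extrapolation of the correlation energy has a simple closed form; the sketch below assumes the usual \(E_{X}=E_{\mathrm{CBS}}+A\,X^{-3}\) dependence on the cardinal number \(X\) (the specific formula of Ref. [56] used for the HF energy is not reproduced here).

```python
def mp2_corr_cbs(e_corr_x, e_corr_y, x=4, y=5):
    """Two-point CBS extrapolation of the MP2 correlation energy,
    assuming the standard E_X = E_CBS + A * X**(-3) form."""
    return (y**3 * e_corr_y - x**3 * e_corr_x) / (y**3 - x**3)

# e.g. mp2_corr_cbs(e_avqz, e_av5z) for the (avqz, av5z) pair
```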
## IV Results

The errors on the reaction energies of the FH51 set with respect to MP2/CBS calculated with MP2, MP2+CABS, MP2+CABS+PBE, and MP2-F12 are reported in Fig. 2. With the avdz basis set, MP2 can have quite large basis errors for some reaction energies, up to about 12 kcal/mol. Obtaining MP2 reaction energies with all basis errors below 1 kcal/mol requires the use of the av5z basis set. The CABS single-excitation correction is crucial to reduce the largest basis errors on MP2 reaction energies obtained with the avdz basis set. Even with larger basis sets, the CABS single-excitation correction still helps reduce the basis errors for some reaction energies. Adding the PBE-based basis-set correction further reduces the basis errors, albeit not always in a systematic way since there are a few cases where the basis error increases. It is noteworthy that the basis errors of the MP2+CABS+PBE reaction energies are all smaller than 1 kcal/mol with the avtz basis set and larger basis sets. MP2-F12 globally outperforms MP2+CABS+PBE, giving reaction energies with basis errors below about 1 kcal/mol already with the avdz basis set.

In Table 1, we report the mean absolute errors (MAEs) on the reaction energies of the FH51 set with respect to MP2/CBS obtained with the methods already discussed, as well as with additional basis-set correction functionals, namely LDA, LYP, TPSS, SCAN, PBE-CS (\(q=1.88\)), and PBE-HP (\(q=2.05\)). For the methods already discussed, the MAEs are consistent with the observations made previously. For the avdz basis set, we go from a MAE of 2.07 kcal/mol for uncorrected MP2 to a MAE of 0.60 kcal/mol for MP2+CABS+PBE and a MAE of 0.33 kcal/mol for MP2-F12. For the avtz basis set, we go from a MAE of 0.73 kcal/mol for uncorrected MP2 to a MAE of 0.21 kcal/mol for MP2+CABS+PBE and a MAE of 0.15 kcal/mol for MP2-F12. For the avqz and av5z basis sets, the PBE-based basis-set correction is still effective in reducing the basis errors, as we go from MAEs of 0.25 and 0.12 kcal/mol, respectively, for uncorrected MP2 to MAEs of 0.12 and 0.07 kcal/mol, respectively, for MP2+CABS+PBE. Thus, MP2+CABS+PBE with an avnz basis set globally gives reaction energies of slightly higher quality than uncorrected MP2 with an av(\(n+1\))z basis set, whereas MP2-F12 with an avnz basis set roughly gives reaction energies of slightly lower quality than uncorrected MP2 with an av(\(n+2\))z basis set.

With the other basis-set correction functionals tested, the MAEs are very similar, except for the LYP correlation functional, which gives much larger basis errors. We have also tested optimizing the parameter \(q\) in the CS and HP on-top pair-density models in Eq. (18) and the parameter \(c\) in front of the on-top pair density in Eq. (14), but we did not obtain significant improvements. Thus, if we set aside LYP, we find a rather small sensitivity of the method to the underlying correlation functional for calculating reaction energies.

Finally, as regards computational costs, we consistently observe, for all basis sets, that MP2+CABS+PBE is approximately 10 times faster than MP2-F12 in the default 3C(F) variant. However, we note that MP2-F12 can be made faster using the 3*A approximation [11] without losing much accuracy in most cases, and MP2+CABS+PBE is only approximately 3 to 4 times faster than this cheaper MP2-F12 variant. Of course, the relative gains in computational cost would be much smaller for more expensive wave-function methods such as CCSD(T).

## V Conclusion

We have reported an efficient density-fitting implementation of the DBBSC method in the MOLPRO software using different basis-set correction density-functional approximations and including the CABS single-excitation correction. We have tested the method on the FH51 benchmark set of reaction energies at the MP2 level and provided a comparison with the explicitly correlated MP2-F12 method. For the smallest basis sets, the CABS single-excitation correction provides an important correction to reaction energies which is not included in the basis-set correction density-functional approximations. The basis-set corrected reaction energies are quite insensitive to the choice of the basis-set correction density-functional approximation, with the notable exception of the LYP functional, which gives much worse results. This point should be further analyzed in the future. Overall, the basis-set corrected MP2 reaction energies calculated with an \(n\)-zeta basis set are of slightly higher quality than uncorrected MP2 reaction energies calculated with an \((n+1)\)-zeta basis set. However, the explicitly correlated MP2-F12 method is consistently more accurate, with reaction energies calculated with an \(n\)-zeta basis set being of slightly lower quality than uncorrected MP2 reaction energies calculated with an \((n+2)\)-zeta basis set. We believe that the DBBSC method is still valuable for accelerating the basis convergence of MP2 because it has a lower computational cost than MP2-F12. Finally, let us mention that the present implementation of the DBBSC method can be applied to any other wave-function method, such as CCSD(T), with similar expected gains in accuracy.
\begin{table} \begin{tabular}{l c c c c} \hline \hline & avdz & avtz & avqz & av5z \\ \hline MP2 & 2.07 & 0.73 & 0.25 & 0.12 \\ MP2+CABS & 0.92 & 0.55 & 0.22 & 0.11 \\ MP2+PBE & 1.72 & 0.35 & 0.15 & 0.07 \\ MP2+CABS+PBE & 0.60 & 0.21 & 0.12 & 0.07 \\ MP2+CABS+LDA & 0.64 & 0.19 & 0.11 & 0.07 \\ MP2+CABS+LYP & 1.10 & 0.46 & 0.23 & 0.23 \\ MP2+CABS+TPSS & 0.61 & 0.21 & 0.12 & 0.07 \\ MP2+CABS+SCAN & 0.64 & 0.24 & 0.12 & 0.07 \\ MP2+CABS+PBE-CS \((q=1.88)\) & 0.60 & 0.22 & 0.11 & 0.06 \\ MP2+CABS+PBE-HP \((q=2.05)\) & 0.61 & 0.22 & 0.12 & 0.07 \\ MP2-F12 & 0.33 & 0.15 & 0.08 & 0.06 \\ \hline \hline \end{tabular} \end{table} Table 1: Mean absolute errors (in kcal/mol) on reaction energies of the FH51 set with respect to MP2/CBS with avnz basis sets.

Figure 2: Errors on reaction energies of the FH51 set with respect to MP2/CBS calculated with MP2, MP2+CABS, MP2+CABS+PBE, and MP2-F12 with avnz basis sets. The order of reactions is the one from Refs. [33; 34].

###### Acknowledgements.

It is a pleasure to dedicate the present paper to Carlo Adamo on the occasion of his 60th birthday.
2306.00060
Nitrogen enrichment and clustered star formation at the dawn of the Galaxy
Anomalously high nitrogen-to-oxygen abundance ratios [N/O] are observed in globular clusters (GCs), among the field stars of the Milky Way (MW), and even in the gas in a $z\approx 11$ galaxy. Using data from the APOGEE Data Release 17 and the Gaia Data Release 3, we present several independent lines of evidence that most of the MW's high-[N/O] stars were born in situ in massive bound clusters during the early, pre-disk evolution of the Galaxy. Specifically, we show that distributions of metallicity [Fe/H], energy, the angular momentum $L_z$, and distance of the low-metallicity high-[N/O] stars match the corresponding distributions of stars of the Aurora population and of the in-situ GCs. We also show that the fraction of in-situ field high-[N/O] stars, $f_{\rm N/O}$, increases rapidly with decreasing metallicity. During epochs when metallicity evolves from $\rm [Fe/H]=-1.5$ to $\rm [Fe/H]=-0.9$, the Galaxy spins up and transitions from a turbulent Aurora state to a coherently rotating disk. This transformation is accompanied by many qualitative changes. In particular, we show that high N/O abundances similar to those observed in GN-z11 were common before the spin-up ($\rm [Fe/H]\lesssim -1.5$) when up to $\approx 50\%-70\%$ of the in-situ stars formed in massive bound clusters. The dramatic drop of $f_{\rm N/O}$ at $\rm [Fe/H]\gtrsim -0.9$ indicates that after the disk emerges the fraction of stars forming in massive bound clusters decreases by two orders of magnitude.
Vasily Belokurov, Andrey Kravtsov
2023-05-31T18:00:02Z
http://arxiv.org/abs/2306.00060v1
# Nitrogen enrichment and clustered star formation at the dawn of the Galaxy ###### Abstract Anomalously high nitrogen-to-oxygen abundance ratios [N/O] are observed in globular clusters (GCs), among the field stars of the Milky Way (MW), and even in the gas in a \(z\approx 11\) galaxy. Using data from the APOGEE Data Release 17 and the _Gaia_ Data Release 3, we present several independent lines of evidence that most of the MW's high-[N/O] stars were born in situ in massive bound clusters during the early, pre-disk evolution of the Galaxy. Specifically, we show that distributions of metallicity [Fe/H], energy, the angular momentum \(L_{z}\), and distance of the low-metallicity high-[N/O] stars match the corresponding distributions of stars of the _Aurora_ population _and_ of the in-situ GCs. We also show that the fraction of in-situ field high-[N/O] stars, \(f_{\rm N/O}\), increases rapidly with decreasing metallicity. During epochs when metallicity evolves from \([{\rm Fe/H}]=-1.5\) to \([{\rm Fe/H}]=-0.9\), the Galaxy spins up and transitions from a turbulent Aurora state to a coherently rotating disk. This transformation is accompanied by many qualitative changes. In particular, we show that high N/O abundances similar to those observed in GN-z11 were common before the spin-up (\([{\rm Fe/H}]\lesssim-1.5\)) when up to \(\approx 50\%-70\%\) of the in-situ stars formed in massive bound clusters. The dramatic drop of \(f_{\rm N/O}\) at \([{\rm Fe/H}]\gtrsim-0.9\) indicates that after the disk emerges the fraction of stars forming in massive bound clusters decreases by two orders of magnitude. keywords: stars: kinematics and dynamics - Galaxy: evolution - Galaxy: formation - Galaxy: abundances - Galaxy: stellar content - Galaxy: structure ## 1 Introduction The dawn of the Universal galaxy assembly can now be explored in tantalizing detail via direct high-resolution infrared JWST observations of high redshift galaxies (see, e.g., Donnan et al., 2023; Harikane et al., 2023; Finkelstein et al., 2023; Robertson et al., 2023). Thanks to the combined power of JWST's NIRCam and NIRSpec instruments, galaxies beyond \(z=8\) are revealed to be small, dense, metal-poor, and actively star-forming (see, e.g., Ono et al., 2022; Tacchella et al., 2023; Robertson et al., 2023; Bouwens et al., 2023; Curtis-Lake et al., 2023). In the Milky Way (MW), independent constraints on the physics of high-\(z\) star formation can be obtained using Galactic archaeology which interprets properties of individual ancient, low-metallicity stars from large surveys. The two approaches are complementary: direct observations of high-\(z\) galaxies probe the star formation and state of the interstellar medium enriched by the nucleosynthetic products from the first generations of massive stars, while Galactic archeology explores properties of the surviving low-mass stars that formed in the low-metallicity environment of the MW's progenitor. With the NIRSpec instrument, the brightest of the high-\(z\) galaxies are now amenable to unprecedented levels of scrutiny, going far beyond a simple spectroscopic redshift measurement. One such example is GN-z11, identified previously with the HST and _Spitzer_(Oesch et al., 2016). The follow-up observations with NIRSpec reported measurements of oxygen, carbon, and neon lines, as well as unusually prominent levels of nitrogen emission, indicating high relative [N/O] abundance at moderately low oxygen abundance [O/H] (Bunker et al., 2023). Using these measurements Cameron et al. 
(2023) estimated the relative nitrogen abundance in GN-z11 to be \(\rm log(N/O)\gtrsim-0.25\), which is much higher than this ratio in the Sun, i.e. \(\rm log(N/O)_{\odot}=-0.86\) (see Lodders, 2019). In the Galaxy's stellar populations such nitrogen over-abundance is rare but exists: stars with high [N/O] ratios are numerous in Globular Clusters (see, e.g., Bastian & Lardo, 2018; Gratton et al., 2019; Milone & Marino, 2022). This similarity in the abundance patterns and the possible connection between enrichment pathways in local globular clusters and GN-z11 have recently been pointed out in several studies (Cameron et al., 2023; Senchyna et al., 2023; Charbonnel et al., 2023).

Nitrogen over-abundance in the MW GCs is so blatant that it has been used routinely as a chemical fingerprint to identify field stars that were born in clusters (see, e.g., Martell & Grebel, 2010; Carollo et al., 2013; Martell et al., 2016; Schiavon et al., 2017; Fernandez-Trincado et al., 2017; Tang et al., 2019; Horta et al., 2021b; Phillips et al., 2022). The cluster origin of such field stars is evidenced by correlations and anti-correlations of abundances of different chemical elements similar to those observed in GCs: e.g., depleted [O/Fe] and [Mg/Fe] and enhanced [Al/Fe] (see e.g. Lind et al., 2015; Schiavon et al., 2017; Fernandez-Trincado et al., 2020; Horta et al., 2021). Curiously, not all high-[N/O] stars show the rest of the GC-specific chemical pattern. For example, a population of N-enhanced giants discovered by Fernandez-Trincado et al. (2020) in the Magellanic Clouds is consistent with the typical MW field in other projections of the chemical abundance space.

Given that the GC-born stars are straightforward to pick out in the field, a number of studies estimated the overall fraction of Galactic stellar mass contributed by clusters under various assumptions. For example, most studies agree that the overall observed fraction of stars with high [N/O] (hereafter high-[N/O] stars) at \(\rm[Fe/H]<-1\) is rather low, \(\approx 2\%-3\%\), but somewhat higher estimates can be obtained depending on the threshold in nitrogen enrichment, the metallicity range, and the location in the Galaxy used (see, e.g., Martell et al., 2016; Schiavon et al., 2017; Koch et al., 2019; Horta et al., 2021). To convert the observed high-[N/O] fraction into the total stellar mass born in clusters, assumptions are made as to the initial mass of the star clusters disrupted by \(z=0\). Another factor is the role of the so-called "first population" (1P) GC stars in clusters. The 1P stars themselves have chemical abundances indistinguishable from the rest of the Galaxy's field, but they are assumed to directly contribute to the anomalous chemistry of the "second population" (2P), which is manifested in N, Na and Al enrichment and C, Mg and O depletion (Bastian & Lardo, 2018; Gratton et al., 2019; Milone & Marino, 2022). Using the hypothesis that the 1P stars could have made up to \(\sim\)90% of the initial cluster's mass (e.g., D'Ercole et al., 2008; Conroy, 2012; Bastian et al., 2013), the measured \(\sim\)2% implies that \(\sim\)20% of the field stars were contributed by clusters (e.g., Martell et al., 2011). Assuming a more conservative value of the fraction of 1P stars, \(f_{\rm 1P}\approx 0.5\), comparable to a typical fraction observed in surviving MW clusters (Milone et al., 2017), the inferred GC contribution to the Galactic halo's stellar mass is reduced considerably, to \(\sim 5\%\) (see Koch et al., 2019).
Interpretation of such estimates, however, is not straightforward. In the metallicity range of the high-[N/O] stars, the stellar population is a mix of stars brought in by other galaxies (accreted) and stars born in the MW's main progenitor (in-situ). Thus, the fraction of high-[N/O] halo stars is an average over stars born in clusters in all of the progenitor galaxies that contributed to the MW's stellar halo. To address this, several studies attempted to assign high-[N/O] stars to distinct halo components based, for example, on the [Al/Fe] ratio (Kisku et al., 2021; Fernandez-Trincado et al., 2022). However, given the generally anomalous chemical abundances of many of the GC-born stars, including [Al/Fe], such an assignment is bound to be biased. Orbital information has also been used, but with inconclusive results (Fernandez-Trincado et al., 2020; Tang et al., 2020; Fernandez-Trincado et al., 2022). Several studies reported a prominent population of N-rich stars residing in the Milky Way's bulge (see Schiavon et al., 2017; Fernandez-Trincado et al., 2020). An increased incidence of N-rich stars towards the Galactic centre is also reported in Horta et al. (2021). Unfortunately, a star's presence in the bulge does not elucidate where and when it formed. Multiple origins remain viable: bulge stars can be part of either the accreted or the in-situ halo population, or even belong to the bar.

Recently, data from the _Gaia_ satellite has helped to clarify and systematize the make-up of the Galactic stellar halo. In confirmation of the hypothesis put forward by Deason et al. (2013), the bulk of the accreted debris within 40 kpc from the Galactic centre appears to have been donated via a single, massive and ancient merger event known as the _Gaia_ Sausage/Enceladus (GS/E, Belokurov et al., 2018; Helmi et al., 2018). A sizeable population of the Milky Way GCs was shown to belong to the GS/E progenitor dwarf galaxy by Myeong et al. (2018). Details of the Galactic GC classification have been considered and re-evaluated many a time since (e.g., Massari et al., 2019; Kruijssen et al., 2019; Myeong et al., 2019; Forbes, 2020; Callingham et al., 2022). A consensus amongst these works is that many of the GCs formed in situ can be identified based on either their location in the age-metallicity space or their high angular momentum. Several recent efforts have started to verify this nomenclature via high-resolution spectroscopic studies (e.g. Koch-Hansen et al., 2021; McKenzie et al., 2022; Monty et al., 2023). Not surprisingly, across the above classification efforts, the in-situ GCs have lower average energy compared to the accreted ones. Sometimes a fraction of the low-energy GCs is assigned to a separate accretion event (Massari et al., 2019; Forbes, 2020; Kruijssen et al., 2020; Callingham et al., 2022). It is unclear, however, what makes these low-energy GCs distinct from the rest of the in-situ clusters. In the original analysis of Massari et al. (2019), which inspired many of the follow-up works mentioned above, the in-situ GCs were split into two groups, the tightly bound "bulge" clusters and the clearly rotating "disk" clusters. Recently, however, the metal-poor (\(\rm[Fe/H]<-1\)) portion of the in-situ stellar halo has been shown to have a much wider range of azimuthal velocities compared to the accreted component, at least in the vicinity of the Sun (Belokurov & Kravtsov, 2022, hereafter BK22).
Discovered through chemical tagging, this component, dubbed _Aurora_, is the oldest Milky Way stellar population, formed before the Galaxy had a coherently rotating disk. The Aurora stellar population spans a range of energies, from the lowest levels typical for the stars near the Galactic centre to that of the Sun, but its density beyond the Solar radius falls sharply. The distribution of azimuthal velocities of the Aurora stars is significantly broader than that of the GS/E. However, unlike GS/E's debris, which has little net rotation, Aurora has a modest net spin of \(\sim 50\) km s\({}^{-1}\). BK22 show that at higher metallicities (\(\rm[Fe/H]>-1\)) the kinematic behaviour of the ancient MW stars exhibits a clear trend: the azimuthal velocity increases sharply as the Galaxy spins up to become a disk (see also Conroy et al., 2022; Rix et al., 2022). Aurora's stars also exhibit a large scatter in most elements, but in particular in Al, N, O, and Si, i.e. the same elements that are considered the best GC markers due to their anomalous behaviour. Consequently, BK22 conclude that Aurora's chemistry likely bears signs of a large contribution from massive star clusters, a hypothesis strengthened in the analysis of Myeong et al. (2022).

Thus there are multiple indications that, instead of an agglomeration of numerous fragments of distinct origin, the metal-poor stellar halo inside the Solar radius is dominated by one prominent population formed in-situ at early epochs. Throughout this Paper we refer to this pre-disk component as _Aurora_ to emphasize its in-situ origin, thus accepting that it may contain most or all of the alleged Kraken/Koala/Heracles structure (Kruijssen et al., 2019; Horta et al., 2021; Forbes, 2020). The main reasons to consider such a monolithic, single-origin classification scheme for the bulk of the metal-poor portion of the inner MW halo are twofold. First, detailed high-resolution chemical studies show little difference between Aurora and Kraken/Koala/Heracles (Belokurov & Kravtsov, 2022; Naidu et al., 2022; Myeong et al., 2022; Horta et al., 2023). Second, a look at the chemo-kinematics of these stars shows a clear continuity in their orbital properties across a wide range of angular momenta, from retrograde to mildly rotating, and across a wide range of energies, from the most bound, "bulge"-like, to approximately Solar (e.g., Arentsen et al., 2020). Consequently, in our study, field stars and Galactic GCs classified as in-situ have a broader range of total energies and angular momenta than considered previously.

In this paper we aim to consistently identify the in-situ and accreted components of the MW's stellar population in the metallicity range probed by the APOGEE survey, and of the MW's population of globular clusters. To this end, we use both the APOGEE measurements of chemical abundances of elements and the _Gaia_ EDR3 measurements of proper motions of stars and globular clusters. We use the resulting classification to estimate the fraction of stars with enhanced nitrogen abundance in the in-situ and accreted populations and the fraction of low-metallicity stars born in bound clusters. Given that we select such stars using the [N/O] ratio, we will refer to these stars in the context of this study as high-[N/O] stars.

The paper is structured as follows. Section 2 presents the details of our selection of field stars, high-[N/O] stars and likely GC members.
In Section 3, we analyze distributions of the selected stellar populations, decipher the origin of the field high-[N/O] stars and estimate the contribution of GC-like objects to star formation in the early Galaxy. We discuss the implications of our inference in Section 4, where we list the salient changes accompanying the MW's transition from the chaotic Aurora state to the stable disk. Section 5 lists our conclusions.

## 2 Data and sample selection

We use element abundances from the APOGEE Data Release 17 (Abdurro'uf et al., 2021), as recorded in the allStarLite catalogue provided on the survey's website. Following BK22, we remove stars with flags: STAR_BAD, TEFF_BAD, LOG_BAD, VERY_BRIGHT_NEIGHBOR, LOW_SNR, PERSIST_HIGH, PERSIST_JUMP_POS, SUSPECT_RV_COMBINATION, PERSIST_JUMP_NEG, as well as duplicates identified with the EXTRATARG flag. Distances are taken from the AstroNN value-added catalogue (see Leung & Bovy, 2019; Mackereth & Bovy, 2018). We rely on _Gaia_ EDR3 proper motions (Gaia Collaboration et al., 2021; Lindegren et al., 2021) and convert observed heliocentric stellar coordinates into the Galactocentric left-handed reference frame, assuming that the Sun is at \(X=R_{\odot}=8\) kpc from the Galactic Centre (c.f. a slightly larger value from Gravity Collaboration et al., 2022) and has Galactic \(Z_{\odot}=0\). Following Gaia Collaboration et al. (2022), we assume that the Sun's velocity is \(v_{\odot}=\{-9.3,251.5,8.59\}\) km s\({}^{-1}\). Total energies \(E\) are calculated in a three-component (bulge, disk and DM halo) Galaxy potential identical to that used in Belokurov et al. (2023). In what follows, energy is reported in units of \(10^{5}\) km\({}^{2}\) s\({}^{-2}\) and the vertical component of the angular momentum \(L_{z}\) in units of \(10^{3}\) kpc km s\({}^{-1}\).

### Sample of field red giants with low \(V_{\phi}\)

Our base sample of field stars is selected as follows. First, we remove stars within 1.3 degrees of all known Galactic satellites (in particular, globular clusters) as well as all objects with PROGRAMNAME=magclouds. Additionally, we consider only stars within 10 kpc from the Sun that are consistent with being red giants by using the following cuts: \(D<10\) kpc, \(\log(g)<3\) and \(T_{\rm eff}<5300\) K. We also cull stars with tangential velocity errors larger than 50 km s\({}^{-1}\) and with [Fe/H], [N/Fe] and [O/Fe] errors larger than 0.25 dex. Note that removing measurements with large uncertainties can bias [N/O] ratios high at low metallicities. We have checked for the presence of such a bias by re-running the entirety of our analysis without a cut on abundance errors and report that any changes in the measurements reported are within their associated uncertainties. Finally, to get rid of the fast-rotating, young stars in the Galaxy's thin disk, we apply a cut on the tangential component of the stellar velocity, \(V_{\phi}<160\) km s\({}^{-1}\). The combination of the above cuts leaves a total of \(\sim 30,000\) stars. Given the tangential velocity cut applied, stars in this sample are predominantly halo at [Fe/H]\(<-1\) and high-\(\alpha\) (thick) disk at [Fe/H]\(>-1\).
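The sequence of cuts above maps directly onto a simple table filter; the following sketch shows the intended logic (the column names are illustrative placeholders, not the actual APOGEE/AstroNN field names):

```python
import pandas as pd

def select_field_giants(df: pd.DataFrame) -> pd.DataFrame:
    """Base field red-giant sample of Section 2.1 (illustrative sketch)."""
    keep = (
        (df["dist_kpc"] < 10.0)        # within 10 kpc of the Sun
        & (df["logg"] < 3.0)           # red giants
        & (df["teff"] < 5300.0)
        & (df["v_tan_err"] < 50.0)     # tangential velocity error (km/s)
        & (df["fe_h_err"] < 0.25)      # abundance-uncertainty cuts (dex)
        & (df["n_fe_err"] < 0.25)
        & (df["o_fe_err"] < 0.25)
        & (df["v_phi"] < 160.0)        # remove fast-rotating thin-disk stars
    )
    return df.loc[keep]
```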
### Sample of globular cluster stars

We search the APOGEE DR17 catalog for likely Galactic globular cluster members using the following strategy. For each cluster, likely members are selected within 1.5 times the GC's tidal radius from its centre, as reported in the 2010 version of the GC catalog of Harris (2010). Only stars with cluster membership probabilities above 0.5, as calculated by Vasiliev & Baumgardt (2021), are kept. Finally, we apply the same \(\log(g)\) and \(T_{\rm eff}\) cuts as above and retain only those stars whose [Fe/H] in APOGEE is within 0.3 dex of the GC catalog value. This selection procedure yields \(\sim 4,200\) candidate GC members. When available, we use globular cluster distances from Baumgardt & Vasiliev (2021), model estimates of their initial masses from Baumgardt & Makino (2003), fractions of the 1st cluster population from Milone et al. (2017), and GC isochronal ages from VandenBerg et al. (2013). Total energy and the \(L_{z}\) angular momentum for each cluster are computed using the same assumptions about the Galaxy as described in the beginning of Section 2.

## 3 Results

### High-[N/O] field giants in the Milky Way

Our selection of stars with high nitrogen abundances is inspired by previous APOGEE-based studies (e.g. Schiavon et al., 2017; Horta et al., 2021), with two minor tweaks, as illustrated in Figure 1. Instead of applying a cut on [N/Fe], we require \([\rm N/O]-\sigma_{[\rm N/O]}>0.55\). The motivation for using [N/O] instead of [N/Fe] is twofold. First, there is a small but noticeable upward [N/Fe] trend with decreasing metallicity (see panel d of Figure 1), meaning that a selection based on a single [N/Fe] threshold is not viable. Second, we are looking to compare chemical properties of the MW stars to the extragalactic gas-phase abundances referenced to oxygen. We choose the [N/O] threshold i) to match approximately the highest [N/O] ratios in the metal-rich disk population (see panel c of Figure 1) and ii) to reach the N/O levels observed in the high-redshift galaxy GN-z11 (see Section 3.10). Note, however, that the exact value of the adopted [N/O] threshold does not affect our results significantly because the derived fractions of GC-born stars are computed self-consistently using a calibration on surviving clusters (see Sections 3.4 and 3.6).

Panel a of Figure 1 shows the distribution of [N/O] as a function of [Al/Fe] in our sample of GC stars. Stars in GCs exhibit both anomalously high [N/O] and [Al/Fe] ratios. Therefore, for GC-like high-[N/O] stars, we use both the N/O threshold as well as a cut on aluminium, [Al/Fe]\(>-0.1\); these are shown in panel a with solid black lines. Panel b of Figure 1 shows the distribution of field giants (selected using the criteria described above) in the space of [N/O] and [Al/Fe]. Although a proportionally much smaller number of high-[N/O] stars is observed in the field, most of them follow a correlation between [N/O] and [Al/Fe] very similar to that of the GC stars. In fact, the overall distribution of GC stars in panel a and of the field giants in panel b is quite similar for all values of [N/O] and [Al/Fe]. This is consistent with the conclusion, reached from the analyses presented in this paper, that a large fraction of the low-metallicity field stars were born in massive bound clusters.

As demonstrated by panel c of the Figure, the number of high-[N/O] GC-like stars varies significantly with metallicity [Fe/H]. Black lines give the 20th, 50th and 80th percentiles of the [N/O] distribution as a function of [Fe/H]. At higher metallicities, i.e. at [Fe/H]\(>-0.5\), the average [N/O] level starts to climb due to the increased nitrogen contribution from intermediate-mass Asymptotic Giant Branch (AGB) stars (see, e.g., Kobayashi et al., 2020; Johnson et al., 2023).
These nitrogen-rich and metal-rich stars are clearly distinct from the GC-born high-[N/O] stars because they do not exhibit any strong (anti)correlations typical of clusters (e.g., between Mg and Al). The bulk of these high-metallicity AGB-contributed disk stars with elevated [N/O] are removed by our \(V_{\phi}<160\) km/s cut. Also note a slight upward [N/O] trend with decreasing [Fe/H]. At least in part this is caused by culling measurements with high uncertainties. This weak [N/O] trend is, however, much flatter than the more noticeable increase in median [N/Fe] at low [Fe/H], as shown in panel d of the Figure (see also Figure 1 of Schiavon et al., 2017). We proceed by classifying the field high-[N/O] GC-like stars into those born in-situ in the Milky Way (including Aurora) and those formed in dwarf galaxies (mainly GS/E) and subsequently incorporated into the accreted stellar halo.

### Distinguishing accreted and in-situ stars and clusters

Due to a relatively slow pace of star formation and as a result of a strong metallicity dependence of the Al yield, dwarf galaxies never experience an over-abundance of Al compared to Fe, with the ratio [Al/Fe] staying low across a wide range of [Fe/H]. On the contrary, stars born in the Milky Way exhibit a rapid increase in [Al/Fe] around \(-1.5<\rm[Fe/H]<-0.9\). As shown in Hawkins et al. (2015), the distinct behaviour of [Al/Fe] (and [Na/Fe]) can be utilized to separate accreted and in-situ halo components (see also Das et al., 2020). Specifically, BK22 classify stars with \(\rm[Al/Fe]>-0.075\) as in-situ and those with \(\rm[Al/Fe]<-0.075\) as accreted. This approximate classification is supported by the observed abundance trends in the surviving massive MW dwarf satellites, which typically have \(\rm[Al/Fe]<-0.1\), as reported by Hasselquist et al. (2021). At low metallicities, \(\rm[Fe/H]\lesssim-1.5\), the use of [Al/Fe] is not viable as the in-situ and the accreted sequences start to merge. More importantly, [Al/Fe]-based classification into accreted and in-situ is not viable for GC-like high-[N/O] stars due to their anomalously high [Al/Fe] ratios, \(\rm[Al/Fe]>-0.1\). We thus adopt a different two-stage approach to classify stars and GCs into in-situ and accreted populations.

We first determine a boundary in the \(E-L_{z}\) space that separates in-situ and accreted objects for the stars without anomalous ratios and at metallicities where [Al/Fe]-based classification is robust (\(-1.4<\rm[Fe/H]<-1.1\)), as shown in the left panel of Figure 2. The panel shows \(E\) and \(L_{z}\) distributions of stars classified as accreted (primarily GS/E, orange) and in-situ (Aurora, blue) using the \(\rm[Al/Fe]=-0.075\) threshold. In addition, following BK22, we apply the cut \(\rm[Mg/Fe]<-0.3\,[Fe/H]-0.1\) to the accreted stars. As the left panel of Figure 2 reveals, the distribution of accreted (GS/E) stars has a narrow range of the \(z\)-component of angular momentum, \(|L_{z}|<0.6\), and is limited in energy to \(E\gtrsim-1.4\). This is in agreement with Belokurov et al. (2023), where a stellar sample based on the _Gaia_ DR3 data was used for the analysis. The in-situ stars (blue points), on the other hand, have a broader \(L_{z}\) distribution at energies similar to the lowest levels reached by the GS/E stars, i.e. \(E\sim-1.4\), where the two groups have a small overlap. In-situ stars continue to lower energies, where the occurrence of accreted stars (classified with the [Al/Fe] cut) is negligible and is likely due to occasional scatter of [Al/Fe] values below the threshold.
The solid line approximates the boundary separating the in-situ and accreted populations in the \(E-L_{z}\) space visible in the left panel and is described by the following equation: \[\begin{split} L_{z}<-0.58:\;E&=-1.3\\ -0.58<L_{z}<0.58:\;E&=-1.4+0.3L_{z}^{2}\\ L_{z}>0.58:\;E&=-1.325+0.075L_{z}^{2},\end{split} \tag{1}\] where \(E\) is in units of \(10^{5}\,\rm km^{2}\,s^{-2}\) and \(L_{z}\) is in units of \(10^{3}\,\rm kpc\,km\,s^{-1}\).

We then test that the same boundary separates in-situ and accreted populations at other metallicities. The middle panel of Figure 2 considers higher-metallicity stars, i.e. \(-1.1<\rm[Fe/H]<-0.5\). While some GS/E debris is still visible, the \(E\), \(L_{z}\) distribution is dominated by the in-situ stars, mainly the high-\(\alpha\) disk and the Splash (see Belokurov et al., 2020). Finally, the right panel of the Figure gives the distribution of stars with \(\rm[Fe/H]>-0.5\) and, unsurprisingly, shows no presence of any accreted debris. As we can see, the same boundary of Equation 1, shown by the solid line, separates the in-situ and accreted components well. Thus, in what follows, we use this boundary to classify both stars and GCs of all metallicities into in-situ and accreted. We note that the dominance of the high-\(\alpha\) and low-[Fe/H] Aurora at low energies explains the trends reported in Donlon & Newberg (2023) without invoking additional accretion events.

Although the boundary in Equation 1 is categorical and was derived as a simple approximation to the distribution of the in-situ and accreted stars, we have also tested it using machine learning classification. Specifically, we used the "Gradient Boosted Trees" (GBT) machine learning method (Friedman, 2001), implemented in the GradientBoostingClassifier class in the Sci-kit Learn package (Pedregosa et al., 2011), which allows for overlap between classes by assigning a class probability for stars in the overlap region. The method was trained with 80% of the sample of stars with reliable [Al/Fe]-based classification, while the remaining 20% of stars was used to test the classification accuracy. We then computed the classification accuracy obtained with the GBT and with the categorical boundary of Equation 1 for the test sample, finding that both methods result in \(\approx 96\%\) accuracy. We also tested other machine learning methods, such as Extremely Random Trees and artificial neural networks, finding comparable or lower accuracy. Thus, the accuracy of classification with Equation 1 is comparable with the accuracy of classification with machine learning methods.
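The boundary of Equation 1 is straightforward to implement; a short sketch of the resulting classifier (in the same units as above, with illustrative function names) is:

```python
import numpy as np

def e_boundary(lz):
    """Boundary energy of Equation (1); E in 1e5 km^2/s^2,
    L_z in 1e3 kpc km/s."""
    lz = np.asarray(lz, dtype=float)
    return np.where(lz < -0.58, -1.3,
           np.where(lz < 0.58, -1.4 + 0.3 * lz**2,
                    -1.325 + 0.075 * lz**2))

def is_accreted(e, lz):
    """True for objects above the boundary (accreted, high energy);
    False for in-situ (Aurora, low energy)."""
    return np.asarray(e) > e_boundary(lz)
```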
Figure 1: Stars with anomalous abundances in APOGEE DR17. **Panel a:** Greyscale shows the density of globular cluster stars in the space of [N/O] and [Al/Fe]. The horizontal (vertical) line is the chosen [N/O] ([Al/Fe]) threshold for the selection of high-[N/O] stars with GC-like chemical abundances. **Panel b:** Same as the previous panel but for field giants with \(V_{\phi}<160\) km/s. Small black points are the high-[N/O] stars with GC-like abundances. **Panel c:** Same as panel b, but for [N/O] vs [Fe/H]. Here and in the next panel, solid black lines show the 20th, 50th and 80th percentiles of the abundance ratio distribution as a function of metallicity. **Panel d:** Same as panel c, but for [N/Fe] vs [Fe/H]. Note a stronger upward trend at low metallicity.

Figure 2: Orbital properties of halo giants. **Left:** Distribution of Aurora (blue, selected with \(\rm[Al/Fe]>-0.075\)) and accreted (orange, selected with \(\rm[Al/Fe]<-0.075\) and an additional [Mg/Fe] cut; see text for details) giants with \(-1.4<\rm[Fe/H]<-1.1\) in the space of energy \(E\) (\(\times 10^{5}\)) and the vertical component of angular momentum \(L_{z}\) (\(\times 10^{3}\)). The solid black line marks the decision boundary separating stars into the accreted (high energy) and Aurora (in-situ, low energy) populations. Grey solid lines correspond to the maximal angular momentum at fixed energy, i.e. orbits with circular velocity \(V_{\phi}=V_{\rm circ}\). \(\odot\) marks the location of the Sun in the chosen potential. **Middle:** Same as Left but for stars with \(-1.1<\rm[Fe/H]<-0.5\). **Right:** Same as the previous panels but for \(-0.5<\rm[Fe/H]<0\); note that in this metallicity range no accreted stars are visible.

Figure 3: Properties of globular clusters classified as accreted (orange) and in-situ (including Aurora, blue). **Left:** Distribution of all Galactic globular clusters (small points) in the space of \(E\) and \(L_{z}\) (similar to the left panel of Figure 2). Accreted (in-situ, including Aurora) GCs with high-quality \(f_{\rm N/O}\) measurements are shown as orange (blue) filled circles. Blue (orange) contours show the density of in-situ (accreted) stars from the left panel of Figure 2. **Middle:** Median [Mg/Fe] abundances with associated uncertainties as a function of the cluster's [Fe/H] for accreted and Aurora GCs, classified as shown in the left panel. Only GCs with median Mg uncertainties less than 0.025 dex are shown. Note that GCs classified as in-situ have systematically higher [Mg/Fe] compared to those classified as accreted. Orange (blue) bands show median stellar [Mg/Fe] abundance ratios (with associated uncertainties) for field stars as a function of metallicity. These stars have been classified using the same \(E\), \(L_{z}\) boundary shown in the left panel. Note a clear peak in [Mg/Fe] of both in-situ GCs and in-situ field stars around \(\rm[Fe/H]\approx-1\) (the Spin-up, marked by a grey vertical band). **Right:** Orange (blue) points show median [Mg/Fe] as a function of median [Al/Fe] for stars in the Galactic GCs observed by APOGEE and classified as accreted (in-situ).

### Distinct chemical properties of accreted and in-situ GCs

Figure 3 tests our \(E,L_{z}\)-based accreted/in-situ classification on the Galactic GCs. The left panel of the Figure shows the positions of all GCs with measured orbital properties as small coloured dots. Orange marks the accreted GCs and blue the in-situ ones (including Aurora). Out of the 149 Galactic GCs considered, 98, or \(\approx 2/3\), are classified as in-situ. GCs with sufficient APOGEE measurements are shown as large filled circles coloured according to their classification. There are 25 in-situ and 13 accreted GCs in our APOGEE sample. For further analysis we only consider GCs with a sufficient number of measurements. For example, at least 3 GC member stars are required when studying the GC properties reported in this Section; when the high-[N/O] stars are concerned, at least 10 candidate members are required, as well as a relative uncertainty on the fraction of the high-[N/O] stars \(f_{\rm N/O}\) (selected according to the conditions stipulated in Section 3.1) of less than 50%. The latter combination of cuts leaves only 28 clusters, of which 11 are classified as accreted and 17 as in-situ.

The middle panel of Figure 3 shows the median [Mg/Fe] ratios (with associated uncertainties) for the accreted and in-situ GCs as a function of metallicity.
Note that in this Figure we show the median metallicity of each cluster's APOGEE member stars. There is little overlap between the two groups. The accreted GCs typically have median [Mg/Fe] ratios lower by some 0.15 dex than those of the in-situ clusters. The [Mg/Fe] values in Galactic GCs can be compared to the halo stars across the same metallicity range, separated into accreted (orange band) and in-situ (blue band) using the same \(E,L_{z}\) criteria. Reassuringly, halo stars follow the same trends in the space of [Mg/Fe] and [Fe/H]; in particular, at [Fe/H]\(>-1.5\), the median [Mg/Fe] for in-situ stars is \(\sim 0.15\) dex higher than that for the accreted population (a similar conclusion is reached in Horta et al., 2020).

There is a striking trend of [Mg/Fe] with increasing metallicity exhibited by both the field in-situ stars and the MW in-situ GCs in the middle panel of Figure 3. At \(\rm[Fe/H]\approx-1.3\), [Mg/Fe] starts to rise and exhibits a broad peak at \(\rm[Fe/H]\approx-1\) to \(-0.7\). The magnitude of this [Mg/Fe] increase is modest, just under 0.1 dex. Beyond \(\rm[Fe/H]\approx-0.7\), the [Mg/Fe] ratio is decreasing. As traced by field stars, this [Mg/Fe] peak can also be seen in Figure 7 of BK22 (see also the left panel of Figure 5 below) and is discussed at length in Conroy et al. (2022). Here, for the first time, we show that the same pattern is displayed by the Galactic in-situ GCs. As elucidated in Weinberg et al. (2017), bumps in \(\alpha\)-element ratios of the order of \(0.1-0.3\) dex are a tell-tale sign of a star-formation burst in a system converting a sizeable portion of its gas into stars whilst retaining the core-collapse supernova enrichment products. \(\alpha\)-bumps similar to that reported here for the Galactic in-situ stars have been seen in several massive MW dwarf satellites by Hasselquist et al. (2021). Conroy et al. (2022) suggest that the MW in-situ \(\alpha\)-bump can be explained by models in which the star formation efficiency increases sharply, by a factor of \(\sim\)10, around \(\rm[Fe/H]\approx-1.8\).

As the middle panel of Figure 3 demonstrates, below \(\rm[Fe/H]=-1.8\) both the in-situ and the accreted stars show a systematic and noticeable increase in [Mg/Fe]. We believe that this [Mg/Fe] rise at low [Fe/H] may not be a genuine characteristic of the Galactic field stars but rather a sign of the APOGEE pipeline struggling with measurements of low and intermediate \(\alpha\) abundances at low [Fe/H]. This hypothesis is based on the following tests: i) the median GC values show a less pronounced rise compared to the field stars, ii) the increase at low metallicities is reduced if no cuts are applied on abundance uncertainties or other effective signal-to-noise filters, and iii) [Si/Fe] shows a much flatter behaviour at [Fe/H]\(<-1.5\).

The right panel of Figure 3 gives the distribution of the in-situ (blue) and accreted (orange) GCs in the space of median [Al/Fe] and [Mg/Fe]. GCs with [Fe/H]\(<-1.8\) are excluded due to the suspected bias in the [Mg/Fe] measurements mentioned above. The in-situ and accreted GC populations occupy distinct regions of the [Mg/Fe]-[Al/Fe] space and show little overlap. On average, the in-situ GCs have higher values of [Al/Fe]. At fixed [Al/Fe], the in-situ GCs have higher [Mg/Fe] ratios. The only two objects that buck this trend are NGC 6388 (classified as in-situ, blue point at \(\rm[Mg/Fe]\approx 0\)) and NGC 288 (classified as accreted, orange point with \(\rm[Mg/Fe]>0.25\)).
The peculiar properties of these two clusters have been noted before and are discussed in the literature (see e.g. Myeong et al., 2019; Massari et al., 2019; Horta et al., 2020). Most recently, Carretta & Bragaglia (2022) argued for the in-situ origin of NGC 6388 based on the abundance pattern of iron-peak elements. For NGC 288, Monty et al. (2023) show that it does not follow the chemical trends of other GS/E GCs. As revealed by the GCs' detailed chemical properties, the misclassification rate is at a level similar to that indicated by our experiments with the GBT ML method. Additionally, we note that including a rotating bar can and will affect some of the GCs' orbits and consequently might change their membership in the accreted/in-situ groups. However, when the effect of the bar is included, only 3 out of the 40 clusters considered in the study of Perez-Villegas et al. (2020) have been singled out as potential outer halo interlopers. Notwithstanding these intricacies, the middle and right panels of Figure 3 show that the simple \(E,L_{z}\)-based classification of stars and GCs into accreted (primarily GS/E) and in-situ (primarily Aurora) using the boundary defined by Equation 1 works well. In particular, this classification results in populations of GCs with distinct chemical properties: the in-situ GCs exhibit systematically higher Mg abundance, and the two GC classes occupy distinct regions in the \(\rm[Al/Fe]-[Mg/Fe]\) space.

### Trends of the high-N fraction with cluster properties

Figure 4 explores the behaviour of the fraction of high-[N/O] stars in the Galactic GCs observed by APOGEE DR17. We have excluded NGC 6715 (M 54), residing in the core of the Sgr dwarf galaxy, as well as NGC 5139 (\(\omega\) Cen). The left panel of the Figure shows \(f_{\rm N/O}\) as a function of the cluster's present-day mass. While there is a considerable scatter, there is a clear trend of \(f_{\rm N/O}\) increasing with increasing cluster mass, from \(\approx 10\%\) in \(M\approx 10^{5}M_{\odot}\) clusters to \(f_{\rm N/O}\approx 50\%\) in clusters of \(M\approx 10^{6}M_{\odot}\). There is also an indication that the trend is more pronounced for the in-situ GCs (shown in blue). A second-degree polynomial fit (\(\pm 0.075\)) to the in-situ \(f_{\rm N/O}\) values is shown as a blue band. The middle panel of the Figure shows that there is also a trend of increasing \(f_{\rm N/O}\) with the GC's initial mass. The dependence on the initial mass appears tighter and stronger compared to the correlation with the present-day mass shown in the left panel. The quadratic fit to all of the available data is shown as a grey band, which includes \(\pm 0.075\) scatter around the mean model. Finally, the right panel of Figure 4 shows that \(f_{\rm N/O}\) decreases with the decreasing fraction of the 1st stellar population in clusters (where available, as measured by Milone et al., 2017). There is no obvious difference between the in-situ and accreted GCs in terms of the trends of \(f_{\rm N/O}\) as a function of GC mass or the fraction of the 1st population, in agreement with the results of Milone et al. (2020). The trends shown in Figure 4 are consistent with those previously reported in the literature (see e.g. Bastian & Lardo, 2018; Gratton et al., 2019). Note, however, that the above dependence on the initial GC mass allows us to calibrate \(f_{\rm N/O}\) (computed, as in many previous studies, according to a somewhat arbitrary threshold) and thus link it directly to the Galaxy's original cluster population.
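The calibration just described amounts to a low-order polynomial fit with a fixed scatter band; a minimal sketch (function and variable names are our own, not the authors' code) is:

```python
import numpy as np

def fno_mass_calibration(log_m_ini, f_no, deg=2, band=0.075):
    """Quadratic fit of f_N/O versus log10(initial GC mass), with the
    fixed +/-0.075 scatter band quoted in the text (Fig. 4, middle)."""
    model = np.poly1d(np.polyfit(log_m_ini, f_no, deg))
    def predict(log_m):
        mean = model(np.asarray(log_m))
        return mean, mean - band, mean + band  # mean model and band edges
    return predict
```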
### The majority of high-[N/O] field stars belong to Aurora

The left panel of Figure 5 shows the metallicity distributions of in-situ (blue), accreted (orange), and high-[N/O] stars (black histogram). The high-[N/O] sample shows a sharp truncation at \(\rm[Fe/H]\approx-1\), a broad peak at \(-1.5\lesssim\rm[Fe/H]\lesssim-1\), and a considerable drop-off towards \(\rm[Fe/H]\approx-2\). All populations, i.e. the in-situ, accreted and high-[N/O] stars, have approximately the same slope of the metallicity distribution at \(\rm[Fe/H]<-1\) (see discussion in Rix et al., 2022). At \(\rm[Fe/H]\approx-1\), the in-situ distribution shows a sharp upward inflection matching the location of the cliff in the [Fe/H] distribution of the high-[N/O] stars (see also Figure 4 in Belokurov & Kravtsov, 2022). At \(\rm[Fe/H]\approx-1\), the slope of the in-situ metallicity distribution changes abruptly from \(\rm d\log(n)/d[Fe/H]\approx 1\) to \(\rm d\log(n)/d[Fe/H]\approx 2.3\). The metallicity distribution of the accreted stars does not show any obvious features around \(\rm[Fe/H]\approx-1\); instead it exhibits a continuous downward slope from \(\rm[Fe/H]\approx-1.3\) to \(\rm[Fe/H]\approx-0.7\), where it is sharply truncated (in agreement with other studies, see, e.g., Belokurov et al., 2018; Mackereth et al., 2019; Belokurov et al., 2020; Feuillet et al., 2020; Sanders et al., 2021).

The middle panel of Figure 5 shows the distribution of the high-[N/O] stars in the \(E-\rm[Fe/H]\) space. Apart from the sharp truncation in the number of high-[N/O] stars at \(\rm[Fe/H]\approx-1\) at all energies, there does not seem to be any strong correlation between \(E\) and [Fe/H]. Note, however, that at \(\rm[Fe/H]<-1.7\) the selected high-[N/O] stars do not span the entire range of energies of their high-metallicity counterparts. It is difficult to assess whether this is a sign of a genuine change in the properties of the high-[N/O] population or is simply a result of the small statistics of stars in our sample.

Figure 4: Fraction of high-[N/O] stars \(f_{\rm N/O}\) in the Milky Way globular clusters. **Left:** \(f_{\rm N/O}\) as a function of the present-day GC mass. The blue dashed line and band show a quadratic fit to the measurements of the in-situ GCs. **Middle:** \(f_{\rm N/O}\) as a function of the GC's initial mass, as estimated by Baumgardt & Makino (2003). The dashed blue line shows the quadratic fit to the correlation with present-day mass in the left panel. The grey band shows an approximate quadratic fit to all GCs shown in this panel. **Right:** \(f_{\rm N/O}\) as a function of the fraction of the GC's 1st population stars, as measured by Milone et al. (2017).

Figure 5: **Left:** The metallicity distribution of high-[N/O] field giants (black) is compared to the metallicity distributions of field giants classified as accreted (orange) and in-situ (blue). Only stars with low azimuthal velocities are used. Around \(\rm[Fe/H]\approx-1\) (the Spin-up, marked by the vertical grey band) there is a sharp upward bend in the slope of the distribution of the in-situ field stars (as indicated by the blue arrow) and a sharp truncation in the metallicity distribution of the high-[N/O] stars. **Middle:** Energy of high-[N/O] stars as a function of metallicity. The horizontal black line shows the approximate energy threshold below which the majority of stars are of in-situ nature. **Right:** \(E\), \(L_{z}\) distribution of high-[N/O] stars (black dots). Blue (orange) contours show the density of in-situ (accreted) stars from the left panel of Figure 2.
The right panel of Figure 5 shows the \(E-L_{z}\) distribution of the GC-like high-[N/O] stars selected as described in Section 3.1. The absolute majority of these stars fall within the blue contours marking the distribution of the in-situ stars. This indicates that most of the high-[N/O] stars belong to the Aurora population of low-metallicity in-situ stars. This is further clarified in Figure 6, which shows the fraction of the Aurora, accreted, and high-[N/O] stars as a function of their total energy \(E\). Note that at low energies a few stars designated as accreted are likely misclassified in-situ stars. Due to the sharp rise in [Al/Fe] with increasing [Fe/H], some of the low-metallicity Aurora stars drop below the nominal threshold. In the intermediate range of \(-1.4<E<-1.3\), the typical contribution of Aurora stars is \(\approx 45\%\). The fraction of high-[N/O] stars is always low, typically \(\sim 1\%\). Figure 6 shows that in the metallicity range considered \(\approx 90\%\) of stars with \(E>-1.2\) belong to the accreted population. The striking fact shown in Figure 6 is that the trends of \(f_{\rm N/O}\) and \(f_{\rm Aurora}\) with energy are remarkably similar. As the energy decreases, the fraction of in-situ stars grows sharply, reaching \(\approx 80\%\) at \(E\approx-1.5\). This is a clear indication that these stellar populations are closely related.

### Fraction of field high-[N/O] stars as a function of [Fe/H]

Figure 7 shows the fraction of high-[N/O] field stars in the Milky Way as a function of metallicity. At \(\rm[Fe/H]<-1\), the fraction is flat at approximately \(1\%<f_{\rm N/O}<2\%\). In contrast, at \(\rm[Fe/H]\sim-0.9\), \(f_{\rm N/O}\) drops abruptly, decreasing with increasing [Fe/H] by an order of magnitude. This dramatic change in \(f_{\rm N/O}\) with metallicity mirrors the chemo-kinematic trends observed in the in-situ stars by BK22. They associate a rapid change in the chemical abundance spreads and the overall spin of the Galaxy's stars with the Milky Way transitioning from a chaotic pre-disk state into a coherently rotating disk. Indeed, Figure 8 shows that the sharp increase in \(f_{\rm N/O}\) is due entirely to the stars in the Aurora population, while \(f_{\rm N/O}\) of the accreted (primarily GS/E) population does not change significantly with metallicity. Moreover, the in-situ population (Aurora) shows a \(\approx 5\) times higher fraction of high-[N/O] stars compared to that of the GS/E. This increase somewhat depends on the selection method of the Aurora population (using \(E,L_{z}\) or [Al/Fe]), as shown in the right panel of Figure 7. Reassuringly, the two methods (\(E\), \(L_{z}\)-based and [Al/Fe]-based) yield \(f_{\rm N/O}\) curves that are similar across a wide range of metallicity, i.e. for \(\rm[Fe/H]>-1.3\). Around \(\rm[Fe/H]\approx-1.4\) the [Al/Fe] ratio of the in-situ stars starts to drop below the chosen threshold (see Figure 2 in BK22) and, as a consequence, the [Al/Fe]-based in-situ \(f_{\rm N/O}\) is biased high compared to the \(E,L_{z}\)-based one at this metallicity. However, these differences are much smaller than the overall trend itself and can be viewed as the uncertainty with which \(f_{\rm N/O}\) is estimated for the Aurora population.

The sharp increase of the fraction of high-[N/O] Aurora stars \(f_{\rm N/O}\) with decreasing [Fe/H] shows that the processes that produced anomalous amounts of nitrogen in the regions where high-[N/O] stars were born were much more prevalent at lower metallicities.
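The \(f_{\rm N/O}\)([Fe/H]) trends just discussed amount to binned fractions with simple binomial uncertainties; a sketch of such a calculation (names are illustrative, not the authors' code) is:

```python
import numpy as np

def binned_high_no_fraction(feh, is_high_no, edges):
    """Fraction of high-[N/O] stars in [Fe/H] bins, with binomial errors."""
    feh = np.asarray(feh)
    flag = np.asarray(is_high_no, dtype=bool)
    idx = np.digitize(feh, edges) - 1          # bin index for each star
    nbins = len(edges) - 1
    frac = np.full(nbins, np.nan)
    err = np.full(nbins, np.nan)
    for b in range(nbins):
        sel = idx == b
        n = sel.sum()
        if n > 0:
            frac[b] = flag[sel].mean()
            err[b] = np.sqrt(frac[b] * (1.0 - frac[b]) / n)
    return frac, err
```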
If massive star clusters are plausible sites of high-[N/O] star formation, we can expect the metallicity distribution of the high-[N/O] stars to be similar to that of the globular clusters: if high-[N/O] stars were produced in clusters, their metallicity distribution should trace that of the disrupted clusters. Moreover, given that \(f_{\rm N/O}\) is higher in more massive clusters (Figure 4), if the fraction of massive clusters depended on metallicity, the metallicity distribution of field high-[N/O] stars could be somewhat different from the metallicity distribution of clusters. However, models indicate that the fraction of disrupted clusters is nearly independent of metallicity (O. Gnedin, priv. communication), and there is no significant trend of the cluster mass distribution with metallicity for the observed surviving GCs. Thus, we can generally expect the metallicity distributions of high-[N/O] stars and _surviving_ clusters to be quite similar.

Figure 6: Fraction of Aurora (accreted) stars shown in blue (orange) and fraction of high-[N/O] GC-like giants (multiplied by 50) shown in black as a function of energy. The accreted, in-situ and high-[N/O] samples are the same as shown in Figure 2. Note the similarity of the \(f_{\rm N/O}\) and \(f_{\rm Aurora}\) dependencies on energy. For comparison, the dashed grey line gives the energy distribution of high-[N/O] stars with low [Al/Fe], i.e. those likely not born in GCs.

Indeed, the upper left panel of Figure 9 shows that the cumulative metallicity distributions of the in-situ high-[N/O] stars and the in-situ MW globular clusters are quite similar for stars and GCs with \(\rm[Fe/H]<-1\) and differ at higher metallicities. For this comparison we applied a cut on \(V_{\phi}\) to the sample of GCs, to account for the similar cut used in selecting high-[N/O] stars. The similarity of the metallicity distributions is confirmed by the Kolmogorov-Smirnov (KS) probability that the two distributions agree at \(\rm[Fe/H]<-1\): the 95% range of this probability, evaluated using bootstrap resampling, extends up to a probability of \(0.21\). In contrast, the metallicity distribution of the in-situ high-[N/O] stars is different from that of the ex-situ GCs, and the 95% range of the KS probability extends only to 0.01 (with a mean value of \(\approx 10^{-3}\)).

The middle and right panels of Figure 9 show a comparison of the cumulative distributions of the total energy and the \(z\)-component of the angular momentum of the high-[N/O] stars and GCs with metallicities \(\rm[Fe/H]<-1\). It clearly shows that the distributions of the energy and angular momentum of the in-situ stars and GCs agree, while the in-situ stars are inconsistent with the distributions of the ex-situ GCs both in metallicity and energy. We also find that the distributions of metallicity, energy, and angular momentum of the ex-situ high-[N/O] stars and ex-situ GCs are consistent. The uncertainties in this comparison are larger, however, as there are only 25 ex-situ high-[N/O] stars in our sample.
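The bootstrap 95% ranges of the KS probability quoted above can be reproduced with a few lines of SciPy. In the sketch below the two [Fe/H] samples are hypothetical placeholders, both assumed to be already restricted to \(\rm[Fe/H]<-1\).

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
# Hypothetical [Fe/H] samples, already cut to [Fe/H] < -1.
feh_stars = rng.normal(-1.5, 0.3, 300)
feh_gcs = rng.normal(-1.5, 0.3, 60)

pvals = []
for _ in range(2000):
    # Resample each set with replacement and recompute the KS probability.
    s = rng.choice(feh_stars, feh_stars.size, replace=True)
    g = rng.choice(feh_gcs, feh_gcs.size, replace=True)
    pvals.append(ks_2samp(s, g).pvalue)

lo, hi = np.percentile(pvals, [2.5, 97.5])
print(f"KS probability 95% range: {lo:.3g} - {hi:.3g}")
```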
Still, the distributions of the ex-situ stars and the in-situ stars are strongly inconsistent in energy (the probability of being drawn from the same distribution is 0.0), which is not surprising because the boundary between in-situ and ex-situ objects, indicated by [Al/Fe], is itself drawn in the \((E,L_{z})\) plane (see Figure 2).
The estimated fraction of low-metallicity Aurora stars born in clusters ranges from \(\approx 15\%\) to \(\approx 70\%\). The range corresponding to 50% of all stars born in clusters is shown as a horizontal gray band in the middle panel of Figure 8. Aurora stars of higher metallicity, \(\rm[Fe/H]\approx-1\), have \(f_{\rm N/O}\approx 1\%\), and the fraction of stars born in clusters is correspondingly four times smaller: \(\approx 4-17\%\). These estimates can be compared to the GS/E progenitor, where a considerably smaller fraction of \(2-10\%\) is estimated to have been born in clusters, independent of metallicity. Note that our estimate of \(f_{\rm N/O,cl}\) implicitly assumes that all of the stellar mass initially in GCs is now in the field. This is supported by the modelling results of Rodriguez et al. (2023, see their Fig. 3) and Gieles & Gnedin (2023, see their Fig. 3), which indicate that the majority of the clusters contributing large numbers of high-[N/O] stars disrupt by \(z=0\). Furthermore, according to the initial mass estimates for the surviving in-situ clusters by Baumgardt & Makino (2003), that population lost \(\approx 84\%\) of its initial combined mass. Thus, within the uncertainties of our estimate, the fraction of stellar mass born in GCs that is still in surviving clusters is negligible.

### Estimate of mass of the Aurora stellar population

The estimate of the mass fraction of the in-situ stars born in clusters presented above can be used to estimate the stellar mass of the Aurora population, the in-situ stars with metallicities \(\rm[Fe/H]\lesssim-1\). This is useful because the MW globular cluster population is more or less complete, while Aurora stars are observed over a limited volume in a sample with a fairly uncertain selection function. Indeed, this mass can be estimated as

\[M_{\rm Aur}=\frac{1}{1-f_{\rm disrupt}}\sum_{i=1}^{N_{\rm GC,ins}}\frac{M_{i,{\rm ini}}}{f_{i,{\rm cl}}}, \tag{2}\]

where \(f_{\rm disrupt}\) is the fraction of the initial clusters that were disrupted, \(N_{\rm GC,ins}\) is the number of observed in-situ clusters with metallicity \(\rm[Fe/H]<-1\), \(f_{i,{\rm cl}}=f_{\rm cl}([{\rm Fe/H}]_{i})\) is the fraction of star formation that occurred in bound star clusters at the metallicity \([{\rm Fe/H}]_{i}\) of cluster \(i\), and \(M_{i,{\rm ini}}\) is the initial mass of the observed cluster \(i\). We use the initial GC mass estimates from Baumgardt & Makino (2003) and the approximation \(f_{\rm cl}([{\rm Fe/H}])=f_{\rm N/O}([{\rm Fe/H}])/0.14\), where \(f_{\rm N/O}([{\rm Fe/H}])\) is an approximation to the trend shown in Figure 7:

\[f_{\rm N/O}=\frac{a}{(1+x^{b})^{g/b}}, \tag{3}\]

where \(x=10^{[{\rm Fe/H}]+1.2}\), \(a=3\times 10^{-2}\), \(b=6\), \(g=3.25\). Using equations (2) and (3) and our classification of the in-situ clusters in the Baumgardt & Vasiliev (2021b) catalog, we estimate the stellar mass of the Aurora population to be \(M_{\rm Aur}\approx 4.6\times 10^{8}\,M_{\odot}\).
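Equations (2) and (3) translate directly into code. In the sketch below the in-situ cluster initial masses and metallicities (`m_ini`, `feh_gc`) are hypothetical stand-ins for the Baumgardt & Makino (2003) values, and looping over \(a=2\times 10^{-2}\) to \(4.5\times 10^{-2}\) anticipates the bracketing exercise of the next paragraph.

```python
import numpy as np

def f_no(feh, a=3e-2, b=6.0, g=3.25):
    """Eq. (3): fraction of high-[N/O] stars as a function of [Fe/H]."""
    x = 10.0 ** (feh + 1.2)
    return a / (1.0 + x**b) ** (g / b)

def m_aurora(m_ini, feh_gc, a=3e-2, f_disrupt=0.0):
    """Eq. (2) with f_cl = f_NO / 0.14; scales as (1 - f_disrupt)^-1."""
    f_cl = f_no(feh_gc, a=a) / 0.14
    return np.sum(m_ini / f_cl) / (1.0 - f_disrupt)

# Hypothetical in-situ GC sample with [Fe/H] < -1 (masses in Msun).
rng = np.random.default_rng(3)
m_ini = rng.lognormal(np.log(1e6), 0.8, 70)
feh_gc = rng.uniform(-2.3, -1.0, 70)

for a in (2e-2, 3e-2, 4.5e-2):   # bracketing the f_NO normalization
    print(f"a={a:.3g}:  M_Aur = {m_aurora(m_ini, feh_gc, a=a):.2e} Msun")
```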
The uncertainty of this estimate is due to the uncertainty in \(f_{\rm cl}([{\rm Fe/H}])\), i.e. in the estimated fraction of high-[N/O] stars in clusters, the uncertainty in the estimates of the initial cluster masses, and the uncertainty in the fraction of the initial globular cluster population that was disrupted before \(z=0\). A rough estimate of the first uncertainty can be obtained by changing the value of the constant \(a\) in equation (3) from \(2\times 10^{-2}\) to \(4.5\times 10^{-2}\), to roughly encompass the uncertainties of the \(f_{\rm N/O}\) measurements in Figure 7. This gives estimates of the Aurora population mass ranging from \(3\times 10^{8}\,M_{\odot}\) to \(6.9\times 10^{8}\,M_{\odot}\). Thus, the mass of this population is approximately

\[M_{\rm Aur}=(5\pm 2)\times 10^{8}\,(1-f_{\rm disrupt})^{-1}\,M_{\odot}, \tag{4}\]

although the uncertainty in this estimate only includes the uncertainty in \(f_{\rm N/O}\) and not the other sources listed above. The uncertainty in the initial cluster mass estimates is a systematic uncertainty of this result. The factor \((1-f_{\rm disrupt})^{-1}\) accounts for the fact that we use surviving clusters to make the estimate, while a fraction \(f_{\rm disrupt}\) of the initial mass in clusters was returned to the field due to cluster disruption. Models of GC evolution discussed in the previous section indicate that the fraction of disrupted clusters is \(f_{\rm disrupt}\approx 2/3\). The actual mass of the Aurora population is thus larger by the corresponding factor than the above estimate.

Figure 9: Cumulative distributions of metallicity ([Fe/H]), energy and the \(L_{z}\) component of the angular momentum of the high-[N/O] stars (gray solid lines and shaded bands) and MW globular clusters (blue dashed lines and shaded bands). To account for the velocity cut \(V_{\phi}<160\) km/s for the stars, we applied the same cut to GCs for this comparison. The gray and blue shaded bands show the corresponding 95% ranges estimated using bootstrap resampling. The top (bottom) row compares distributions of the in-situ high-[N/O] stars with in-situ (ex-situ) GCs. Note that the [Fe/H] distributions are compared for the full range of metallicity, while the \(E_{\rm tot}\) and \(L_{z}\) distributions are compared for stars and GCs in the metallicity range where the metallicity distributions of in-situ stars and GCs agree (upper left panel): \(\rm[Fe/H]<-1\). The figure shows that the distributions of metallicity, energy, and momentum of the in-situ GCs and in-situ high-[N/O] stars are similar in this metallicity range, which indicates that these stars originated from low-metallicity GCs formed in the Milky Way.

Notwithstanding the uncertainties, this estimate shows that the mass of the in-situ low-metallicity population of stars that were born before the MW disk started to form, and that now have a spheroidal distribution, is comparable to the mass of the MW stellar halo of \(\approx(1.4\pm 0.4)\times 10^{9}\,M_{\odot}\) (e.g., Deason et al., 2019). Given that this in-situ population is more concentrated towards the centre of the Galaxy than the overall halo (Rix et al., 2022; see the comparison of the distance distributions of the Aurora stars and halo stars in Figure 10), it is expected to dominate the spheroidal stellar component at \(r\lesssim 10\) kpc. By definition, the Aurora population is the population of in-situ stars formed before the spin-up of the Milky Way disk.
Its mass estimate above is thus also an estimate of the Milky Way progenitor's stellar mass at the time when the disk started to form, and it indicates that this occurred when the Milky Way's stellar mass was between the masses of the Small and Large Magellanic Clouds.

### Radial distribution of high-[N/O] stars

Horta et al. (2021) show that the fraction of N-rich stars among halo stars increases by a factor of about six from \(r\approx 10\) kpc to \(r=2\) kpc. They put forward a hypothesis that such an enhancement of GC-born stars may be linked to an increase in accretion and disruption of dwarf galaxies in the high-\(z\) Milky Way. Note, however, that a much flatter trend out to \(\sim 40\) kpc is also reported (see Koch et al., 2019; Horta et al., 2021). Figure 10 explores the radial dependence of the fraction of high-[N/O] stars in the Galactic halo (black line, grey band). We also compute the ratio of the number of high-[N/O] stars to the number of stars in the GS/E (\(N_{\rm N/O}/N_{\rm GS/E}\), orange) and Aurora (\(N_{\rm N/O}/N_{\rm Aur}\), blue) components. Here we resort to the [Al/Fe]-based in-situ/accreted classification, as the energy-based selection would induce a bias in the radial distribution. Therefore, the radial profiles shown in Figure 10 are limited to stars with \(-1.4<\rm[Fe/H]<-1.1\), where the [Al/Fe] selection is most reliable (see the discussion above). As was shown in the right panel of Figure 7, the difference between samples selected by energy and by [Al/Fe] is much smaller than the trends we are considering.

While the details of our selection of high-[N/O] stars differ from those in Horta et al. (2021), the halo fraction of these stars shows a very similar trend with Galactocentric radius to the one estimated in that study. As the black curve indicates, there are \(\sim 6\) times more high-[N/O] GC-like halo stars in the centre of the Galaxy compared to their relative counts around the Sun. A much more dramatic radial change in the fraction of high-[N/O] stars can be seen when their numbers are compared to the counts of the accreted debris (composed mostly of the GS/E stars). Relative to the GS/E stars, the high-[N/O] population increases by a factor of \(\sim 25\) from 10 kpc to 1 kpc. This is in agreement with recent studies of the inner stellar halo, which indicate that the fraction of the GS/E stars decreases within \(\sim 3\) kpc of the Galactic centre (see Iorio & Belokurov, 2021; Belokurov et al., 2023). The Galactocentric radial distribution of the tidal debris from an accreted satellite depends on the mass of the dwarf and the merger time (see Deason et al., 2013; Horta et al., 2023). Note that the GS/E progenitor galaxy was likely of sufficient mass to have experienced strong dynamical friction and a profound loss of orbital energy and angular momentum during its interaction with the Milky Way (see the discussion in Vasiliev et al., 2022). Such rapid orbital radialization is often accompanied by "explosive" mass loss, resulting in a complete disruption of the satellite before it manages to reach the centre of the Galaxy. The drop in the relative density of the GS/E debris inward of the Solar radius can also be gleaned from Figure 2, where the number of orange points quickly decreases below the Sun's energy level. Large changes in the relative fraction of high-[N/O] stars when compared to the stellar halo overall, or to its accreted component, signal one thing: the high-[N/O] GC-like population is not a typical member of either of these.
Instead, as shown by the blue curve in Figure 10, the radial distribution of the high-[N/O] stars is very similar to that of the Aurora stars, and they are thus likely to be a component of the Aurora population. This is in complete agreement with the above comparisons between the trends of the numbers of high-[N/O] and Aurora stars as a function of energy (Figure 6) and metallicity (Figure 7). Turning the argument around, the above discussion cements the view of the Aurora population as a centrally concentrated stellar halo component, as hypothesised by BK22 based on a dataset limited to the Solar vicinity. Such a radial density profile (strongly peaked at \(r=0\)) and extent (limited to approximately the Solar radius) is in agreement with the results of numerical simulations of Milky Way formation, as illustrated in Figure 13 of BK22, and is observed in the all-sky view of the metal-poor component of the Galaxy (see Rix et al., 2022). These recent studies (see also Conroy et al., 2022; Myeong et al., 2022), together with the trends discussed here, indicate that at low metallicities the central stellar halo is dominated by the in-situ formed Aurora stars rather than by stars brought in by accreted galaxies.

Figure 10: Ratios of the number of high-[N/O] stars to the number of stars in different Galactic components as a function of Galactocentric radius. Only stars with \(-1.45<\rm[Fe/H]<-1\) are considered. Grey shows the ratio (and the associated uncertainty) relative to all halo stars as a function of \(r\). The halo fraction of high-[N/O] stars increases by a factor of \(\approx 6\) from the Solar radius to the Galactic centre, in agreement with the results of Horta et al. (2021). The orange band shows the ratio of high-[N/O] stars to the accreted halo population (mainly GS/E) and exhibits a much steeper, factor of \(\approx 40\), increase. This is because the GS/E debris does not populate the inner regions of the MW and thus has a shallower density profile compared to the halo overall and to Aurora in particular. The blue band is the ratio relative to the Aurora stars. This ratio is flat at \(\approx 4\%\), indicating that the high-[N/O] stars in this metallicity range belong to the Aurora population.

Given the indications discussed above that high-[N/O] stars originated in clusters, it is also instructive to compare the radial distribution of these stars with the radial distribution of the in-situ globular clusters. Such a comparison is shown in Figure 11, where the cumulative distribution of the Galactocentric distance of high-[N/O] stars estimated by Horta et al. (2021) is compared to the cumulative distance distributions of the in-situ globular clusters in our classification, in the [Fe/H] ranges \([-3,-1.5]\), \([-1.5,-1]\) and \([-1,-0.5]\). Cumulative distributions for both clusters and high-[N/O] stars were constructed using objects with distances in the range \(r\in[2,10]\) kpc, because this was the distance range of the Horta et al. (2021) sample. The figure shows that the distance distribution of GCs with \(\rm[Fe/H]>-1.5\) is quite similar to that of the high-[N/O] stars. In particular, the distributions are closest for clusters with metallicities \(\rm[Fe/H]\in[-1.5,-1]\), which corresponds to the range of metallicities of most of the high-[N/O] stars.
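The cumulative comparisons of Figure 11 are built after restricting both samples to the same Galactocentric distance window. A minimal sketch of such a range-limited comparison is given below; the radius arrays `r_nrich` and `r_gc` are hypothetical placeholders.

```python
import numpy as np

def ecdf_in_range(r, lo=2.0, hi=10.0):
    """ECDF built only from objects with lo <= r <= hi (kpc)."""
    r = np.sort(r[(r >= lo) & (r <= hi)])
    return r, np.arange(1, r.size + 1) / r.size

rng = np.random.default_rng(4)
r_nrich = rng.gamma(2.0, 2.5, 400)   # hypothetical Galactocentric radii (kpc)
r_gc = rng.gamma(2.0, 2.5, 50)

x1, y1 = ecdf_in_range(r_nrich)
x2, y2 = ecdf_in_range(r_gc)

# Maximum ECDF separation, the statistic behind a KS-style comparison:
grid = np.linspace(2.0, 10.0, 200)
d = np.max(np.abs(np.interp(grid, x1, y1) - np.interp(grid, x2, y2)))
print(f"max ECDF separation: {d:.3f}")
```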
Recently, Gieles & Gnedin (2023) have also shown that the distance distribution of stars from disrupted massive GCs in their model matches the distribution of N-rich stars (see their Section 6.2 and Figure 12), and argued that this is an indication that these stars originated in clusters. In principle, the distance distributions of the surviving and the disrupted clusters can be different, but our results indicate that the difference is not large. The largest difference between the distance distributions in Figure 11 is for the in-situ GCs with \(\rm[Fe/H]<-1.5\), although the difference is not very significant, as the number of clusters in this metallicity range is fairly small. The similarity of the metallicity, energy, and angular momentum distributions of the high-[N/O] stars and in-situ GCs shown in Figure 9, and the similarity of the distance distributions of these stars with the Aurora stars and in-situ GCs shown in Figures 10 and 11, strongly indicate that _the majority of high-[N/O] stars with \(\rm[Fe/H]<-1\) originated in globular clusters formed in-situ in the low-metallicity Aurora population._

### High-redshift Milky Way vs GN-z11

Figure 12 presents the [N/O] abundance ratio as a function of the oxygen-based metallicity [O/H]. This view is similar to that shown in panel c of Figure 1, albeit here we show the absolute abundance ratios, i.e. not referenced to the solar values. Instead, the solar values are marked with the \(\odot\) symbol. Below \(12+\log(\rm O/H)\approx 8.2\), both accreted (orange) and in-situ (blue) populations are present. Above \(12+\log(\rm O/H)\approx 8.2\), the stars are overwhelmingly of in-situ origin. Up to the Solar value of \(12+\log(\rm O/H)=8.69\), the N/O ratio is almost flat, with a small bend. Around the Solar metallicity, the N/O abundances start to rise noticeably. Conservative (fiducial) measurements reported by Cameron et al. (2023) for the GN-z11 galaxy are shown with grey (black) lines. The figure shows a large number of APOGEE field giants with N/O and O/H abundance ratios similar to those reported for GN-z11. There is a total of 100 high-[N/O] low-metallicity (\(\rm[Fe/H]<-0.7\)) stars, marked with larger filled circles. Of these, 73 are classified unambiguously as in-situ and 27 as accreted. Given our conclusion that the majority of the in-situ high-[N/O] stars originated in massive bound clusters, and that the clusters, in turn, constituted a significant fraction of star formation at early epochs, the implication of these results is that a significant fraction of star formation in GN-z11 also occurs in bound compact star clusters.

Figure 11: Cumulative distribution of the Galactocentric distance for the MW in-situ GCs in three metallicity intervals (thick lines) and the radial distribution of the \(N\)-rich stars from Horta et al. (2021) with \(\rm[Fe/H]<-1\), shown by the grey band. The band represents the uncertainty in the slope of the derived profile of \(N\)-rich stars. Given that the distribution of the high-[N/O] stars was derived for the range \(d_{\rm GC}=[2,10]\) kpc, only GCs in the same distance range are included in the cumulative distribution. The plot shows that the distance distributions of the high-[N/O] stars and in-situ GCs with \(\rm[Fe/H]>-1.5\) are consistent, indicating that these stars may have originated from the in-situ clusters in this metallicity range, which is also where the metallicities of most high-[N/O] stars in the Horta et al. (2021) sample lie.

## 4 Discussion

The results presented above strongly indicate that 1) the majority of low-metallicity high-[N/O] stars originated in compact bound star clusters, and 2) up to \(f_{\rm cl}\approx 50\%\) of star formation at early epochs, before the MW formed its disk (\(\rm[Fe/H]\lesssim-1.5\)), occurred in bound massive star clusters. This fraction decreases rapidly with increasing metallicity, to \(10-20\%\) at \(\rm[Fe/H]\approx-1\) and to \(\lesssim 1\%\) at \(\rm[Fe/H]\gtrsim-0.7\). A number of recent studies have estimated the fraction of field N-enhanced stars in the Milky Way and showed it to be small, of the order of \(\sim 2\%\) (e.g., Martell et al., 2016; Schiavon et al., 2017; Koch et al., 2019; Horta et al., 2021b), consistent with the measurements presented here.
Some of these studies have also used the estimated fraction to derive the fraction of low-metallicity stars that formed in bound clusters, obtaining values \(f_{\rm cl}\approx 5-20\%\) (e.g., Martell et al., 2011; Koch et al., 2019; Horta et al., 2021b). As we discussed in Section 1, these estimates involve the total stellar population, without accounting for the separate contributions of in-situ and ex-situ stars. Thus, the estimated fraction is averaged over stars formed in the accreted dwarf galaxies and in the main MW progenitor, which likely had very different environments and evolution. The interpretation of such an average is not straightforward. Furthermore, the estimated \(f_{\rm cl}\) relies on an uncertain factor to convert the estimated fraction of N-enhanced stars into the fraction of first-generation stars in clusters. Finally, the threshold used to select N-rich stars is somewhat arbitrary: it is not calibrated on real GCs, does not correct for their initial masses, and does not account for the GC mass function. In this study, we estimate \(f_{\rm cl}\) while rectifying all of these issues.

Estimates of the contribution of the UV luminosity of the young progenitors of modern globular clusters to the total UV luminosity function of galaxies (Ricotti, 2002; Boylan-Kolchin, 2018) provide an independent line of evidence that compact massive clusters were a significant fraction of star formation at \(z\gtrsim 4\). Indeed, direct estimates using the observed globular cluster in the ultra-faint galaxy Eridanus II (Weisz et al., 2023) show that the cluster contributed up to \(\approx 10\%\) of the galaxy's stellar mass at birth, while Zick et al. (2018) show that \(\approx 20\%\) of star formation in the Fornax dwarf occurred in bound _surviving_ clusters. Similarly, we estimate that for the GS/E the fraction of stellar mass formed in GCs is \(f_{\rm cl}\approx 0.06\,f_{\rm GC,GS/E}\,M_{\star,8}^{-1}\), where \(M_{\star,8}\) is GS/E's stellar mass at the time of the merger in units of \(5\times 10^{8}\,M_{\odot}\) and \(f_{\rm GC,GS/E}\) is the fraction of the MW's ex-situ clusters contributed by GS/E. However, \(\approx 55\%\) of the ex-situ GC mass is in clusters with \(\rm[Fe/H]\leq-1.5\), while only \(\approx 15\%\) of GS/E stars are at these metallicities (see the lower panel of Fig. 4 in BK22). If we assume that all ex-situ GCs were formed in GS/E, this corresponds to \(f_{\rm cl}\approx 25\%\) in the GS/E at \(\rm[Fe/H]\leq-1.5\). We also see a direct record of GC formation in the linear relation between the mass in a galaxy's globular cluster population and the mass of its parent halo, \(M_{\rm GCS}=\eta_{\rm GC}M_{\rm h}\).
Such a relation is expected in models of GC formation (Kravtsov & Gnedin, 2005; Bekki et al., 2008; Choksi et al., 2018; Choksi & Gnedin, 2019; El-Badry et al., 2019) and is observed to hold over more than five orders of magnitude in galaxy stellar mass, with \(\eta_{\rm GC}\approx 3\times 10^{-5}\) (see Spitler & Forbes, 2009; Hudson et al., 2014; Harris et al., 2017; Forbes et al., 2018; Doran & Harris, 2023). Given that the stellar mass-halo mass relation is nonlinear and, in the stellar mass range of the MW progenitors, is close to \(M_{\star}\propto M_{\rm h}^{\alpha}\), with \(\alpha\approx 1.5-2\) for \(M\approx 10^{11}-10^{12}\,M_{\odot}\) and \(\alpha\approx 2-2.5\) for smaller masses (e.g., Kravtsov et al., 2018; Read & Erkal, 2019; Nadler et al., 2020), the fraction of stellar mass in bound star clusters increases with decreasing stellar mass as \(f_{\rm cl}\propto M_{\star}^{1/\alpha-1}\), at least for galaxies of \(M_{\star}\gtrsim 10^{7}\,M_{\odot}\) that contain at least a single GC (e.g., Chen & Gnedin, 2023). Indeed, combining the two scalings gives

\[f_{\rm cl}\simeq\frac{M_{\rm GCS}}{M_{\star}}=\frac{\eta_{\rm GC}M_{\rm h}}{M_{\star}}\propto\frac{M_{\star}^{1/\alpha}}{M_{\star}}=M_{\star}^{1/\alpha-1}.\]

We thus can expect the mass fraction of stars formed in bound clusters to increase with decreasing stellar mass and metallicity for \(M_{\star}\lesssim 10^{10}\,M_{\odot}\); variations of the fraction of star formation in bound clusters are therefore quite ubiquitous. High-\(z\) galaxies with a significant fraction of rest-frame UV light concentrated in compact (sizes of tens of parsecs) clumps are also directly observed (e.g., Livermore et al., 2015; Johnson et al., 2017; Vanzella et al., 2023), although such observations are few because they require a lucky lensing configuration. At \(z\approx 0\), however, the fraction of UV light in compact bound clusters is found to be a function of the star formation surface density, \(\Sigma_{\rm SFR}\) (e.g., Adamo et al., 2020, for a review), reaching \(\sim 50-100\%\) in galaxies with the highest \(\Sigma_{\rm SFR}\). The overall observed trend is qualitatively consistent with the expectations of cluster formation models (Kruijssen, 2012). Furthermore, during early epochs galaxy progenitors 1) are in the fast mass accretion regime, 2) have small sizes, and 3) are expected to have high gas fractions. We can thus expect the surface density of gas and star formation, and correspondingly the fraction of star formation in bound clusters, to be comparably high during these epochs. Observations of both local and high-\(z\) galaxies are therefore consistent with the conclusion that a significant fraction of star formation in the early MW occurred in compact, bound star clusters.

The conclusion that a significant fraction of stars formed in compact bound clusters also has a straightforward implication for galaxy formation simulations. If modern simulations do a good job modelling the stellar populations of observed galaxies (see Naab & Ostriker, 2017; Vogelsberger et al., 2020, for reviews of recent dramatic progress in such simulations), and if GCs were a significant fraction of early star formation, we can expect the spatial distribution of the low-metallicity in-situ stars born in simulations to be similar to that of observed GCs. Indeed, the left panel of Figure 13 shows a remarkable similarity between the galactocentric distance distribution of the low-metallicity in-situ stars in the FIRE-2 simulations (Hopkins et al., 2018; Wetzel et al., 2023) of MW-sized progenitors and the corresponding distribution of the MW GCs.
Although the model distribution varies from object to object, the predictions are quite close to the observed GC distribution, and the variations are much smaller than, say, the difference in the distance distribution of the MW in-situ and accreted GCs. In fact, six out of seven of the hosts have predicted distance distributions within the bootstrap \(2\sigma\) uncertainty of the MW GC distribution (the outer shaded band). Moreover, as can be seen in the right panel of Figure 13, the FIRE-2 galaxies that match the distance distribution of the MW GCs (m12b, m12f, m12r) also have the age-[Fe/H] distribution that is closest to that of the MW GCs. These hosts have the earliest formation (oldest ages of stars at a given metallicity), which indicates that the MW star formation history corresponds to the earliest-forming tail of the MW-sized galaxies. This is consistent with independent indications of the early formation of the MW based on the occurrence and timing of the GS/E merger, discussed by Fattahi et al. (2019) and Dillamore et al. (2022). Interestingly, if we adopt \(f_{\rm N/O}\) as a function of [Fe/H] estimated in this study (see Fig. 8 and eq. 3) as a proxy for the functional form of the \(f_{\rm cl}([{\rm Fe/H}])\) dependence, and convolve the metallicity distribution of in-situ stars in the FIRE-2 simulation with that function, the resulting distribution of metallicities has a broad peak at \(\rm[Fe/H]\sim-1.5\) to \(-1\), broadly resembling the metallicity distribution of the low-metallicity in-situ MW GCs. This consistency check indicates that the assumption \(f_{\rm cl}\propto f_{\rm N/O}\) is broadly consistent with the observed properties of the MW in-situ GCs.

Figure 12: Nitrogen-to-oxygen abundance ratio \(\log(\rm N/O)=[N/O]+\log(N/O)_{\odot}\) as a function of oxygen abundance \(12+\log(\rm O/H)=[O/H]+\log(O/H)_{\odot}\) for giants with low \(V_{\phi}\) classified as accreted (orange) and those born in-situ (blue, including Aurora). Stars are classified into accreted and in-situ based on their \(E\), \(L_{z}\) values. Conservative (thin black lines) and fiducial (thick black lines) abundance measurements for GN-z11 from Cameron et al. (2023) are also shown. Solar \(\log(\rm N/O)_{\odot}=-0.86\) and \(12+\log(\rm O/H)=8.71\) are marked with \(\odot\). Giants with a high N/O ratio fall squarely within the range of measurements for GN-z11. Note that, in accordance with Fig. 2, at least 75% of the high-[N/O] stars belong to Aurora.

The plausible models for producing enhanced abundances of nitrogen involve massive stars (see, e.g., Bastian & Lardo, 2018): massive rotating stars (\(M\gtrsim 15\,M_{\odot}\), e.g., Maeder & Meynet, 2006; Crowther, 2007) or very massive stars of \(\sim 10^{3}-10^{4}\,M_{\odot}\) (Denissenkov & Hartwick, 2014). If stellar collisions, or some other physical processes that facilitate the formation of massive stars at low metallicities in clusters, can effectively create a substantial number of massive and very massive stars, this can potentially make the initial mass function (IMF) more top-heavy (see, e.g., Dib et al., 2007; Chon et al., 2021) and significantly boost the UV luminosity per unit of stellar mass formed. Even without a significant change to the IMF, a significant fraction of star formation in bound clusters will result in many short-duration bursts of star formation as each cluster forms.
Clusters older than \(\sim 3-5\) Myr are usually observed to be unobscured, with an ionized environment, which means that they form on time scales \(\lesssim 3\) Myr. For a galaxy like GN-z11, with a stellar mass \(M_{\star}\approx 5\times 10^{8}\,M_{\odot}\) at \(z\approx 10.6\), corresponding to a cosmic time of \(\approx 430\) Myr, \(f_{\rm cl}=0.5\) implies that \(\sim 100-200\) bound clusters of mass \(\sim 10^{6}-10^{7}\,M_{\odot}\) form during \(\sim 300-400\) Myr. On Myr time scales, the UV luminosity of the parent galaxy will thus spike shortly after each cluster forms and will rapidly decrease before the next cluster forms. Thus, as the stellar mass grows, the \(M_{\star}/L_{\rm UV}\) ratio will fluctuate strongly due to individual cluster formation (see, e.g., Zick et al., 2018, for a case study of such a process in the Fornax dwarf). A large scatter in \(M_{\star}/L_{\rm UV}\) will boost the abundance of UV-bright galaxies and may help explain the overabundance of UV-bright galaxies at \(z\gtrsim 10\) observed by JWST (e.g., Finkelstein et al., 2023; Wilkins et al., 2023; Yung et al., 2023).

### From pre-disk Aurora stage to disk formation: qualitative changes in Galaxy's evolution

This work adds new details to the emerging picture of the rapid and profound transformation that the Milky Way underwent in its youth. Below we list the main changes in the properties of the Galaxy around the high-redshift epoch corresponding to metallicities \(\rm[Fe/H]\approx-1\).

Figure 13: **Left:** Cumulative distribution of the galactocentric distance for the MW in-situ GCs (thick lines with shaded bands) in the metallicity interval \(\rm[Fe/H]\in[-3,-1]\), compared to the distribution of the galactocentric distance of the in-situ born stars in the FIRE simulations of MW-sized galaxies in the same metallicity interval. The figure shows that the distributions of the observed in-situ clusters and of the in-situ stars in a significant fraction of the simulated galaxies agree, indicating that the MW GCs could form as part of the regular in-situ star formation in the early Milky Way. **Right:** age-metallicity relation for the in-situ stars in simulations and the in-situ MW GCs.

_Spin-up_. The Milky Way was not born a disk. As we demonstrated in BK22, the earliest state of the Galaxy accessible to scrutiny through the archaeological record of its stars is that of i) high velocity dispersion and ii) little net rotation. Around \(\rm[Fe/H]\approx-1.5\), the velocity ellipsoid of the in-situ stars is quasi-isotropic, with a dispersion of \(\sim 90\) km/s. At these metallicities, there is some evidence for a modest net spin characterized by a median \(V_{\phi}\approx 50\) km/s. The median \(V_{\phi}\) then increases promptly with increasing metallicity, and by \(\rm[Fe/H]\approx-0.9\) the Galaxy has an established, coherently rotating disk with a median tangential velocity of \(V_{\phi}\approx 150\) km/s. This is revealed in Figures 4 and 5 of BK22 (see also Figure 3 of Conroy et al. 2022 and Figure 6 of Rix et al. 2022). Numerical simulations show that the record of the original kinematic state of the Galaxy remains largely unaltered until the present day, notwithstanding the intervening merger history (see, e.g., BK22; McCluskey et al. 2023).

_Scatter in elemental abundances_. As its bulk stellar kinematics is reshaped, the Galaxy's chemical properties evolve as well. The _Aurora_ pre-disk stellar population exhibits a large scatter in most chemical abundance ratios.
The origin of this scatter has not been pinned down yet, but it is hypothesised that continuous gas accretion and bursty star formation may play a role. Figure 7 of BK22 shows that the dispersions in the abundance ratios of most elements shrink on average by a factor of \(\sim 2\) on crossing the \(\rm[Fe/H]\approx-1\) threshold. Some elements, such as nitrogen, exhibit a more dramatic decrease in abundance scatter, i.e. by a factor of \(\sim 4\), as the Galaxy forms its disk.

_Transition from clustered to regular star formation_. In BK22 we showed that the elements that exhibit the largest scatter in the Aurora population are N, Al, O, and Si, i.e. all of the established markers of the anomalous chemical evolution characteristic of stars in GCs. It is therefore surmised that massive GCs had an important role in the formation and evolution of the high-redshift Milky Way, imprinted in the properties of the Aurora stellar population. In this work, we explore further the contribution of GCs to the Galactic stellar field and show that the fraction of stars formed in bound clusters is at its highest in Aurora, i.e. at \(\rm[Fe/H]<-1\), and drops by more than an order of magnitude at \(\rm[Fe/H]>-1\), where the Galactic disk forms.

_The \(\alpha\)-bump_. There is evidence that with the creation of the disk not only the structural properties of Galactic star formation changed drastically, but also its overall efficiency. The first obvious signature of the likely star-formation burst taking place around \(\rm[Fe/H]\approx-1\) is the rise and fall of the median \(\alpha\)-abundance ratios of both in-situ field stars and in-situ GCs as a function of metallicity (see the middle panel of Figure 3), which was pointed out and explored in detail by Conroy et al. (2022).

_Change of slope of the metallicity distribution_. An increase in the pace of star formation, betrayed by the rise of [Mg/Fe], is naturally imprinted in the shape of the metallicity distribution function. As the left panel of Figure 5 shows, the MDFs of the in-situ and the accreted stars agree at \(\rm[Fe/H]<-1.5\), where the slopes of both distributions are close to \(\rm d\log(n)/d[Fe/H]\approx 1\) (also see Rix et al. 2022, for a discussion). At \(\rm[Fe/H]\approx-1\), the MDF of the in-situ population has a clear inflection point: its slope abruptly changes to \(\rm d\log(n)/d[Fe/H]\approx 2.3\). In comparison, the MDF of the accreted stars (mainly GS/E) shows no features around \(\rm[Fe/H]\approx-1\).

## 5 Conclusions

Over the last decade, a consensus has been reached that a small number of the Milky Way field stars exhibit a chemical abundance pattern characteristic of that found in Galactic globular clusters. In particular, excess nitrogen abundance has been used widely as an effective GC fingerprint. While the globular cluster origin of these nitrogen-rich field stars is likely beyond doubt, the birthplace of the clusters themselves has remained unclear. This work provides a fresh re-assessment of the origin of the field stars enriched in nitrogen. For the first time, the origin of the bulk of such stars, the low-metallicity in-situ Aurora population, is identified unambiguously. Below we summarize our main results and spell out their implications.

1. We start by defining selection criteria that relate directly to the properties of the observed GC stellar populations. The GC stars not only show elevated levels of nitrogen but also exhibit reduced oxygen as well as increased aluminium abundances.
Accordingly, we identify field stars with high [N/O] and [Al/Fe] ratios, namely with [N/O]\(>0.55\) and [Al/Fe]\(>-0.1\) (see Figure 1). Relying on [N/O] instead of [N/Fe] also helps us to run a comparison with measurements of gas-phase abundances in high-redshift galaxies, which are referenced to oxygen.

2. At the basis of our analysis is a new classification of the in-situ and accreted field stars and GCs. We first use [Al/Fe] in the metallicity range \(\rm[Fe/H]\in[-1.4,-1]\) to estimate the boundary between in-situ and accreted objects in the plane of total energy and vertical component of the angular momentum \((E,L_{z})\), as the high-[Al/Fe] and low-[Al/Fe] stars have very different \((E,L_{z})\) distributions with a small overlap (see Figure 2). We approximate the boundary with a simple parametric form (see Equation 1), but we have also tested our results against a classification done with a "Gradient Boosted Trees" machine learning method (Friedman 2001), using the GradientBoostingClassifier class from the scikit-learn package, and found no significant difference in classification accuracy.

3. We show that simply using the \((E,L_{z})\) boundary informed by the distributions of stars with distinct [Al/Fe] ratios results in two populations of GCs with distinct properties (see Figure 3). For example, the accreted GCs show systematically lower [Mg/Fe] ratios at fixed [Fe/H] than the in-situ objects, and the in-situ and accreted clusters occupy different regions of the [Mg/Fe]-[Al/Fe] space. Note that in our in-situ/accreted classification (i) we do not separate the MW bulge and disk components, and (ii) we see no evidence for an additional low-energy component sometimes referred to as the Low-Energy group, Kraken/Heracles and Koala (see Massari et al. 2019; Kruijssen et al. 2020; Forbes 2020; Horta et al. 2021).

4. We show that the distribution of the selected high-[N/O] stars in the \((E,L_{z})\) plane matches that of the Aurora population with high [Al/Fe] and \(-1.4<\rm[Fe/H]<-1\) (see Figures 5 and 6). Furthermore, we show that the radial distribution of high-[N/O] stars is very similar to that of the Aurora population (see Figure 10). This indicates that the majority of the high-[N/O] stars formed in situ. This is not surprising because, if these stars form in GCs, not only is the MW expected to produce many more GCs than any of the accreted dwarfs, due to the strong correlation between the total GC mass and the mass of the host (e.g., Hudson et al. 2014; Forbes et al. 2018), but the MW-born GCs are also on average older and have had more time to dissolve and disrupt. While the \((E,L_{z})\) distributions of Aurora and high-[N/O] stars are very similar, a small fraction of field stars enriched in nitrogen can possibly be assigned to the accreted population, more precisely to the GS/E merger.

5. We show that the distributions of high-[N/O] stars in metallicity, energy, \(L_{z}\), and galactocentric distance are similar to those of the in-situ GCs at metallicities \(\rm[Fe/H]\lesssim-1\) (see Figures 9, 11) and differ from the corresponding distributions of GCs classified as accreted. At \(\rm[Fe/H]>-1\) the distributions of high-[N/O] stars and GCs are different, which indicates that at later times, corresponding to such metallicities, these stars form via a different route.
6. We estimate the high-[N/O] fraction of the total stellar mass at a given metallicity, \(f_{\rm N/O}(\rm[Fe/H])\), and show that this fraction decreases rapidly with increasing metallicity (Figure 8), from \(2\%<f_{\rm N/O}<4\%\) at \(\rm[Fe/H]\approx-1.4\) to \(f_{\rm N/O}\approx 0.04\%\) at \(\rm[Fe/H]>-0.9\). In contrast, even if all high-[N/O] stars in the overlapping region of \((E,L_{z})\) are assigned to the GS/E, the high-[N/O] fraction in the progenitor dwarf galaxy is some 5 times lower at \(\rm[Fe/H]<-1\), i.e. \(f_{\rm N/O}\approx 0.8\%\), and shows no obvious trend with metallicity.

7. Given that the estimated \(f_{\rm N/O}\) fraction depends sensitively on the [N/O] threshold adopted for the selection of these peculiar stars, we measure the fraction of high-[N/O] stars in surviving Galactic GCs and show that \(f_{\rm N/O,cl}\) scales with the cluster initial mass (see Figure 4). Combining the relation between \(f_{\rm N/O,cl}\) and initial mass with a model of the initial mass function of a population of freshly-born GCs, we show that the observed in-situ \(f_{\rm N/O}\) implies that up to \(f_{\rm cl}\approx 50\%-70\%\) of the stellar mass formed in bound clusters at \(\rm[Fe/H]\lesssim-1.5\) in the high-redshift MW (see Figure 8), and up to \(\approx 4-15\%\) at \(\rm[Fe/H]=-1\) (see Section 3.7).

8. These results show that star formation in bound star clusters was a significant mode of star formation in the early Milky Way. In this context, we show that low-metallicity (\(\rm[Fe/H]<-1\)) in-situ stellar particles in the FIRE-2 simulations of Milky Way-sized galaxies have distributions of galactocentric distance and age-metallicity relations quite similar to those of the in-situ MW globular clusters (see Figure 13).

9. We use the estimated mass fraction of star formation in bound clusters as a function of metallicity to estimate the stellar mass of the Aurora population (stars with \(\rm[Fe/H]<-1\) formed in-situ): \(M_{\rm Aur}=(5\pm 2)\times 10^{8}\,(1-f_{\rm disrupt})^{-1}\,M_{\odot}\), where \(f_{\rm disrupt}\) is the fraction of clusters disrupted by \(z=0\), expected to be \(\approx 0.5-0.7\) in models of cluster evolution (see Section 3.8 for details). This indicates that the mass of this population is comparable to that of the overall MW stellar halo. Given that the Aurora stars are more centrally concentrated than the accreted halo, this component is expected to dominate the spheroidal population in the inner \(\approx 10\) kpc of the Galaxy.

10. We argue that if the MW's evolution is typical of the early stages of galaxies, the high fraction of star formation in massive bound clusters may help explain the anomalously high [N/O] ratio observed in the \(z\approx 10.6\) galaxy GN-z11 (see Section 3.10) and the abundance of UV-bright galaxies at \(z\gtrsim 10\) (see Section 4).

## Acknowledgments

We are grateful to Oleg Gnedin for useful discussions about the fraction of disrupted globular clusters at different metallicities. We also wish to thank Stephanie Monty, Mark Gieles, Danny Horta, Harley Katz, Jason Sanders and GyuChul Myeong for their comments that helped improve the quality of this manuscript. VB wishes to thank Andy Bunker and Chiaki Kobayashi for stimulating discussions of GN-z11's chemistry. AK was supported by the National Science Foundation grants AST-1714658 and AST-1911111 and NASA ATP grant 80NSSC20K0512.
This research made use of data from the European Space Agency mission Gaia ([http://www.cosmos.esa.int/gaia](http://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [http://www.cosmos.esa.int/web/gaia/dpac/consortium](http://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement. This paper made use of the Whole Sky Database (wsdb) created by Sergey Koposov and maintained at the Institute of Astronomy, Cambridge, with financial support from the Science & Technology Facilities Council (STFC) and the European Research Council (ERC). We also used FIRE-2 simulation public data (Wetzel et al. 2023, [http://flathub.flatironinstitute.org/fire](http://flathub.flatironinstitute.org/fire)), which are part of the Feedback In Realistic Environments (FIRE) project, generated using the Gizmo code (Hopkins 2015) and the FIRE-2 physics model (Hopkins et al. 2018b). Analyses presented in this paper were greatly aided by the following free software packages: NumPy (Oliphant 2015), SciPy (Jones et al. 2001), Matplotlib (Hunter 2007), and Scikit-learn (Pedregosa et al. 2011). We have also used the Astrophysics Data Service (ADS) and the arXiv preprint repository extensively during this project and the writing of the paper.

## Data availability

This study uses the allStarLite-dr17-synspec_rev1 and apogee_astroNN-DR17 catalogues publicly available at [https://www.sdss.org/dr17/irspec/spectro_data/](https://www.sdss.org/dr17/irspec/spectro_data/). The catalog of the MW globular clusters with distances used in this study is publicly available at [https://people.smp.uq.edu.au/HolgerBaumgardt/globular/](https://people.smp.uq.edu.au/HolgerBaumgardt/globular/).
2309.16087
Temporal evolution of a driven optomechanical system in the strong coupling regime
We obtain a time-evolution operator for a forced optomechanical quantum system using Lie algebraic methods when the normalized coupling between the electromagnetic field and a mechanical oscillator, $G/\omega_m$, is not negligible compared to one. Due to the forcing term, the interaction picture Hamiltonian contains the number operator in the exponents, and in order to deal with it, we approximate these exponentials by their average values taken between initial coherent states. Our approximation is justified when we compare our results with the numerical solution of the number of photons, phonons, Mandel parameter, and the Wigner function, showing an excellent agreement.
L. Medina-Dozal, J. Récamier, H. M. Moya-Cessa, F. Soto-Eguibar, R. Román-Ancheyta, I. Ramos-Prieto, A. R. Urzúa
2023-09-28T00:52:14Z
http://arxiv.org/abs/2309.16087v2
# Temporal evolution of a driven optomechanical system in the strong coupling regime

###### Abstract

We obtain a time-evolution operator for a forced optomechanical quantum system using Lie algebraic methods when the normalized coupling between the electromagnetic field and a mechanical oscillator, \(G/\omega_{m}\), is not negligible compared to one. Due to the forcing term, the interaction picture Hamiltonian contains the number operator in the exponents; in order to deal with it, we approximate these exponentials by their average values taken between initial coherent states. Our approximation is justified when we compare our results with the numerical solution for the number of photons, phonons, the Mandel parameter, and the Wigner function, showing an excellent agreement.

## 1 Introduction

The simplest system describing the main aspects of cavity optomechanics consists of an optically driven Fabry-Perot resonator with one end mirror fixed and the other harmonically bound and allowed to oscillate due to the radiation pressure from the intracavity field [1]. In a standard optomechanical system, the mirror's position parametrically modulates the frequency of the optical cavity mode. Compared with the mechanical frequency of the mirror, \(\omega_{m}\), and the cavity line width, the optomechanical coupling, \(G_{0}\), is usually small [2]. However, the effective coupling, \(G\), is a function of the number of cavity photons; for \(n\) photons, this coupling increases as \(\sqrt{n}\) [1]. Refs. [3, 4] report the experimental normal-mode splitting when the coupling strength between the mechanical oscillator and the cavity field is strong enough (\(G_{0}\geq\kappa,\gamma_{m}\)), where \(\kappa\) is the cavity amplitude decay rate and \(\gamma_{m}\) the mechanical oscillator decay rate [5]. Under these circumstances, the mechanical oscillator states become dressed with the interacting photons, and these dressed states are entangled states between the field and the mechanical oscillator, in a similar form as a cavity field and a two-level atom are entangled in the case of the Jaynes-Cummings model [6]. In Ref. [7], an optomechanical system with a coupling proportional to the squared displacement operator was considered. The authors obtained that the expected mechanical displacement shows collapses and revivals similar to those in the Jaynes-Cummings model when the cavity and mechanical oscillator are both in coherent states. Here, we show that, without the quadratic nonlinear interaction, a driven optomechanical system can also display the phenomena of collapses and revivals when working in the strong coupling regime.

In this work, using Hamiltonian parameters consistent with experiments reported in the literature [2, 8], we show in detail the temporal evolution of several physical observables, such as the average value of the photon number, the average value of the phonon number, the linear entropy for the mirror, and the Wigner function for the field and the mechanical oscillator. Our results, obtained with the time evolution operator in the strong coupling regime, are in excellent agreement with those obtained purely numerically.

We structure our paper as follows. In Sec. 2, we present a Lie algebraic approach to obtain the exact solution for the undriven optomechanical system. We evaluate the evolution of the average phonon number and find that for strong enough coupling, the mechanical oscillator changes its behavior from cooling to heating, with all the other Hamiltonian parameters fixed.
Then, we construct an approximate time-evolution operator for a driven optomechanical system applicable even when the coupling \(G_{0}/\omega_{m}\sim 1\). In Sec. 3, we evaluate the temporal evolution of several observables, such as the average number of photons and phonons, the Mandel parameter, and the linear entropy, for a wide range of values of the optomechanical coupling. Sec. 4 discusses the corresponding Wigner function of the system. Finally, in Sec. 5, we present our conclusions.

## 2 Theory

This section first introduces a Lie algebraic approach to obtain the exact solution for an undriven optomechanical system. Our analysis concentrates on the evolution of the average phonon number, revealing an interesting transition: as the coupling strength \(G_{0}\) becomes sufficiently large, the mechanical oscillator shifts from a cooling behavior to a heating one. We will see that this transition persists even when changing Hamiltonian parameters, emphasizing the intricate interplay between the mechanical and optical aspects of the system. We also develop an approximate time-evolution operator tailored for a pumped (driven) optomechanical system in scenarios where the normalized coupling \(G_{0}/\omega_{m}\) is close to unity.

### Optomechanical system

Let us consider the basic optomechanical interaction, encompassing a mechanical oscillator with frequency \(\omega_{m}\) and an electromagnetic cavity field with frequency \(\omega_{c}\). The quantum Hamiltonian representing this interaction is [4]

\[\hat{H}_{opt}=\hbar\omega_{c}\hat{n}+\hbar\omega_{m}\hat{N}-\hbar G_{0}\hat{n}(\hat{b}+\hat{b}^{\dagger}), \tag{1}\]

where the optomechanical single-photon coupling strength \(G_{0}\) is given by

\[G_{0}=\frac{\omega_{c}}{L}\left(\frac{\hbar}{2m\omega_{m}}\right)^{1/2}. \tag{2}\]

Here \(L\) corresponds to the cavity length and \(m\) represents the mass of the mechanical oscillator. The operators \(\hat{a}\) (\(\hat{b}\)) and \(\hat{a}^{\dagger}\) (\(\hat{b}^{\dagger}\)) are the annihilation and creation operators of the quantized cavity field (mechanical oscillator), respectively. Thus, \(\hat{n}=\hat{a}^{\dagger}\hat{a}\) and \(\hat{N}=\hat{b}^{\dagger}\hat{b}\) are the number operators of the quantized field and the mechanical oscillator, respectively. It is easy to recognize that the set of operators appearing in \(\hat{H}_{opt}\) is closed under commutation (see Table 1); therefore, the evolution operator can be expressed exactly as a product of exponentials [9]

\[\hat{U}_{opt}=e^{\alpha_{1}\hat{n}}e^{\alpha_{2}\hat{N}}e^{\alpha_{3}\hat{n}\hat{b}^{\dagger}}e^{\alpha_{4}\hat{n}\hat{b}}e^{\alpha_{5}\hat{n}^{2}}. \tag{3}\]

Upon substituting \(\hat{U}_{opt}\) into the Schrödinger equation, we obtain the complex time-dependent functions \(\alpha_{i}\),

\[\alpha_{1}=-i\omega_{c}t, \tag{4}\]
\[\alpha_{2}=-i\omega_{m}t, \tag{5}\]
\[\alpha_{3}=-\frac{G_{0}}{\omega_{m}}\left(1-e^{i\omega_{m}t}\right), \tag{6}\]
\[\alpha_{4}=-\alpha_{3}^{*}, \tag{7}\]
\[\alpha_{5}=\left(\frac{G_{0}}{\omega_{m}}\right)^{2}\left(i\omega_{m}t-1+e^{-i\omega_{m}t}\right). \tag{8}\]
(3), in the context of the standard displacement operator of quantum optics [10], we first clarify that \[\hat{D}_{\hat{x}}(\alpha)\equiv e^{\alpha\hat{x}^{\dagger}-\alpha^{*}\hat{x}} =e^{-\frac{1}{2}|\alpha|^{2}}e^{\alpha\hat{x}^{\dagger}}e^{-\alpha^{*}\hat{x}} \tag{9}\] serves as the representation of the Glauber displacement operator associated with the quantized field and mechanical oscillator (with \(\hat{x}=\hat{a},\hat{b}\)), and \(\alpha\in\mathbb{C}\). By utilizing the commutation relations provided in Table 1, we can express the evolution operator as \[\hat{U}_{opt}=e^{\alpha_{2}\hat{N}}e^{\alpha_{1}\hat{n}}e^{\alpha_{5}\hat{n}^ {2}}e^{\frac{1}{2}|\alpha_{3}\hat{n}|^{2}}\hat{D}_{\hat{b}}\left(\alpha_{3} \hat{n}\right). \tag{10}\] Now, we choose the initial state to be a tensor product of Glauber coherent states [11], encompassing both the field and the mechanical oscillator, \[\left|\Psi(0)\right\rangle=\left|\alpha\right\rangle_{f}\otimes\left|\Gamma \right\rangle_{m}\equiv\left|\alpha,\Gamma\right\rangle, \tag{11}\] where \(\alpha\) and \(\Gamma\) are the amplitudes of their corresponding coherent states. Using \(\hat{U}_{opt}\), the time-evolution of the coherent states in terms of their corresponding Fock states has the explicit form \[\left|\Psi(t)\right\rangle=e^{-\frac{1}{2}|\alpha|^{2}}\sum_{k=0}^{\infty} \frac{\alpha^{k}}{\sqrt{k!}}e^{-i[\omega_{c}t-\mathrm{Im}(\alpha_{3}\Gamma^{ *})]k}e^{i\left(\frac{G_{0}}{\omega_{m}}\right)^{2}[\omega_{m}t-\sin(\omega_{m }t)]k^{2}}\left|k,\Gamma_{k}(t)\right\rangle, \tag{12}\] with \[\Gamma_{k}(t)=(\Gamma+k\alpha_{3})e^{-i\omega_{m}t}=\Gamma e^{-i\omega_{m}t}- k\frac{G_{0}}{\omega_{m}}\left(e^{-i\omega_{m}t}-1\right), \tag{13}\] \begin{table} \begin{tabular}{||c c c c c c||} \hline & \(\hat{n}\) & \(\hat{N}\) & \(\hat{n}\hat{b}\) & \(\hat{n}\hat{b}^{\dagger}\) & \(\hat{n}^{2}\) \\ \hline \hline \(\hat{n}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \(\hat{N}\) & \(0\) & \(0\) & \(-\hat{n}\hat{b}\) & \(\hat{n}\hat{b}^{\dagger}\) & \(0\) \\ \(\hat{n}\hat{b}\) & \(0\) & \(\hat{n}\hat{b}\) & \(0\) & \(\hat{n}^{2}\) & \(0\) \\ \(\hat{n}\hat{b}^{\dagger}\) & \(0\) & \(-\hat{n}\hat{b}^{\dagger}\) & \(-\hat{n}^{2}\) & \(0\) & \(0\) \\ \(\hat{n}^{2}\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \hline \end{tabular} \end{table} Table 1: Commutation relations associated with the operators within the Hamiltonian, Eq. (1). We have included the operator \(\hat{n}^{2}\) to account for the commutator between \(\hat{n}\hat{b}\) and \(\hat{n}\hat{b}^{\dagger}\). where the entanglement between the field and the mechanical oscillator is evident. Using this result, we obtain the average number of photons \[\langle\Psi(t)|\hat{n}|\Psi(t)\rangle=e^{-|\alpha|^{2}}\sum_{k=0}^{\infty}k\frac{ |\alpha|^{2k}}{k!}=|\alpha|^{2}, \tag{14}\] and the average number of phonons \[\langle\Psi(t)|\,\hat{N}\,|\Psi(t)\rangle=e^{-|\alpha|^{2}}\sum_{k=0}^{\infty }\frac{|\alpha|^{2k}}{k!}|\Gamma_{k}(t)|^{2}=e^{-|\alpha|^{2}}\sum_{k=0}^{ \infty}\frac{|\alpha|^{2k}}{k!}|\Gamma+k\alpha_{3}|^{2}. \tag{15}\] The summation can be carried out, and we arrive at \[\langle\hat{N}(t)\rangle=|\Gamma|^{2}+(\alpha_{3}\Gamma^{*}+\alpha_{3}^{*} \Gamma)\,|\alpha|^{2}+|\alpha_{3}|^{2}\left(|\alpha|^{2}+|\alpha|^{4}\right).
\tag{16}\] Using the explicit form for the function \(\alpha_{3}\), we finally obtain for \(\Gamma\in\mathbb{R}\), \[\langle\hat{N}(t)\rangle=\Gamma^{2}+2\left(\frac{G_{0}}{\omega_{m}}\right)(1- \cos(\omega_{m}t))\left[\left(\frac{G}{\omega_{m}}\right)(|\alpha|^{2}+1)-| \alpha|^{2}\Gamma\right] \tag{17}\] where \(G\equiv|\alpha|^{2}G_{0}\) is the effective coupling. We see that the average number of photons remains constant, while the average number of phonons depends upon the coupling constant \(G_{0}/\omega_{m}\), the number of photons in the cavity \(|\alpha|^{2}\), and the interaction time. From Eq. (17), we see that for a large enough number of photons, the sign of the second term may become positive, thus increasing the average number of phonons. In Fig. 1, we show the temporal evolution of the average phonon number \(\langle\hat{N}(t)\rangle\) for several values of the effective coupling \(G\); we used \(\omega_{m}=10^{7}\,\mathrm{s}^{-1}\), \(G_{0}/\omega_{m}=0.033\), \(|\alpha|^{2}=4\). In the figure, we present the cases for \(G=0\) (blue) (in this case \(\langle\hat{N}(t)\rangle=N_{0}=4\)), \(G=|\alpha|^{2}G_{0}\) (orange), \(G=5|\alpha|^{2}G_{0}\) (green) and \(G=10|\alpha|^{2}G_{0}\) (red). In these cases, the average number of phonons \(\langle\hat{N}(t)\rangle\) oscillates below its initial value; thus, the mechanical oscillator is being cooled, and the amount of cooling is larger as the effective coupling is larger. However, for \(G=15|\alpha|^{2}G_{0}\) we see a different behavior: \(\langle\hat{N}(t)\rangle\) becomes a function that oscillates above its initial value, so the mechanical oscillator heats up. We stress the fact that at this point there is no forcing term in the Hamiltonian and the average value of the photon number operator is constant and equal to \(|\alpha|^{2}\).

### Forced optomechanical system

Now consider the case where the optomechanical system is driven by a field with frequency \(\omega_{p}\) of the form \[\hat{V}=\hbar\Omega\cos(\omega_{p}t)(\hat{a}+\hat{a}^{\dagger}). \tag{18}\] The total Hamiltonian of the driven optomechanical system is \[\hat{H}=\hat{H}_{opt}+\hat{V}, \tag{19}\] and the corresponding time evolution operator is \[\hat{U}=\hat{U}_{opt}\hat{U}_{I}, \tag{20}\] with \(\hat{U}_{opt}\) given in Eq. (10), and \(\hat{U}_{I}\) the time-evolution operator in the interaction picture, which satisfies \[i\hbar\frac{\partial\hat{U}_{I}}{\partial t}=\left[\hat{U}_{opt}^{\dagger} \hat{V}\hat{U}_{opt}\right]\hat{U}_{I}, \tag{21}\] with the initial condition \(\hat{U}_{I}(0)=1\). Transforming the operators \(\hat{a}\) and \(\hat{a}^{\dagger}\), one obtains \[\hat{U}_{opt}^{\dagger}\hat{a}\hat{U}_{opt}=e^{iE(t)(2\hat{n}+1)}e^{iF(t)( \hat{b}^{\dagger}e^{i\omega_{m}t/2}+\hat{b}e^{-i\omega_{m}t/2})}\hat{a}e^{-i \omega_{c}t}, \tag{22}\] and \[U_{opt}^{\dagger}\hat{a}^{\dagger}U_{opt}=\hat{a}^{\dagger}e^{i\omega_{c}t}e^ {-iF(t)(\hat{b}^{\dagger}e^{i\omega_{m}t/2}+\hat{b}e^{-i\omega_{m}t/2})}e^{-iE (t)(2\hat{n}+1)}, \tag{23}\] where the functions \(E(t)\) and \(F(t)\) are \[E(t)=\left(\frac{G_{0}}{\omega_{m}}\right)^{2}(\omega_{m}t-\sin(\omega_{m}t)), \tag{24}\] and \[F(t)=\frac{2G_{0}}{\omega_{m}}\sin\Bigl{(}\frac{\omega_{m}}{2}t\Bigr{)}. \tag{25}\]
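Both the closed form (17) and the size of the functions \(E(t)\) and \(F(t)\) are easy to evaluate numerically. The following sketch (ours, not part of the original analysis; the parameter values are those quoted for Fig. 1 and for the weak- and strong-coupling cases below) reproduces the cooling-to-heating transition and shows why \(E\) and \(F\) may be neglected only when \(G_{0}/\omega_{m}\ll 1\):

```python
import numpy as np

wm = 1.0e7                                   # mechanical frequency, s^-1
t = np.linspace(0.0, 4 * np.pi / wm, 1000)

# (a) Undriven phonon number, Eq. (17), for real Gamma; Fig. 1 values.
def mean_phonons(G0, alpha2=4.0, Gamma=2.0):
    G = alpha2 * G0                          # effective coupling G = |alpha|^2 G0
    bracket = (G / wm) * (alpha2 + 1.0) - alpha2 * Gamma
    return Gamma**2 + 2.0 * (G0 / wm) * (1.0 - np.cos(wm * t)) * bracket

for factor in (1, 5, 10, 15):                # scan the coupling as in Fig. 1
    N = mean_phonons(factor * 0.033 * wm)
    trend = "heating" if N.max() > 4.0 else "cooling"
    print(f"G = {factor}*|alpha|^2*G0: N in [{N.min():.2f}, {N.max():.2f}] -> {trend}")

# (b) Size of E(t), F(t) in Eqs. (24)-(25) for weak and strong coupling.
for G0 in (0.0236 * wm, 0.33 * wm):
    E = (G0 / wm)**2 * (wm * t - np.sin(wm * t))
    F = (2 * G0 / wm) * np.sin(wm * t / 2)
    print(f"G0/wm = {G0/wm:.3f}: max|E| = {E.max():.3f}, max|F| = {F.max():.3f}")
```

Part (a) confirms that the sign of the bracket in Eq. (17) decides between cooling and heating; part (b) shows that \(E\) and \(F\) stay well below one only in the weak-coupling case considered next.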
In reference [12], the case where \(G_{0}/\omega_{m}=0.0236\) was considered, and these functions were simply neglected, \(F(t)\simeq 0\), \(E(t)\simeq 0\); it was then possible to obtain an approximate time evolution operator, whose results were corroborated by comparing them with a purely numerical calculation using the corresponding full Hamiltonian. In this approximation, valid for \(G_{0}/\omega_{m}\ll 1\), the interaction Hamiltonian in the interaction picture is \[\hat{H}_{I}=\hbar\Omega\cos(\omega_{p}t)\left[\hat{a}^{\dagger}e^{i\omega_{c} t}+\hat{a}e^{-i\omega_{c}t}\right]\simeq\frac{\hbar\Omega}{2}\left[\hat{a}^{ \dagger}e^{i(\omega_{c}-\omega_{p})t}+\hat{a}e^{-i(\omega_{c}-\omega_{p})t} \right], \tag{26}\] where we have neglected terms that oscillate rapidly. The corresponding time evolution operator is \[\hat{U}_{I}=e^{\beta_{1}^{(0)}\hat{a}^{\dagger}}e^{\beta_{2}^{(0)}\hat{a}}e^{ \beta_{3}^{(0)}}, \tag{27}\] with \(\beta_{1}^{(0)}(t)=(\Omega/2\Delta)(e^{i\Delta t}-1)\), and the detuning \(\Delta=\omega_{p}-\omega_{c}\). When \(\Delta\to 0\), \(\beta_{1}^{(0)}=i\Omega t/2\). Note that \(\beta_{1}^{(0)}\) oscillates with the frequency of the detuning. When we do not consider the weak coupling limit, the interaction Hamiltonian is given by \[\hat{H}_{I} = \hbar\Omega\cos(\omega_{p}t)\left[\hat{a}^{\dagger}e^{i\omega_{c }t}e^{-iF(t)(\hat{b}^{\dagger}e^{i\omega_{m}t/2}+\hat{b}e^{-i\omega_{m}t/2})}e ^{-iE(t)(2\hat{n}+1)}\right] \tag{28}\] \[+ \hbar\Omega\cos(\omega_{p}t)\left[e^{iE(t)(2\hat{n}+1)}e^{iF(t)( \hat{b}^{\dagger}e^{i\omega_{m}t/2}+\hat{b}e^{-i\omega_{m}t/2})}\hat{a}e^{-i \omega_{c}t}\right].\] We stress that the operators in \(\hat{H}_{I}\) still form a time-dependent Lie algebra; this was first noted in [13]. In order to treat the operators in the exponents, at least in an approximate form, we take the average value of the exponentials between initial coherent states for the field and the mechanical oscillator, respectively; that is, we take \[e^{-iF(t)(\hat{b}^{\dagger}e^{i\omega_{m}t/2}+\hat{b}e^{-i\omega _{m}t/2})}e^{-iE(t)(2\hat{n}+1)} \simeq {}_{m}\langle\Gamma|e^{-iF(t)(\hat{b}^{\dagger}e^{i\omega_{m}t/2}+ \hat{b}e^{-i\omega_{m}t/2})}|\Gamma\rangle_{m} \tag{29}\] \[\times\ {}_{f}\langle\alpha|e^{-iE(t)(2\hat{n}+1)}|\alpha\rangle_{f},\] for an initial state \(|\Psi(0)\rangle=|\alpha,\Gamma\rangle\). The first term in the previous equation is simply \[{}_{m}\langle\Gamma|e^{-iF(t)(\hat{b}^{\dagger}e^{i\omega_{m}t/2}+\hat{b}e^{- i\omega_{m}t/2})}|\Gamma\rangle_{m}=e^{-\frac{1}{2}F^{2}(t)}e^{-iF(t)(\Gamma^{*}e^{i \omega_{m}t/2}+\Gamma e^{-i\omega_{m}t/2})}, \tag{30}\] and the second one is \[{}_{f}\langle\alpha|e^{-iE(t)(2\hat{n}+1)}|\alpha\rangle_{f}=e^{-iE(t)}e^{| \alpha|^{2}(e^{-2iE(t)}-1)}, \tag{31}\] where we have used the fact that \(|\alpha\rangle_{f}\) is not an eigenstate of the photon number operator.
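Eq. (31) is just the statement that the photon number of a coherent state is Poisson distributed; a short numerical check (ours; the Fock truncation \(n_{\rm max}=60\) is an arbitrary choice) reads:

```python
import numpy as np
from math import factorial

def lhs(E, alpha, n_max=60):
    # <alpha| exp(-iE(2n+1)) |alpha> summed over Fock states (truncated)
    ks = np.arange(n_max)
    weights = np.exp(-abs(alpha)**2) * abs(alpha)**(2 * ks) / \
              np.array([factorial(k) for k in ks], dtype=float)
    return np.sum(weights * np.exp(-1j * E * (2 * ks + 1)))

def rhs(E, alpha):
    # closed form of Eq. (31)
    return np.exp(-1j * E) * np.exp(abs(alpha)**2 * (np.exp(-2j * E) - 1))

for E in (0.1, 0.7, 2.3):
    print(E, abs(lhs(E, 2.0) - rhs(E, 2.0)))   # ~ 1e-15 for |alpha| = 2
```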
In this approximation, the interaction Hamiltonian takes the form \[\hat{H}_{I}=\hbar\Omega\cos\left(\omega_{p}t\right)\left[\phi(t)\hat{a}^{ \dagger}e^{i\omega_{c}t}+\phi^{*}(t)\hat{a}e^{-i\omega_{c}t}\right], \tag{32}\] with the time-dependent function \[\phi(t)=e^{-\frac{1}{2}F^{2}(t)}e^{-iF(t)(\Gamma^{*}e^{i\omega_{m}t/2}+ \Gamma e^{-i\omega_{m}t/2})}e^{-iE(t)}e^{|\alpha|^{2}(e^{-2iE(t)}-1)}, \tag{33}\] and the corresponding time-evolution operator given by \[\hat{U}_{I}=e^{\beta_{1}\hat{a}^{\dagger}}e^{\beta_{2}\hat{a}}e^{\beta_{3}}, \tag{34}\] where \[\dot{\beta}_{1} = -i\Omega\phi(t)\cos(\omega_{p}t)e^{i\omega_{c}t}, \tag{35}\] \[\dot{\beta}_{2} = -i\Omega\phi^{*}(t)\cos(\omega_{p}t)e^{-i\omega_{c}t},\] (36) \[\dot{\beta}_{3} = -i\Omega\beta_{1}\phi^{*}(t)\cos(\omega_{p}t)e^{-i\omega_{c}t}. \tag{37}\] It can be seen that \(\beta_{1}=-\beta_{2}^{*}\), as a consequence of the unitarity of the time evolution operator \(\hat{U}_{I}\). Due to the product between \(\cos(\omega_{p}t)\) and \(e^{i\omega_{c}t}\), we can perform a rotating wave approximation (RWA) on this set of equations, whose integration leads to terms like the \(\beta_{1}^{(0)}\) introduced above. Also, in the limiting case \(\phi(t)\to 1\) for arbitrary \(t\), we can obtain a behavior that is independent of the coupling; in this sense, the direct integration of \(\dot{\beta}_{1}\) gives \[\beta_{1}(t)=\frac{\Omega}{\omega_{p}^{2}-\omega_{c}^{2}}\left[e^{i\omega_{c}t }(\omega_{c}\cos(\omega_{p}t)-i\omega_{p}\sin(\omega_{p}t))-\omega_{c}\right], \tag{38}\] where besides the oscillations with the frequency of the detuning, there will be fast oscillations with frequency \(\omega_{c}+\omega_{p}\). Fig. 2 shows the evolution of \(\mbox{Re}(\beta_{1}(t))\) in the three forms presented: the rotating wave approximation in (27), the limiting case \(\phi\to 1\) in (38), and the numerical integration of (35). We see that the numerical integration and the limiting case nearly overlap, and present oscillations bounded by the rotating wave approximation curve. Since \(\beta_{1}=-\beta_{2}^{*}\), the time evolution operator given by Eq. (34) can be written as a displacement operator \[\hat{U}_{I}=e^{\beta_{3}}e^{\frac{1}{2}|\beta_{1}|^{2}}\hat{D}_{\hat{a}}(\beta _{1}), \tag{39}\] and the full time-evolution operator is then \[\hat{U}=e^{-i\omega_{c}t\hat{n}}e^{-i\omega_{m}t\hat{N}}e^{\alpha_{5}\hat{n}^{2 }}e^{\frac{1}{2}|\alpha_{3}\hat{n}|^{2}}\hat{D}_{\hat{b}}(\alpha_{3}\hat{n}) \hat{D}_{\hat{a}}(\beta_{1})e^{\frac{1}{2}|\beta_{1}|^{2}}e^{\beta_{3}}. \tag{40}\] For an initial state given by the tensor product of a field coherent state and a mechanical coherent state, \(|\alpha,\Gamma\rangle\), the state vector at time \(t\) can be expressed as \[|\Psi(t)\rangle = e^{(\beta_{3}+\frac{1}{2}|\beta_{1}|^{2})}e^{i\,\mbox{Im}(\beta _{1}\alpha^{*})}e^{-\frac{1}{2}|\beta_{1}+\alpha|^{2}} \tag{41}\] \[\times \sum_{k}e^{-i(\omega_{c}t-\mbox{Im}(\alpha_{3}\Gamma^{*}))k }e^{i\left(\frac{G_{0}}{\omega_{m}}\right)^{2}(\omega_{m}t-\sin(\omega_{m}t))k ^{2}}\] \[\times \frac{(\beta_{1}+\alpha)^{k}}{\sqrt{k!}}|k,\Gamma_{k}(t)\rangle,\] where \(\Gamma_{k}(t)\) is defined by Eq. (13).

## 3 Evaluation of observables

In this section, we evaluate the temporal evolution of several observables, like the average number of photons and phonons, the Mandel parameter, and the linear entropy of the field and of the mechanical oscillator.
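The coefficient equations (35)-(37), with \(\phi(t)\) from Eq. (33), are readily integrated numerically. The sketch below (ours; it uses SciPy's solve_ivp with the weak-coupling parameters quoted later for Fig. 3) also verifies \(\beta_{1}=-\beta_{2}^{*}\) and evaluates \(|\alpha+\beta_{1}|^{2}\), which is shown in the next subsection to be the average photon number:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters as quoted for Fig. 3 (our reading of the text)
wc = 1.0e9; wm = wc / 100; wp = 0.8 * wc
G0 = 0.033 * wm; Omega = (np.pi / 20) * wc
alpha, Gamma = 2.0, 2.0

def E(t): return (G0 / wm)**2 * (wm * t - np.sin(wm * t))   # Eq. (24)
def F(t): return (2 * G0 / wm) * np.sin(wm * t / 2)         # Eq. (25)

def phi(t):                                                 # Eq. (33)
    return (np.exp(-F(t)**2 / 2)
            * np.exp(-1j * F(t) * (np.conj(Gamma) * np.exp(1j * wm * t / 2)
                                   + Gamma * np.exp(-1j * wm * t / 2)))
            * np.exp(-1j * E(t))
            * np.exp(abs(alpha)**2 * (np.exp(-2j * E(t)) - 1)))

def rhs(t, y):                                              # Eqs. (35)-(37)
    b1, b2, b3 = y
    c = np.cos(wp * t)
    return [-1j * Omega * phi(t) * c * np.exp(1j * wc * t),
            -1j * Omega * np.conj(phi(t)) * c * np.exp(-1j * wc * t),
            -1j * Omega * b1 * np.conj(phi(t)) * c * np.exp(-1j * wc * t)]

T = 2 * np.pi / wm                                          # one mechanical period
sol = solve_ivp(rhs, (0, T), [0j, 0j, 0j], rtol=1e-8, atol=1e-10,
                max_step=T / 2000)
b1, b2, _ = sol.y
print("max |b1 + b2*| =", np.max(np.abs(b1 + np.conj(b2))))  # ~ 0 (unitarity)
n_avg = np.abs(alpha + b1)**2      # photon number, cf. Eq. (44) below
print("n(0) =", n_avg[0], " max n =", n_avg.max())
```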
### Average number of photons and phonons

The photon number operator in the Heisenberg representation is \[\hat{n}(t)=\hat{U}^{\dagger}\hat{n}\hat{U}=\hat{n}-\beta_{2}\hat{a}+\beta_{1} \hat{a}^{\dagger}-\beta_{1}\beta_{2}, \tag{42}\] and the average number of photons is given by \[\langle\hat{n}(t)\rangle=\langle\alpha|\hat{n}(t)|\alpha\rangle=|\alpha|^{2}- \alpha\beta_{2}+\alpha^{*}\beta_{1}-\beta_{1}\beta_{2}. \tag{43}\] Using that \(\beta_{1}=-\beta_{2}^{*}\), we obtain \[\langle\hat{n}(t)\rangle=|\alpha+\beta_{1}|^{2}. \tag{44}\] For the case \(G_{0}/\omega_{m}\ll 1\), the function \(\phi(t)\to 1\), and we can integrate Eq. (35) to obtain the average value of the photon number as \[\langle\hat{n}(t)\rangle=|\alpha|^{2}+\frac{\Omega}{\Delta}\left[1-\cos( \Delta t)\right]\left(\frac{\Omega}{2\Delta}+\mbox{Re}(\alpha)\right). \tag{45}\] For those cases with larger values of \(G_{0}/\omega_{m}\), the function \(\phi(t)\) is a complicated function of time and we cannot obtain an analytic solution of Eq. (35). The average number of phonons is \[\langle\hat{N}(t)\rangle=\langle\Psi(t)|\hat{N}|\Psi(t)\rangle, \tag{46}\] which has the explicit form \[\langle\hat{N}(t)\rangle = e^{2\,\mbox{Re}(\beta_{3}+\alpha\beta_{2})}e^{-|\alpha|^{2}}\sum _{k}\frac{\alpha^{k}}{k!}\sum_{p}\frac{\beta_{1}^{p}}{p!}(k+p)! \tag{47}\] \[\times \sum_{k^{\prime}}\frac{\alpha^{*k^{\prime}}}{k^{\prime}!}\sum_{p^ {\prime}}\frac{\beta_{1}^{*p^{\prime}}}{p^{\prime}!}|\Gamma_{k+p}(t)|^{2},\] with the condition \(k+p=k^{\prime}+p^{\prime}\). As we show below, the summations can be done analytically. Starting from Eq. (47), and applying the condition \(k+p=k^{\prime}+p^{\prime}\), the third summation can be done, arriving at \[\langle\hat{N}(t)\rangle = e^{2\,\mbox{Re}(\beta_{3}+\alpha\beta_{2})}e^{-|\alpha|^{2}} \sum_{k}\frac{\alpha^{k}}{k!}\sum_{p}\frac{\beta_{1}^{p}}{p!}|\Gamma_{k+p}(t)| ^{2}(\alpha^{*}+\beta_{1}^{*})^{k+p}. \tag{48}\] Setting \(n=k+p\), then \(p=n-k\), we get \[\langle\hat{N}(t)\rangle = e^{2\,\mbox{Re}(\beta_{3}+\alpha\beta_{2})}e^{-|\alpha|^{2}} \sum_{k}\frac{\alpha^{k}}{k!}\sum_{n=k}^{\infty}\frac{\beta_{1}^{n-k}}{(n-k)!}|\Gamma _{n}(t)|^{2}(\alpha^{*}+\beta_{1}^{*})^{n}. \tag{49}\] The second sum may be started at zero, as only zeros are added, and the order of the sums can be changed, to get \[\langle\hat{N}(t)\rangle=e^{2\,\mbox{Re}(\beta_{3}+\alpha\beta_{2})}e^{-| \alpha|^{2}}\sum_{n=0}\frac{|\Gamma_{n}(t)|^{2}(\alpha^{*}+\beta_{1}^{*})^{n}} {n!}\sum_{k=0}^{n}\frac{\beta_{1}^{n-k}}{(n-k)!}\frac{\alpha^{k}}{k!}n!, \tag{50}\] that gives \[\langle\hat{N}(t)\rangle=e^{2\,{\rm Re}(\beta_{3}+\alpha\beta_{2})}e^{-|\alpha|^{2 }}\sum_{n=0}\frac{|\Gamma_{n}(t)|^{2}|\alpha+\beta_{1}|^{2n}}{n!}. \tag{51}\] We have \(|\Gamma_{n}|^{2}=|\Gamma|^{2}+|\alpha_{3}|^{2}n^{2}+(\Gamma\alpha_{3}^{*}+ \alpha_{3}\Gamma^{*})n\), so that \[\sum_{n}\frac{|\alpha+\beta_{1}|^{2n}}{n!}|\Gamma_{n}|^{2} = \sum_{n=0}\frac{|\alpha+\beta_{1}|^{2n}}{n!}[|\Gamma|^{2}+|\alpha_ {3}|^{2}n^{2}+(\Gamma\alpha_{3}^{*}+\alpha_{3}\Gamma^{*})n] \tag{52}\] \[= |\Gamma|^{2}e^{|\alpha+\beta_{1}|^{2}}+(\Gamma\alpha_{3}^{*}+ \alpha_{3}\Gamma^{*})|\alpha+\beta_{1}|^{2}e^{|\alpha+\beta_{1}|^{2}}\] \[+ |\alpha_{3}|^{2}(|\alpha+\beta_{1}|^{2}+|\alpha+\beta_{1}|^{4}) e^{|\alpha+\beta_{1}|^{2}},\] and using Eq.
(44), we finally find \[\langle\hat{N}(t)\rangle = e^{2\,{\rm Re}(\beta_{3}+\alpha\beta_{2})}e^{-|\alpha|^{2}}e^{| \alpha+\beta_{1}|^{2}}[|\Gamma|^{2}+2\,{\rm Re}(\Gamma\alpha_{3}^{*})\langle \hat{n}(t)\rangle \tag{53}\] \[+ |\alpha_{3}|^{2}(\langle\hat{n}(t)\rangle+\langle\hat{n}(t) \rangle^{2})].\] On the other hand, the phonon number operator in the Heisenberg representation is \[\hat{N}(t)=\hat{N}+(\alpha_{3}\hat{b}^{\dagger}+\alpha_{3}^{*}\hat{b})\hat{n}( t)+|\alpha_{3}|^{2}\hat{n}(t)^{2}, \tag{54}\] and taking its average value between coherent states, we obtain \[\langle\hat{N}(t)\rangle=|\Gamma|^{2}+2\,{\rm Re}(\alpha_{3}\Gamma^{*})\langle \hat{n}(t)\rangle+|\alpha_{3}|^{2}(\langle\hat{n}(t)\rangle+\langle\hat{n}(t) \rangle^{2}). \tag{55}\] Comparing Eqs. (53) and (55), we see that \(2\,{\rm Re}(\beta_{3}+\alpha\beta_{2})-|\alpha|^{2}+|\alpha+\beta_{1}|^{2}=0\) as a condition of unitarity; this fact has been confirmed numerically. In Fig. 3, we show the temporal evolution of the average _photon_ number; in solid lines, the results obtained with the method described in the text, and in dotted lines, the numerical results obtained using the full Hamiltonian. In these cases, the coupling between the field and the mechanical oscillator is small, \(G_{0}/\omega_{m}=0.033\), corresponding to the set of Hamiltonian parameters used in Ref. [12]. The amplitude of the forcing term is \(\Omega=(\pi/20)\omega_{c}\). In red we show the case with \(\omega_{p}=0.8\omega_{c}\), so that the detuning \(\Delta=\omega_{p}-\omega_{c}=-0.2\omega_{c}\) (red detuned), and in blue the case with \(\omega_{p}=1.2\omega_{c}\), so that \(\Delta=0.2\omega_{c}\) (blue detuned); thus, we are working under non-resonant conditions. Note the almost perfect agreement between the analytic and the numerical calculations for the case of the average photon number. The initial average photon number is \(\bar{n}=|\alpha|^{2}=4\), and for the red detuned case we see that the average photon number is an increasing oscillatory function of time, attaining values in the range \(4\leq\bar{n}(t)<9\) with a period \(T_{\Delta}=(2\pi/\Delta)=5T_{c}\), where \(T_{c}=2\pi/\omega_{c}\) is the period of the cavity. The superimposed fast oscillations with small amplitude have the period \(T_{fast}=2\pi/(\omega_{c}+\omega_{p})\simeq T_{c}/2\) for \(\omega_{p}=0.8\omega_{c}\). For the blue detuned case, we see that the average photon number is a decreasing oscillatory function of time, with values in the range \(1.5<\bar{n}<4.5\); the period of the fast oscillations is again \(T_{fast}\), and that of the envelope is \(T_{\Delta}\), as before. The effective coupling evolves in time according to \(G(t)=G_{0}\langle\hat{n}(t)\rangle\), and attains values as large as \(G\simeq 0.25\,\omega_{m}\). In Fig. 4, we show the temporal evolution of the average _phonon_ number; in the upper panel the forcing frequency is \(\omega_{p}=0.8\omega_{c}\) (red detuned), and in the lower panel, it is \(\omega_{p}=1.2\omega_{c}\) (blue detuned). The initial value for the average phonon number is \(\bar{N}=\langle\hat{N}\rangle=|\Gamma|^{2}=4\). In these figures, we also show the exact results obtained for an optomechanical system with no forcing term (black dashed line), the numerical results for the forced optomechanical system (dotted darker blue or red line), and the results obtained with the analytic method (solid blue or red line). The Hamiltonian parameters are the same as those used for Fig. 3.
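Before examining Fig. 4 further, note that the unitarity condition \(2\,{\rm Re}(\beta_{3}+\alpha\beta_{2})-|\alpha|^{2}+|\alpha+\beta_{1}|^{2}=0\) quoted above can be confirmed independently; a self-contained sketch (ours; it takes the limit \(\phi\to 1\) for brevity, although the identity holds for any \(\phi\)) is:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Check 2 Re(beta3 + alpha*beta2) - |alpha|^2 + |alpha+beta1|^2 = 0
# by integrating Eqs. (35)-(37) with phi(t) = 1 (Fig. 3 parameters).
wc = 1.0e9; wp = 0.8 * wc
Om = (np.pi / 20) * wc; alpha = 2.0

def rhs(t, y):
    b1, b2, b3 = y
    c = np.cos(wp * t)
    return [-1j * Om * c * np.exp(1j * wc * t),
            -1j * Om * c * np.exp(-1j * wc * t),
            -1j * Om * b1 * c * np.exp(-1j * wc * t)]

T = 2 * np.pi / (wc - wp)                    # one detuning period
sol = solve_ivp(rhs, (0, T), [0j, 0j, 0j], rtol=1e-10, atol=1e-12,
                max_step=T / 5000)
b1, b2, b3 = sol.y
ident = 2 * np.real(b3 + alpha * b2) - alpha**2 + np.abs(alpha + b1)**2
print("max |identity| =", np.max(np.abs(ident)))   # ~ 0 at all times
```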
In both cases (blue detuned and red detuned), the average value of the phonon number is a decreasing function of time, with an envelope oscillating with the period of the mechanical oscillator \(T_{m}=2\pi/\omega_{m}\). The analytic results also show rapid oscillations, with period \(T_{\Delta}\), due to the average photon number evolution (see Eq. (55)). The main difference between the red and blue detuned cases is that the effect of the forcing term in the red detuned case is a larger decrease of the average phonon number with respect to the nonforced system, while in the blue detuned case, the average phonon number for the forced system decreases less than that of the nonforced system. We can see that, in the red-detuned case, the average photon number is an increasing oscillating function, and the average phonon number is a decreasing oscillating function that takes values smaller than its values for the nonforced system. For the blue detuned case, the average photon number is a decreasing oscillating function, and the average phonon number is a decreasing oscillating function that takes values above those of the nonforced system. For this set of Hamiltonian parameters, the red-detuned and the blue-detuned cases both show a cooling behavior of the mechanical oscillator.

Figure 3: Temporal evolution of the average photon number with Hamiltonian parameters \(\omega_{c}=10^{9}\,\mathrm{s}^{-1}\), \(\omega_{m}=\omega_{c}/100\), \(\Gamma=2\), \(\alpha=2\), \(G_{0}/\omega_{m}=0.033\), \(\Omega=(\pi/20)\omega_{c}\). Red detuned, \(\omega_{p}=0.8\omega_{c}\). Blue detuned, \(\omega_{p}=1.2\omega_{c}\). The solid lines are given by the analytical expression (45), whereas the dashed lines are obtained by solving the original Hamiltonian numerically.

In Fig. 5, we show the results obtained when we filter the rapid oscillations appearing in the analytic approximation for the phonon number operator and compare them with the numerical calculation. Recall that in our analytic approach, we have replaced the operators appearing in the exponents in the expression for the interaction Hamiltonian (see Eq. (28)) by an average taken between coherent states; as a consequence of this approximation, the creation-annihilation operators for the mechanical oscillator are not taken into account properly, and interference terms may not be present. However, we demonstrate that when we take an average of these spurious oscillations, the analytic results converge to the numerical results; see the lower panel of Fig. 5. In the upper panel of Fig. 6, we plot the temporal evolution of the average photon number obtained with the analytical method described above for \(\omega_{p}=0.8\omega_{c}\) (red lines), and the results obtained by means of a purely numerical calculation with the full Hamiltonian (dark red lines) using QuTiP [14], for a strong coupling constant \(G_{0}=0.33\omega_{m}\), ten times larger than that used in the previous figures; as before, the initial value is \(\bar{n}(t_{0})=4\). At the beginning of the evolution, \(\bar{n}(t)\) increases up to \(\bar{n}=9\), with rapid oscillations going down to \(\bar{n}=4\); after a time of the order of \(5\times 10^{-7}\) seconds, we notice the appearance of a first collapse, similar to those seen in the Jaynes-Cummings model: the oscillations cease, with \(\bar{n}(t)\) taking a constant value around \(\bar{n}=6\).
The length of the collapse is \(\tau\simeq 2\times 10^{-6}\) seconds, and after that, we see a new revival in the oscillations, its duration being of the same order of magnitude as that of the collapse. In the lower panel, the forcing frequency is \(\omega_{p}=1.2\omega_{c}\) (blue detuned); the analytic results are shown in blue, and the numerical results in dark blue. Collapses and revivals are also present in this case; the main difference between both cases is that with red detuning \(\bar{n}(t)\) oscillates above its initial value, while for blue detuning it oscillates below its initial value, a behavior we had already seen for the case of a small \(G_{0}/\omega_{m}\). The duration of the collapses and revivals is similar in both cases. We can see a good qualitative agreement between the numerical and the analytic calculations. In Fig. 7, we see the temporal evolution of the average phonon number using the same Hamiltonian parameters as those of Fig. 6. The upper panel corresponds to red detuning and the lower panel to blue detuning; in both cases, the initial average value of the phonon number operator is 4. In the red detuning case, we show in the red line the results obtained with the analytic method after averaging the fast oscillations due to the revival of the average photon number, and in the brown broken line the numerical results obtained with the total Hamiltonian.

Figure 6: Temporal evolution of the average photon number with Hamiltonian parameters at strong coupling: \(\omega_{c}=10^{9}\,\mathrm{s}^{-1}\), \(\omega_{m}=\omega_{c}/100\), \(\Gamma=2\), \(\alpha=2\), \(G_{0}/\omega_{m}=0.33\), \(\Omega=(\pi/20)\omega_{c}\). Upper panel, \(\omega_{p}=0.8\omega_{c}\) (red detuned; the dark color is the numerical solution). Lower panel, \(\omega_{p}=1.2\omega_{c}\) (blue detuned; the dark color is the numerical solution).

We notice that the filtered results have a reasonable agreement with the numerical results; the main difference is seen in the amplitude of the oscillation in the temporal region of the photon collapse, while the frequency of the oscillations is the same all along the temporal evolution shown in the figure. We see that the average phonon number has an increasing oscillation in the range \(4\leq\langle\hat{N}(t)\rangle\leq 8\), so that the average number of phonons increases from its initial value. In the lower panel, we show in blue full line the analytic results with no filtering applied, the numerical results in blue line-dot, and in black line the results for the non-forced system. The non-forced system shows a decrease in the average phonon number in the range \(2<\langle\hat{N}(t)\rangle\leq 4\); with the forcing present, the average phonon number attains even smaller values, in the range \(1<\langle\hat{N}(t)\rangle\leq 4\), so that, with blue detuning, the forcing term cools the mechanical oscillator. The agreement between the analytic and the numerical results is relatively good in the region of the collapses of the photon number; the main differences are present in the areas of the revivals of the photon number.

Figure 7: Temporal evolution of the average phonon number with Hamiltonian parameters in the strong coupling regime: \(\omega_{c}=10^{9}\,\mathrm{s}^{-1}\), \(\omega_{m}=\omega_{c}/100\), \(\Gamma=2\), \(\alpha=2\), \(G_{0}=0.33\omega_{m}\), \(\Omega=(\pi/20)\omega_{c}\). Upper panel, \(\omega_{p}=0.8\omega_{c}\) (red detuned): red line, filtered analytic results; brown broken line, numerical results.
Lower panel, \(\omega_{p}=1.2\omega_{c}\) (blue detuned): blue line, analytic results with no filtering; blue broken line, numerical results; in black, the numerical results for the non-forced system.

The presence of collapses and revivals in an optomechanical system, similar to those shown in the Jaynes-Cummings model of quantum optics, was predicted in [15]. The analogy with quantum optics can be understood when the optomechanical coupling constant lies in the strong coupling regime (\(G_{0}>\kappa,\gamma_{m}\)), where \(\kappa\) is the cavity amplitude decay rate, and \(\gamma_{m}\) is the rate at which the oscillator exchanges phonons with the environment, since to have coherent dynamics it is necessary that the time scale on which the optomechanical interaction takes place be smaller than the decoherence time scales of the system.

### Mandel parameter and linear entropy

Consider now the Mandel parameter for the field, defined as [16]: \[Q=\frac{\langle\hat{n}^{2}(t)\rangle-\langle\hat{n}(t)\rangle^{2}}{\langle\hat {n}(t)\rangle}. \tag{56}\] When the averages are taken between coherent states, \(Q=1\) and the distribution is Poissonian. In Eq. (42), we presented the photon number operator in the Heisenberg representation \(\hat{n}(t)\), so that \[\hat{n}^{2}(t)=\hat{U}^{\dagger}\hat{n}\hat{U}\hat{U}^{\dagger}\hat{n}\hat{U}= (\hat{n}-\beta_{2}\hat{a}+\beta_{1}\hat{a}^{\dagger}-\beta_{1}\beta_{2})(\hat {n}-\beta_{2}\hat{a}+\beta_{1}\hat{a}^{\dagger}-\beta_{1}\beta_{2}), \tag{57}\] with \(\hat{n}\) the photon number operator in the Schrödinger picture. When we take the average values given in Eq. (56) between initial coherent states and make use of Eqs. (42) and (57), we obtain \(Q=1\); so the field remains in a coherent state throughout the evolution. We can also evaluate the Mandel parameter for the mechanical oscillator \[Q_{M}=\frac{\langle\hat{N}^{2}(t)\rangle-\langle\hat{N}(t)\rangle^{2}}{ \langle\hat{N}(t)\rangle}, \tag{58}\] where the average number of phonons is given by Eq. (55), and \[\langle\hat{N}^{2}(t)\rangle=\langle\Psi(t)|\hat{N}^{2}|\Psi(t)\rangle=\langle \Psi(t_{0})|\hat{N}^{2}(t)|\Psi(t_{0})\rangle. \tag{59}\] In the Heisenberg representation, the operator \(\hat{N}^{2}(t)\) is given by \[\hat{N}^{2}(t)=\left(\hat{N}+(\alpha_{3}\hat{b}^{\dagger}+\alpha_{3}^{*}\hat {b})\hat{n}(t)+|\alpha_{3}|^{2}\hat{n}^{2}(t)\right)^{2}, \tag{60}\] and its average value between coherent states \(|\Gamma\rangle\) is \[\langle\hat{N}^{2}(t)\rangle = \langle\hat{N}^{2}\rangle+4\,\mbox{Re}\bigg{\{}\alpha_{3}\Gamma^ {*}\left(|\Gamma|^{2}+\frac{1}{2}\right)\bigg{\}}\langle\hat{n}(t)\rangle\] \[+ 2\left(\mbox{Re}\big{\{}(\alpha_{3}\Gamma^{*})^{2}\big{\}}+| \alpha_{3}|^{2}\left(2|\Gamma|^{2}+\frac{1}{2}\right)\right)\langle\hat{n}^{ 2}(t)\rangle\] \[+ 4|\alpha_{3}|^{2}\,\mbox{Re}\{\alpha_{3}\Gamma^{*}\}\langle\hat{ n}^{3}(t)\rangle+|\alpha_{3}|^{4}\langle\hat{n}^{4}(t)\rangle.\] With these expressions, we can compute the Mandel parameter for the mechanical oscillator. The result is shown in Fig. 8, after filtering the rapid oscillations (red line), along with the result of a purely numerical computation (brown broken line); we see a very good agreement between them. We also see that the Mandel parameter for the mechanical oscillator is larger than one, corresponding to super-Poissonian statistics, that is, a behavior typical of classical states. Consider now the linear entropy for the subsystem \(x\), where \(x\) labels the mirror or the cavity; it is given by \(S^{(x)}=1-\mathrm{Tr_{x}}[\rho_{x}^{2}]\).
The reduced density matrix for the mechanical oscillator is \[\rho_{m}(t)=\mathrm{Tr_{c}}[\rho]=e^{-|\alpha+\beta_{1}|^{2}}\sum_{p}\frac{| \alpha+\beta_{1}|^{2p}}{p!}|\Gamma_{p}(t)\rangle\langle\Gamma_{p}(t)|. \tag{61}\] From this expression, we obtain \[\mathrm{Tr_{m}}[\rho_{m}^{2}(t)]=e^{-2|\alpha+\beta_{1}|^{2}}\sum_{p,q}\frac{| \alpha+\beta_{1}|^{2p}}{p!}\frac{|\alpha+\beta_{1}|^{2q}}{q!}e^{-|\Gamma_{p}(t )-\Gamma_{q}(t)|^{2}}, \tag{62}\] and we can evaluate the linear entropy for the mechanical oscillator, whose evolution is shown in Fig. 9. One can see an oscillatory behavior with the period of the mechanical oscillator, with small oscillations at the frequency of the detuning superimposed on top of them. These are evident in the region of the revivals, while in the region of the collapses, the linear entropy oscillates only with frequency \(\omega_{m}\). At times \(t_{n}=2\pi n/\omega_{m}\), the entanglement between the field and the mechanical oscillator is zero, the system returns to a pure state, and the linear entropy goes to zero. At times \(t_{n}=(2n+1)\pi/\omega_{m}\), the entanglement is maximum and the linear entropy attains its maximum value; the system is maximally entangled. In this figure, we have not filtered the rapid oscillations; these can be seen in the regions of the revivals of the photon number operator.

## 4 Wigner function

Having determined the expectation values of several observables, and the degree of mixedness and entanglement of the intracavity field and the mechanical mirror via the linear entropy, we now show a representation of the wavepacket evolution in phase space using the Wigner function. We choose to make a comparison between a purely numerical solution of the Hamiltonian (19) and the solution using the wavefunction (41), derived from the approximate interaction Hamiltonian (32). To begin, we recall the definition of the continuous and discrete Wigner function of a density matrix \(\hat{\rho}\) [17, 18] \[\begin{split} W_{\hat{\rho}}(q,p)&=\frac{1}{\pi} \int\limits_{-\infty}^{\infty}dx\,\exp\left(-2\mathrm{i}xp\right)\left\langle q -x\right|\hat{\rho}\left|q+x\right\rangle,\\ \tilde{W}_{\hat{\rho}}(q,p)&=\frac{1}{N}\sum_{n} \exp\left(-\frac{4\mathrm{i}\pi np}{N}\right)\left\langle q-n\right|\hat{\rho }\left|q+n\right\rangle,\end{split} \tag{63}\] where \(q\) and \(p\) are the position and canonical momentum in phase space, and \(N\) is the dimension of the discrete and finite Hilbert space used to span the phase-space coordinates; either way, the procedure to obtain the Wigner functions for the comparison is similar. In the first case, a complete numerical solution of the initial forced optomechanical Hamiltonian (19) is given to an ordinary differential equation solver in Python's QuTiP [14] and Julia's QuantumOptics.jl [19], returning the evolved density matrix; i.e., there is no approximation in the dynamical equations, just numerical considerations for the discretization of the Fock space. In the second case, we treat the approximate Hamiltonian (32), first numerically solving the equations (35), (36), (37) for its time-dependent coefficients \(\beta_{i}\), and putting them in the wavefunction (41). For both cases, we use the built-in Wigner function routine in the numerical packages to obtain the respective phase-space representation.
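A minimal sketch of the purely numerical route just described (ours; it assumes QuTiP, the Fock truncations and snapshot times are our choices, and the mirror truncation should be enlarged until the results converge, since the displacements \(\Gamma_{k}\) are large at strong coupling):

```python
import numpy as np
import qutip as qt

# Evolve the full Hamiltonian (19) and plot the mirror's Wigner function.
wc = 1.0e9; wm = wc / 100; wp = 0.8 * wc
G0 = 0.33 * wm; Om = (np.pi / 20) * wc

nf, nm = 30, 120                             # Fock truncations (our choice)
a = qt.tensor(qt.destroy(nf), qt.qeye(nm))
b = qt.tensor(qt.qeye(nf), qt.destroy(nm))

H0 = wc * a.dag() * a + wm * b.dag() * b - G0 * a.dag() * a * (b + b.dag())
H = [H0, [Om * (a + a.dag()), 'cos(wp*t)']]  # Eqs. (1) and (18), hbar = 1

psi0 = qt.tensor(qt.coherent(nf, 2.0), qt.coherent(nm, 2.0))   # Eq. (11)
times = np.array([0.0, np.pi / wm, 2 * np.pi / wm])            # snapshots
res = qt.sesolve(H, psi0, times, args={'wp': wp})              # slow but exact

xvec = np.linspace(-6, 6, 201)
for t, psi in zip(times, res.states):
    W = qt.wigner(psi.ptrace(1), xvec, xvec)   # reduced mirror state, cf. Fig. 11
    print(f"t = {t:.2e} s: min W = {W.min():.3e}")
```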
In Figs. 10 and 11, we show the Wigner function for the field and the mechanical oscillator, respectively. The parameters are carried over from the set used above for the linear entropy, i.e., \(\omega_{c}=10^{9}\,\mathrm{s}^{-1}\), \(\omega_{m}=\omega_{c}/100\), \(\omega_{p}=0.8\omega_{c}\), \(G_{0}=0.33\omega_{m}\), \(\Omega=(\pi/20)\omega_{c}\), and the initial coherent wavefunctions with \(\alpha=2\), \(\Gamma=2\). In each figure, the row above shows the Wigner function using the analytical solution given by the wavefunction (41), and the row below shows the Wigner function using the numerical solution of the Schrödinger equation with the original Hamiltonian (19). From left to right, the snapshots correspond to the times \(t_{n}=\left\{0,\frac{\pi}{\omega_{m}},\frac{2\pi}{\omega_{m}}\right\}\), where minimum, maximum and minimum entanglement is present. From Fig. 10, the visual verification reveals that the field evolves in a non-classical form, as a quantum harmonic oscillator whose dynamics have some degree of complexity, giving rise to phase-space interferences and negative values in the Wigner function. We can see a very good agreement between both calculations, an indication of the accuracy of the approximate methodology we used. In Fig. 11, we show the Wigner function for the mechanical oscillator; as expected, the mechanical oscillator behaves classically, its Wigner function being positive at all times and showing a squeezed profile in the momentum direction when the entanglement is maximum at \(t=\pi/\omega_{m}\). It can be seen that there are very small differences between the purely numerical calculations and the analytic ones due to the approximations made in the analytic treatment.

## 5 Conclusions

In this work, we have obtained the exact time-evolution operator for the undriven optomechanical system, and we have evaluated the evolution of the average phonon number as a function of the effective coupling parameter \(G=G_{0}|\alpha|^{2}\). We have seen that, for small values of \(G_{0}/\omega_{m}\), the mechanical oscillator cools down, and the amount of cooling increases as we take larger values of the coupling, until a point where a further increase of the coupling reverses this behavior: the cooling diminishes, and we can attain a region where heating is reached. We developed an algebraic method to obtain an approximate time-evolution operator for the driven optomechanical system. The forcing term in the total Hamiltonian is responsible for the time-dependent Lie algebra in the interaction picture Hamiltonian; therefore, it is necessary to make approximations in order to apply the standard algebraic method. The transformed creation-annihilation operators have an effective frequency \(\omega_{eff}\) that contains operators corresponding to the mechanical oscillator and to the field; they take the form \(\hat{a}_{I}(t)=\hat{a}e^{-i\omega_{eff}t}\). Due to the presence of non-commuting operators in \(\omega_{eff}\), we cannot apply the Wei-Norman theorem [9], so we approximate the exponential by its average value taken between initial coherent states for the field and for the mechanical oscillator, obtaining an approximate interaction Hamiltonian whose time evolution operator is exact.

Figure 11: Plot of the Wigner function for the mechanical mirror evolution. The initial state is a coherent state \(|\Gamma=2\rangle\).
The selected snapshots are at times where the entanglement between the field and the mechanical mirror is a minimum or a maximum, that is, from left to right, \(t_{n}=\left\{0,\frac{\pi}{\omega_{m}},\frac{2\pi}{\omega_{m}}\right\}\). In the row above, we have the analytical solution given by the wavefunction (41), and in the row below, we have the numerical solution solving the Schrödinger equation for the original Hamiltonian (19).

With this evolution operator, we evaluated the evolution of the average values of the number operators of photons and phonons, and we found a very good agreement between the analytic results and purely numerical results for the case of the photons; for the phonons and a coupling constant in the strong regime, we obtained fast oscillations that are not seen in the numerical calculation. When we take an average of these fast oscillations, the analytic and the numerical results show a good agreement. To find out whether the mechanical oscillator behaves classically or not, we evaluated its Mandel parameter and its linear entropy; in both of these calculations, we found that the mechanical oscillator behaves classically. Finally, we computed the Wigner function for the field and the mechanical oscillator using the approximate evolved state and also using a purely numerical calculation. For the field, we found a non-classical behavior, since the Wigner function attains negative values; for the mechanical oscillator, the Wigner function is positive at all times, corresponding to a classical behavior. It is worth mentioning the qualitative agreement of the Wigner functions obtained from the methodology developed in this work and from the purely numerical one.

**Acknowledgements:** J. Recamier and L. Medina-Dozal acknowledge partial support from DGAPA-UNAM project IN109822. A.R. Urzua acknowledges postdoctoral support from CONAH-CyT and DGAPA-UNAM.
2310.00209
Nonlinear stability of entropy waves for the Euler equations
In this article, we consider a class of the contact discontinuity for the full compressible Euler equations, namely the entropy wave, where the velocity is continuous across the interface while the density and the entropy can have jumps. The nonlinear stability of entropy waves is a longstanding open problem in multi-dimensional hyperbolic conservation laws. The rigorous treatments are challenging due to the characteristic discontinuity nature of the problem (G.-Q. Chen and Y.-G. Wang in \textit{Nonlinear partial differential equations}, Volume 7 of \textit{Abel Symp.}(2012)). In this article, we discover that the Taylor sign condition plays an essential role in the nonlinear stability of entropy waves. By deriving the evolution equation of the interface in the Eulerian coordinates, we relate the Taylor sign condition to the hyperbolicity of this evolution equation, which reveals a stability condition of the entropy wave. With the optimal regularity estimates of the interface, we can derive the a priori estimates without loss of regularity.
Wei Wang, Zhifei Zhang, Wenbin Zhao
2023-09-30T00:56:50Z
http://arxiv.org/abs/2310.00209v2
# Nonlinear stability of entropy waves for the Euler equations

###### Abstract

In this article, we consider a class of the contact discontinuity for the full compressible Euler equations, namely the entropy wave, where the velocity is continuous across the interface while the density and the entropy can have jumps. The nonlinear stability of entropy waves is a longstanding open problem in multi-dimensional hyperbolic conservation laws. The rigorous treatments are challenging due to the characteristic discontinuity nature of the problem (G.-Q. Chen and Y.-G. Wang in _Nonlinear partial differential equations_, Volume 7 of _Abel Symp._ (2012)). In this article, we discover that the Taylor sign condition plays an essential role in the nonlinear stability of entropy waves. By deriving the evolution equation of the interface in the Eulerian coordinates, we relate the Taylor sign condition to the hyperbolicity of this evolution equation, which reveals a stability condition of the entropy wave. With the optimal regularity estimates of the interface, we can derive the a priori estimates without loss of regularity.

Key words and phrases: compressible Euler, free boundary problems, contact discontinuity, entropy wave 2010 Mathematics Subject Classification: 35Q31, 35Q35, 35R35, 76B03, 76N10 W. Wang is supported by NSF of China (Nos. 11931010, 12271476). Z. Zhang is supported by NSF of China (Nos. 12171010, 12288101). W. Zhao is supported by China Postdoctoral Science Foundation (Nos. 2020TQ001, 2021M690225).

For a piecewise smooth weak solution of (1.6) in a domain \(\Omega=\mathbb{T}^{2}\times(-1,\,1)\) with \[\Omega^{\pm}=\{x=(\overline{x},\,x_{3})=(x_{1},\,x_{2},\,x_{3})\in\Omega:\,x_{3} \gtrless f(t,\,\overline{x})\},\] the Rankine-Hugoniot (RH) conditions are satisfied on the interface \(\Gamma_{f}=\{(\overline{x},\,f(t,\,\overline{x})):\,\overline{x}\in\mathbb{T}^ {2}\}\): \[\begin{cases}\llbracket m_{N}\rrbracket=0,\\ m_{N}\llbracket u_{N}\rrbracket+|N|^{2}\llbracket p\rrbracket=0,\\ m_{N}\llbracket u_{\tau}\rrbracket=0,\\ m_{N}\llbracket\frac{|u|^{2}}{2}+e\rrbracket+\llbracket pu_{N}\rrbracket=0, \end{cases} \tag{1.3}\] where \[N=(-\partial_{1}f,\,-\partial_{2}f,\,1)^{\top},\qquad\tau_{1}=(1,\,0,\, \partial_{1}f)^{\top},\qquad\tau_{2}=(0,\,1,\,\partial_{2}f)^{\top}, \tag{1.4}\] \[u_{N}=u\cdot N,\qquad u_{\tau}=(u\cdot\tau_{1},\,u\cdot\tau_{2})^{\top},\qquad m _{N}=\rho(u_{N}-\partial_{t}f). \tag{1.5}\] There are two kinds of characteristic discontinuities on which \(\llbracket p\rrbracket=\llbracket u_{N}\rrbracket=m_{N}=0\) (see [4, 5, 7, 13]): * Vortex sheets: \[\llbracket u_{\tau}\rrbracket\neq 0;\] * Entropy waves: \[\llbracket u_{\tau}\rrbracket=0,\qquad\llbracket\rho\rrbracket\neq 0,\qquad \llbracket S\rrbracket\neq 0.\] In this article, we focus on the entropy waves. The system (1.1) can be rewritten as a symmetric hyperbolic system of \((p^{\pm},\,u^{\pm},\,S^{\pm})\) in \(\Omega^{\pm}\) respectively: \[\begin{cases}\frac{1}{\gamma p^{\pm}}D_{t}^{\pm}p^{\pm}+\nabla\cdot u^{\pm}=0, \\ \rho^{\pm}D_{t}^{\pm}u^{\pm}+\nabla p^{\pm}=0,\\ D_{t}^{\pm}S^{\pm}=0,\end{cases} \tag{1.6}\] where \(D_{t}^{\pm}=\partial_{t}+u^{\pm}\cdot\nabla\). Since \(\llbracket u\rrbracket=0\), we shall just use \(D_{t}=\partial_{t}+u\cdot\nabla\) to denote the material derivative. The density \(\rho^{\pm}\) can be recovered from (1.2) by \[\rho^{\pm}=A^{-\frac{1}{\gamma}}(p^{\pm})^{\frac{1}{\gamma}}\mathrm{e}^{- \frac{S^{\pm}}{\gamma}}.
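As a consistency check (ours, using only \((1.6)_{1}\), \((1.6)_{3}\) and (1.7)), the continuity equation is recovered: since by (1.7) the density depends only on \((p^{\pm},\,S^{\pm})\), one has along particle paths \[D_{t}\rho^{\pm}=\frac{\rho^{\pm}}{\gamma p^{\pm}}\,D_{t}p^{\pm}-\frac{\rho^{\pm}}{\gamma}\,D_{t}S^{\pm}=-\rho^{\pm}\,\nabla\cdot u^{\pm},\] so that \(\partial_{t}\rho^{\pm}+\nabla\cdot(\rho^{\pm}u^{\pm})=0\) in \(\Omega^{\pm}\).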
\tag{1.7}\] On the interface \(\Gamma_{f}\), the RH conditions across \(\Gamma_{f}\) are \[\llbracket p\rrbracket=p^{+}-p^{-}=0,\qquad\llbracket u\rrbracket=u^{+}-u^{- }=0,\qquad\text{on }\Gamma_{f}. \tag{1.8}\] Meanwhile, the density and the entropy can have jumps across \(\Gamma_{f}\). We shall assume that \[\llbracket\rho\rrbracket=\rho^{+}-\rho^{-}\neq 0,\qquad\llbracket S \rrbracket=S^{+}-S^{-}\neq 0,\qquad\text{on }\Gamma_{f}. \tag{1.9}\] The evolution of the interface \(\Gamma_{f}\) is given by \[\partial_{t}f=u^{\pm}\cdot N. \tag{1.10}\] We shall also use \(N^{\pm}=\mp N\) to indicate outer normal directions of \(\Gamma_{f}\) in \(\Omega^{\pm}\) respectively. On the fixed upper and lower boundaries \(\Gamma^{\pm}=\mathbb{T}^{2}\times\{\pm 1\}\), there holds that \[u^{\pm}\cdot n^{\pm}\big{|}_{\Gamma^{\pm}}=0, \tag{1.11}\] with \(n^{\pm}=(0,\,0,\,\pm 1)^{\top}\). In this article, we shall prove the a priori estimates of the problem (1.6)-(1.11) without loss of regularity under the Taylor sign condition \[\llbracket\nabla_{N}p\rrbracket=\nabla_{N}p^{+}-\nabla_{N}p^{-}=-\nabla_{N^{+} }p^{+}-\nabla_{N^{-}}p^{-}>0. \tag{1.12}\] A more precise statement of the main result will be presented in Section 3.

### History and related works

There are three fundamental waves in multi-dimensional hyperbolic conservation laws: shock waves, rarefaction waves and contact discontinuities (including vortex sheets and entropy waves). The interested reader is referred to [4, 13] for a detailed discussion. The nonlinear stability of shock waves and rarefaction waves was proved in [24, 25] and [2, 3], respectively. As for contact discontinuities, they are characteristic discontinuities and usually subject to the Kelvin-Helmholtz instability and the Rayleigh-Taylor instability (see [14, 16]). If \(\llbracket u\rrbracket\neq 0\), the contact discontinuity is also called the vortex sheet. The 3D vortex sheets are violently unstable, while the 2D vortex sheets are weakly stable under the supersonic condition (see [9, 15, 27]). The nonlinear stability of the 2D vortex sheets was proved in [11, 12] (see also [31, 32]). If \(\llbracket u\rrbracket=0\), where the velocity is continuous across the interface, the contact discontinuity is also called the entropy wave. The normal modes analysis shows that the entropy wave is only weakly stable (see [5]). Recently in [19], the authors proved the stability of the entropy wave with constant states by vanishing viscosity. However, the nonlinear stability of general entropy waves in multi-dimensional situations is a longstanding open problem (see [7]). Even stability conditions addressing this problem have been lacking. As stated in [7], _"it would be interesting to analyze entropy waves to explore new phenomena and features of these waves in two-dimensions and even higher dimensions."_ In this article, we discover that the Taylor sign condition is essential to the nonlinear stability of entropy waves and prove the a priori estimates of the problem without loss of regularity. More precisely: * We shall derive the evolution equation of the interface and study the problem in the Eulerian coordinates. This approach was first used in [34] to investigate the stability of the incompressible current-vortex sheets. It was generalized by the authors to the one-phase compressible Euler equations in [39], where there is vacuum in \(\Omega^{+}\).
The entropy wave can be seen as a two-phase problem, where we have two fluids (\(p^{\pm}\), \(u^{\pm}\), \(S^{\pm}\)) in \(\Omega^{\pm}\) respectively. * With the evolution equation of the interface, we discover that the Taylor sign condition is a natural stability condition since it is equivalent to the hyperbolicity of this evolution equation. Then we can derive the optimal regularity estimates of the interface. This enables us to investigate the quantities inside \(\Omega^{\pm}\) in a simpler way. We do not need to make a change of coordinates or use Alinhac's good unknowns. The a priori estimates can be derived without loss of regularity. This is important since loss of regularity is a common phenomenon for characteristic discontinuities (see [4]). * The Taylor sign condition is commonly used when treating free boundary problems. However, for the two-phase compressible flow without gravity or other forces, it is a strong requirement that the Taylor sign condition (1.12) holds at each point on the interface. On the other hand, the violation of the Taylor sign condition will lead to the Rayleigh-Taylor instability (see [16]). * In order for the piecewise smooth solution to be the entropy wave defined in (1.8)-(1.9), some high-order compatibility conditions (3.14) are needed on the interface. As discussed in Remark 3.1, the violation of the compatibility conditions could transform the entropy wave to a vortex sheet and lead to the Kelvin-Helmholtz instability. This suggests that the entropy wave is a very special class of contact discontinuities of the compressible Euler equations. See also the discussion of viscous contact discontinuities in [17, 18]. There is also a large body of literature investigating other stabilizing effects on the interface. When taking magnetic fields under consideration, there are more types of characteristic discontinuities (see [5, 20]). If the magnetic fields are parallel to the interface, the characteristic discontinuities are called current-vortex sheets. The stability of the current-vortex sheets was investigated in [6, 35, 37, 42] for the compressible case and in [10, 22, 28, 34, 36, 43] for the incompressible case. If the magnetic fields are continuous across the interface and not parallel to the interface, the characteristic discontinuities are called MHD contact discontinuities. In [29, 30], Morando et al. proved the nonlinear stability assuming that the Taylor sign condition holds. See also [38] for the case with surface tension. Recently, Wang and Xin in [40] managed to prove the nonlinear stability without the assumption of the Taylor sign condition. They used the Lagrangian coordinates and verified that the normal component of the magnetic field can stabilize the interface (see also [41] for the incompressible case with surface tension). The rest of the article is organized as follows. After laying out some preliminaries in Section 2, we shall derive the evolution equation of the interface and present the main result in Section 3. We prove some basic estimates in Section 4. The evolution of the interface is estimated in Section 5. The estimates of the pressure, the velocity and the entropy are derived in Section 6. In Appendix A, some results on the elliptic systems are presented. We also list some analytic tools in Appendix B.

## 2. Preliminaries

In this section, we recall some preliminary results on the harmonic coordinates and the Dirichlet-Neumann (DN) operators from [34, 39].
The notations and basic properties of paradifferential operators are included in Appendix B.

### Harmonic coordinates

Given a smooth function \(f_{*}=f_{*}(\overline{x})\), we define a reference domain \[\Omega_{*}^{\pm}=\{x\in\Omega:\,x_{3}\gtrless f_{*}(\overline{x})\},\qquad \Gamma_{*}=\{(\overline{x},\,f_{*}(\overline{x})):\,\overline{x}\in\mathbb{T} ^{2}\}.\] We shall consider the free boundary problem that lies in a neighborhood of the reference domain \(\Omega_{*}\). To this end, we define \[\mathcal{N}(\delta,\,\kappa)=\{f\in H^{\kappa}(\mathbb{T}^{2}):\,\big{\|}f-f_{ *}\big{\|}_{H^{\kappa}(\mathbb{T}^{2})}\leq\delta\}.\] For a function \(f\in\mathcal{N}(\delta,\,\kappa)\), set \[\Omega_{f}^{\pm}=\{x\in\Omega:\,x_{3}\gtrless f(\overline{x})\},\qquad\Gamma_ {f}=\{(\overline{x},\,f(\overline{x})):\,\overline{x}\in\mathbb{T}^{2}\}.\] Then we can introduce the harmonic coordinates. Define \(\Phi_{f}^{\pm}:\,\Omega_{*}^{\pm}\to\Omega_{f}^{\pm}\) by the harmonic extension: \[\begin{cases}\Delta\Phi_{f}^{\pm}=0,&\text{in }\Omega_{*}^{\pm},\\ \Phi_{f}^{\pm}(\overline{x},\,f_{*}(\overline{x}))=(\overline{x},\,f( \overline{x})),&\text{on }\Gamma_{*},\\ \Phi_{f}^{\pm}(\overline{x},\,\pm 1)=(\overline{x},\,\pm 1),&\text{on }\Gamma^{ \pm}.\end{cases} \tag{2.1}\] Given \(f_{*}\), there exists \(\delta_{0}>0\) such that \(\Phi_{f}^{\pm}\) is bijective when \(\delta\in[0,\,\delta_{0}]\). Thus we can also define the inverse map \((\Phi_{f}^{\pm})^{-1}:\,\Omega_{f}^{\pm}\to\Omega_{*}^{\pm}\) such that \[\Phi_{f}^{\pm}\circ(\Phi_{f}^{\pm})^{-1}=\operatorname{Id},\qquad(\Phi_{f}^{ \pm})^{-1}\circ\Phi_{f}^{\pm}=\operatorname{Id}.\] Let us list some basic inequalities about harmonic coordinates without proof. **Lemma 2.1**.: _Assume that \(f\in\mathcal{N}(\delta_{0},\,\kappa)\) with \(\kappa\geq 4\). Then there exists a constant \(C=C(\delta_{0},\,\big{\|}f_{*}\big{\|}_{H^{\kappa}(\mathbb{T}^{2})})\) such that_ 1. _If_ \(u\in H^{s}(\Omega_{f}^{\pm})\) _with_ \(s\in[0,\,\kappa]\)_, then_ \[\big{\|}u\circ\Phi_{f}^{\pm}\big{\|}_{H^{s}(\Omega_{*}^{\pm})}\leq C\big{\|}u \big{\|}_{H^{s}(\Omega_{f}^{\pm})}.\] 2. _If_ \(u\in H^{s}(\Omega_{*}^{\pm})\) _with_ \(s\in[0,\,\kappa]\)_, then_ \[\big{\|}u\circ(\Phi_{f}^{\pm})^{-1}\big{\|}_{H^{s}(\Omega_{f}^{\pm})}\leq C \big{\|}u\big{\|}_{H^{s}(\Omega_{*}^{\pm})}.\] 3. _If_ \(u,\,v\in H^{s}(\Omega_{f}^{\pm})\) _with_ \(s\in[2,\,\kappa]\)_, then_ \[\big{\|}uv\big{\|}_{H^{s}(\Omega_{f}^{\pm})}\leq C\big{\|}u\big{\|}_{H^{s}( \Omega_{f}^{\pm})}\big{\|}v\big{\|}_{H^{s}(\Omega_{f}^{\pm})}.\]

### The Dirichlet-Neumann operator

For a smooth enough function \(g=g(\overline{x})\) on \(\Gamma_{f}=\{(\overline{x},\,f(\overline{x})):\,\overline{x}\in\mathbb{T}^{2}\}\), denote the harmonic extension of \(g\) to \(\Omega_{f}^{\pm}\) by \(\mathcal{H}_{f}^{\pm}g\), that is, \[\begin{cases}\Delta\mathcal{H}_{f}^{\pm}g=0,&\text{in }\Omega_{f}^{\pm},\\ (\mathcal{H}_{f}^{\pm}g)(\overline{x},\,f(\overline{x}))=g(\overline{x}),& \text{on }\Gamma_{f},\\ (\mathcal{H}_{f}^{\pm}g)(\overline{x},\,\pm 1)=0,&\text{on }\Gamma^{\pm}.\end{cases} \tag{2.2}\] Here we use the Dirichlet boundary condition on the fixed boundaries \(\Gamma^{\pm}\) instead of the Neumann boundary condition as in the usual case. This modification is useful in the energy estimates in the following sections.
For a smooth enough function \(g=g(\overline{x})\) on \(\Gamma_{f}=\{(\overline{x},\,f(\overline{x})):\,\overline{x}\in\mathbb{T}^{2}\}\), define \[\mathcal{G}_{f}^{\pm}g=N^{\pm}\cdot\nabla\mathcal{H}_{f}^{\pm}g\big{|}_{ \Gamma_{f}}=\mp N\cdot\nabla\mathcal{H}_{f}^{\pm}g\big{|}_{\Gamma_{f}}, \tag{2.3}\] where \(N=(-\partial_{1}f,\,-\partial_{2}f,\,1)^{\top}\) is the scaled normal vector on the surface \(\Gamma_{f}\). However, all the regularity properties of the Dirichlet-Neumann operator will be kept in spite of the modification, as discussed in the appendix of [39]. The same arguments in [21] yield the following basic properties of the DN operator. **Lemma 2.2**.: _Let \(f\in\mathcal{N}(\delta_{0},\,\kappa)\) with \(\kappa\geq 4\). Then there exists a constant \(C=C(\delta_{0},\,\big{\|}f\big{\|}_{H^{\kappa}(\mathbb{T}^{2})})\) such that_ 1. \(\mathcal{G}_{f}^{\pm}\) _is self-adjoint:_ \[(\mathcal{G}_{f}^{\pm}\phi,\,\psi)=(\phi,\,\mathcal{G}_{f}^{\pm}\psi),\qquad \forall\phi,\,\psi\in H^{\frac{1}{2}}(\mathbb{T}^{2}).\] 2. \(\mathcal{G}_{f}^{\pm}\) _is positive:_ \[(\mathcal{G}_{f}^{\pm}\phi,\,\phi)\geq C\big{\|}\phi\big{\|}^{2}_{H^{\frac{1}{2}} (\mathbb{T}^{2})}.\] By the appendix of [34], we also have the following paralinearization of the DN operators. **Lemma 2.3**.: _Assume that \(f\in H^{\kappa}(\mathbb{T}^{2})\) with \(\kappa\geq 4\). Then the DN operators \(\mathcal{G}_{f}^{\pm}\) can be decomposed as_ \[\mathcal{G}_{f}^{\pm}=T_{\lambda}+R_{f}^{\pm}, \tag{2.4}\] _where the symbol of the leading term is_ \[\lambda(x,\,\xi)=\sqrt{(1+|\nabla f|^{2})|\xi|^{2}-(\nabla f\cdot\xi)^{2}} \tag{2.5}\] _and the remainder terms \(R_{f}^{\pm}\) satisfy that_ \[\big{\|}R_{f}^{\pm}g\big{\|}_{H^{s}}\leq C\big{(}\big{\|}f\big{\|}_{H^{ \kappa}}\big{)}\big{\|}g\big{\|}_{H^{s}},\qquad\forall s\in[1/2,\,\kappa -1]. \tag{2.6}\] _Furthermore, there holds that_ \[\big{\|}\mathcal{G}_{f}^{\pm}g\big{\|}_{H^{s-1}}\leq C\big{(}\big{\|}f \big{\|}_{H^{\kappa}}\big{)}\big{\|}g\big{\|}_{H^{s}},\qquad\forall s\in[ 1/2,\,\kappa]. \tag{2.7}\]
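In the flat case, the content of Lemmas 2.2 and 2.3 is completely explicit: for \(f=0\) and the slab \(\mathbb{T}^{2}\times(-1,\,0)\) with the Dirichlet condition at the bottom, the modified DN map is the Fourier multiplier \(|\xi|\coth|\xi|\) (with value \(1\) at \(\xi=0\)), and the remainder symbol \(|\xi|\coth|\xi|-|\xi|\) is bounded. The following numerical sketch (ours; it uses integer wavenumbers on a \(64\times 64\) grid) checks self-adjointness, positivity, and the boundedness of the remainder:

```python
import numpy as np

# Flat-interface sanity check of Lemmas 2.2-2.3: for f = 0 the extension is
# sinh(|xi|(x3+1))/sinh(|xi|), so the DN map is m(xi) = |xi| coth(|xi|).
n = 64
k = np.fft.fftfreq(n, d=1.0 / n)               # integer frequencies on T^2
KX, KY = np.meshgrid(k, k, indexing="ij")
xi = np.sqrt(KX**2 + KY**2)
m = np.where(xi > 0, xi / np.tanh(np.maximum(xi, 1e-12)), 1.0)

def DN(g):                                      # DN map as Fourier multiplier
    return np.real(np.fft.ifft2(m * np.fft.fft2(g)))

rng = np.random.default_rng(0)
phi, psi = rng.standard_normal((2, n, n))
# Lemma 2.2 (1): self-adjointness, (G phi, psi) = (phi, G psi)
print(abs(np.vdot(DN(phi), psi) - np.vdot(phi, DN(psi))))   # ~ 1e-12
# Lemma 2.2 (2): positivity (here m >= 1 pointwise in frequency)
print(np.vdot(phi, DN(phi)).real > 0)                        # True
# Lemma 2.3: the remainder symbol m(xi) - |xi| stays bounded
print(np.max(np.abs(m - xi)))                                # <= 1
```

Note that the value \(m(0)=1\), coming from the linear extension of the zero mode, is what makes the positivity in Lemma 2.2 hold without a mean-zero restriction; this is one effect of the Dirichlet modification at the fixed boundary.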
### Notations

When there is no ambiguity, we shall use \(u(x)=\mathbf{1}_{\Omega^{+}}(x)u^{+}(x)+\mathbf{1}_{\Omega^{-}}(x)u^{-}(x)\) and \(\left\|u\right\|_{H^{s}(\Omega)}=\left\|u^{+}\right\|_{H^{s}(\Omega^{+})}+\left\|u^{-}\right\|_{H^{s}(\Omega^{-})}\) to simplify notations. Since \(u\) is continuous across \(\Gamma_{f}\) from (1.8), we shall just use \(D_{t}=\partial_{t}+u\cdot\nabla\) as the material derivative. The tangential derivatives in \(\Omega^{\pm}\) are

\[\overline{\partial}_{t}=\partial_{t}+\mathcal{H}_{f}^{\pm}(\partial_{t}f)\partial_{3},\qquad\overline{\partial}_{j}=\partial_{j}+\mathcal{H}_{f}^{\pm}(\partial_{j}f)\partial_{3}\quad(j=1,\,2). \tag{2.8}\]

By (1.10), it is direct to verify that

\[D_{t}=\overline{\partial}_{t}+u_{1}\overline{\partial}_{1}+u_{2}\overline{\partial}_{2},\qquad\text{on }\Gamma_{f}. \tag{2.9}\]

By the definition of the harmonic extension (2.2), the derivatives \(\overline{\partial}=(\overline{\partial}_{1},\,\overline{\partial}_{2})\) are tangential to both \(\Gamma_{f}\) and \(\Gamma^{\pm}\). We denote by

\[\Lambda=\langle\nabla\rangle=(1-\Delta)^{1/2},\qquad\Upsilon=\langle\overline{\partial}\rangle=(1+|\overline{\partial}|^{2})^{1/2} \tag{2.10}\]

when treating high-order derivatives. The lowercase indices \(i,\,j,\,k\) range over \(\{1,\,2\}\) and the capital indices \(J,\,K\) range over \(\{1,\,2,\,3\}\). We shall use the Einstein summation convention, that is, a repeated index in a term indicates summation over that index. To simplify the arguments, we also omit all the binomial coefficients \(\binom{l}{m}=\frac{l!}{m!(l-m)!}\). In the following, we shall use \(\mathcal{H}^{\pm}=\mathcal{H}_{f}^{\pm}\) and \(\mathcal{G}^{\pm}=\mathcal{G}_{f}^{\pm}\) if there is no confusion of the function \(f\).

## 3. Reformulation and main result

### Evolution of interface \(f\)

By (2.9), we can rewrite the evolution equation of \(f\) in (1.10) as

\[D_{t}f=u_{3}^{\pm}. \tag{3.1}\]

Applying \(D_{t}\) once more to both sides of (3.1), we have

\[D_{t}^{2}f=D_{t}u_{3}^{\pm}. \tag{3.2}\]

For \(i=1,\,2\), there holds that

\[\begin{split} D_{t}^{2}\overline{\partial}_{i}f=&\overline{\partial}_{i}D_{t}u_{3}^{\pm}+[D_{t}^{2},\,\overline{\partial}_{i}]f\\ =&\overline{\partial}_{i}D_{t}u_{3}^{\pm}+[D_{t},\,\overline{\partial}_{i}]D_{t}f+D_{t}[D_{t},\,\overline{\partial}_{i}]f\\ =&\overline{\partial}_{i}D_{t}u_{3}^{\pm}-\overline{\partial}_{i}u_{j}^{\pm}\overline{\partial}_{j}D_{t}f-D_{t}(\overline{\partial}_{i}u_{j}^{\pm}\overline{\partial}_{j}f)\\ =&\overline{\partial}_{i}D_{t}u_{3}^{\pm}-\overline{\partial}_{i}u_{j}^{\pm}(D_{t}\overline{\partial}_{j}f+\overline{\partial}_{j}u_{k}^{\pm}\overline{\partial}_{k}f)\\ &-(\overline{\partial}_{i}D_{t}u_{j}^{\pm}-\overline{\partial}_{i}u_{k}^{\pm}\overline{\partial}_{k}u_{j}^{\pm})\overline{\partial}_{j}f-\overline{\partial}_{i}u_{j}^{\pm}D_{t}\overline{\partial}_{j}f\\ =&\overline{\partial}_{i}D_{t}u^{\pm}\cdot N-2\overline{\partial}_{i}u_{j}^{\pm}D_{t}\overline{\partial}_{j}f.\end{split} \tag{3.3}\]

Furthermore, we can plug the momentum equation in (1.6) into (3.3) to get that

\[\begin{split}\rho^{\pm}D_{t}^{2}\overline{\partial}_{i}f=&-\overline{\partial}_{i}\nabla p^{\pm}\cdot N+\frac{\overline{\partial}_{i}\rho^{\pm}}{\rho^{\pm}}\nabla_{N}p^{\pm}-2\rho^{\pm}\overline{\partial}_{i}u_{j}^{\pm}D_{t}\overline{\partial}_{j}f\\ =&-\Big{\{}\nabla\overline{\partial}_{i}p^{\pm}-\partial_{3}p^{\pm}\nabla\mathcal{H}^{\pm}(\overline{\partial}_{i}f)\Big{\}}\cdot N+\frac{\overline{\partial}_{i}\rho^{\pm}}{\rho^{\pm}}\nabla_{N}p^{\pm}-2\rho^{\pm}\overline{\partial}_{i}u_{j}^{\pm}D_{t}\overline{\partial}_{j}f\\ =&\partial_{3}p^{\pm}\nabla_{N}\mathcal{H}^{\pm}(\overline{\partial}_{i}f)-\nabla_{N}\overline{\partial}_{i}p^{\pm}+\frac{\overline{\partial}_{i}\rho^{\pm}}{\rho^{\pm}}\nabla_{N}p^{\pm}-2\rho^{\pm}\overline{\partial}_{i}u_{j}^{\pm}D_{t}\overline{\partial}_{j}f.\end{split} \tag{3.4}\]

Thus, summing the two equations in (3.4) corresponding to the positive and the negative indices, we have

\[D_{t}^{2}\overline{\partial}_{i}f+\frac{1}{\rho^{+}+\rho^{-}}\Big{\{}\partial_{3}p^{+}\mathcal{G}^{+}-\partial_{3}p^{-}\mathcal{G}^{-}\Big{\}}\overline{\partial}_{i}f=\frac{1}{\rho^{+}+\rho^{-}}\Big{\{}\mathcal{M}^{+}+\mathcal{M}^{-}\Big{\}}, \tag{3.5}\]

with the Dirichlet-Neumann operators \(\mathcal{G}^{\pm}\) defined in (2.3) and

\[\mathcal{M}^{\pm}=-\nabla_{N}\overline{\partial}_{i}p^{\pm}+\frac{\overline{\partial}_{i}\rho^{\pm}}{\rho^{\pm}}\nabla_{N}p^{\pm}-2\rho^{\pm}\overline{\partial}_{i}u_{j}^{\pm}D_{t}\overline{\partial}_{j}f.
\tag{3.6}\]

Lastly, the decomposition of the DN operators \(\mathcal{G}^{\pm}=T_{\lambda}+R^{\pm}\) in (2.4) can be applied to get that

\[D_{t}^{2}\overline{\partial}_{i}f+\mathfrak{a}T_{\lambda}\overline{\partial}_{i}f=\mathcal{N}^{+}+\mathcal{N}^{-}, \tag{3.7}\]

where the Taylor sign \(\mathfrak{a}\) is

\[\mathfrak{a}=\frac{\partial_{3}p^{+}-\partial_{3}p^{-}}{\rho^{+}+\rho^{-}}, \tag{3.8}\]

and the lower order terms are

\[\begin{split}\mathcal{N}^{\pm}=&-\frac{1}{\rho^{+}+\rho^{-}}\nabla_{N}\overline{\partial}_{i}p^{\pm}+\frac{\overline{\partial}_{i}\rho^{\pm}}{(\rho^{+}+\rho^{-})\rho^{\pm}}\nabla_{N}p^{\pm}\\ &-\frac{2\rho^{\pm}}{\rho^{+}+\rho^{-}}\overline{\partial}_{i}u_{j}^{\pm}D_{t}\overline{\partial}_{j}f\mp\frac{\partial_{3}p^{\pm}}{\rho^{+}+\rho^{-}}R^{\pm}\overline{\partial}_{i}f.\end{split} \tag{3.9}\]

Since

\[\partial_{3}=\frac{1}{1+|\nabla f|^{2}}\Big{\{}\nabla_{N}+\partial_{i}f\overline{\partial}_{i}\Big{\}},\]

the RH conditions (1.8) and the Taylor sign condition (1.12) imply that

\[\begin{split}\mathfrak{a}=&\frac{1}{(\rho^{+}+\rho^{-})(1+|\nabla f|^{2})}\Big{\{}\llbracket\nabla_{N}p\rrbracket+\partial_{i}f\llbracket\overline{\partial}_{i}p\rrbracket\Big{\}}\\ =&\frac{\llbracket\nabla_{N}p\rrbracket}{(\rho^{+}+\rho^{-})(1+|\nabla f|^{2})}>0.\end{split} \tag{3.10}\]

### Evolution of the vorticity \(\omega^{\pm}\) and the entropy \(S^{\pm}\)

By taking the curl of both sides of the momentum equation in (1.6), we have the evolution equation of the vorticity \(\omega^{\pm}\) as

\[D_{t}\omega^{\pm}=\omega^{\pm}\cdot\nabla u^{\pm}-\omega^{\pm}(\nabla\cdot u^{\pm})-\nabla\frac{1}{\rho^{\pm}}\times\nabla p^{\pm}. \tag{3.11}\]

The evolution of the entropy \(S^{\pm}\) is given by the entropy equation in (1.6) as

\[D_{t}S^{\pm}=0. \tag{3.12}\]

### Evolution of the pressure \(p^{\pm}\)

To derive the evolution equation of the pressure \(p^{\pm}\), we take \(D_{t}{(\ref{eq:p})}_{1}-\nabla\cdot\left\{\frac{1}{\rho^{\pm}}{(\ref{eq:p})}_{2}\right\}\) to get that

\[D_{t}\Big{(}\frac{1}{\gamma p^{\pm}}D_{t}p^{\pm}\Big{)}-\nabla\cdot\Big{(}\frac{1}{\rho^{\pm}}\nabla p^{\pm}\Big{)}=\operatorname{tr}{(\nabla u^{\pm})^{2}}. \tag{3.13}\]

This is a second order wave equation.

### Compatibility conditions

Assume that \(\kappa\geq 4\) is an integer. To estimate high order derivatives of the piecewise smooth weak solutions, we need the following compatibility conditions on the interface \(\Gamma_{f}\):

\[[\![D_{t}^{l}p]\!]=0,\qquad[\![D_{t}^{l}u]\!]=0,\qquad 0\leq l\leq\kappa+1. \tag{3.14}\]

Next we discuss an important consequence of the compatibility conditions (3.14). Assume that the solution \((p^{\pm},\,u^{\pm},\,S^{\pm})\in H^{3}(\Omega^{\pm})\subset C^{1}(\Omega^{\pm})\) and \(f\in H^{3}(\mathbb{T}^{2})\subset C^{1}(\mathbb{T}^{2})\). Recall the vectors \(\tau_{1}\) and \(\tau_{2}\) in (1.4) which are tangential to the interface \(\Gamma_{f}\). The RH conditions (1.8) and the compatibility conditions (3.14) can be applied to the momentum equation in (1.6) to get that

\[[\![\frac{\nabla p}{\rho}]\!]=-[\![D_{t}u]\!]=0. \tag{3.15}\]

Furthermore, since \([\![p]\!]=0\) and

\[[\![\tau_{i}\cdot\nabla p]\!]=\tau_{i}\cdot\nabla[\![p]\!]=0,\qquad i=1,\,2, \tag{3.16}\]

by (3.15) there holds that

\[0=\tau_{i}\cdot[\![\frac{\nabla p}{\rho}]\!]=[\![\frac{1}{\rho}]\!](\tau_{i}\cdot\nabla p),\qquad i=1,\,2.
\tag{3.17}\]

That is, when \([\![\rho]\!]\neq 0\), by (3.17) we must have

\[\overline{\partial}_{i}p\big{|}_{\Gamma_{f}}=\tau_{i}\cdot\nabla p\big{|}_{\Gamma_{f}}=0\qquad(i=1,\,2), \tag{3.18}\]

which implies that the pressure on the interface \(p|_{\Gamma_{f}}=p(t,\overline{x},f(t,\,\overline{x}))\) is independent of the space variables \(\overline{x}\). Thus, we may assume that

\[p\big{|}_{\Gamma_{f}}=q(t),\qquad q(0)=0. \tag{3.19}\]

**Remark 3.1**.: The consequence (3.18) is necessary for the piecewise smooth weak solution to be an entropy wave. If (3.18) fails, the tangential component of the pressure force is not trivial. Since the densities have a jump \([\![\rho]\!]\neq 0\), the accelerations on both sides must have a jump:

\[[\![D_{t}u]\!]\cdot\tau_{j}=-[\![\frac{\tau_{j}\cdot\nabla p}{\rho}]\!]=-[\![\frac{1}{\rho}]\!](\tau_{j}\cdot\nabla p)\neq 0.\]

The entropy wave could then instantly evolve into a vortex sheet.

### Main result

Assume that \(\kappa\geq 4\) is an integer. The full energy norm is defined as

\[\begin{split}\mathcal{E}(t)=&\big{\|}f\big{\|}_{H^{\kappa}(\mathbb{T}^{2})}+\big{\|}D_{t}f\big{\|}_{H^{\kappa-\frac{1}{2}}(\mathbb{T}^{2})}\\ &+\big{\|}u\big{\|}_{H^{\kappa}(\Omega)}+\big{\|}D_{t}u\big{\|}_{H^{\kappa-1}(\Omega)}+\sum_{l=2}^{\kappa+1}\big{\|}D_{t}^{l}u\big{\|}_{H^{\kappa+1-l}(\Omega)}\\ &+\big{\|}(p,\,S)\big{\|}_{H^{\kappa}(\Omega)}+\sum_{l=1}^{\kappa+1}\big{\|}(D_{t}^{l}p,\,D_{t}^{l}S)\big{\|}_{H^{\kappa+1-l}(\Omega)}.\end{split} \tag{3.20}\]

For some \(T>0\), we shall denote by

\[M=\sup_{0<t<T}\mathcal{E}(t),\qquad M_{0}=\mathcal{E}(0).\]

The lower order energy norm is

\[\mathcal{F}(t)=\big{\|}(p,\,u,\,S)\big{\|}_{H^{\kappa-1}(\Omega)}+\sum_{l=1}^{\kappa}\big{\|}(D_{t}^{l}p,\,D_{t}^{l}u,\,D_{t}^{l}S)\big{\|}_{H^{\kappa-l}(\Omega)}. \tag{3.21}\]

**Theorem 3.2**.: _Let \(\kappa\geq 4\) be an integer. Suppose that the initial data \((f_{\rm in},\,p_{\rm in}^{\pm},\,u_{\rm in}^{\pm},\,S_{\rm in}^{\pm})\) satisfy the bound \(\mathcal{E}(0)=M_{0}<\infty\). Furthermore, assume that there are two constants \(0<c_{0}<C_{0}\) such that_

1. \(c_{0}\leq p_{\rm in}^{\pm}\leq C_{0}\)_,_ \(c_{0}\leq\rho_{\rm in}^{\pm}\leq C_{0}\)_;_
2. \(-1+c_{0}\leq f_{\rm in}\leq 1-c_{0}\)_;_
3. \(\frac{N_{\rm in}}{|N_{\rm in}|}\cdot\llbracket\nabla p_{\rm in}\rrbracket\geq c_{0}\)_._

_Then for \(T>0\) small enough, the solution \((f,\,p^{\pm},\,u^{\pm},\,S^{\pm})\) to the problem (1.6)-(1.11) under the compatibility conditions (3.14) satisfies that_

1. \(M\leq C(M_{0},\,c_{0},\,C_{0})+TC(M,\,c_{0},\,C_{0})\)_;_
2. \(-1+\frac{c_{0}}{2}\leq f\leq 1-\frac{c_{0}}{2}\)_;_
3. \(\frac{N}{|N|}\cdot\llbracket\nabla p\rrbracket\geq\frac{c_{0}}{2}\)_._

The constants \(C(M_{0},\,c_{0},\,C_{0})\) are continuous functions of \(M_{0},\,c_{0},\,C_{0}\). In the energy estimates in the following sections, we shall take \(c_{0}\) and \(C_{0}\) to be fixed and just use \(C(M_{0})\) to denote constants from line to line.

## 4. Basic energy estimates

In this section, we prove some basic energy estimates.

### Lower order estimates

For the lower order energy norm \(\mathcal{F}\) defined in (3.21), we have the following estimates.

**Proposition 4.1**.: _For \(t\in[0,\,T]\), there holds that_

\[\mathcal{F}(t)\leq C(M_{0})+TC(M). \tag{4.1}\]

_Furthermore,_

\[\big{\|}(p,\,u,\,S)\big{\|}_{W^{\kappa-3,\infty}(\Omega)}+\sum_{l=1}^{\kappa-2}\big{\|}(D_{t}^{l}p,\,D_{t}^{l}u,\,D_{t}^{l}S)\big{\|}_{W^{\kappa-2-l,\infty}(\Omega)}\leq C(M_{0})+TC(M).
\tag{4.2}\] Proof.: For \(1\leq l\leq\kappa\), since \[\begin{cases}D_{t}\Lambda^{\kappa-l}D_{t}^{l}p=\Lambda^{\kappa-l}D_{t}^{l+1}p -[\Lambda^{\kappa-l},\,D_{t}]D_{t}^{l}p,\\ D_{t}\Lambda^{\kappa-l}D_{t}^{l}u=\Lambda^{\kappa-l}D_{t}^{l+1}u-[\Lambda^{ \kappa-l},\,D_{t}]D_{t}^{l}u,\\ D_{t}\Lambda^{\kappa-l}D_{t}^{l}S=0,\end{cases}\] we have \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\big{\|}(D_{t}^{l}p,\,D_ {t}^{l}u,\,D_{t}^{l}S)\big{\|}_{H^{\kappa-l}(\Omega)}^{2}\] \[= \int_{\Omega}\Big{\{}D_{t}\Lambda^{\kappa-l}D_{t}^{l}p\cdot \Lambda^{\kappa-l}D_{t}^{l}p+D_{t}\Lambda^{\kappa-l}D_{t}^{l}u\cdot\Lambda^{ \kappa-l}D_{t}^{l}u+D_{t}\Lambda^{\kappa-l}D_{t}^{l}S\cdot\Lambda^{\kappa-l}D_ {t}^{l}S\Big{\}}\mathrm{d}x\] \[+\int_{\Omega}\frac{\nabla\cdot u}{2}\Big{\{}|\Lambda^{\kappa-l} D_{t}^{l}p|^{2}+|\Lambda^{\kappa-l}D_{t}^{l}u|^{2}+|\Lambda^{\kappa-l}D_{t}^{l}S|^{2 }\Big{\}}\mathrm{d}x\] \[\leq C(M).\] The case when \(l=0\) follows in a similar way. Application of the Sobolev inequalities to (4.1) proves (4.2). ### Tangential energy of \((p,\,u)\) Recall the equations for the pressure and velocity \((p,\,u)\) in (1.6): \[\begin{cases}\frac{1}{\gamma p}D_{t}p+\nabla\cdot u=0,\\ \rho D_{t}u+\nabla p=0.\end{cases} \tag{4.3}\] For \(0\leq l\leq\kappa+1\), by taking \(D_{t}^{l}\) to both sides of (4.3), we have the system for \((D_{t}^{l}p,\,D_{t}^{l}u)\) as \[\begin{cases}\frac{1}{\gamma p}D_{t}^{l+1}p+\nabla\cdot D_{t}^{l}u=\mathcal{N }_{p}^{l},\\ \rho D_{t}^{l+1}u+\nabla D_{t}^{l}p=\mathcal{N}_{u}^{l},\end{cases} \tag{4.4}\] where \[\mathcal{N}_{p}^{l}=[\frac{1}{\gamma p},\,D_{t}^{l}]D_{t}p+[\nabla\cdot,\,D_{ t}^{l}]u,\qquad\mathcal{N}_{u}^{l}=[\rho,\,D_{t}^{l}]D_{t}u+[\nabla,\,D_{t}^{l}]p.\] **Proposition 4.2**.: _For \(t\in[0,\,T]\), there holds that_ \[\sum_{l=0}^{\kappa+1}\big{\|}(D_{t}^{l}p,\,D_{t}^{l}u)\big{\|}_{L^{2}(\Omega)} \leq C(M_{0})+TC(M). \tag{4.5}\] Proof.: Since \(\kappa\geq 4\) and \[\big{\|}(\mathcal{N}_{p}^{l},\,\mathcal{N}_{u}^{l})\big{\|}_{L^{2}(\Omega)}\leq C(M),\] energy estimates of (4.4) yield that \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\Big{\{}\frac{1}{\gamma p} \frac{|D_{t}^{l}p|^{2}}{2}+\rho\frac{|D_{t}^{l}u|^{2}}{2}\Big{\}}\mathrm{d}x\] \[= \int_{\Omega}D_{t}\Big{\{}\frac{1}{\gamma p}\frac{|D_{t}^{l}p|^{ 2}}{2}+\rho\frac{|D_{t}^{l}u|^{2}}{2}\Big{\}}\mathrm{d}x+\int_{\Omega}(\nabla \cdot u)\Big{\{}\frac{1}{\gamma p}\frac{|D_{t}^{l}p|^{2}}{2}+\rho\frac{|D_{t}^ {l}u|^{2}}{2}\Big{\}}\mathrm{d}x\] \[= \int_{\Omega}\Big{\{}D_{t}\big{(}\frac{1}{\gamma p}\big{)}\frac{| D_{t}^{l}p|^{2}}{2}+D_{t}\rho\frac{|D_{t}^{l}u|^{2}}{2}\Big{\}}\mathrm{d}x+ \int_{\Omega}(\nabla\cdot u)\Big{\{}\frac{1}{\gamma p}\frac{|D_{t}^{l}p|^{2}} {2}+\rho\frac{|D_{t}^{l}u|^{2}}{2}\Big{\}}\mathrm{d}x\] \[+\int_{\Omega}\Big{\{}D_{t}^{l}p\mathcal{N}_{p}^{l}+D_{t}^{l}u \cdot\mathcal{N}_{u}^{l}\Big{\}}\mathrm{d}x+\int_{\Omega}\nabla\cdot(D_{t}^{l} uD_{t}^{l}p)\mathrm{d}x\] \[\leq C(M), \tag{4.6}\] where we have used the boundary conditions (1.11) and the compatibility conditions (3.14). ### Estimates of \(\omega\) and \(S\) From (3.11)-(3.12), the vorticity \(\omega\) and the entropy \(S\) satisfy the following transport equations respectively: \[D_{t}\omega=\omega\cdot\nabla u-\omega\nabla\cdot u-\nabla\frac{1}{\rho} \times\nabla p, \tag{4.7}\] and \[D_{t}S=0. \tag{4.8}\] Direct energy estimates yield the following result. 
**Proposition 4.3**.: _For \(t\in[0,\,T]\), there hold that_

\[\big{\|}\omega\big{\|}_{H^{\kappa-1}(\Omega)}+\sum_{l=1}^{\kappa}\big{\|}D_{t}^{l}\omega\big{\|}_{H^{\kappa-l}(\Omega)}\leq M_{0}+TC(M), \tag{4.9}\]

_and_

\[\big{\|}S\big{\|}_{H^{\kappa}(\Omega)}+\sum_{l=1}^{\kappa+1}\big{\|}D_{t}^{l}S\big{\|}_{H^{\kappa+1-l}(\Omega)}\leq M_{0}+TC(M). \tag{4.10}\]

Proof.: Taking \(\Lambda^{\kappa-l}D_{t}^{l}\) (\(1\leq l\leq\kappa\)) to both sides of (4.7), we have

\[D_{t}\Lambda^{\kappa-l}D_{t}^{l}\omega=[D_{t},\,\Lambda^{\kappa-l}]D_{t}^{l}\omega+\Lambda^{\kappa-l}D_{t}^{l}(\omega\cdot\nabla u-\omega\nabla\cdot u-\nabla\frac{1}{\rho}\times\nabla p). \tag{4.11}\]

Then, the Sobolev inequalities and (4.11) can be applied to get that

\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}|\Lambda^{\kappa-l}D_{t}^{l}\omega|^{2}\mathrm{d}x\]
\[= \int_{\Omega}D_{t}\Lambda^{\kappa-l}D_{t}^{l}\omega\cdot\Lambda^{\kappa-l}D_{t}^{l}\omega\mathrm{d}x+\int_{\Omega}\frac{\nabla\cdot u}{2}|\Lambda^{\kappa-l}D_{t}^{l}\omega|^{2}\mathrm{d}x\]
\[\leq \big{\|}D_{t}\Lambda^{\kappa-l}D_{t}^{l}\omega\big{\|}_{L^{2}(\Omega)}\big{\|}\Lambda^{\kappa-l}D_{t}^{l}\omega\big{\|}_{L^{2}(\Omega)}+\frac{1}{2}\big{\|}\nabla\cdot u\big{\|}_{L^{\infty}(\Omega)}\big{\|}\Lambda^{\kappa-l}D_{t}^{l}\omega\big{\|}_{L^{2}(\Omega)}^{2}\]
\[\leq C(M).\]

This proves (4.9) with \(1\leq l\leq\kappa\). The case when \(l=0\) follows similarly. For the entropy \(S\) in (4.8), since

\[D_{t}\Lambda^{\kappa+1-l}D_{t}^{l}S=[D_{t},\,\Lambda^{\kappa+1-l}]D_{t}^{l}S,\]

the estimates in (4.10) can be proved just as those in (4.9).

## 5. Estimates of the interface \(f\)

In this section, we shall derive the estimates of the interface \(f\).

**Proposition 5.1**.: _For \(t\in[0,\,T]\), there holds that_

\[\big{\|}f\big{\|}_{H^{\kappa}(\mathbb{T}^{2})}+\big{\|}D_{t}f\big{\|}_{H^{\kappa-\frac{1}{2}}(\mathbb{T}^{2})}\leq M_{0}+TC(M). \tag{5.1}\]

Recall the equation of \(f\) in (3.7):

\[D_{t}^{2}\overline{\partial}_{i}f+\mathfrak{a}T_{\lambda}\overline{\partial}_{i}f=\mathcal{N}^{+}+\mathcal{N}^{-}, \tag{5.2}\]

where \(\lambda\), \(\mathfrak{a}\), and \(\mathcal{N}^{\pm}\) are given by (2.5), (3.8), and (3.9) respectively. The rest of the section is devoted to the proof of Proposition 5.1. Set

\[F=\Upsilon^{\kappa-\frac{3}{2}}\overline{\partial}_{i}f=\langle\overline{\partial}\rangle^{\kappa-\frac{3}{2}}\overline{\partial}_{i}f.\]

Taking \(\Upsilon^{\kappa-\frac{3}{2}}\) to both sides of (5.2), we have the equation for \(F\) as

\[D_{t}^{2}F+\mathfrak{a}T_{\lambda}F=-\,[\Upsilon^{\kappa-\frac{3}{2}},\,D_{t}^{2}]\overline{\partial}_{i}f-[\Upsilon^{\kappa-\frac{3}{2}},\,\mathfrak{a}T_{\lambda}]\overline{\partial}_{i}f+\Upsilon^{\kappa-\frac{3}{2}}(\mathcal{N}^{+}+\mathcal{N}^{-}).
\tag{5.3}\]

Direct computation shows that

\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{T}^{2}}\Big{\{}|D_{t}F|^{2}+\mathfrak{a}|T_{\sqrt{\lambda}}F|^{2}\Big{\}}\mathrm{d}\overline{x}\]
\[= \frac{1}{2}\int_{\mathbb{T}^{2}}D_{t}\Big{\{}|D_{t}F|^{2}+\mathfrak{a}|T_{\sqrt{\lambda}}F|^{2}\Big{\}}\mathrm{d}\overline{x}+\frac{1}{2}\int_{\mathbb{T}^{2}}(\overline{\partial}_{j}u_{j})\Big{\{}|D_{t}F|^{2}+\mathfrak{a}|T_{\sqrt{\lambda}}F|^{2}\Big{\}}\mathrm{d}\overline{x}\]
\[= \int_{\mathbb{T}^{2}}\Big{\{}D_{t}^{2}F\cdot D_{t}F+\mathfrak{a}\cdot D_{t}T_{\sqrt{\lambda}}F\cdot T_{\sqrt{\lambda}}F\Big{\}}\mathrm{d}\overline{x}\]
\[= \int_{\mathbb{T}^{2}}\Big{\{}D_{t}^{2}F+\mathfrak{a}T_{\lambda}F\Big{\}}D_{t}F\mathrm{d}\overline{x}+\int_{\mathbb{T}^{2}}\mathfrak{a}\Big{\{}T_{\sqrt{\lambda}}^{*}T_{\sqrt{\lambda}}F-T_{\lambda}F\Big{\}}D_{t}F\mathrm{d}\overline{x}\]
\[+\int_{\mathbb{T}^{2}}[\mathfrak{a}D_{t},\,T_{\sqrt{\lambda}}]F\cdot T_{\sqrt{\lambda}}F\mathrm{d}\overline{x}+\frac{1}{2}\int_{\mathbb{T}^{2}}\Big{\{}(\overline{\partial}_{j}u_{j})|D_{t}F|^{2}+(\overline{\partial}_{j}u_{j}\mathfrak{a}+D_{t}\mathfrak{a})|T_{\sqrt{\lambda}}F|^{2}\Big{\}}\mathrm{d}\overline{x}.\]

Therefore,

\[\begin{split}&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\mathbb{T}^{2}}\Big{\{}|D_{t}F|^{2}+\mathfrak{a}|T_{\sqrt{\lambda}}F|^{2}\Big{\}}\mathrm{d}\overline{x}\\ =&\frac{1}{2}\int_{\mathbb{T}^{2}}\Big{\{}(\overline{\partial}_{j}u_{j})|D_{t}F|^{2}+(\overline{\partial}_{j}u_{j}\mathfrak{a}+D_{t}\mathfrak{a})|T_{\sqrt{\lambda}}F|^{2}\Big{\}}\mathrm{d}\overline{x}\\ &+\int_{\mathbb{T}^{2}}\mathfrak{a}\Big{\{}T^{*}_{\sqrt{\lambda}}T_{\sqrt{\lambda}}F-T_{\lambda}F\Big{\}}D_{t}F\mathrm{d}\overline{x}+\int_{\mathbb{T}^{2}}[\mathfrak{a}D_{t},\,T_{\sqrt{\lambda}}]F\cdot T_{\sqrt{\lambda}}F\mathrm{d}\overline{x}\\ &-\int_{\mathbb{T}^{2}}[\Upsilon^{\kappa-\frac{3}{2}},\,D_{t}^{2}]\overline{\partial}_{i}f\cdot D_{t}F\mathrm{d}\overline{x}-\int_{\mathbb{T}^{2}}[\Upsilon^{\kappa-\frac{3}{2}},\,\mathfrak{a}T_{\lambda}]\overline{\partial}_{i}f\cdot D_{t}F\mathrm{d}\overline{x}\\ &+\int_{\mathbb{T}^{2}}\Upsilon^{\kappa-\frac{3}{2}}(\mathcal{N}^{+}+\mathcal{N}^{-})\cdot D_{t}F\mathrm{d}\overline{x}\\ :=& I_{1}+I_{2}+I_{3}+I_{4}+I_{5}+I_{6}.\end{split} \tag{5.4}\]

For \(I_{1}\) in (5.4), it is direct to verify that

\[I_{1}\leq C(M)\Big{\{}\big{\|}D_{t}F\big{\|}_{L^{2}(\mathbb{T}^{2})}^{2}+\big{\|}T_{\sqrt{\lambda}}F\big{\|}_{L^{2}(\mathbb{T}^{2})}^{2}\Big{\}}\leq C(M). \tag{5.5}\]

For \(I_{2}\) in (5.4), it follows from Lemma B.4 that

\[I_{2}\leq C(M)\big{\|}(T^{*}_{\sqrt{\lambda}}T_{\sqrt{\lambda}}-T_{\lambda})F\big{\|}_{L^{2}(\mathbb{T}^{2})}\big{\|}D_{t}F\big{\|}_{L^{2}(\mathbb{T}^{2})}\leq C(M).
\tag{5.6}\] For \(I_{3}\) in (5.4), an application of Lemma B.5 gives \[\begin{split} I_{3}=&\int_{\mathbb{T}^{2}}[ \mathfrak{a},\,T_{\sqrt{\lambda}}]D_{t}F\cdot T_{\sqrt{\lambda}}F\mathrm{d} \overline{x}+\int_{\mathbb{T}^{2}}\mathfrak{a}[D_{t},\,T_{\sqrt{\lambda}}]F \cdot T_{\sqrt{\lambda}}F\mathrm{d}\overline{x}\\ \leq&\big{\|}[\mathfrak{a},\,T_{\sqrt{\lambda}}]D_{t }F\big{\|}_{L^{2}(\mathbb{T}^{2})}\big{\|}T_{\sqrt{\lambda}}F\big{\|}_{L^{2}( \mathbb{T}^{2})}\\ &+\big{\|}\mathfrak{a}\big{\|}_{L^{\infty}(\mathbb{T}^{2})} \big{\|}[D_{t},\,T_{\sqrt{\lambda}}]F\big{\|}_{L^{2}(\mathbb{T}^{2})}\big{\|} T_{\sqrt{\lambda}}F\big{\|}_{L^{2}(\mathbb{T}^{2})}\\ \leq& C(M).\end{split} \tag{5.7}\] Similarly, \[\begin{split} I_{4}=&-\int_{\mathbb{T}^{2}}[ \Upsilon^{\kappa-\frac{3}{2}},\,D_{t}]D_{t}\overline{\partial}_{i}f\cdot D_{t} F\mathrm{d}\overline{x}-\int_{\mathbb{T}^{2}}D_{t}[\Upsilon^{\kappa-\frac{3}{2}},\,D_{t}] \overline{\partial}_{i}f\cdot D_{t}F\mathrm{d}\overline{x}\\ \leq&\big{\|}[\Upsilon^{\kappa-\frac{3}{2}},\,D_{t}]D _{t}\overline{\partial}_{i}f\big{\|}_{L^{2}(\mathbb{T}^{2})}\big{\|}D_{t}F \big{\|}_{L^{2}(\mathbb{T}^{2})}\\ &+\big{\|}D_{t}[\Upsilon^{\kappa-\frac{3}{2}},\,D_{t}]\overline{ \partial}_{i}f\big{\|}_{L^{2}(\mathbb{T}^{2})}\big{\|}D_{t}F\big{\|}_{L^{2}( \mathbb{T}^{2})}\\ \leq& C(M).\end{split} \tag{5.8}\] For \(I_{5}\) in (5.4), there holds that \[\begin{split} I_{5}=&-\int_{\mathbb{T}^{2}}[ \Upsilon^{\kappa-\frac{3}{2}},\,\mathfrak{a}]T_{\lambda}\overline{\partial}_{ i}f\cdot D_{t}F\mathrm{d}\overline{x}-\int_{\mathbb{T}^{2}}\mathfrak{a}[ \Upsilon^{\kappa-\frac{3}{2}},\,T_{\lambda}]\overline{\partial}_{i}f\cdot D_{t }F\mathrm{d}\overline{x}\\ \leq&\big{\|}[\Upsilon^{\kappa-\frac{3}{2}},\, \mathfrak{a}]T_{\lambda}\overline{\partial}_{i}f\big{\|}_{L^{2}(\mathbb{T}^{2})} \big{\|}D_{t}F\big{\|}_{L^{2}(\mathbb{T}^{2})}\\ &+\big{\|}\mathfrak{a}[\Upsilon^{\kappa-\frac{3}{2}},\,T_{\lambda}] \overline{\partial}_{i}f\big{\|}_{L^{2}(\mathbb{T}^{2})}\big{\|}D_{t}F\big{\|} _{L^{2}(\mathbb{T}^{2})}\\ \leq& C(M).\end{split} \tag{5.9}\] Next we estimate \(I_{6}\) with \(\mathcal{N}^{\pm}\) given by (3.9). Since \(\overline{\partial}_{i}p=\partial_{i}p+\mathcal{H}(\overline{\partial}_{i}f) \partial_{3}p\), it can be derived from (2.2) and (3.18) that \[\begin{cases}\Delta\overline{\partial}_{i}p^{\pm}=\partial_{i}\Delta p^{\pm}+ \mathcal{H}^{\pm}(\overline{\partial}_{i}f)\partial_{3}\Delta p^{\pm}+2 \nabla\mathcal{H}^{\pm}(\overline{\partial}_{i}f)\cdot\nabla\partial_{i}p^{\pm },&\text{in }\Omega^{\pm},\\ \overline{\partial}_{i}p^{\pm}=0,&\text{on }\Gamma_{f},\\ \partial_{3}\overline{\partial}_{i}p^{\pm}=0,&\text{on }\Gamma^{\pm}.\end{cases} \tag{5.10}\] Then we can use the elliptic system (6.1), the estimates (5.1) and (6.3) to get that \[\left\|\Delta\overline{\partial}_{i}p^{\pm}\right\|_{H^{\kappa-2}(\Omega^{ \pm})}\leq C(M). \tag{5.11}\] Thus, (5.11) can be applied to (5.10) to yield that \[\left\|\nabla_{N}\overline{\partial}_{i}p^{\pm}\right\|_{H^{\kappa-\frac{3}{ 2}}(\Gamma_{f})}\leq C(M)\big{\|}\Delta\overline{\partial}_{i}p^{\pm}\big{\|} _{H^{\kappa-2}(\Omega^{\pm})}\leq C(M). \tag{5.12}\] Therefore, we have \[\left\|\mathcal{N}^{\pm}\right\|_{H^{\kappa-\frac{3}{2}}(\mathbb{T}^{2})}\leq C (M)\big{\|}(\nabla_{N}\overline{\partial}_{i}p^{\pm},\,\overline{\partial}p^{ \pm},\,\overline{\partial}u^{\pm},\,D_{t}\overline{\partial}_{i}f)\big{\|}_{H ^{\kappa-\frac{3}{2}}(\mathbb{T}^{2})}\leq C(M).\] Thus, \[I_{6}\leq C(M). 
\tag{5.13}\] Combining all the estimates (5.5)-(5.13), we have \[\frac{\mathrm{d}}{\mathrm{d}t}\big{\|}(D_{t}\overline{\partial}_{i}f,\, \mathfrak{a}^{1/2}T_{\sqrt{\lambda}}\overline{\partial}_{i}f)\big{\|}_{H^{ \kappa-\frac{3}{2}}(\mathbb{T}^{2})}\leq C(M). \tag{5.14}\] As for \(\left\|f\right\|_{L^{2}(\mathbb{T}^{2})}\), we use (3.1) to get that \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\big{\|}f\big{\|}_{L^{2}(\mathbb{T}^ {2})}^{2}=\int_{\mathbb{T}^{2}}D_{t}f\cdot f\mathrm{d}\overline{x}+\int_{ \mathbb{T}^{2}}\overline{\partial}_{j}u_{j}|f|^{2}\mathrm{d}\overline{x}\leq C (M). \tag{5.15}\] Assuming the Taylor sign condition, (5.14)-(5.15) and (3.10) prove (5.1). ## 6. Full estimates of \((p,\,u)\) In this section, we shall recover the full estimates of \((p,\,u)\) from the tangential energy estimates in Section 4 by some elliptic estimates. ### Full estimates of \(p\) To recover the full estimates of the pressure \(p\) from the tangential estimates of \(D_{t}^{l}p\) in Section 4, we shall rewrite the wave equation of \(p\) in (3.13) as \[\begin{cases}\Delta p^{\pm}=\frac{\rho^{\pm}}{\gamma p^{\pm}}D_{t}^{2}p^{\pm} -\rho^{\pm}\mathrm{tr}\,(\nabla u^{\pm})^{2}+\mathcal{M}_{p}^{\pm},&\text{in } \Omega^{\pm},\\ p^{\pm}=q(t),&\text{on }\Gamma_{f},\\ \partial_{3}p^{\pm}=0,&\text{on }\Gamma^{\pm},\end{cases} \tag{6.1}\] where \(q(t)\) is given by (3.19) and \[\mathcal{M}_{p}^{\pm}=-\frac{\rho^{\pm}}{\gamma(p^{\pm})^{2}}(D_{t}p^{\pm})^{ 2}+\frac{1}{\rho^{\pm}}\nabla\rho^{\pm}\cdot\nabla p^{\pm}. \tag{6.2}\] **Proposition 6.1**.: _For \(t\in[0,\,T]\), there holds that_ \[\left\|p\right\|_{H^{\kappa}(\Omega)}+\sum_{l=1}^{\kappa+1}\left\|D_{t}^{l}p \right\|_{H^{\kappa+1-l}(\Omega)}\leq C(M_{0})+TC(M). \tag{6.3}\] _Furthermore,_ \[\left\|D_{t}u\right\|_{H^{\kappa-1}(\Omega)}+\sum_{l=2}^{\kappa+1}\left\|D_{t} ^{l}u\right\|_{H^{\kappa+1-l}(\Omega)}\leq C(M_{0})+TC(M). \tag{6.4}\] Proof.: To prove (6.3), we shall use an induction over the index \(l\). When \(l=\kappa+1\), it follows from (4.5) that \[\left\|(D_{t}^{\kappa+1}p,\,D_{t}^{\kappa+1}u)\right\|_{L^{2}(\Omega)}\leq C (M_{0})+TC(M). \tag{6.5}\] For \(D_{t}^{\kappa}p\), since \[\nabla D_{t}^{\kappa}p=\rho D_{t}^{\kappa}(\frac{\nabla p}{\rho})-\rho[D_{t}^ {\kappa},\,\frac{1}{\rho}]\nabla p-[D_{t}^{\kappa},\,\nabla]p,\] we have from the momentum equation \(D_{t}u=-\frac{\nabla p}{\rho}\) that \[\left\|D_{t}^{\kappa}p\right\|_{H^{1}(\Omega)}\leq \left\|\nabla D_{t}^{\kappa}p\right\|_{L^{2}(\Omega)}+\left\|D_{t} ^{\kappa}p\right\|_{L^{2}(\Omega)}\] \[\leq \left\|\rho D_{t}^{\kappa+1}u\right\|_{L^{2}(\Omega)}+\left\| \rho[D_{t}^{\kappa},\,\frac{1}{\rho}]\nabla p\right\|_{L^{2}(\Omega)}\] \[+\left\|[D_{t}^{\kappa},\,\nabla]p\right\|_{L^{2}(\Omega)}+\left\| D_{t}^{\kappa}p\right\|_{L^{2}(\Omega)}\] \[\leq \left\|D_{t}^{\kappa+1}u\right\|_{L^{2}(\Omega)}+C(\mathcal{F}).\] Thus, by (4.1) and (6.5), we have \[\left\|D_{t}^{\kappa}p\right\|_{H^{1}(\Omega)}\leq C(M_{0})+TC(M). \tag{6.6}\] Assume that \(1\leq l\leq\kappa-1\) and \[\sum_{k=l+1}^{\kappa+1}\left\|D_{t}^{k}p\right\|_{H^{\kappa+1-k}(\Omega)}\leq C (M_{0})+TC(M). \tag{6.7}\] Then we shall prove that \[\left\|D_{t}^{l}p\right\|_{H^{\kappa+1-l}(\Omega)}\leq C(M_{0})+TC(M). 
\tag{6.8}\] When \(1\leq l\leq\kappa-1\), the equation for \(D_{t}^{l}p\) is \[\begin{cases}\Delta D_{t}^{l}p=\frac{\rho}{\gamma p}D_{t}^{l+2}p+[\Delta,\,D_{ t}^{l}]p-[\frac{\rho}{\gamma p},\,D_{t}^{l}]D_{t}^{2}p\\ \qquad-D_{t}^{l}\Big{\{}\rho\mathrm{tr}\,(\nabla u)^{2}\Big{\}}+D_{t}^{l} \mathcal{M}_{p},&\text{in }\Omega^{\pm},\\ D_{t}^{l}p=\partial_{t}^{l}q(t),&\text{on }\Gamma_{f},\\ \partial_{3}D_{t}^{l}p=[\partial_{3},D_{t}^{l}]p,&\text{on }\Gamma^{\pm}, \end{cases} \tag{6.9}\] Since \[[\Delta,\,D_{t}^{l}]p =\sum_{m=0}^{l-1}D_{t}^{m}[\Delta,\,D_{t}]D_{t}^{l-1-m}p\] \[=\sum_{m=0}^{l-1}D_{t}^{m}\Big{\{}\Delta u_{J}\partial_{J}D_{t}^{l -1-m}p+2\nabla u_{J}\partial_{J}D_{t}^{l-1-m}p\Big{\}}\] \[=\sum_{m=0}^{l-1}\sum_{n=0}^{m}\Big{\{}D_{t}^{n}\Delta u_{J}\cdot D _{t}^{m-n}\partial_{J}D_{t}^{l-1-m}p+2D_{t}^{n}\nabla u_{J}\cdot D_{t}^{m-n} \partial_{J}D_{t}^{l-1-m}p\Big{\}},\] \[-[\frac{\rho}{\gamma p},\,D_{t}^{l}]D_{t}^{2}p =\sum_{m=0}^{l-1}D_{t}^{m}[D_{t},\,\frac{\rho}{\gamma p}]D_{t}^{l +1-m}p\] \[=\sum_{m=0}^{l-1}D_{t}^{m}\Big{\{}D_{t}(\frac{\rho}{\gamma p})D_{ t}^{l+1-m}p\Big{\}}\] \[=\sum_{n=0}^{l-1}D_{t}^{n+1}(\frac{\rho}{\gamma p})\cdot D_{t}^{ l+1-n}p,\] \[D_{t}^{l}(\rho\mathrm{tr}\,(\nabla u)^{2})= D_{t}^{l}(\rho\partial_{J}u_{K}\partial_{K}u_{J})\] \[=\sum_{m=0}^{l}\sum_{n=0}^{l-m}D_{t}^{l-m-n}\rho\cdot D_{t}^{m} \partial_{J}u_{K}\cdot D_{t}^{n}\partial_{K}u_{J},\] we have \[\big{\|}\Delta D_{t}^{l}p\big{\|}_{H^{\kappa-1-l}(\Omega)}\leq C(\mathcal{F}) \big{\|}D_{t}^{l+2}p\big{\|}_{H^{\kappa-1-l}(\Omega)}+C(\mathcal{F}). \tag{6.10}\] Similarly, \[[\partial_{3},\,D_{t}^{l}]p =\sum_{m=0}^{l-1}D_{t}^{m}[\partial_{3},\,D_{t}]D_{t}^{l-1-m}p\] \[=\sum_{m=0}^{l-1}D_{t}^{m}\Big{\{}\partial_{3}u_{J}\partial_{J}D_ {t}^{l-1-m}p\Big{\}}\] \[=\sum_{m=0}^{l-1}\sum_{n=0}^{m}D_{t}^{n}\partial_{3}u_{J}\cdot D_ {t}^{m-n}\partial_{J}D_{t}^{l-1-m}p,\] yields that \[\big{\|}\partial_{3}D_{t}^{l}p\big{\|}_{H^{\kappa-\frac{1}{2}-l}(\Gamma^{\pm} )}\leq C(\mathcal{F}). \tag{6.11}\] On the interface \(\Gamma_{f}\), the fact that \(p|_{\Gamma_{f}}=q(t)\) which is independent on \(\overline{x}\) infers that \[\big{\|}D_{t}^{l}p\big{\|}_{H^{\kappa+\frac{1}{2}-l}(\Gamma_{f})}= \big{\|}D_{t}^{l}p\big{\|}_{L^{2}(\Gamma_{f})} \tag{6.12}\] \[\leq C(\big{\|}f\big{\|}_{H^{\kappa-\frac{1}{2}}})\big{\|}D_{t}^{l}p \big{\|}_{H^{1}(\Omega)}\leq C(\big{\|}f\big{\|}_{H^{\kappa-\frac{1}{2}}})C( \mathcal{F}).\] Therefore, the standard elliptic theory can be applied to (6.9) to get that \[\big{\|}D_{t}^{l}p\big{\|}_{H^{\kappa+1-l}(\Omega)}\leq C(\big{\|}f\big{\|}_{H^{\kappa-\frac{1}{2}}})\Big{\{}\big{\|} \Delta D_{t}^{l}p\big{\|}_{H^{\kappa-1-l}(\Omega)} \tag{6.13}\] \[+\big{\|}\partial_{3}D_{t}^{l}p\big{\|}_{H^{\kappa-\frac{1}{2}-l}( \Gamma^{\pm})}+\big{\|}D_{t}^{l}p\big{\|}_{H^{\kappa+\frac{1}{2}-l}(\Gamma_{f} )}\Big{\}}\] \[\leq C(M_{0})+TC(M).\] where we have used (6.10)-(6.12), the induction assumption (6.7), (4.1) and (5.1). The case of \(l=0\) follows in a similar way. Notice that we can only get \(\left\|p\right\|_{H^{\kappa}(\Omega)}\) instead of \(\left\|p\right\|_{H^{\kappa+1}(\Omega)}\) due to limited regularity of the interface \(f\). Thus, (6.3) is proved. To prove (6.4), since \(D_{t}u=-\frac{\nabla p}{\rho}\) and \[D_{t}^{l+1}u=-D_{t}^{l}(\frac{1}{\rho}\nabla p)=-\frac{1}{\rho}\nabla D_{t}^{ l}p-[D_{t}^{l},\,\frac{1}{\rho}\nabla]p,\] the estimates of \(u\) in (6.4) follow from (6.3) and (4.1). 
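The repeated commutator expansions above (and in the proofs of Propositions 4.1 and 4.3) all rest on one elementary identity, which we record here as a minimal sketch for the reader's convenience. For a scalar function \(g\) and \(D_{t}=\partial_{t}+u_{K}\partial_{K}\),

\[[\partial_{J},\,D_{t}]g=\partial_{J}(\partial_{t}g+u_{K}\partial_{K}g)-(\partial_{t}+u_{K}\partial_{K})\partial_{J}g=(\partial_{J}u_{K})\,\partial_{K}g,\]

and consequently

\[[\Delta,\,D_{t}]g=(\Delta u_{K})\,\partial_{K}g+2(\partial_{J}u_{K})\,\partial_{J}\partial_{K}g.\]

Higher-order commutators such as \([\Lambda^{\kappa-l},\,D_{t}]\), \([\Delta,\,D_{t}^{l}]\) and \([\partial_{3},\,D_{t}^{l}]\) are then obtained by iterating these identities and applying the Leibniz rule (recall that the binomial coefficients are omitted throughout).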
### Full estimates of \(u\)

The full estimates of \(u\) can be recovered by Lemma A.2 and the fact that

\[\overline{\partial}_{i}u\cdot N=\overline{\partial}_{i}D_{t}f-\overline{\partial}_{j}f\overline{\partial}_{i}u_{j}=D_{t}\overline{\partial}_{i}f,\qquad i=1,\,2. \tag{6.14}\]

**Proposition 6.2**.: _For \(t\in[0,\,T]\), there holds that_

\[\left\|u\right\|_{H^{\kappa}(\Omega)}\leq C(M_{0})+TC(M). \tag{6.15}\]

Proof.: We apply (A.3) and (6.14) to get that

\[\left\|u\right\|_{H^{\kappa}(\Omega)}\leq C\big{(}\big{\|}f\big{\|}_{H^{\kappa-\frac{1}{2}}}\big{)}\Big{\{}\big{\|}\nabla\times u\big{\|}_{H^{\kappa-1}(\Omega)}+\left\|\nabla\cdot u\right\|_{H^{\kappa-1}(\Omega)}+\left\|\overline{\partial}_{i}u\cdot N\right\|_{H^{\kappa-\frac{3}{2}}(\mathbb{T}^{2})}+\left\|u\right\|_{L^{2}(\Omega)}\Big{\}}\]
\[\leq C\big{(}\big{\|}f\big{\|}_{H^{\kappa-\frac{1}{2}}}\big{)}\Big{\{}\big{\|}\omega\big{\|}_{H^{\kappa-1}(\Omega)}+\big{\|}\frac{1}{\gamma p}D_{t}p\big{\|}_{H^{\kappa-1}(\Omega)}+\left\|D_{t}\overline{\partial}_{i}f\right\|_{H^{\kappa-\frac{3}{2}}(\mathbb{T}^{2})}+\left\|u\right\|_{L^{2}(\Omega)}\Big{\}}\]
\[\leq C(M_{0})+TC(M),\]

where we have used (4.1), (4.5), (4.9) and (5.1), together with \(\big{\|}f\big{\|}_{H^{\kappa-\frac{1}{2}}}\leq C(M_{0})+TC(M)\).

## Appendix A Elliptic estimates

For the one-phase elliptic system

\[\begin{cases}\nabla\times u=\omega,\quad\nabla\cdot u=\sigma,&\text{in }\Omega_{f}^{-},\\ u\cdot N=\theta,&\text{on }\Gamma_{f},\\ u\cdot n_{-}=0,\quad\int_{\mathbb{T}^{2}}u_{j}\mathrm{d}\overline{x}=\alpha_{j}\,(j=1,\,2),&\text{on }\Gamma^{-},\end{cases}\] (A.1)

we have the following existence result given by Proposition 5.1 in [34] (see also [8, 33]):

**Lemma A.1**.: _Assume that \(f\in H^{\kappa-\frac{1}{2}}(\mathbb{T}^{2})\) with \(\kappa>\frac{5}{2}\). For \(s\in[2,\,\kappa]\), let \((\omega,\,\sigma)\in H^{s-2}(\Omega_{f}^{-})\) and \(\theta\in H^{s-\frac{3}{2}}(\mathbb{T}^{2})\) be such that_

\[\int_{\Omega_{f}^{-}}\sigma\mathrm{d}x=\int_{\mathbb{T}^{2}}\theta\mathrm{d}\overline{x},\]
\[\nabla\cdot\omega=0,\text{ in }\Omega_{f}^{-},\quad\int_{\Gamma^{-}}\omega_{3}\mathrm{d}\overline{x}=0.\]

_Then there exists a unique solution \(u\in H^{s-1}(\Omega_{f}^{-})\) to the system (A.1) such that_

\[\big{\|}u\big{\|}_{H^{s-1}(\Omega_{f}^{-})}\leq C\big{(}\big{\|}f\big{\|}_{H^{s-\frac{1}{2}}}\big{)}\Big{\{}\big{\|}(\omega,\,\sigma)\big{\|}_{H^{s-2}(\Omega_{f}^{-})}+\big{\|}\theta\big{\|}_{H^{s-\frac{3}{2}}(\Gamma_{f})}+|\alpha_{1}|+|\alpha_{2}|\Big{\}}.\] (A.2)

The regularity of the solution of the one-phase elliptic system (A.1) was improved in [8] (see also [39]) by using tangential derivatives for the boundary condition on the surface \(\Gamma_{f}\):

**Lemma A.2**.: _Assume that \(f\in H^{\kappa-\frac{1}{2}}(\mathbb{T}^{2})\) with \(\kappa>\frac{5}{2}\). For \(s\in[2,\,\kappa]\), there holds that_

\[\big{\|}u\big{\|}_{H^{s}(\Omega_{f}^{-})}\leq C\big{(}\big{\|}f\big{\|}_{H^{\kappa-\frac{1}{2}}}\big{)}\Big{\{}\big{\|}\nabla\times u\big{\|}_{H^{s-1}(\Omega_{f}^{-})}+\big{\|}\nabla\cdot u\big{\|}_{H^{s-1}(\Omega_{f}^{-})}+\big{\|}\overline{\partial}u\cdot N\big{\|}_{H^{s-\frac{3}{2}}(\Gamma_{f})}+\big{\|}u\big{\|}_{L^{2}(\Omega_{f}^{-})}\Big{\}}.\] (A.3)

Clearly, these two results also hold for the one-phase elliptic systems in \(\Omega_{f}^{+}\) in a similar fashion.

## Appendix B Paradifferential operators and commutator estimates

In this appendix, we shall recall some basic facts on paradifferential operators from [26]. We first introduce the symbols with limited spatial smoothness. Let \(W^{k,\infty}(\mathbb{R}^{d})\) be the usual Sobolev spaces for \(k\in\mathbb{N}\).
**Definition B.1**.: Given \(\mu\in[0,\,1]\) and \(m\in\mathbb{R}\), we denote by \(\Gamma_{\mu}^{m}(\mathbb{R}^{d})\) the space of locally bounded functions \(a(x,\,\xi)\) on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\backslash\{0\}\), which are \(C^{\infty}\) with respect to \(\xi\) for \(\xi\neq 0\) such that, for all \(\alpha\in\mathbb{N}^{d}\) and \(\xi\neq 0\), the function \(x\to\partial_{\xi}^{\alpha}a(x,\,\xi)\) belongs to \(W^{\mu,\infty}\) and there exists a constant \(C_{\alpha}\) such that

\[\big{\|}\partial_{\xi}^{\alpha}a(\cdot,\,\xi)\big{\|}_{W^{\mu,\infty}}\leq C_{\alpha}(1+|\xi|)^{m-|\alpha|},\qquad\forall\,|\xi|\geq\frac{1}{2}.\]

The seminorm of the symbol is defined as

\[M_{\mu}^{m}(a):=\sup_{|\alpha|\leq\frac{d}{2}+1+\mu}\sup_{|\xi|\geq\frac{1}{2}}\big{\|}(1+|\xi|)^{-m+|\alpha|}\partial_{\xi}^{\alpha}a(\cdot,\,\xi)\big{\|}_{W^{\mu,\infty}}.\]

If \(a\) is a function independent of \(\xi\), then

\[M_{\mu}^{0}(a)=\big{\|}a\big{\|}_{W^{\mu,\infty}}.\]

**Definition B.2**.: Given a symbol \(a\), the paradifferential operator \(T_{a}\) is defined by

\[\widehat{T_{a}u}(\xi):=(2\pi)^{-d}\int_{\mathbb{R}^{d}}\chi(\xi-\eta,\,\eta)\widehat{a}(\xi-\eta,\,\eta)\psi(\eta)\widehat{u}(\eta)\mathrm{d}\eta,\] (B.1)

where \(\widehat{a}\) is the Fourier transform of \(a\) with respect to the first variable. \(\chi(\xi,\,\eta)\in C^{\infty}(\mathbb{R}^{d}\times\mathbb{R}^{d})\) is an admissible cutoff function, that is, there exist \(0<\varepsilon_{1}<\varepsilon_{2}\) such that

\[\chi(\xi,\,\eta)=1\quad\text{for }|\xi|\leq\varepsilon_{1}|\eta|,\qquad\chi(\xi,\,\eta)=0\quad\text{for }|\xi|\geq\varepsilon_{2}|\eta|,\]

and

\[|\partial_{\xi}^{\alpha}\partial_{\eta}^{\beta}\chi(\xi,\,\eta)|\leq C_{\alpha,\beta}(1+|\eta|)^{-|\alpha|-|\beta|}\quad\text{for }(\xi,\,\eta)\in\mathbb{R}^{d}\times\mathbb{R}^{d}.\]

The cutoff function \(\psi(\eta)\in C^{\infty}(\mathbb{R}^{d})\) satisfies

\[\psi(\eta)=0\quad\text{for }|\eta|\leq 1,\qquad\psi(\eta)=1\quad\text{for }|\eta|\geq 2.\]

The admissible cutoff function \(\chi(\xi,\,\eta)\) can be chosen as

\[\chi(\xi,\,\eta)=\sum_{k=0}^{\infty}\zeta_{k-3}(\xi)\varphi_{k}(\eta),\]

where \(\zeta(\xi)=1\) for \(|\xi|\leq 1.1\), \(\zeta(\xi)=0\) for \(|\xi|\geq 1.9\), and

\[\begin{cases}\zeta_{k}(\xi)=\zeta(2^{-k}\xi)&\text{for }k\in\mathbb{Z},\\ \varphi_{0}=\zeta,\quad\varphi_{k}=\zeta_{k}-\zeta_{k-1}&\text{for }k\geq 1.\end{cases}\]

We also introduce the Littlewood-Paley operators \(\Delta_{k}\), \(S_{k}\) defined as

\[\Delta_{k}u=\mathcal{F}^{-1}(\varphi_{k}\widehat{u})\quad\text{for }k\geq 0,\qquad\Delta_{k}u=0\quad\text{for }k<0,\]
\[S_{k}u=\sum_{l\leq k}\Delta_{l}u\quad\text{for }k\in\mathbb{Z}.\]

When the symbol \(a\) depends only on the first variable \(x\) in \(T_{a}u\), we take \(\psi=1\) in (B.1). Then \(T_{a}u\) is just the usual Bony paraproduct defined as

\[T_{a}u=\sum_{k\geq 0}S_{k-3}a\Delta_{k}u.\] (B.2)

We have the following Bony paraproduct decomposition:

\[au=T_{a}u+T_{u}a+R(a,\,u),\] (B.3)

where the remainder term \(R(a,\,u)\) is

\[R(a,\,u)=\sum_{|k-l|\leq 2}\Delta_{k}a\Delta_{l}u.\]

**Lemma B.3**.: _There holds that_

1. _If_ \(s\in\mathbb{R}\) _and_ \(\sigma<\frac{d}{2}\)_, then_ \[\big{\|}T_{a}u\big{\|}_{H^{s}}\lesssim\min\{\big{\|}a\big{\|}_{L^{\infty}}\big{\|}u\big{\|}_{H^{s}},\,\big{\|}a\big{\|}_{H^{\sigma}}\big{\|}u\big{\|}_{H^{s+\frac{d}{2}-\sigma}},\,\big{\|}a\big{\|}_{H^{\frac{d}{2}}}\big{\|}u\big{\|}_{H^{s+}}\}.\]
2.
_If_ \(s>0\) _and_ \(s_{1},\,s_{2}\in\mathbb{R}\) _with_ \(s_{1}+s_{2}=s+\frac{d}{2}\)_, then_ \[\big{\|}R(a,\,u)\big{\|}_{H^{s}}\lesssim\big{\|}a\big{\|}_{H^{s_{1}}}\big{\|}u\big{\|}_{H^{s_{2}}}.\]
3. _If_ \(s>0\)_,_ \(s_{1}\geq s\)_,_ \(s_{2}\geq s\) _and_ \(s_{1}+s_{2}=s+\frac{d}{2}\)_, then_ \[\big{\|}au\big{\|}_{H^{s}}\lesssim\big{\|}a\big{\|}_{H^{s_{1}}}\big{\|}u\big{\|}_{H^{s_{2}}}.\] (B.4)

There is also the symbolic calculus of paradifferential operators in Sobolev spaces.

**Lemma B.4**.: _Let \(m,\,m^{\prime}\in\mathbb{R}\)._

1. _If_ \(a\in\Gamma_{0}^{m}(\mathbb{R}^{d})\)_, then for any_ \(s\in\mathbb{R}\)_,_ \[\big{\|}T_{a}\big{\|}_{H^{s}\to H^{s-m}}\lesssim M_{0}^{m}(a).\]
2. _If_ \(a\in\Gamma_{\rho}^{m}(\mathbb{R}^{d})\) _and_ \(b\in\Gamma_{\rho}^{m^{\prime}}(\mathbb{R}^{d})\) _for_ \(\rho>0\)_, then for any_ \(s\in\mathbb{R}\)_,_ \[\big{\|}T_{a}T_{b}-T_{a\sharp b}\big{\|}_{H^{s}\to H^{s-m-m^{\prime}+\rho}}\lesssim M_{\rho}^{m}(a)M_{0}^{m^{\prime}}(b)+M_{0}^{m}(a)M_{\rho}^{m^{\prime}}(b),\] _where_ \[a\sharp b=\sum_{|\alpha|<\rho}\partial_{\xi}^{\alpha}a(x,\,\xi)D_{x}^{\alpha}b(x,\,\xi),\qquad D_{x}=-\mathrm{i}\partial_{x}.\]
3. _If_ \(a\in\Gamma_{\rho}^{m}(\mathbb{R}^{d})\) _for_ \(\rho\in(0,\,1]\)_, then for any_ \(s\in\mathbb{R}\)_,_ \[\big{\|}(T_{a})^{*}-T_{a^{*}}\big{\|}_{H^{s}\to H^{s-m+\rho}}\lesssim M_{\rho}^{m}(a),\] _where_ \((T_{a})^{*}\) _is the adjoint operator of_ \(T_{a}\) _and_ \(a^{*}\) _is the complex conjugate of the symbol_ \(a\)_._

To estimate commutators, we recall a lemma from [1] (Lemma 2.15).

**Lemma B.5**.: _Consider a symbol \(p=p(t,\,x,\,\xi)\) which is homogeneous of order \(m\). There holds that_

\[\big{\|}[T_{p},\,\partial_{t}+T_{u}\cdot\nabla]u\big{\|}_{H^{m}}\lesssim\Big{\{}M_{0}^{m}(p)\big{\|}u\big{\|}_{C^{1+}_{\star}}+M_{0}^{m}(D_{t}p)\Big{\}}\big{\|}u\big{\|}_{H^{m}}.\] (B.5)

## Declaration

**Conflict of interest**: On behalf of all authors, the corresponding author states that there is no conflict of interest. Our manuscript has no associated data.
2308.16621
Meta-analysis of literature data in metal additive manufacturing: What can we (and the machine) learn from reported data?
Obtaining in-depth understanding of the relationships between the additive manufacturing (AM) process, microstructure and mechanical properties is crucial to overcome barriers in AM. In this study, a database of metal AM was created from a large number of literature studies. Subsequently, meta-analyses of the data were undertaken to provide insights into whether such relationships are well reflected in the literature data. The analyses help reveal the bias and what the data tells us, and to what extent machine learning (ML) can learn from the data. The first major bias is associated with common practices in identifying the process based on optimizing the consolidation. Most reports were for consolidation, while data on microstructure and mechanical properties were significantly scarcer. In addition, only high consolidation values were provided, so ML was not able to learn the full spectrum of the process - consolidation relationship. The common identification of process maps based on only consolidation also poses another bias, as mechanical properties that ultimately govern the quality of an AM build are controlled not only by the consolidation, but also by the microstructure. Meta-analysis of the literature data also shows weak correlation of the process with consolidation and mechanical properties. This weak correlation is attributed to the stated biases and the non-monotonic and non-linear relationships between the process and quality variables. Fortunately, trained ML models capture well the influence and interactions between process parameters and quality variables, and predict the yield stress accurately, suggesting that the correlation between process, microstructure and yield strength is well reflected in the data. Lastly, due to the current limitation in the process map identification, we propose to identify the process map on the basis of not only the consolidation, but also mechanical properties.
Raymond Wong, Anh Tran, Bogdan Dovgyy, Claudia Santos Maldonado, Minh-Son Pham
2023-08-31T10:29:26Z
http://arxiv.org/abs/2308.16621v1
Meta-analysis of literature data in metal additive manufacturing: What can we (and the machine) learn from reported data?

###### Abstract

Obtaining in-depth understanding of the relationships between the additive manufacturing process, microstructure and mechanical properties is crucial to overcome barriers in additive manufacturing (AM). Over the past decades, there have been significant studies in AM, providing a considerable amount of data available for examination of such relationships broadly across many literature studies. In this study, a database of metal AM was created thanks to a large number of literature studies. Subsequently, meta-analyses of the data were undertaken to provide insights into whether such relationships are well reflected in the literature data. The analyses help reveal the bias and what the data tells us, and to what extent machine learning can learn from the data. The first major bias is associated with common practices in identifying the process based on optimizing the consolidation. Most data reports were for consolidation, while data on microstructure and mechanical properties were significantly scarcer. In addition, only high consolidation values were reported. Machine learning trained on the data was therefore not able to learn the process - consolidation relationship in the medium and low ranges of consolidation. The common identification of process maps based on only consolidation also poses another bias because mechanical properties that ultimately govern the quality of an AM build are controlled not only by the consolidation, but also by the microstructure. However, the number of studies quantifying the microstructure was extremely low, limiting the learning of the microstructure - mechanical properties relationships. Meta-analysis of the literature data also shows weak correlation between input (i.e. process parameters) and output (i.e. consolidation and mechanical properties). This weak correlation is attributed to the stated biases and the highly non-monotonic and non-linear relationships between the process and quality variables. Fortunately, machine learning models trained on the data capture well which process parameters, and which interactions between them, are influential in the output, and predict the yield stress accurately, suggesting that the correlation between process, microstructure and yield strength is well reflected in the data. Last but not least, due to the current limitation in the process map identification, we propose to identify the process map on the basis of not only the consolidation, but also mechanical properties. Such an identification shows that 316L and Inconel 718 have a much larger process map (i.e. are highly printable) in comparison to Ti6Al4V, Hastelloy X and Inconel 625.

Data analytics, Machine learning, Processing maps, Additive manufacturing, Alloys

## 1 Introduction

Additive manufacturing (AM) has the potential to revolutionize the manufacturing industry by offering an efficient and cost-effective method for fabricating complex structures, providing advantages over other manufacturing techniques [1]. Despite its advantages, metal AM presents major challenges due to the formation of defects and undesirable microstructure, affecting the mechanical performance and reliability of final products in applications [2, 3].
In particular, extreme interactions between the energy beam and materials and the associated complex thermal conditions in AM cause difficulties in understanding the underlying relationship between alloy composition, microstructure and properties of additively manufactured alloys [4, 5, 6, 7]. Overcoming such challenges requires fundamental understanding of the process-microstructure-property relationship that will assist the development of (1) forward engineering (predicting the mechanical properties of a given alloy for a specific set of process parameters), and (2) reverse engineering (identifying the alloy composition and corresponding process parameters for a given set of properties). Over the past decade, there has been a considerable number of studies reporting the relationships between process, microstructure and properties (PMP). Meta-analysing the considerably large body of data reported by many research groups has the potential to unravel such important PMP relationships and to identify the good and not-so-good practices in studying metal additive manufacturing. Such meta-analysis will also allow us to identify the bias and associated implications in our learning of the PMP relationships. Meta-analysis has become powerful in providing invaluable insights that may not be immediately apparent, thanks to advances in data analytics (DA) and machine learning (ML) [8, 9, 10]. Despite the significant use of DA (and ML) in many fields, such as healthcare and online shopping, the use of data analytics for AM is still in its early stages [11]. Although some success has been shown in these data-driven studies (for example, mechanical properties based on simulated microstructure or process parameters [12, 13, 14, 15], melt pool dimensions for a given composition or process parameters [16, 17, 18, 19, 20, 21, 22], and porosity based on process parameters used or images of the build [23, 24, 25, 26, 27, 28, 29]), such studies were only based on limited sets of conditions. While comprehensive material databases such as Materials Project, AFLOWLIB and OQMD are available for functional material properties, there has been little effort to develop a similar database targeted at AM [30, 31, 32]. Notably, NIST and the now discontinued Citrination have placed efforts to address the lack of a comprehensive database for AM. Nevertheless, it is still inadequate to be used for big data analysis at the scale of other fields [33, 34]. Fortunately, there is a considerable amount of data reported in literature studies for AM from the past decades [35, 36, 37, 38, 39, 40]. Such literature studies contain valuable data reflecting some key aspects of the PMP relationships in AM alloys. However, there has been no systematic collection and structuring of the rich data available in literature. This study firstly creates a considerable database by collecting, and subsequently structuring and organising, data available in literature with focus on laser powder bed fusion (LPBF), which is currently the most used AM method for fabricating metallic alloys. Secondly, the study carries out in-depth examination of the data to identify biases in the reported data that may hinder the learning of the PMP relationship for AM alloys. This will be achieved by conducting correlation, principal component and sensitivity analyses.
Furthermore, ML algorithms will be used to test the current performance of the obtained dataset and provide bases to analyse the sensitivity of process parameters on the predicted consolidation and mechanical properties of the trained ML models. Last but not least, the study will identify process window maps that are optimized on the basis of not only the consolidation, but also key mechanical properties that are crucial for structural applications. Consequently, the use of DA and ML on a significantly large number of studies can provide invaluable insights relating to the PMP relationships, in particular processing parameters and their effects on the printed product, assisting AM users in optimizing processing parameters. Such knowledge will also assist AM users in assessing the printability of existing alloys and accelerating the search for new printable alloys with desired properties [41, 42, 43, 44, 45, 46].

## 2 Methods

An extensive literature search has been undertaken to create significant datasets from published data reported in literature. Over _2000_ data entries have been obtained for commonly printed alloys such as Ti6Al4V, Inconel 718, Inconel 625, Hastelloy X and 316L fabricated using laser powder bed fusion. The data collection from literature reports included processing parameters, consolidation and mechanical properties of as-built conditions, and post-processing information if available (see **Figure 2**). Powder bed fusion (PBF) is currently the most commonly used AM technique and has the most comprehensive published data in comparison to other variations such as DED. Therefore, the data collection was done for PBF with focus on LPBF. The knowledge gained from this analysis will also be applicable to electron beam powder bed fusion. The work focuses on exploratory data analysis (EDA) to establish relations between different variables in the dataset in order to understand the characteristics (including biases and limitations of current data reporting practices) and underlying process - consolidation - mechanical properties relationships hidden in the obtained datasets. This includes analysis of the reported data and uncovering relations between input parameters and output properties (including consolidation, yield stress (YS), ultimate tensile stress (UTS) and elongation). Following this, analysis of optimized process windows has been undertaken to gain insights into the relation between process parameters, material and properties of the selected alloys. Finally, trained supervised ML models have been used to investigate the impact of the current limits of the reported data (including common practices) on what the machine can learn.

### Explanatory Data Analysis

Correlation analyses have been conducted to investigate the underlying correlations, issues and characteristics within the dataset in order to uncover deeper relationships between data features. The most commonly used techniques are Pearson's correlation coefficient and Spearman's correlation coefficient. Pearson's is used to capture the linear correlation between two variables and is defined as

\[r_{p}=\frac{\sum(x-\overline{x})(y-\overline{y})}{\sqrt{\sum(x-\overline{x})^{2}\sum(y-\overline{y})^{2}}} \tag{1}\]

where \(x\) and \(y\) are the individual values of the two variables, and \(\overline{x}\) and \(\overline{y}\) denote the means of the two variables in the dataset [47, 48].
Spearman's correlation is used to capture the strength of a monotonic relation between two variables and is given by

\[r_{s}=1-\frac{6\sum d_{i}^{2}}{N(N^{2}-1)} \tag{2}\]

where \(d_{i}\) denotes the difference between the ranks of the two variables for the \(i\)-th observation, and \(N\) denotes the sample size [47, 48]. The two correlations are typically used in conjunction with each other, as each captures a type of correlation which the other would not be able to capture. The established relationship between the variation of processing parameters and the resulting defects suggests that the processing parameters are subject to multicollinearity. An example of such correlation has been demonstrated by Gordon et al.: when optimizing laser power and speed, correlation arises from the underlying dependencies of the objective when optimizing print quality by balancing the two parameters [49]. Multicollinearity among the processing parameters (predictors) may have a negative impact on the performance of ML models, as changes in one process parameter will inherently influence the values of one or more other process parameters. Hence, the effect of individual changes in one process parameter on the dependent variables will be difficult to differentiate from the effects of the other process parameters [50]. To address this, the variance inflation factor (VIF) has been used to quantify the degree of multicollinearity. VIF measures the degree of collinearity of each predictor by forming a regression of one predictor on all the other predictors, calculated as follows

\[VIF_{n}=\frac{1}{1-R_{n}^{2}} \tag{3}\]

where \(R_{n}^{2}\) is the coefficient of determination of the auxiliary regression for the _n_th predictor. A VIF of 1 indicates no collinearity, whereas a VIF exceeding 5 or 10 is, as a standard practice, taken to indicate high levels of multicollinearity. A minimal sketch of how these correlation and VIF computations can be carried out is given below.
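The following sketch illustrates how Pearson's and Spearman's coefficients and the VIF can be computed with standard Python libraries; the file name and column names are hypothetical placeholders for the process-parameter and quality variables in the compiled dataset, not the actual names used in this study.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr, spearmanr
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical table of process parameters and one quality variable
df = pd.read_csv("am_database.csv").dropna()  # placeholder file name
params = ["laser_power", "laser_speed", "hatch_spacing", "layer_thickness"]

# Pearson (linear) and Spearman (monotonic) correlation with yield stress
for col in params:
    r_p, _ = pearsonr(df[col], df["yield_stress"])
    r_s, _ = spearmanr(df[col], df["yield_stress"])
    print(f"{col}: Pearson r = {r_p:.2f}, Spearman r = {r_s:.2f}")

# VIF: regress each predictor on all the others (column 0 is the constant)
X = sm.add_constant(df[params])
for n, col in enumerate(params, start=1):
    vif = variance_inflation_factor(X.values, n)
    print(f"VIF({col}) = {vif:.1f}")  # values above 5-10 flag multicollinearity
```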
### Process Window Identification

To better understand the influence of multiple process parameters on the print quality, windows (i.e. heat maps) of process parameters optimized for a separate output (or a combined output that consolidates all considered separate output variables) have been generated. This approach analyzes the raw data without the influence of ML and can provide users with predictions of output qualities purely based on reported data. Because laser power and laser speed are the most varied input parameters in literature (**Figure 3b**), the heatmaps were constructed on the basis of these two input variables. Two types of heatmaps have been generated, \(Hmap_{1}\) and \(Hmap_{2}\). \(Hmap_{1}\) is based on an individual quality (i.e. output), either YS, average work-hardening, elongation or consolidation (\(z\)-axis), for a given laser speed (\(x\)-axis) and laser power (\(y\)-axis), such that four \(Hmap_{1}\) maps are generated, one for each output variable. As discussed earlier, consolidation (or any single quality variable) is not sufficient to reflect the quality. Therefore, \(Hmap_{2}\) is constructed to identify a process window that can achieve a combined variable including all four considered individual output variables. The generation of \(Hmap_{1}\) consists of interpolating the data points within their convex hull to obtain \(z\) values at the point of interest \(P\) within this region. As the data points are not uniformly distributed, Barycentric interpolation was chosen over other methods as it accounts for the distances between neighboring data points, leading to more accurate and smoother estimates of the function value. In particular, the method employs a series of interpolations over small regions of parameterized triangles formed between data points [51, 52, 53]. The interpolation of \(P\) uses the Barycentric coordinates (\(\lambda\)) of the parameterized triangles, given by

\[\lambda_{n}=\frac{A_{n}}{A_{total}} \tag{4}\]

such that \(\lambda\) represents the proportional size of the interior triangles formed by connecting point \(P\) with each vertex of the parameterized triangle, where \(A_{n}\) denotes the area of an interior triangle and \(A_{total}\) represents the total area of the parameterized triangle. An illustration of the Barycentric coordinates is shown in **Figure 1**.

Figure 1: Illustration of Barycentric interpolation to obtain the value at point \(P\), given three known \(Z\) values, corresponding to YS, average work-hardening, elongation or consolidation.

Using these coordinates, \(P\) is determined as a weighted average dependent on the distances of the neighboring \(z\) values as follows

\[P=\sum_{n=1}^{3}\lambda_{n}z_{n} \tag{5}\]

To generate \(Hmap_{2}\), all interpolated values for \(Hmap_{1}\) of the same material have been normalized with a min-max scaler and summed with equal weighting to create a map which shows the optimized process parameter regions. Thus, the map considers process - consolidation - mechanical properties relationships as opposed to single properties like consolidation. The min-max scaler ensures that all values on the map range from 0 to 1, such that the scaled output qualities contribute equally to the summarized map [54]. However, if any output quality is considered to be more important, a higher weighting factor can be added to allow more contribution from that individual output to the combined quality. A minimal sketch of this interpolation and map combination is given below.
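The sketch below shows one way to realize equations (4)-(5) in Python: `scipy.interpolate.griddata` with `method="linear"` performs exactly this barycentric (piecewise-linear) interpolation on a Delaunay triangulation of the scattered data points. The variable names, grid resolution and equal weighting are assumptions for illustration, not the actual implementation details of this study.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical scattered literature data for one alloy:
# speeds, powers and four quality arrays (ys, wh, elong, cons)
points = np.column_stack([speeds, powers])
grid_v, grid_p = np.meshgrid(np.linspace(speeds.min(), speeds.max(), 200),
                             np.linspace(powers.min(), powers.max(), 200))

# Hmap1: barycentric interpolation on the Delaunay triangulation;
# NaN is returned outside the convex hull of the data
hmap1 = {name: griddata(points, z, (grid_v, grid_p), method="linear")
         for name, z in [("YS", ys), ("work_hardening", wh),
                         ("elongation", elong), ("consolidation", cons)]}

# Hmap2: min-max scale each map to [0, 1], then sum with equal weights
def minmax(h):
    return (h - np.nanmin(h)) / (np.nanmax(h) - np.nanmin(h))

hmap2 = sum(minmax(h) for h in hmap1.values())
```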
Following this, the data have been split into training, validation and test sets in an 80%/10%/10% ratio to evaluate the final performance of the model. About 80% of the more than 2000 data entries obtained from the literature were dropped due to incompleteness, for example missing mechanical property data. Efforts to impute the missing data were undertaken; however, the significant amount of incomplete data resulted in poor performance. Therefore, the model has been trained on over 300 points (of complete data) after preprocessing. To produce an effective model, all aforementioned models were hyperparameter-tuned with 500 trials using an RMSE loss function; this loss function penalizes larger errors more heavily, driving the tuning to suppress large outlier errors during the training and prediction phases [58]. ### Sensitivity Analysis It is common to simply trust an ML model because of high-accuracy predictions and to omit interpreting what the model has learnt. However, it is difficult to interpret what the machine has learned from the data. Sensitivity analysis enables interpretability of ML models by revealing the influence of model inputs on the model outputs. This analysis allows quantification of the influence of the process parameters and of how this influence varies across different models. Subsequently, the resulting values can be used to determine whether an ML model is able to reflect the underlying science relating process and properties [59, 60]. Sobol sensitivity analysis is a variance-based approach to quantify how the uncertainty in individual model inputs contributes to the uncertainty of the model outputs; the method offers two sensitivity indices. The main effect sensitivity index determines the individual effect of a single process parameter input, without considering the interaction of this parameter with other process parameters. It is obtained as \[S_{i}=\frac{\mathbb{V}[\mathbb{E}(Z|x_{i})]}{\mathbb{V}(Z)} \tag{6}\] such that \(S_{i}\) measures the variance of the conditional expectation \(\mathbb{V}[\mathbb{E}(Z|x_{i})]\), relative to the total variance \(\mathbb{V}(Z)\) of output \(Z\), given an input \(x_{i}\) [61, 62]. The total effect sensitivity index (\(S_{T_{i}}\)) determines the combined effect of a single process parameter input, including its interactions with the other process parameters. It is calculated as \[S_{T_{i}}=\frac{\mathbb{E}[\mathbb{V}(Z|x_{\sim i})]}{\mathbb{V}(Z)} \tag{7}\] where \(\mathbb{V}(Z|x_{\sim i})\) denotes the variance of the output \(Z\) conditional on all inputs except the \(i\)th [61, 62]. The total index can also be expressed as the sum of the first-order index (\(S_{i}\)) and all higher-order interaction terms involving input \(i\). For example, for laser power, this expansion reads \[S_{T_{P}}=S_{\{P\}}+S_{\{P,v\}}+S_{\{P,h\}}+S_{\{P,t\}}+S_{\{P,v,h\}}+S_{\{P,v,t\}}+S_{\{P,h,t\}}+S_{\{P,v,h,t\}} \tag{8}\] where \(P\), \(v\), \(h\) and \(t\) denote laser power, laser speed, hatch spacing and layer thickness, respectively, so that, e.g., \(S_{\{P,v\}}\) measures the effect of the joint variation of laser power and laser speed. The calculation of \(S_{i}\) and \(S_{T_{i}}\) typically requires approximation by Monte-Carlo sampling. This is achieved here by generating 100,000 model inputs using the Saltelli sampling scheme [62]. The trained ML models then use the generated model inputs to predict the properties of interest; a minimal sketch of this pipeline is given below.
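The sketch uses the SALib package; the parameter bounds are illustrative assumptions, and `model` stands for any trained regressor restricted to the four process parameters (e.g. for one fixed material).

```python
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 4,
    "names": ["laser_power", "laser_speed", "hatch_spacing", "layer_thickness"],
    "bounds": [[50, 400], [100, 2000], [20, 200], [10, 100]],  # illustrative
}
# The Saltelli scheme generates N * (2D + 2) rows; 8192 * 10 = 81,920 inputs,
# of the same order as the ~100,000 used in this study
X = saltelli.sample(problem, 8192)
Y = model.predict(X)              # trained regressor on the four parameters
Si = sobol.analyze(problem, Y)    # Si["S1"]: main effects, Si["ST"]: totals
```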
The generated set of model inputs provides a diverse and suitable set of process parameter combinations, ensuring that the analysis encompasses a broad range of parameter values and interactions. By exploring the parameter space more comprehensively, this enhances the reliability of the calculated sensitivity indices [59, 60, 62, 63]. ## 3 Results and Discussion ### Exploratory Data Analysis The distribution of the compiled dataset is summarized in **Figure 2**. The histogram highlights a bias in data reporting: most studies reported data concerning the consolidation, but far fewer reported mechanical property data, which are, in fact, key indicators of build quality for structural applications (much more so than the consolidation). The histogram shows that a machine learning algorithm trained on the literature reports might understate the importance of mechanical properties, even though these qualities account for the print quality and are significant in practice. In addition, although an increasing number of studies report microstructure information, most of the microstructure data is qualitative, and quantitative microstructure data remain very rare. Consequently, due to the insufficient data reported for the microstructure and mechanical properties, ML would not be able to learn the PMP relationship and would fail to provide accurate predictions of the mechanical data for given process parameters. Even for the consolidation data, there are limitations in the methods used to measure consolidation, so the consolidation data contain considerable biases. Consolidation has often been measured by quantifying the density of defects such as porosity. While optical (or electron) microscopy can observe small defects, it is not effective in quantifying their 3D spatial distribution. By contrast, X-ray tomography has limitations in observing fine defects. Therefore, reported porosity data do not fully reflect the consolidation of an AM build. Also, different measurement methods can yield different porosity results [64], leading to inconsistencies. Moreover, even if the consolidation is accurately measured, it is known that the mechanical properties and performance of any mechanical/structural component depend greatly on other governing factors such as microstructure. Thus, consolidation alone does not reflect well the quality of an AM build. Consequently, qualities that govern the mechanical performance, such as YS, UTS, elongation, toughness, fatigue and creep, which are important indicators of load-bearing capacity and performance in structural applications, should be included in the qualification of final products. Unfortunately, there are insufficient data regarding toughness, fatigue and creep for meaningful analyses (**Figure 2**). Therefore, this study only includes YS, UTS and elongation. To improve the characterization of build quality, we propose an additional output variable: the average work-hardening. Work-hardening is an important parameter that reflects the energy absorption capacity of a metal according to the Considère hypothesis [65]. The variable also indicates the resistance of a metal against strain localisation, one of the main mechanisms responsible for crack initiation [66]. The average work-hardening has been calculated as the difference between UTS and YS, divided by the elongation.
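In code, assuming elongation is stored as a fraction (divide by 100 first if reported in percent), this is a one-line derived column; the column names are hypothetical.

```python
# Average work-hardening per the definition above: (UTS - YS) / elongation
df["avg_work_hardening"] = (df["UTS"] - df["YS"]) / df["elongation"]
```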
Together with the use of the YS, the introduction of the average work-hardening makes the UTS redundant and no longer needed. Analysis of the collected data showed a significant bias towards high consolidation values, with over 80% of studies only reporting results with consolidation values above 95%. The highly skewed distribution is displayed in **Figure 3a**. The reporting of process parameters for high-consolidation data only (all above 70%, with most reports above 98%), without sufficient data for low consolidation, creates a major bias. It will be shown later in the correlation analyses that this bias prevents the data from reflecting well the relationship between the process parameters and the consolidation. Therefore, this bias could limit the machine in learning the full relationship between the process parameters and consolidation. Training ML models on this skewed data leaves the machine unable to accurately predict the process parameters for low consolidation, negatively affecting the performance of the ML models. We therefore call on the AM community to publish the process parameters that produce low consolidation alongside the currently reported high consolidation values. This is an essential step in formulating an extensive dataset, as an ML model will produce biased results if trained on a dataset containing only "good" data [9, 67, 68]. Figure 2: Histogram of reported data from LPBF studies for the collected dataset. Figure 3: **(a)** Histogram of reported output properties from the collected LPBF dataset. **(b)** Histogram of reported processing parameters from the collected LPBF dataset. Spearman's rank and Pearson's correlation coefficients between the process parameters and the relevant outputs are shown in **Figure 4a** and **Figure 4b**, respectively. Spearman's rank has been used as it is more appropriate for heavy-tailed distributions and can uncover monotonic relationships [47, 48]. Thus, Spearman's rank is suitable for the obtained dataset, where the majority of the distributions of both the input parameters (**Figure 3b**) and the output properties (**Figure 3a**) are heavy-tailed. Pearson's coefficient, by contrast, examines the linearity between variables; since it is highly unlikely that the underlying physics of AM is linear, a low Pearson's correlation coefficient reflects the non-linearity between two given variables [47, 48]. Volumetric energy density (VED) is a commonly used metric that consolidates the key process parameters, such as power, beam speed, layer thickness and hatch spacing, into a unified parameter used to optimize print quality. Therefore, the correlation of VED with output variables such as consolidation and mechanical properties (reflecting the build quality) was also included in this study to assess the effectiveness of the VED. The colorbar beside each heatmap represents the correlation between the corresponding variables, where 1 denotes a perfect positive monotonic/linear relationship and -1 denotes a perfect negative monotonic/linear relationship; a value of 0 signifies no correlation. The closer the correlation is to \(\pm 1\), the better the association between two variables is explained by the corresponding monotonic/linear relationship [69, 70]. However, the colorbar limits have been scaled to [-0.4, 0.4] for a clearer presentation.
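A plotting sketch reproducing this presentation, clipped colorbar included, might read as follows; the DataFrame `df` and its columns are placeholders.

```python
import seaborn as sns
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(12, 5))
for method, ax in zip(["spearman", "pearson"], axes):
    corr = df.corr(method=method, numeric_only=True)
    sns.heatmap(corr, vmin=-0.4, vmax=0.4, annot=True, fmt=".2f",
                cmap="coolwarm", ax=ax)   # colorbar clipped to [-0.4, 0.4]
    ax.set_title(f"{method.capitalize()} correlation")
plt.tight_layout()
```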
Both Spearman's (**Figure 4a**) and Pearson's (**Figure 4b**) correlation coefficients indicate that, overall, laser power has the strongest correlation with the four quality variables, with VED showing similar but slightly weaker coefficients. The highest Spearman's correlation coefficient is 0.31, between laser power and elongation, while the highest Pearson's correlation coefficient is 0.20, for the correlation of layer thickness and VED with elongation. Nevertheless, with no coefficient exceeding \(\pm 0.5\), the low magnitude of the calculated coefficients suggests weak correlations between the processing parameters and the output properties. This is clearly not physically correct and highlights the detrimental implications of biases within the data and/or of non-monotonicity in the correlation. One major bias, as highlighted earlier, concerns the availability of the full spectrum of data in the literature (**Figure 3a** - **Figure 3b**). The majority of literature data only report the consolidation and mechanical properties corresponding to optimized print parameters. Authors only published (or got published) data on high-quality builds, while data corresponding to low quality were not published. This bias is particularly evident in the consolidation reports: over 80% of reported consolidation values are above 95% (**Figure 3a**). All available print parameter data thus correspond to a narrow range of values (limited to high consolidation and optimal mechanical properties). The lack of data showing the strong effects of process parameters on build quality outside this narrow range of optimized values causes the available data to fail at reflecting a strong correlation between the print parameter variables and the output variables, explaining the low values of the Spearman's and Pearson's coefficients. Such a bias towards reporting only optimized parameters makes the reported data fail to capture the full spectrum of the process - property relationships [49]. Another reason is that the complex physics governing the melting and solidification in the LPBF process cannot be captured by a single one-to-one monotonic correlation between individual variables. This suggests that a metric (e.g. VED) involving all parameters simultaneously should be more capable of capturing the correlation between input and output variables. Surprisingly, the correlations between VED and the quality variables were quite similar to those of the laser power alone. It is important to note that the correlation values in **Figure 4** may be affected by the presence of multicollinearity among the processing parameters. This is to be expected for AM processes, in which multiple parameters are tuned together to achieve optimized quality. The multicollinearity is also reflected in the fact that the Pearson's correlation values were quite low [69, 70]. The variance inflation factor (VIF) is presented in **Table 1** to examine the degree of multicollinearity. The VIF values suggest that all considered parameters are highly multicollinear, with layer thickness and hatch spacing having the highest VIFs, suggesting that the choice of one (or both) of these two process parameters depends on the choice of the other parameters. It is likely that this collinear dependence is extrinsic, engineered by printer users when optimizing build quality: laser power, speed and the other parameters are often tied to one another to optimize the print quality [49].
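The Table 1 values can be reproduced, in sketch form, with `statsmodels`, which implements the auxiliary-regression definition of eq. (3); column names are again placeholders.

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

cols = ["laser_power", "laser_speed", "layer_thickness", "hatch_spacing"]
X = sm.add_constant(df[cols].to_numpy(dtype=float))  # intercept for aux fits
# VIF_n = 1 / (1 - R_n^2), one auxiliary regression per predictor
vif = {c: variance_inflation_factor(X, i + 1) for i, c in enumerate(cols)}
```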
To examine the correlation of an individual input parameter with the output variables, one would need clean data in which only that input parameter is varied while the other input parameters are fixed. Unfortunately, such clean data are not publicly available in the published literature. The most common method to address multicollinearity involves removing features with high VIF. However, as the objective is to understand how the process parameters affect the build quality, the removal of any features is avoided. Thus, algorithms robust to multicollinearity, such as neural networks and tree-based algorithms, are used to train the ML models. Neural networks are not affected, owing to the overparameterization of the coefficients (weights) at each layer of the network, which renders inflated regression coefficients redundant [71]. Tree-based algorithms select a single feature at a time when splitting the tree in a forward-stagewise manner, improving the model as demonstrated by Hastie et al. [57]. Figure 4: **(a)** Heatmap of Spearman's rank correlation coefficient between processing parameters and output properties, denoting the strength of the monotonic relationship between the corresponding variable pairs. **(b)** Heatmap of Pearson's correlation coefficient between processing parameters and output properties, denoting the strength of the linear relationship between the corresponding variable pairs. ### Principal Component Analysis (PCA) VED is often used in the literature to reduce the dimensionality of the relationship between the quality of a build and the process parameters. To further evaluate the efficacy of reducing the dimensionality of the data (in particular, the use of VED as a parameter to optimize print quality), PCA has been undertaken to reduce the dimensionality of all the process parameters to two variables for visualization. VED is, in effect, also a form of dimensionality reduction; a comparison therefore shows the relative ability of VED and the generated principal components to capture the correlation between process parameters and material properties. The PCA transformation used in this study is \[\begin{bmatrix}0.5238&-0.0316&0.5966&0.6072\\ 0.4325&0.8685&-0.1185&-0.2114\end{bmatrix}\times\begin{bmatrix}P_{1}&P_{2}& \cdots&P_{n}\\ V_{1}&V_{2}&\cdots&V_{n}\\ T_{1}&T_{2}&\cdots&T_{n}\\ S_{1}&S_{2}&\cdots&S_{n}\end{bmatrix}=\begin{bmatrix}PC_{1,1}&PC_{1,2}&\cdots& PC_{1,n}\\ PC_{2,1}&PC_{2,2}&\cdots&PC_{2,n}\end{bmatrix} \tag{9}\] such that the product of the eigenvectors with P, V, T and S (power, scanning speed, layer thickness and hatch spacing, respectively) yields the principal components \(PC_{1}\) and \(PC_{2}\). The eigenvectors are generated by performing an eigendecomposition of the covariance matrix of the dataset. Only the two eigenvectors with the highest eigenvalues, giving \(PC_{1}\) and \(PC_{2}\), are retained, as these capture the largest amount of variance in the data. The transformed space of both PCs is represented in the biplot shown in **Figure 5**. The length of each arrow depicts the strength of an individual process parameter with respect to the PC directions, whereas the angle represents the contribution of the process parameter to a PC: e.g., if a parameter is parallel to \(PC_{1}\), it contributes only to this component [72, 73, 67].
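A sketch of the transformation of eq. (9) with scikit-learn follows; `pca.components_` is the 2 x 4 loading matrix analogous to the eigenvector matrix shown above (signs and exact values depend on the data and standardization).

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

cols = ["laser_power", "laser_speed", "layer_thickness", "hatch_spacing"]
X = StandardScaler().fit_transform(df[cols])
pca = PCA(n_components=2)
scores = pca.fit_transform(X)       # PC_1, PC_2 coordinates per data point
loadings = pca.components_          # 2 x 4 eigenvector (loading) matrix
explained = pca.explained_variance_ratio_
```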
Thus, the transformed space suggests that layer thickness and hatch spacing contribute mostly to \(PC_{1}\), whereas laser speed contributes mostly to \(PC_{2}\), with laser power contributing to both equally. The angles of the arrows show that the most influential variables for the construction of \(PC_{1}\) are layer thickness, hatch spacing and laser power, whereas laser speed primarily contributes to \(PC_{2}\). Although the arrow lengths suggest that laser speed has the greatest influence in generating a principal component, the biplot shows that laser power correlates with all the other process parameters studied. It should be noted that a principal component analysis can capture the correlation between its constituent variables. Layer thickness and hatch spacing enter \(PC_{1}\) very similarly, suggesting that these two parameters are highly correlated with each other, while laser power is about equally correlated with laser speed and with hatch spacing/layer thickness. Such correlation is consistent with the VIF values of **Table 1**, in which the VIFs of layer thickness and hatch spacing are almost the same, while the VIF of laser power lies between that of laser speed and those of layer thickness and hatch spacing. This implies that laser power is used as the main parameter to balance against the adjustment of the other parameters when optimizing build quality. \begin{table} \begin{tabular}{c c} \hline \hline **Variable** & **VIF** \\ \hline Laser power (W) & 4.78 \\ Laser speed (mm/s) & 2.92 \\ Layer thickness (\(\mu\)m) & 8.32 \\ Hatch spacing (\(\mu\)m) & 8.56 \\ \hline \hline \end{tabular} \end{table} Table 1: Multicollinearity detection of processing parameters with variance inflation factor (VIF). **Figure 6a** displays the Spearman's rank heatmap for the two PCs. The generated map shows that \(PC_{1}\) has a stronger monotonic correlation with the quality variables (apart from the work-hardening) than VED, and hence is more effective at capturing the relationship between process parameters and quality. However, \(PC_{2}\) performs much worse than VED, suggesting that \(PC_{1}\) captures the majority of the correlation between process and properties. This follows from the nature of the PCA transformation: to maximize the captured variance in one direction, the variance in the other is reduced [67, 72, 73]. The Spearman's and Pearson's heatmaps (**Figure 6a** - **6b**) show the PCs' superior ability to capture the correlation in comparison with VED. In particular, \(PC_{1}\) outperforms VED at capturing the correlation with YS, elongation and consolidation. Thus, despite popular belief, the correlation results for \(PC_{1}\) question the suitability of VED as a metric to optimize print quality. However, all the correlation values of \(PC_{1}\) are still relatively low, all being less than 0.5. This may imply potential issues in ML training, as the machine may struggle to uncover the underlying patterns and correlations between processing parameters and properties [67]. Figure 5: PCA biplot representing the contribution and relationship between the processing parameters and both dimensionally-reduced principal components. The scatter plot shows the projection of the original data onto the reduced-dimensional space. ### Process Parameter Optimization The common practice of identifying the process window is based on the consolidation.
While achieving high consolidation is important, consolidation is not the sole indicator of material performance in structural applications, in which more than 90% of failures are due to mechanical performance, in particular fatigue [74]. Thus, the aim was to include multiple variables that better reflect print quality; by analyzing the data across many different groups, a better identification of the process window for high quality, including consolidation, is obtained. Therefore, the selected processing parameters (laser power and speed) have been optimized with respect not only to the consolidation, but also to YS, work-hardening and elongation, following the method stated in Section 2.2. However, as data on fatigue (and creep) are largely missing, these properties have not been considered in the identification of these windows. The generated optimized maps (\(Hmap_{2}\)) for commonly used AM alloys are displayed in **Figure 7**. The process maps (\(Hmap_{1}\)) for the individual quality variables are provided in the Supporting Information, **Figure S2 - S6**. The red crosses depict the collected data points, whereas the color of the maps depicts the degree of quality, with 1 denoting the highest quality and 0 the lowest. The generated heatmaps consider each alloy individually, as the identification of process parameters is highly dependent on the material properties. A single map for all alloys would therefore be of little use, as it would not accurately represent the true processing window of an individual alloy; nevertheless, a map considering all materials is provided in the Supporting Information, **Figure S1**, for reference. The overall printability of an alloy can be evaluated by the area of the high-quality region of its process map: an alloy with a larger such area can be printed with high quality over wider ranges of process parameters, i.e. it is more printable than an alloy with a smaller area. **Figure 7** suggests that 316L and IN718 are the most printable among all the considered alloys. Furthermore, the maps suggest that IN625 and Hastelloy X have slightly better printability than Ti6Al4V. The low printability of Ti6Al4V is likely due to element loss (up to 0.9 wt% loss of Al) and low ductility caused by martensite and high dislocation densities [75, 76]. Figure 6: **(a)** Heatmap of Spearman's rank correlation coefficient between the principal components of the processing parameters after dimensionality reduction and the output properties, denoting the strength of the monotonic relationship between the corresponding variable pairs. **(b)** Heatmap of Pearson's correlation coefficient, denoting the strength of the linear relationship between the corresponding variable pairs. Figure 7: \(Hmap_{2}\), optimized processing window maps for non-treated **(a)** 316L, **(b)** Ti6Al4V, **(c)** IN625, **(d)** IN718 and **(e)** Hastelloy X. The maps consider the yield strength, average work-hardening, elongation and consolidation in the optimization; the brighter regions outline the ranges of laser power and laser speed that users are advised to use to obtain better overall print quality. ### Machine Learning Performance Following the preprocessing and training procedures stated in Section 2.3, the top 20 highest-performing hyperparameter sets were used to train the 7 ML algorithms. The performance of all 140 trained models is summarized with RMSE as the accuracy metric in **Figure 8**; a sketch of the train/evaluate loop is given below.
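For one model family, the loop looks schematically as follows; the hyperparameter values are placeholders for the tuned ones, and the train/validation/test arrays come from the 80%/10%/10% split described in Section 2.3.

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error

model = XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)
model.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)
rmse = float(np.sqrt(mean_squared_error(y_test, model.predict(X_test))))
```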
The boxplot shows that CatBoost and random forest exhibited the highest variability in RMSE, whereas XGBoost, LightGBM and neural networks demonstrated the least, i.e. these three algorithms displayed consistent performance among the 20 trained models of each type. On average, XGBoost emerged as the best-performing algorithm, surpassing CatBoost by a small margin. Overall, the performance of the trained models aligns well with the existing literature, in that boosting algorithms typically perform best, followed by random forest, neural networks, support vector machine and decision tree [77]. Notably, the performance of the neural network did not follow this pattern, likely due to the limited size of the training data, as neural networks typically require large volumes of data to achieve optimal performance [78]. Figure 8: Boxplot comparing the RMSE performance metric of all trained machine learning models, with the median of the samples displayed at the top. To assess the ML performance, the test dataset was fed to the best-performing XGBoost model to give predictions of YS, average work-hardening, elongation and consolidation. **Figure 9** shows the comparison between the true values obtained from the literature and the output predictions of the trained XGBoost model. The predictions for YS and elongation show excellent agreement with the true values; this observation is reinforced by the calculated coefficients of determination (\(R^{2}\)), which were 0.91 and 0.83, respectively. These \(R^{2}\) values indicate that 91% and 83% of the variance is explained by the model for YS and elongation, respectively. Notably, the \(R^{2}\) value for YS was the highest among all trained ML models. This high performance for YS is likely due to the strong correlation between YS and the primary dendritic (or cellular) spacing: YS is found to be inversely related to the cellular/dendritic spacing, which is in turn inversely proportional to the cooling rate, itself controlled by the process parameters [79, 80]. This implies a well-defined correlation between the process parameters and YS, reflected in the good ML performance, suggesting that ML is capable of capturing the underlying science of this relationship; this capability should improve further if more studies report the cellular spacing. The XGBoost model provides a reasonable prediction of the work-hardening. As the work-hardening was calculated from YS, UTS and elongation, the accumulation of prediction errors is expected to reduce the accuracy of the work-hardening predictions. However, the high accuracy of the predictions for YS and elongation suggests that the lower performance on the work-hardening was mainly due to the accuracy of predicting UTS. The ML model performed worst on the consolidation, with 67% of the variance unexplained by the model. The low performance for consolidation (**Figure 9c2**) highlights the detrimental consequence of the reporting bias in consolidation values discussed earlier: the majority of consolidation reports were \(>\)98% (**Figure 3a**). Such a bias negatively affected the learning of the ML models. Furthermore, because the majority of the consolidation data lie within the 90% - 100% range, the ML prediction of the consolidation was heavily weighted by training on the known data in this range.
The performance of ML worsens when predicting consolidation further away from the known range (**Figure 9c1**). Moreover, the low performance may be related to the difficulty of capturing the stochastic nature of porosity formation during the melting and cooling of the AM process. Last but not least, consolidation is influenced by the build location/direction, owing to the build-up of residual heat, and by the scanning strategy; these factors were not included in the training of the ML models, lowering the prediction accuracy. To address this limitation, the relevant process parameters should ideally be incorporated into the training of the ML model; however, this was not implemented due to insufficient data availability concerning these factors. Although the model considers whether a sample has been heat treated or HIPed by means of _One Hot Encoding_, the results suggest no clear difference in prediction accuracy between treated and non-treated samples; over time, with more collected data, these observations may well differ. Nevertheless, the results shown in **Figure 9** suggest that the trained ML model characterizes the relationship between the process parameters and the considered output properties with some credibility, in particular for YS and elongation. Furthermore, the difference in performance between simple and complex ML models, such as non-boosted and boosted algorithms, amounts to only a minor increase in performance, suggesting that the biggest underlying factor is the quality of the data. Therefore, obtaining high-quality data with minimal biases would significantly improve the performance of the ML models. Figure 9: Results of the XGBoost model: predicted output properties against experimentally measured values collected from the literature for all alloys. The predicted output properties considered are **(a)** yield stress, **(b)** average work-hardening, **(c1)** consolidation, **(c2)** zoomed-in view of consolidation and **(d)** elongation. The red line depicts a perfect 1-to-1 fit, whereas the black line represents the regression line obtained from the XGBoost model. ### Sensitivity Analysis Sensitivity analysis has been used to examine the ability of the trained ML models to account for the important influence of the AM process parameters on quality. The best-performing XGBoost and CatBoost models have been used for the analysis. The Sobol indices for the properties predicted by the XGBoost model are displayed in **Figure 10**. The values have been normalized such that the main effect sensitivity indices sum to 1, as do the total effect sensitivity indices. The results show that, overall, laser power and speed are the most influential process parameters for all investigated quality variables. However, the results also show that XGBoost was not able to reflect the underlying science of the process - mechanical property relations. For example, while the XGBoost model finds the laser power to be highly influential for consolidation, the influence of the laser speed is very weak; this is inconsistent with the fact that both the laser power and speed are routinely used to optimize the consolidation [49]. In addition, the model attributes a large influence on YS to the laser speed but not to the laser power, despite the fact that these two key parameters jointly control the thermal conditions, in particular the cooling rate that governs the spacing of the primary dendrites or cells [80, 81, 82, 83, 84].
Interestingly, the Sobol indices calculated for the CatBoost model (**Figure 11**) reflect the known underlying science well. This can be seen in the Sobol indices for the yield stress, where laser speed and power emerge as the two most influential process parameters. This aligns well with the control of the cooling rate, and hence of YS, by speed and power, as discussed earlier. However, hatch spacing is also shown to have a high impact, especially through combined interactions, as indicated by the total effect index. Although the ML prediction of consolidation was suboptimal (**Figure 9**), the calculated Sobol indices reflect the roles of the process parameters in the main mechanisms of porosity formation: process-induced pores, such as keyhole pores resulting from excessive power density, and lack-of-fusion pores caused by insufficient molten metal due to inadequate energy density, depend on the melt pool geometry, layer thickness and hatch spacing [85, 49, 80]. Hence, the influences of the processing parameters on consolidation are anticipated to be approximately equal, and this is reflected well in **Figure 11**. Figure 10: Bar plots comparing the main effect sensitivity index and the total effect sensitivity index for the XGBoost model. The main effect sensitivity index measures the influence of an individual process parameter on a given output property, without considering its interactions with other process parameters, whereas the total effect sensitivity index measures the overall influence of an individual process parameter on a given output property, including both the main effect and the effects of its interactions with other process parameters. The mechanism of work-hardening relies on the interactions of dislocations among themselves and with other crystallographic features. Under additive manufacturing conditions, alloys often contain high-dislocation-density regions at the cellular (or dendritic) boundaries. Consequently, finer cells (or dendrites) lead to increased interactions between mobile dislocations and the immobile dislocations in the dislocation-rich regions. Thus, the primary factors influencing this property are laser power and speed, as they dictate the cooling rate and hence the cell (dendrite) spacing [86, 87]. Elongation, in turn, depends on both consolidation and work-hardening; as such, it is expected to exhibit an influence of the process parameters similar to the combined effect on these two properties. The analysis also revealed that interactions between process parameters can significantly affect the studied quality variables, indicating the presence of synergistic effects. This can be seen for hatch spacing, whose total effect index is always greater than its main effect index: although hatch spacing alone has a weak influence on the properties, its interaction with the other process parameters was found to have a substantial impact. Figure 11: Bar plots comparing the main effect sensitivity index and the total effect sensitivity index for the CatBoost model. The main effect sensitivity index measures the influence of an individual process parameter on a given output property, without considering its interactions with other process parameters, whereas the total effect sensitivity index measures the overall influence of an individual process parameter on a given output property, including both the main effect and the effects of its interactions with other process parameters.
## 4 Conclusion

A considerably large literature dataset for metal additive manufacturing (AM) was created in this study. A comprehensive and in-depth examination of the data highlights major biases and limitations of the literature data that restrict the understanding of the process - microstructure - mechanical property (PMP) relationship: (1) most studies only reported consolidation, with almost three times more reports than for mechanical properties such as yield stress and elongation; (2) most literature data were obtained in optimized or near-optimized conditions, with 84% of the consolidation data reported above 95%; correlation analysis shows that this bias limits the ability of the literature data to reveal the strong correlation between process parameters and quality variables such as the consolidation and mechanical properties; (3) there is a significant lack of quantitative data on microstructure, such as the spacing of primary dendrites or cells. Meta-analyses of the collected data were performed, showing weak correlations between the process parameters (i.e. inputs), including the volumetric energy density (VED), and the consolidation and mechanical properties (i.e. outputs). Such weak correlation is likely due to (1) the stated biases and (2) the fact that the correlations between input process parameters and quality variables are non-monotonic, with high multicollinearity. Hatch spacing and layer thickness are found to be the most collinear, reflecting a common practice in AM process identification: the values of these two parameters are often balanced by tuning the beam power and beam speed. While the correlation analysis and the study of ML performance demonstrate the potential of data-driven approaches for metal additive manufacturing, the quality of the dataset hinders the results because of current reporting biases and practices. Bias (1) reflects another common practice in AM publications: the identification of process maps is commonly based only on consolidation. Such a bias imposes a serious limitation on the optimization of parameters, because the quality of an AM build is ultimately governed by mechanical properties such as yield strength, elongation and hardening. The large amount of collected data enables us to identify process maps on the basis of not only consolidation, but also yield stress, elongation and work-hardening. The process map identification results show that, among all the alloys considered in this study, 316L and Inconel 718 are the most printable, followed by Hastelloy X and Inconel 625, with Ti6Al4V the least printable. The present study also investigates dimensionality reduction of the processing parameters using principal component analysis, compared against VED, a common metric that consolidates multiple processing parameters for process optimization. The two principal components show a much stronger correlation with the outputs, suggesting an alternative way (in comparison with VED) to reduce dimensionality when optimizing build consolidation and quality. Bias (2) seriously limits the ability of machine learning to learn the full spectrum of the process parameter - consolidation relationship, hence negatively affecting the ML performance in predicting the consolidation.
This effect results in a low accuracy of the ML predictions for consolidation, most evidently in the low-value range of consolidation. Furthermore, the minor increase in accuracy obtained by using boosting algorithms compared with non-boosting algorithms further suggests that the quality of the obtained dataset is the most significant factor in improving the performance of the ML models. We therefore call on the AM community to publicly share data covering wide spectra, in particular process parameters producing low and intermediate ranges of consolidation, as well as mechanical properties beyond consolidation or density, and to increase data availability through open access and standardized reporting formats that make the data easily accessible. To aid in this effort, an online template for users to contribute data, as well as the training dataset and code associated with this work, are available in the open-source GitHub repository at [https://github.com/RaymondWKWong/MetaAnalysis_MetalAM](https://github.com/RaymondWKWong/MetaAnalysis_MetalAM). Last but not least, due to the insufficient data reported for the microstructure and mechanical properties, ML is currently unable to learn the full PMP relationship. Given the inherent correlation between microstructure and mechanical properties (in particular for long-term performance such as fatigue), the next significant efforts should be devoted to generating microstructure and fatigue data. **Acknowledgements** The authors would like to thank the EPSRC for supporting the research [grant number EP/K503733/1]. R. Wong and M.S. Pham would like to thank Jalal Al-Lami for providing part of the Inconel 718 literature data used for this study. M.S. Pham, R. Wong and C.S. Maldonado thank Imperial College London's support via an Imperial-Nanyang Technological University seed fund. The views expressed in the article do not necessarily represent the views of the U.S. Department of Energy or the United States Government. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. ## Supporting Information Figure S3: Non-treated Ti6Al4V \(Hmap_{1}\), individual processing window maps for yield strength, average work-hardening, elongation and consolidation. Figure S5: Non-treated IN718 \(Hmap_{1}\), individual processing window maps for yield strength, average work-hardening, elongation and consolidation.
2309.09428
Explicit Results for the Distributions of Queue Lengths for a Non-Preemptive Two-Level Priority Queue
Explicit results are derived using simple and exact methods for the joint and marginal queue-length distributions for the M/M/c queue with two non-preemptive priority levels. Equal service rates are assumed. Two approaches are considered. One is based on numerically robust quadratic recurrence relations. The other is based on a complex contour-integral representation that yields exact closed-form analytical expressions, not hitherto available in the literature, that can also be evaluated numerically with very high accuracy.
Josef Zuk, David Kirszenblat
2023-09-18T01:58:17Z
http://arxiv.org/abs/2309.09428v1
Explicit Results for the Distributions of Queue Lengths for a Non-Preemptive Two-Level Priority Queue ###### Abstract Explicit results are derived using simple and exact methods for the joint and marginal queue-length distributions for the M/M/\(c\) queue with two non-preemptive priority levels. Equal service rates are assumed. Two approaches are considered. One is based on numerically robust quadratic recurrence relations. The other is based on a complex contour-integral representation that yields exact closed-form analytical expressions, not hitherto available in the literature, that can also be evaluated numerically with very high accuracy. Keywords: queueing theory; non-preemptive priority; queue length distribution MSC2000 subject classification Primary: 90B22; secondary: 60K25, 60J74 OR/MS subject classification Primary: Queues: Priority; secondary: Queues: Markovian ## 1 Introduction This work is concerned with the development of practical algorithms for the computation of joint and marginal distributions of queue lengths for the M/M/\(c\) queue with a non-preemptive priority discipline. Applications of this model are found in telecommunications [5], health care [10, 25], radar [21, 22], air traffic control [23] and numerous other areas. The non-preemptive priority queue discipline is as stated by Dressin and Reich [7]: once a client's service has begun, it is permitted to proceed to completion. If a server becomes empty, and there is at least one client waiting, then a client of the highest priority present in the queue is admitted to the server. Clients of equal priority are served on a first-come, first-served basis. This is also known as the 'head of the line' discipline. Thus, let us consider a non-preemptive queue with \(K\) priority levels, each with a distinct Poisson arrival rate \(\lambda_{k}\), \(k=1,2,\ldots,K\), and corresponding level traffic intensity1 \(r_{k}=\lambda_{k}/(N\mu)\), leading to a total traffic intensity for the aggregation of all arrivals of \(r=\sum_{k=1}^{K}r_{k}\). We adopt the usual convention that smaller priority-level indices \(k\) represent higher priorities. Thus, \(r_{1}\) denotes the traffic intensity associated with the highest priority level. For simplicity, we have assumed a common exponential service rate \(\mu\) among all priority levels. The number of servers is denoted by \(c=N\). Footnote 1: Consistent with [8, 11], the notation \(\rho_{k}\) is reserved for \(\rho_{k}\equiv\lambda_{k}/\mu\), so that \(r_{k}=\rho_{k}/N\). In this work, attention is confined to the two-level problem \(K=2\). Analysis of this case is amenable to a number of analytical techniques that do not extend easily, or at all, to the general multi-level priority problem. Also, the two-level problem has a distinguished status: all marginal distributions for the multi-level problem can be inferred from the low-priority marginal pertaining to just two priority levels [6].
If we let \(r_{\rm hi}\) and \(r_{\rm lo}\) denote the level traffic intensities for the high and low priority arrivals, respectively, for the two-level problem, then the wait-conditional marginal distribution of the queue length for priority level \(k=1,2,\ldots,K\) in the multi-level problem is obtained by making the identifications \[r_{\rm lo}=r_{k}\,,\quad r_{\rm hi}=\sum_{\ell=1}^{k-1}r_{\ell}\,, \tag{1}\] so that the total traffic intensity in the effective (wait-conditional) two-level problem becomes \(r=r_{\rm sum}\), with \[r_{\rm sum}=r_{\rm lo}+r_{\rm hi}=\sum_{\ell=1}^{k}r_{\ell}\,. \tag{2}\] For the actual two-level problem, we have the identifications \(r_{\rm hi}\equiv r_{1}\), \(r_{\rm lo}\equiv r_{2}\), and we shall use both sets of notation interchangeably. It is also convenient to introduce the parameter \(\nu\) that represents the fraction of all arrivals that are of high priority (which we abbreviate as 'hifrac'). Thus \(r_{\rm hi}=\nu r\), \(r_{\rm lo}=(1-\nu)r\), \(0\leq\nu\leq 1\). Previous work on the non-preemptive priority queue has focused, almost entirely, on calculating moments and the waiting-time distributions per priority level. In early work, Cobham [3, 4], followed by Holley [9], were the first to consider the mean waiting times and queue lengths. Waiting-time means and second moments for general service-time distributions were subsequently given by Kesten and Runnenberg [15]. Gail et al. [8] studied the non-preemptive M/M/\(c\) system for two priority levels with different exponential service rates. While they developed a matrix algorithm for determining various characteristics of a generating function for this problem, explicit results were again confined to the mean waiting times and queue lengths. For the waiting-time problem, Davis [6] improved on previous work by Dressin and Reich [7] to derive an explicit integral expression for the waiting-time distribution for the non-preemptive priority queue. He analysed the two-level problem, as the waiting-time distribution for the multi-level problem can be inferred from the two-level case. He did not study the queue-length marginals, and they cannot be directly inferred from the waiting-time distributions by appealing to the distributional form of Little's law [1, 13], as the no-overtaking assumption is violated. Kella and Yechiali [14] covered the same ground as Davis for the probability generating function (PGF) of the waiting time, but using a different methodology. The moment generating function (MGF) of the waiting time and associated moments have also been considered in [19]. More recently, Wagner [26] has studied the waiting-time MGF for a finite-capacity, multi-server version of the same problem as Davis. For the queue-length distributions, Miller [17, 18] uses a matrix-geometric method for the two-level problem that results in a complex algorithm involving multiple levels of recursion. Little is said about the numerical stability of this approach, and it is known to deteriorate for traffic intensities close to unity. Kao and Narayanan [11] and Kao and Wilson [12] also employ matrix-geometric methods for the two-level problem which, as they point out, unavoidably require finite-state truncation. The aforementioned papers deal with unequal service rates. The matrix-geometric method [20] applied to queueing models has the singular disadvantage that it necessitates truncation of the problem to prescribed finite maximum values of queue lengths. While powerful, it is complex and not elegant.
Thus, its use should best be avoided whenever simpler alternatives are available, and this is manifestly the case for the present problem, as will become clear. In earlier work, Marks [16] studied the two-level problem with a common service rate and derived a highly complex system of linear partial difference equations that must be solved recursively. The required manipulations are cumbersome and no insight into the analytic structure of the problem is gained. However, it is most likely the first paper where actual queue-length probabilities, rather than the PGF, were computed. No light is shed on the numerical stability of the method. A different approach, based on a partial PGF, is due to Cohen [5], who studied the two-level problem with equal service rates; and it is this approach that we pursue in the discussion that follows. We take up the programme where Cohen [5] left off, in devising simple and practical schemes for extracting actual probabilities from the PGF. There is the additional benefit that this approach can be extended to the general multi-level problem. Shortle et al. [24] have remarked that 'the determination of stationary probabilities in a non-preemptive Markovian system is an exceedingly difficult matter, well near impossible when the number of priorities exceeds two'. In a separate forthcoming paper, we shall demonstrate otherwise. The present work focuses on explicit results that are useful for practical applications. While we do not purport to have made general theoretical advances in priority queues, the work does serve to fill a large gap in the literature by establishing basic results for a paradigmatic model that one would expect to have been uncovered decades ago. We believe that it also has pedagogical value. For the two-level non-preemptive priority queue, Shortle et al. [24], in the most recent edition of their textbook, set up the stationary balance equations but remark that 'obtaining a reasonable solution to these stationary equations is very difficult,... The most we can do comfortably is obtain expected values via two-dimensional generating functions'. The simplicity of the methods described herein might render a more detailed treatment of the subject suitable for elementary texts. ## 2 Non-Preemptive Priority Queue The no-wait probability \(P_{\rm NW}\) is the probability that a new arrival will find at least one server idle. It is clearly independent of the queue discipline, and is given by [6] \[\frac{1}{1-P_{\rm NW}}=1+(1-r)\frac{N!}{(Nr)^{N}}\cdot\sum_{k=0}^{N-1}\frac{(Nr)^{k}}{k!}\,. \tag{3}\] Let \(P(n,m)\) denote the steady-state probability that there are \(n\) low-priority clients in the queue (rather than in the system) and \(m\) high-priority clients in the queue. We have the decomposition [_cf._ 6] \[P(n,m)=P_{\rm NW}\cdot\delta_{n0}\delta_{m0}+(1-P_{\rm NW})\cdot f(n,m)\,, \tag{4}\] where \(f(n,m)\) represents the wait-conditional joint PMF, _i.e._ the probability that there are \(n\) low-priority clients and \(m\) high-priority clients in the queue, given that all servers are busy. The wait-conditional distribution does not explicitly depend on the number of servers \(N\); there is only an indirect dependence on \(N\) through the total traffic intensity \(r\). A numerical sketch of the no-wait probability (3) is given below.
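The sketch (ours, in Python) evaluates (3) directly; for large \(N\), a recursive Erlang-B style evaluation avoids the explicit factorials and is numerically preferable.

```python
from math import factorial

def p_no_wait(N, r):
    """Sketch of eq. (3): probability that an arrival finds an idle server
    in an M/M/N queue with total traffic intensity r = lambda/(N*mu) < 1."""
    a = N * r                                         # offered load
    s = sum(a**k / factorial(k) for k in range(N))
    inv = 1.0 + (1.0 - r) * factorial(N) / a**N * s   # this is 1/(1 - P_NW)
    return 1.0 - 1.0 / inv
```

As a check, for \(N=1\) this reduces to \(P_{\rm NW}=1-r\), the familiar M/M/1 result.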
Our starting point is the paper of Cohen [5], which introduced a partial PGF for the problem that sums only over the low-priority argument: \[G_{m}(p)\equiv\sum_{n=0}^{\infty}p^{n}P(n,m)\,. \tag{5}\] This turns out to be a very convenient strategy, especially given the fact that the wait-conditional high-priority marginal is a simple geometric distribution. Only the low-priority marginal is non-trivial. We introduce a wait-conditional version \(g_{m}(p)\) of this PGF such that \[g_{m}(p)\equiv\sum_{n=0}^{\infty}p^{n}f(n,m)\,. \tag{6}\] It follows that \[G_{m}(p)=P_{\rm NW}\cdot\delta_{m0}+(1-P_{\rm NW})\cdot g_{m}(p)\,. \tag{7}\] Cohen's result [5] for the wait-conditional PGF for the two-level non-preemptive priority queue with equal service rates is3 Footnote 3: The quantities \(\lambda_{1,2}(p)\) should not be confused with the arrival rates introduced earlier. We are adhering to Cohen's original, but less than ideal, notation. \[g_{m}(p)=\frac{(1-r)(1-p)}{1-p\lambda_{2}(p)}\cdot\lambda_{1}^{m}(p)\,, \tag{8}\] where \(\lambda_{1,2}(p)\) are defined as follows: let \(\lambda(p)=\lambda_{\pm}(p)\) denote the two solutions of the quadratic equation \[\lambda^{2}-(1+r-r_{2}p)\lambda+r_{1}=0\,, \tag{9}\] so that \[\lambda_{\pm}(p)=[b(p)\pm\sqrt{b^{2}(p)-4r_{1}}]/2\,,\quad b(p)\equiv 1+r-r_{2}p\,. \tag{10}\] Then in (8), we have \(\lambda_{1}(p)=\lambda_{-}(p)\), \(\lambda_{2}(p)=\lambda_{+}(p)\), and it is useful to note that \[\lambda_{+}(p)+\lambda_{-}(p)=b(p)\,,\quad\lambda_{+}(p)\cdot\lambda_{-}(p)=r_{1}\,. \tag{11}\] Another way to express the PGF for the wait-conditional distribution is \[g_{m}(p)=g_{\rm lo}(p)\cdot[1-\lambda_{1}(p)]\lambda_{1}^{m}(p)\,,\quad g_{\rm lo}(p)=\frac{1-r}{\lambda_{2}(p)-r}\,. \tag{12}\] That \(g_{\rm lo}(p)\) represents the wait-conditional PGF for the low-priority marginal is clear from observing that \[\sum_{m=0}^{\infty}g_{m}(p)=g_{\rm lo}(p)\,. \tag{13}\] On the other hand, it follows directly from (12) that the wait-conditional PMF for the high-priority marginal is given by \[f_{\rm hi}(m)=g_{m}(1)=(1-r_{1})r_{1}^{m}\,. \tag{14}\] Consequently, the only marginal distribution of interest in the present study is that for the low-priority level. By construction, the wait-conditional joint PMF \(f(n,m)\) is recovered from the PGF \(g_{m}(p)\) according to \[f(n,m)=\frac{1}{n!}\cdot\left.\frac{d^{n}}{dp^{n}}g_{m}(p)\right|_{p=0}\,. \tag{15}\] The multiple derivative is prohibitively cumbersome to perform directly in analytical form. Thus, we proceed to present two alternative strategies that render the problem tractable. ## 3 Quadratic Recurrence The first method constructs a recurrence relation based on the fact that the functions \(\lambda_{\pm}(p)\) solve a quadratic equation. We begin by considering the low-priority marginal, whose PGF can be expressed as \[g_{\rm lo}(p)=\frac{1-r}{\lambda_{2}(p)-r}\,. \tag{16}\] Since \(\lambda_{2}(p)\) satisfies a quadratic equation, then so does \(g_{\rm lo}(p)\). Let us set \[u\equiv r_{2}p\,,\quad g(u)\equiv\frac{1}{\lambda_{2}-r}=\sum_{k=0}^{\infty}\frac{g_{k}}{k!}u^{k}\quad\Rightarrow\quad g_{k}=\left.\frac{d^{k}g(u)}{du^{k}}\right|_{u=0}\,. \tag{17}\] Then we obtain \[(ru-r_{2})g^{2}+(u-1+r)g+1=0\,. \tag{18}\] We now differentiate this equation \(n\) times with respect to \(u\), and use the identities \[\frac{1}{n!}\cdot\left.\frac{d^{n}}{du^{n}}(ug)\right|_{u=0}=\frac{g_{n-1}}{(n-1)!}\,,\quad\frac{1}{n!}\cdot\left.\frac{d^{n}}{du^{n}}(g^{2})\right|_{u=0}=\sum_{k=0}^{n}\frac{g_{k}}{k!}\cdot\frac{g_{n-k}}{(n-k)!}\,,\quad\frac{1}{n!}\cdot\left.\frac{d^{n}}{du^{n}}(ug^{2})\right|_{u=0}=\sum_{k=0}^{n-1}\frac{g_{k}}{k!}\cdot\frac{g_{n-k-1}}{(n-k-1)!}\,. \tag{19}\]
For the quantities \(f_{k}\equiv g_{k}/k!\), this leads to the non-linear recurrence relations \[f_{n}=\frac{1+rf_{0}}{1-r+2r_{2}f_{0}}\cdot f_{n-1}+\frac{1}{1-r+2r_{2}f_{0}}\sum_{k=1}^{n-1}f_{k}\cdot(rf_{n-k-1}-r_{2}f_{n-k})\,, \tag{20}\] for \(n=1,2,\ldots\), with \[f_{0}=\frac{1}{2r_{2}}\left[\sqrt{(1-r)^{2}+4r_{2}}-(1-r)\right]=\frac{2}{1-r+\sqrt{(1-r)^{2}+4r_{2}}}>0\,. \tag{21}\] The expression for \(f_{0}\) follows from \(f_{0}^{-1}=g_{0}^{-1}=\lambda_{2}(0)-r\). We observe that \(f_{\rm lo}(n)=(1-r)r_{2}^{n}f_{n}\). Efficient vectorized implementations in Matlab are possible. Practical implementation proceeds as follows: let us introduce an arbitrary scale factor \(\Lambda\), define \[D\equiv 1-r+2r_{2}f_{0}=\sqrt{(1-r)^{2}+4r_{2}}\,,\quad c_{1}\equiv r_{2}/\Lambda\,,\quad c_{2}\equiv\Lambda/D\,, \tag{22}\] and scale according to \(\tilde{f}_{n}\equiv\Lambda^{n}f_{n}=(r_{2}/c_{1})^{n}f_{n}\). Then we solve the recurrence \[\tilde{f}_{n}=c_{2}\cdot\left(\tilde{f}_{n-1}+\sum_{k=0}^{n-1}\tilde{f}_{k}\Delta_{(n-1)-k}\right) \tag{23}\] and recover the marginal as \(f_{\rm lo}(n)=(1-r)c_{1}^{n}\tilde{f}_{n}\). At each step, we set \[\Delta_{k}\equiv r\tilde{f}_{k}-c_{1}\tilde{f}_{k+1}\,, \tag{24}\] for \(k=0,1,\ldots,n-1\), subject to the initialization \(\tilde{f}_{n}\gets 0\) within the scope of evaluating \(\Delta_{n-1}\). We find that good numerical performance is achieved with \(\Lambda=r_{2}\), so that \(c_{1}=1\). Analogous treatment of the joint PMF is only marginally more complex. Based on the quadratic \[\lambda_{\pm}^{2}+(u-1-r)\lambda_{\pm}+r_{1}=0\,, \tag{25}\] we solve for the Taylor-series coefficients \(\lambda_{\pm}^{(k)}\) in \[\lambda_{\pm}=\sum_{k=0}^{\infty}\lambda_{\pm}^{(k)}p^{k}=\sum_{k=0}^{\infty}\Lambda^{-k}f_{k}^{\pm}u^{k}\,, \tag{26}\] for some arbitrary scale factor \(\Lambda\), using the non-linear recurrence \[f_{n}^{\pm}=\mp\frac{1}{\sqrt{(1-r)^{2}+4r_{2}}}\left(\Lambda f_{n-1}^{\pm}+\sum_{k=1}^{n-1}f_{k}^{\pm}\cdot f_{n-k}^{\pm}\right)\,, \tag{27}\] \(n=1,2,\ldots\), where \[f_{0}^{\pm}=\frac{1}{2}\left(1+r\pm\sqrt{(1-r)^{2}+4r_{2}}\right)\,. \tag{28}\] The \(\lambda\)-coefficients are recovered according to \(\lambda_{\pm}^{(k)}=(r_{2}/\Lambda)^{k}f_{k}^{\pm}\). As with the marginal, the choice \(\Lambda=r_{2}\) results in good numerical performance. All that remains to be done is to use the standard recursion for multiplication of power series, as dictated by (12). The simplest way to proceed is via repeated convolutions: \[\phi_{0}=(1-r)\cdot\mathrm{conv}\left(\frac{1}{\lambda_{2}-r}\,,\;1-\lambda_{1}\right)\,,\quad\phi_{k}=\mathrm{conv}(\phi_{k-1},\lambda_{1})\,, \tag{29}\] for \(k=1,2,\ldots,m\). Then \(f(n,m)=\phi_{m}(n)\). The conv function is defined like the Matlab function of the same name: suppose that \(C(u)=A(u)B(u)\), with \[A(u)=\sum_{n=0}^{n_{1}}a(n)u^{n}\,,\quad B(u)=\sum_{n=0}^{n_{2}}b(n)u^{n}\,,\quad C(u)=\sum_{n=0}^{n_{1}+n_{2}}c(n)u^{n}\,. \tag{30}\] Then \(c=\mathrm{conv}(a,b)\), where \[c(n)=\mathrm{conv}(a,b)(n)\equiv\sum_{k=0}^{n}a(k)b(n-k)\,, \tag{31}\] for \(n=0,1,2,\ldots,n_{1}+n_{2}\).
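The vectorized implementation envisaged above is in Matlab; the following Python transcription of the scaled recurrence (21)-(24), with \(\Lambda=r_{2}\) (so \(c_{1}=1\)), is a sketch of the same scheme for the low-priority marginal.

```python
import numpy as np

def f_lo_marginal(r, nu, n_max):
    """Wait-conditional low-priority PMF f_lo(0..n_max) via eqs. (21)-(24),
    with Lambda = r2 so that c1 = 1 and c2 = r2/D."""
    r2 = (1.0 - nu) * r
    D = np.sqrt((1.0 - r)**2 + 4.0 * r2)      # eq. (22)
    c1, c2 = 1.0, r2 / D
    ft = np.zeros(n_max + 1)
    ft[0] = 2.0 / (1.0 - r + D)               # eq. (21); tilde-f_0 = f_0
    for n in range(1, n_max + 1):
        # Delta_k = r*ft[k] - c1*ft[k+1] for k = 0..n-1; ft[n] is still 0,
        # which implements the initialization rule for Delta_{n-1}
        delta = r * ft[:n] - c1 * ft[1:n + 1]
        ft[n] = c2 * (ft[n - 1] + np.dot(ft[:n], delta[::-1]))
    return (1.0 - r) * c1**np.arange(n_max + 1) * ft
```

As a check, for \(\nu=0\) this reproduces the geometric wait-conditional M/M/\(c\) queue-length law \(f_{\rm lo}(n)=(1-r)r^{n}\), and for \(\nu=1\) it yields \(f_{\rm lo}(0)=1\).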
The foregoing recurrence relations constitute a significant improvement over the strategy implemented in [2], and are vastly simpler than those arising from the matrix-geometric method as considered in [11, 12, 17, 18]. While the quadratic recurrence method exhibits excellent numerical behaviour, it gives little insight into the analytical structure of the distributions. This deficiency is addressed in the next section.

## 4 Complex Contour Integral

Another strategy in dealing with (15) is to represent it in terms of a complex contour integral in accordance with Cauchy's integral theorem. This yields \[f(n,m)=(1-r)\oint_{\mathcal{C}}\frac{dp}{2\pi i}\frac{(1-p)\lambda_{1}^{m}}{p^{n+1}(1-p\lambda_{2})}\,, \tag{32}\] where \(\mathcal{C}\) is an anti-clockwise circle centred about the origin with radius less than \(1/r\). It follows directly that the low-priority marginal PMF, defined by \[f_{\rm lo}(n)\equiv\sum_{m=0}^{\infty}f(n,m)\,, \tag{33}\] is represented as a complex contour integral by \[f_{\rm lo}(n)=(1-r)\oint_{\mathcal{C}}\frac{dp}{2\pi i}\,\frac{1-p}{p^{n+1}}\cdot\frac{1}{(1-p\lambda_{2})(1-\lambda_{1})}\,. \tag{34}\] The conventional approach in dealing with such contour integrals, mirroring the approach adopted previously for the waiting-time distribution [6], would be to deform the contour by expanding it to the circle at infinity while avoiding a cut of finite extent on the real axis that is generated by the square-root component of \(\lambda_{\pm}(p)\), and a possible simple pole that also lies on the real axis. The circle at infinity yields a vanishing contribution, which leaves a (potential) pole term and a real-valued integral along the cut. We shall explore this approach separately in a forthcoming paper, where we shall show that it leads to integral expressions that are amenable to efficient quadrature algorithms, and can also be evaluated analytically in terms of a generalized form of the associated Legendre functions. In the present work, we pursue a different method based on a change of integration variable. Let \(\lambda=z_{1},z_{2}\) be the roots of the polynomial equation \(\lambda^{2}-(1+r)\lambda+r_{1}=0\), so that we have \(z_{1}+z_{2}=1+r\), \(z_{1}z_{2}=r_{1}=\nu r\). Then, the inversion of \(z=\lambda_{\pm}(p)\) yields \[p=-(z-z_{1})(z-z_{2})/(r_{2}z)\,. \tag{35}\] Thus, \[dp=-\frac{1}{r_{2}}\left(1-\frac{z_{1}z_{2}}{z^{2}}\right)dz\,, \tag{36}\] in which case \[\frac{dp}{p^{n+1}}=(-r_{2})^{n}\frac{(z^{2}-z_{1}z_{2})z^{n-1}}{\left[(z-z_{1})(z-z_{2})\right]^{n+1}}\cdot dz\,. \tag{37}\] We make the change of integration variable \(p\mapsto z\), \(z=\lambda_{1}(p)\), in which case \(\lambda_{2}(p)=r_{1}/z\), and we make the identifications \[z_{0}=r_{1}/r\,,\quad z_{1}=\lambda_{-}(p=0)\,,\quad z_{2}=\lambda_{+}(p=0)\,, \tag{38}\] or, equivalently, \[z_{0}=\nu\,,\quad z_{1}=\tfrac{1}{2}\left[1+r-\sqrt{(1+r)^{2}-4\nu r}\right]\,,\quad z_{2}=\tfrac{1}{2}\left[1+r+\sqrt{(1+r)^{2}-4\nu r}\right]\,. \tag{39}\] Then, we obtain \[\frac{1-p}{1-p\lambda_{2}}=\frac{z}{r}\cdot\frac{z-1}{z-z_{0}}\,. \tag{40}\] It follows that the joint PMF is given by \[f(n,m)=\frac{(1-r)(-r_{2})^{n}}{r}\oint_{\mathcal{C}^{\prime}}\frac{dz}{2\pi i}\,\frac{z^{m+n}}{z-z_{0}}\cdot\frac{(z-1)(z^{2}-z_{1}z_{2})}{[(z-z_{1})(z-z_{2})]^{n+1}}\,, \tag{41}\] where \(\mathcal{C}^{\prime}\) is a closed anti-clockwise contour that encloses the pole at \(z=z_{1}\), but with the poles at \(z=z_{0},z_{2}\) in the exterior.
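Representation (41) is easy to check numerically. A minimal Matlab sketch of such a check, assuming a circle about \(z_{1}\) whose radius keeps \(z_{0}\) and \(z_{2}\) in the exterior (the parameter values and point count are illustrative choices of ours):

```matlab
% Sanity check of (41): trapezoidal quadrature on a circle about z1.
r = 0.75; nu = 0.9; r1 = nu*r; r2 = r - r1;
s  = sqrt((1+r)^2 - 4*nu*r);
z0 = nu; z1 = ((1+r) - s)/2; z2 = ((1+r) + s)/2;        % eq. (39)
n = 3; m = 2;
rho = 0.5*min(abs(z1 - z0), abs(z1 - z2));               % z0, z2 stay outside
K = 4096; z = z1 + rho*exp(2i*pi*(0:K-1)/K);
g = z.^(m+n)./(z - z0) .* (z - 1).*(z.^2 - z1*z2) ...
    ./ ((z - z1).*(z - z2)).^(n+1);
% On this parametrization, oint dz/(2*pi*i) g(z) = mean(g .* (z - z1))
f_nm = real( (1-r)*(-r2)^n/r * mean(g .* (z - z1)) )
```

Since the integrand is analytic on the circle, the periodic trapezoidal rule converges spectrally, and the result can be compared against the quadratic-recurrence value of \(f(n,m)\).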
For the low-priority marginal PMF, we have \[f_{\rm lo}(n)=-\frac{(1-r)(-r_{2})^{n}}{r}\oint_{\mathcal{C}^{\prime}}\frac{dz}{2\pi i}\,\frac{z^{n}}{z-z_{0}}\cdot\frac{(z^{2}-z_{1}z_{2})}{[(z-z_{1})(z-z_{2})]^{n+1}}\,. \tag{42}\]

## 5 R-Integrals

In order to evaluate the integral representations for the joint and marginal PMFs derived in the foregoing section, we introduce a collection of complex contour integrals, to which we shall refer as the R-integrals, according to the definition \[R_{n}^{m}\equiv\oint_{\mathcal{C}^{\prime}}\frac{dz}{2\pi i}\,\frac{1}{z-z_{0}}\cdot\frac{z^{m}}{[(z-z_{1})(z-z_{2})]^{n}}\,, \tag{43}\] for \(m,n=0,1,2,\ldots\), where \(\mathcal{C}^{\prime}\) is a closed anti-clockwise contour that encloses the pole at \(z=z_{1}\), but with the poles at \(z=z_{0},z_{2}\) in the exterior. An immediate consequence of this definition is the (backwards) recurrence relation \[R_{n-1}^{m}=R_{n}^{m+2}-(z_{1}+z_{2})R_{n}^{m+1}+z_{1}z_{2}R_{n}^{m}\,. \tag{44}\] One may also note the scaling behaviour \[R_{n}^{m}(z_{0},z_{1},z_{2})=z_{0}^{m-2n}R_{n}^{m}(1,z_{1}/z_{0},z_{2}/z_{0})\,, \tag{45}\] or, more generally, \[R_{n}^{m}(z_{0},z_{1},z_{2})=\zeta^{m-2n}R_{n}^{m}(z_{0}/\zeta,z_{1}/\zeta,z_{2}/\zeta)\,, \tag{46}\] for any \(\zeta>0\). In the present application to the priority queue, the parameters \(z_{0},z_{1},z_{2}\) are given by (39). In terms of the R-integrals, the joint PMF is given by \[f(n,m)=\frac{(1-r)(-r_{2})^{n}}{r}\left(R_{n+1}^{m+n+3}-R_{n+1}^{m+n+2}-z_{1}z_{2}R_{n+1}^{m+n+1}+z_{1}z_{2}R_{n+1}^{m+n}\right)\,. \tag{47}\] If we introduce the difference functions \(\Delta R_{n}^{m}\equiv R_{n}^{m+1}-R_{n}^{m}\), then we can write \[f(n,m)=\frac{(1-r)(-r_{2})^{n}}{r}\left(\Delta R_{n+1}^{m+n+2}-z_{1}z_{2}\Delta R_{n+1}^{m+n}\right)\,. \tag{48}\] Likewise, in terms of the R-integrals, we have for the low-priority marginal PMF, \[f_{\rm lo}(n)=-\frac{(1-r)(-r_{2})^{n}}{r}\left(R_{n+1}^{n+2}-z_{1}z_{2}R_{n+1}^{n}\right)\,, \tag{49}\] for \(n=0,1,2,\ldots\). For the exclusively-low distribution, defined by \(f_{\rm{xlo}}(n)\equiv f(n,0)\), we can write \[f_{\rm{xlo}}(n)=\frac{(1-r)(-r_{2})^{n}}{r}\left(\Delta R_{n+1}^{n+2}-z_{1}z_{2}\Delta R_{n+1}^{n}\right)\,. \tag{50}\] It gives the probability of finding \(n\) low-priority clients in the queue and no high-priority clients. It has a form that is similar to the low-priority marginal \(f_{\rm{lo}}(n)\), and we will show later that the two are, in fact, closely related. This relationship will provide a useful diagnostic test of the numerical performance of the R-integral computation. We have succeeded in recasting the problem into one that involves complex contour integration over a collection of totally meromorphic functions. In Figure 1, we plot the \(z\)-contour \(\mathcal{C}^{\prime}\) that results from taking the \(p\)-contour \(\mathcal{C}\) to be the unit circle centred on the origin, plotted for the case of total traffic intensity \(r=0.95\) and fraction of high-priority arrivals \(\nu=0.75\). Also displayed are the locations of the R-integral poles \(z_{0},z_{1},z_{2}\).
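The R-integrals themselves can likewise be evaluated by direct quadrature, which provides a convenient cross-check of (49) against the quadratic recurrence of Section 3. The following Matlab sketch is ours (contour radius, point count and parameter values are illustrative assumptions):

```matlab
% Sketch: R_{n}^{m} by trapezoidal quadrature about z1, and f_lo via (49).
r = 0.9; nu = 0.75; r2 = (1-nu)*r;
s  = sqrt((1+r)^2 - 4*nu*r);
z0 = nu; z1 = ((1+r) - s)/2; z2 = ((1+r) + s)/2;         % eq. (39)
rho = 0.5*min(abs(z1 - z0), abs(z1 - z2));
K = 4096; z = z1 + rho*exp(2i*pi*(0:K-1)/K);
Rint = @(n,m) real(mean( (z - z1) .* z.^m ./ ...
                  ((z - z0).*((z - z1).*(z - z2)).^n) ));
n = 5;
flo_n = -(1-r)*(-r2)^n/r * (Rint(n+1, n+2) - z1*z2*Rint(n+1, n))
% compare with flo_quadrec(r, nu, n) from Section 3
```

For moderate \(n\) this agrees with the recurrence values to near machine precision; for large \(n\) the high-order pole makes direct quadrature increasingly delicate, and the stable evaluation schemes developed in the remainder of this section become preferable.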
### Recurrence

If we cast the recurrence relation (44) as \[R_{n}^{m}=R_{n-1}^{m-2}-z_{1}z_{2}R_{n}^{m-2}+(z_{1}+z_{2})R_{n}^{m-1}\,, \tag{51}\] for \(m=2,3,\ldots\), \(n=1,2,\ldots\), then it may, in principle, be solved recursively for the \(R_{n}^{m}\) starting from the seed values \[\begin{array}{ll}R_{0}^{m}&=0\,,\\ R_{n+1}^{0}&=\frac{(-1)^{n}}{\left[(z_{1}-z_{0})(z_{1}-z_{2})\right]^{n+1}}\cdot p_{n}\left(\frac{z_{1}-z_{0}}{z_{1}-z_{2}}\right)\,,\\ R_{n+1}^{1}&=\frac{(-1)^{n}}{(z_{1}-z_{2})^{2n+1}}\binom{2n}{n}+z_{0}R_{n+1}^{0}\,,\end{array} \tag{52}\] with the polynomials \(p_{n}(x)\) defined by \[p_{n}(x)\equiv\sum_{k=0}^{n}\binom{k+n}{k}x^{k}\,. \tag{53}\] Unfortunately, this recursion scheme is numerically unstable, especially for small \(\nu\).

### Series Representation

Applying Cauchy's theorem to (43), followed by an invocation of Leibniz's formula, we obtain \[\begin{array}{ll}R_{n+1}^{m}&=\frac{1}{n!}\cdot\frac{d^{n}}{dz_{1}^{n}}\left[\frac{1}{(z_{1}-z_{2})^{n+1}}\cdot\frac{z_{1}^{m}}{z_{1}-z_{0}}\right]\\ &=\sum_{k=0}^{n}\frac{1}{(n-k)!}\frac{d^{n-k}}{dz_{1}^{n-k}}\left[\frac{1}{(z_{1}-z_{2})^{n+1}}\right]\cdot\frac{1}{k!}\frac{d^{k}}{dz_{1}^{k}}\left[\frac{z_{1}^{m}}{z_{1}-z_{0}}\right]\,.\end{array} \tag{54}\] The first differentiation is trivial to perform, yielding \[R_{n+1}^{m}=(-1)^{n}\sum_{k=0}^{n}\binom{2n-k}{n}\frac{z_{0}^{m-k-1}}{(z_{2}-z_{1})^{2n+1-k}}S_{k}^{m}(z_{1}/z_{0})\,, \tag{55}\] where \[S_{k}^{m}(x)\equiv\frac{1}{k!}\frac{d^{k}}{dx^{k}}\left(\frac{x^{m}}{1-x}\right)\,. \tag{56}\] The functions \(S_{k}^{m}(x)\) satisfy the relationship \[S_{k}^{m+1}(x)-S_{k}^{m}(x)=-\binom{m}{k}x^{m-k}\,. \tag{57}\] It is convenient to introduce polynomials \[P_{k}^{m}(x)\equiv(1-x)^{k+1}S_{k}^{m}(x)\,, \tag{58}\] so that \(P_{k}^{0}(x)=1\), \(P_{0}^{m}(x)=x^{m}\). Then we can write \[R_{n+1}^{m}=\frac{(-1)^{n}}{(z_{2}-z_{1})^{2n+2}}\sum_{k=0}^{n}\binom{2n-k}{n}\left(\frac{z_{2}-z_{1}}{1-z_{1}/z_{0}}\right)^{k+1}z_{0}^{m-k-1}P_{k}^{m}(z_{1}/z_{0})\,. \tag{59}\] Combining (56) and (58), we can establish that, for \(m>k\), \[\begin{split} P_{k}^{m}(x)&=(1-x)^{k+1}\sum_{\ell=0}^{k}\frac{1}{\ell!}\frac{d^{\ell}x^{m}}{dx^{\ell}}\cdot\frac{1}{(k-\ell)!}\frac{d^{k-\ell}}{dx^{k-\ell}}\left(\frac{1}{1-x}\right)\\ &=\sum_{\ell=0}^{k}D_{\ell}^{m}(x)\,,\end{split} \tag{60}\] where \[D_{\ell}^{m}(x)\equiv\binom{m}{\ell}x^{m-\ell}(1-x)^{\ell}\,. \tag{61}\] Equation (60) represents a cumulative sum, each term of which can be computed recursively. For example, when \(x\) is bounded away from zero, \[D_{\ell}^{m}(x)=\left(\frac{m+1}{\ell}-1\right)\cdot\left(\frac{1}{x}-1\right)D_{\ell-1}^{m}(x)\,, \tag{62}\] for \(\ell=1,2,\ldots\), with \(D_{0}^{m}(x)=x^{m}\). A similar recursion holds for small \(x\), computed backwards from \(D_{m}^{m}(x)=(1-x)^{m}\). An explicit representation of the polynomials \(P_{k}^{m}(x)\) is given by \[P_{k}^{m}(x)=1-(1-x)^{k+1}\sum_{\ell=0}^{m-k-1}\binom{k+\ell}{\ell}x^{\ell}\,. \tag{63}\] It may be observed that \(P_{k}^{m}(x)=1\) whenever \(m\leq k\), and that \(P_{k}^{m}(x)\geq 0\) for all \(0\leq x\leq 1\). These polynomials also satisfy the recurrence relation \[P_{k}^{m+1}(x)=P_{k}^{m}(x)+\frac{m}{k}(1-x)\left[P_{k-1}^{m}(x)-P_{k-1}^{m-1}(x)\right] \tag{64}\] for \(k,m=1,2,\ldots\), subject to \[P_{0}^{m}(x)=x^{m}\,,\quad P_{k}^{0}(x)=1\,,\quad P_{k}^{1}(x)=1-(1-x)\delta_{k0}\,.
\tag{65}\]

### Evaluation

In order to achieve good numerical behaviour as \(\nu\to 1\), it is convenient to work with the scaled integrals \(\hat{R}_{n+1}^{m}\equiv(-r_{\rm lo})^{n}R_{n+1}^{m}\), for which we have the well-behaved series representation \[\begin{split}\hat{R}_{n+1}^{m}=&\frac{1}{(z_{2}-z_{1})(1-z_{1}/z_{0})}\cdot\left(\frac{r_{\rm lo}}{(z_{2}-z_{1})^{2}}\right)^{n}\\ &\times\sum_{k=0}^{n}\binom{2n-k}{n}\left(\frac{z_{2}-z_{1}}{1-z_{1}/z_{0}}\right)^{k}z_{0}^{m-k-1}P_{k}^{m}(z_{1}/z_{0})\,.\end{split} \tag{66}\] Thus, we consider the computation of the vectors \[\hat{\mathbf{R}}^{(m)}\equiv[\hat{R}_{1}^{m},\hat{R}_{2}^{m},\ldots,\hat{R}_{N+1}^{m}]^{\top}\,. \tag{67}\] To assist with this, we define the constant \[\kappa\equiv\frac{1}{(z_{2}-z_{1})(1-z_{1}/z_{0})}\,, \tag{68}\] the diagonal matrices \[\begin{array}{l}A\equiv\operatorname{diag}[a^{0},a^{1},\ldots,a^{N}]\,,\quad a\equiv r_{\text{lo}}/(z_{2}-z_{1})^{2}\,,\\ B\equiv\operatorname{diag}[b^{0},b^{1},\ldots,b^{N}]\,,\quad b\equiv(z_{2}-z_{1})/(1-z_{1}/z_{0})\,,\end{array} \tag{69}\] and the combinatorial matrix \[C_{nk}\equiv\binom{2n-k}{n}\,, \tag{70}\] defined for \(k\leq n\), and zero otherwise. We also introduce the polynomial vectors \[\mathbf{P}^{(m)}\equiv[P_{0}^{(m)},P_{1}^{(m)},\ldots,P_{N}^{(m)}]^{\top}\,,\quad P_{k}^{(m)}\equiv z_{0}^{m-k-1}P_{k}^{m}(z_{1}/z_{0})\,. \tag{71}\] Then, we can write (66) as \[\begin{split}\hat{\mathbf{R}}^{(m)}=&\,\kappa\cdot ACB\mathbf{P}^{(m)}\\ =&\,\kappa\cdot(ACA^{-1})\cdot AB\cdot\mathbf{P}^{(m)}\,.\end{split} \tag{72}\] At this point, we note that the product \(AB\) is the diagonal matrix of increasing powers \[AB=\operatorname{diag}[\gamma^{0},\gamma^{1},\ldots,\gamma^{N}]\,,\quad\gamma\equiv r_{\text{lo}}/[(z_{2}-z_{1})(1-z_{1}/z_{0})]\,, \tag{73}\] and that \((ACA^{-1})_{nk}=a^{n-k}C_{nk}\), which is easily computed by observing the cumulative product form \[a^{\ell}\binom{n+\ell}{n}=\prod_{j=1}^{\ell}\left[\left(1+n/j\right)a\right]\,. \tag{74}\] If we combine the column vectors \(\hat{\mathbf{R}}^{(m)}\) and \(\mathbf{P}^{(m)}\) into respective matrices, so that \[\begin{array}{l}\hat{\mathbf{R}}\equiv[\hat{\mathbf{R}}^{(0)},\hat{\mathbf{R}}^{(1)},\ldots,\hat{\mathbf{R}}^{(M)}]\,,\\ \mathbf{P}\equiv[\mathbf{P}^{(0)},\mathbf{P}^{(1)},\ldots,\mathbf{P}^{(M)}]\,,\end{array} \tag{75}\] then we obtain the matrix equation \[\hat{\mathbf{R}}=\kappa\cdot(ACA^{-1})\cdot AB\cdot\mathbf{P}\,. \tag{76}\] In Figure 2, we plot the queue-length PMF for the low-priority arrivals, as the negative base-10 logarithm, for total traffic intensity \(r=0.99\) and a range of values of the high-priority arrival fraction (hifrac) \(\nu\). Overlaid are the asymptotic curves in the large queue-length limit. The asymptotic form is given by \[f_{\text{lo}}(n)\underset{n\to\infty}{\sim}\sqrt{\frac{1-r}{\pi r}}\cdot\frac{r^{n}}{\sqrt{n}}\,, \tag{77}\] when \(r_{\text{hi}}=r^{2}\) (or equivalently \(\nu=r\)). Otherwise, the low-priority marginal PMF can be decomposed into two components according to \[f_{\text{lo}}(n)=f_{\text{pol}}(n)\cdot\Theta(r^{2}-r_{\text{hi}})+f_{\text{cut}}(n)\,, \tag{78}\] where \(\Theta(x)\) denotes the Heaviside function such that \(\Theta(x)=1\) for \(x\geq 0\) and vanishes otherwise.
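Before turning to the large-\(n\) behaviour of these components, we note that the assembly (76) admits a compact vectorized implementation. The following Matlab sketch is our own illustration (function name, loop structure and truncation conventions are assumptions, not the authors' code):

```matlab
function Rhat = rhat_matrix(r, nu, N, M)
% Sketch of the assembly (76): Rhat(n+1,m+1) = \hat{R}_{n+1}^m,
% for n = 0..N, m = 0..M. Illustrative implementation only.
    rlo = (1 - nu)*r;
    s   = sqrt((1 + r)^2 - 4*nu*r);
    z0  = nu;  z1 = ((1 + r) - s)/2;  z2 = ((1 + r) + s)/2;   % eq. (39)
    x   = z1/z0;
    kappa = 1/((z2 - z1)*(1 - x));                            % eq. (68)
    a   = rlo/(z2 - z1)^2;                                    % eq. (69)
    gam = rlo/((z2 - z1)*(1 - x));                            % eq. (73)
    % P_k^m(x) via the recurrence (64) with base cases (65)
    P = ones(N + 1, M + 1);                 % P(k+1,m+1) = P_k^m(x)
    P(1, :) = x.^(0:M);                     % P_0^m = x^m
    for m = 1:M-1
        for k = 1:N
            P(k+1, m+2) = P(k+1, m+1) + (m/k)*(1 - x)*(P(k, m+1) - P(k, m));
        end
    end
    % rows of A*C*A^{-1}: a^(n-k)*binom(2n-k,n) via the product form (74)
    T = zeros(N + 1, N + 1);
    for n = 0:N
        row = ones(1, n + 1);               % ell = 0 term (k = n)
        for ell = 1:n
            row(ell + 1) = row(ell)*(1 + n/ell)*a;
        end
        T(n + 1, 1:n+1) = fliplr(row);      % column index k = n - ell
    end
    % AB*P^{(m)}: scale row k by gam^k and include the z0^(m-k-1) factor
    Pk = (gam.^(0:N)).' .* P .* z0.^((0:M) - (0:N).' - 1);
    Rhat = kappa * (T * Pk);
end
```

In this notation, since \(r_{2}=r_{\rm lo}\), the marginal (49) reads off directly as \(f_{\rm lo}(n)=-\frac{1-r}{r}\left(\hat{R}_{n+1}^{n+2}-z_{1}z_{2}\hat{R}_{n+1}^{n}\right)\), i.e. from two entries of the assembled matrix.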
The large-\(n\) behaviour of these components is given by \[\begin{array}{l}f_{\text{pol}}(n)\underset{n\to\infty}{\sim}\left[1-\frac{r (1-r)}{r_{\text{lo}}}\right](1-r)r^{n-1}\,,\\ f_{\text{cut}}(n)\underset{n\to\infty}{\sim}\frac{(\sqrt{r_{\text{hi}}/r_{ \text{lo}}})^{1/2}}{2\sqrt{\pi}r}\cdot\frac{1-r}{(\chi-1/r)\chi^{n-1/2}n^{3/2 }}\,,\end{array} \tag{79}\] where \(\chi\equiv 1+(1-\sqrt{r_{\rm hi}})^{2}/r_{\rm lo}>1/r\). The derivation of these results, which will be presented in a forthcoming paper, follows directly from the pole/cut integral representation of the distribution, mentioned in Section 4. The computed points, represented by the coloured dots, are interpolated by black curves. The asymptotic curves are indicated by a coloured dashed line-style. Thus, when the interpolation between the data points becomes coloured, this indicates that the agreement between the computation and asymptotic limit is within the linewidth of the graph. In Figure 3, we plot the queue-length PMF for the low-priority arrivals, as the negative base-10 logarithm, for the case of total traffic intensity \(r=0.99\) and fraction of high-priority arrivals \(\nu=0.95\), where asymptotic behaviour is slow to set in. We see that the computation remains robust up to a queue length of at least \(n=1000\) which lies deep in the asymptotic region. In Figure 4, we plot a two-dimensional map of the joint probability distribution \(f(n,m)\) of the queue lengths, for total traffic intensity \(r=0.75\) and fraction of high-priority arrivals \(\nu=0.9\). A logarithmic scaling has been applied, such that \(f(n,m)\leftarrow\max\{0,1+\log_{10}(f(n,m)/f_{\rm max})/20\}\), where \(f_{\rm max}\equiv\max\{f(n,m)\}\). ### Limiting Cases The \(\nu\to 0\) limiting behaviour of the R-integrals is given by \[R_{n}^{m}\underset{\nu\to 0^{+}}{\sim}\left\{\begin{array}{cl}0&\mbox{ for }m>n\\ (-1)^{n-1}\left[1-r^{n-1}/(1+r)^{2n-1}\right]&\mbox{ for }m=n\\ (-1)^{n-1}/\nu^{n-m}&\mbox{ for }m<n\end{array}\right.. \tag{80}\] At the opposite extreme, for \(\nu=1\), we have \(z_{0}=1\), \(z_{1}=r\), \(z_{2}=1\), in which case \[\frac{z_{2}-z_{1}}{1-z_{1}/z_{0}}=1\,. \tag{81}\] It follows that \[R_{n+1}^{m}\underset{\nu\to 1^{-}}{\sim}\frac{(-1)^{n}}{(1-r)^{2n+2}}\sum_{k=0 }^{n}\binom{2n-k}{n}P_{k}^{m}(r)\,. \tag{82}\] Equation (80) shows that the R-integrals become singular for small \(\nu\) when \(m<n\). This is one reason for the numerical instability of the recurrence relations (51), given that the seed values always reside in this region. ## 6 Numerical Tests Various tests can be applied to quantify the numerical performance of the algorithm for the computation of the joint PMF. ### Aggregation Test The aggregated queue-length distribution describes the total number of entities in the queue, regardless of priority level. This is equivalent to the queue-length distribution of the basic M/M/\(c\) queueing model with traffic intensity \(r=r_{\rm lo}+r_{\rm hi}\), which is known to be a simple geometric distribution. Hence, the exact aggregate PMF is given by \[f_{\rm agg}^{(\rm ex)}(k)=(1-r)r^{k}\,, \tag{83}\] for \(k=0,1,2,\ldots\) One diagnostic test of the R-integral computational methodology is to check how well the aggregate PMF constructed from the computed joint PMF reproduces the exact result. This test is more convenient than similarly testing against the marginals as only a finite summation is required. 
Considering the joint PMF as a matrix whose rows and columns are labelled by its integer arguments, values of the aggregate PMF are given by successive finite sums along the anti-diagonals. Specifically, in terms of the R-integrals, the aggregate PMF is expressed as \[\begin{split} f_{\rm agg}(k)&=\sum_{n=0}^{k}f(n,k-n)\\ &=\frac{1-r}{r}\sum_{n=0}^{k}(-r_{\rm lo})^{n}(\Delta R_{n+1}^{k+2}-r_{\rm hi}\Delta R_{n+1}^{k})\,,\end{split} \tag{84}\] for \(k=0,1,2,\ldots\).

Figure 2: Queue-length PMF for the low-priority arrivals, plotted as the negative base-10 logarithm, for total traffic intensity \(r=0.9\) and a range of hifrac values (\(\nu\)). Asymptotic curves for the large queue-length limit are overlaid.

Figure 3: Queue-length PMF for the low-priority arrivals, plotted as the negative base-10 logarithm, for total traffic intensity \(r=0.99\) and fraction of high-priority arrivals \(\nu=0.9\), with queue lengths extending far into the asymptotic region. It is compared with the exact asymptotic curve in the large queue-length limit.

We then consider the measure of performance (MOP) \[\Xi_{\rm agg}\equiv-\max_{k\geq 0}\left\{\log_{10}\left|\ln f_{\rm agg}(k)-\ln f_{\rm agg}^{(\rm ex)}(k)\right|\right\}\,, \tag{85}\] where the maximum is taken over all values \(0\leq k\leq n_{\rm lim}\) such that \(f_{\rm agg}^{(\rm ex)}(k)>p_{\rm lim}>0\). Since we are working in double-precision arithmetic, all MOPs of this kind are capped at a maximum allowed value of 16. The interpretation of \(\Xi_{\rm agg}\) (and similarly for all of the subsequent MOPs) is that it indicates the number of decimal places of numerical agreement in the worst case.

Footnote 4: All computation is performed in Matlab R2020a, which implements IEEE Standard 754 for double precision.

### Xhi-Test

The exclusively-high distribution, defined by \(f_{\rm xhi}(m)\equiv f(0,m)\), gives the probability of finding \(m\) high-priority clients in the queue and no low-priority clients. An exact expression for the exclusively-high probability is given by \[f_{\rm xhi}^{(\rm ex)}(m)=(1-r)(r_{\rm hi}/z_{2})^{m}\,, \tag{86}\] for \(m=0,1,2,\ldots\) It is simple to calculate directly as the R-integral has only a simple pole when \(n=0\). One should note that \(f_{\rm xhi}(m)\) is not a proper PMF since \(\sum_{m=0}^{\infty}f_{\rm xhi}(m)<1\), unless \(\nu=1\), but can be turned into a conditional PMF by means of an overall scale factor. In terms of the R-integrals, the exclusively-high PMF is expressed as \[f_{\rm xhi}(m)=\frac{1-r}{r}\left(\Delta R_{1}^{m+2}-z_{1}z_{2}\Delta R_{1}^{m}\right)\,, \tag{87}\] and we consider the MOP \[\Xi_{\rm xhi}\equiv-\max_{m\geq 0}\left\{\log_{10}\left(|\ln(f_{\rm xhi}(m))-\ln(f_{\rm xhi}^{\rm(ex)}(m))|\right)\right\}\,, \tag{88}\] where the maximum is taken over all values \(0\leq m\leq n_{\rm lim}\) such that \(f_{\rm xhi}^{\rm(ex)}(m)>p_{\rm lim}>0\).

Figure 4: 2D map of the joint probability distribution of the queue lengths, with a logarithmic scaling applied, for total traffic intensity \(r=0.75\) and fraction of high-priority arrivals \(\nu=0.9\).

### Xlo-Test

Checking whether the computed joint PMF gives rise to the correct marginal distribution, numerically, is not a convenient enterprise as it necessitates an infinite summation. However, it is possible to devise an alternative test that checks the consistency of the numerical low-priority marginal with the numerically computed joint PMF.
In the xlo-test, we relate the exclusively-low distribution \(f_{\rm xlo}(n)\) with the low-priority marginal \(f_{\rm lo}(n)\). To achieve this, we consider the PGF (8) recast into the form \[g_{m}(p)=(1-r)\frac{1-\lambda_{1}(p)}{\lambda_{2}(p)-r}\cdot\lambda_{1}^{m}(p)\,. \tag{89}\] Specialized to the case \(m=0\), this may be expressed as \[g_{0}(p)=(1-r)\left[1+r_{\rm lo}p\cdot\frac{1}{\lambda_{2}(p)-r}\right]\,. \tag{90}\] Since the PGF of the low-priority marginal is given by \[g_{\rm lo}(p)=\sum_{m=0}^{\infty}g_{m}(p)=\frac{1-r}{\lambda_{2}(p)-r}\,, \tag{91}\] we arrive at the result \[g_{0}(p)=1-r+r_{\rm lo}p\cdot g_{\rm lo}(p)\,. \tag{92}\] There is a generalization of this result to non-zero values of \(m\) that relates \(g_{m}(p)\) to \(g_{\rm lo}(p)\). Its derivation is presented in the Appendix. From the relationships \[g_{0}(p)=\sum_{n=0}^{\infty}p^{n}f_{\rm xlo}(n)\,,\quad g_{\rm lo}(p)=\sum_{n=0}^{\infty}p^{n}f_{\rm lo}(n)\,, \tag{93}\] we can equate powers to read off that \[f_{\rm xlo}(0)=1-r\,, \tag{94}\] \[f_{\rm xlo}(n)=r_{\rm lo}\,f_{\rm lo}(n-1)\,, \tag{95}\] for \(n=1,2,\ldots\). The xlo-test compares the left-hand side of (95), computed from the joint PMF, with the right-hand side, computed from the marginal, via the MOP \[\Xi_{\rm xlo}\equiv-\max_{n>0}\left\{\log_{10}\left|\ln f_{\rm xlo}(n)-\ln\bigl(r_{\rm lo}f_{\rm lo}(n-1)\bigr)\right|\right\}\,, \tag{96}\] where the maximum is taken over all values \(0<n\leq n_{\rm lim}\) such that \(f_{\rm xlo}(n)>p_{\rm lim}>0\).

### Nearest-Neighbour Test

The nearest-neighbour test provides a further internal-consistency check on the computed joint PMF, comparing its values at adjacent grid points \((n,m)\). Its MOP, \(\Xi_{\rm nn}\), is defined in the same manner as the preceding MOPs, where the maximum is taken over all values \(0<m,n\leq n_{\rm lim}\) such that \(f(n,m)>p_{\rm lim}>0\).
### Quadratic Test

In this test, we compare the results for the joint queue-length PMF computed from the R-integral (denoted \(f_{\rm ri}(n,m)\)) with that computed by the quadratic recurrence (denoted \(f_{\rm qr}(n,m)\)). The MOP is taken to be the number of decimal places of agreement, as given by \[\Xi_{\rm qr}\equiv-\max_{m,n>0}\left\{\log_{10}\left(|\ln(f_{\rm ri}(n,m))-\ln(f_{\rm qr}(n,m))|\right)\right\}\,, \tag{100}\] where the maximum is taken over all values \(0<m,n\leq n_{\rm lim}\) such that \(f(n,m)>p_{\rm lim}>0\).

### Results

Figure 5 presents the results of the numerical tests. The MOP values relevant to the R-integral computations are displayed on the vertical axis against the full range of high-priority arrival fraction (hifrac) \(\nu\) on the horizontal axis. Individual curves are plotted for a discrete collection of traffic intensities, spanning a wide range. Agreement always exceeds eight decimal places, and is generally much higher. The nearest-neighbour and xlo-tests check the internal consistency of the computations, while the aggregation and xhi-tests check against exact analytical results. The maximum queue occupancy to be examined was taken to be \(n_{\rm lim}=1000\). PMF intervals examined included everything down to a tail value of \(p_{\rm lim}=10^{-20}\) except in the xhi-test where \(p_{\rm lim}=10^{-30}\) was used. Figure 6 presents the results of comparing the joint queue-length distribution computed from the R-integral with that computed by the quadratic recurrence. The close agreement observed implies a high level of accuracy for each method across the complete range of parameters. Worst case accuracy occurs when both the traffic intensity \(r\) and hifrac \(\nu\) approach unity. In Table 1, we present results that investigate this region in more detail. Values of \(r\) close to unity have been reported to be problematic for the matrix-geometric approach [17, 18]. The table shows that both of the present methods behave well in this region. The fourth and fifth columns indicate the smallest rectangular subset \([0,n_{\rm lo}]\times[0,n_{\rm hi}]\) of \([0,n_{\rm lim}]\times[0,n_{\rm lim}]\) that contains all grid points \((m,n)\) with probability greater than \(p_{\rm lim}=10^{-20}\). A value of \(n_{\rm lim}=1000\) in one or both columns indicates that \(p_{\rm lim}\) was not attained in some direction. The last column is the minimum probability that was achieved over all considered grid points whose probability values exceed \(p_{\rm lim}\). Computation time for the quadratic recurrence method is two orders of magnitude faster than for the R-integral method. Finally, Figure 7 repeats the quadratic test as described above, but for the low-priority marginal PMFs, with the distribution arising from the quadratic recurrence computed by the algorithm of (23). The legend indicates the range of maximum queue lengths \(n\) that had to be considered across the full range of hifrac values \(\nu\) in order to attain the limiting probability level \(p_{\rm lim}=10^{-20}\) for the given traffic intensity \(r\). Agreement between the R-integral and quadratic recurrence approaches is observed to exceed ten decimal places in the worst case. The exact results for the queue-length distributions derived here were also tested against Monte-Carlo simulation. Excellent agreement was found across the entire parametric domain. Details will be presented elsewhere.
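For concreteness, the decimal-places-of-agreement MOPs used throughout this section, e.g. (85), (88) and (100), all share the same shape. A one-line Matlab sketch of the family (our helper, not the authors' code; it assumes `fa` and `fb` are positive vectors of the two distributions on a common support, with tail threshold `plim`):

```matlab
% Generic MOP with the double-precision cap at 16 discussed above.
mop = @(fa, fb, plim) min(16, ...
      -log10(max(abs(log(fa(fb > plim)) - log(fb(fb > plim))))));
```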
We have also checked against the results in Table 3 of [8] (where the service times are equal) to find complete agreement. There, in the case of the present problem, the quantity \(P_{\rm Q}\) is related to the no-wait probability \(P_{\rm NW}\) given in (3) by \(P_{\rm Q}=1-P_{\rm NW}\), and \(p(0,0)\) is the probability that the system is empty, given by \[\frac{1}{p(0,0)}=\frac{(rN)^{N}}{N!}\left[\frac{1}{1-r}+\Gamma_{\rm scl}(rN,N)\right]\,, \tag{101}\] where \[\Gamma_{\rm scl}(x,\nu)\equiv\frac{\nu e^{x}}{x^{\nu}}\int_{x}^{\infty}dt\ t^{\nu-1}e^{-t} \tag{102}\] is the scaled upper incomplete gamma function as implemented in Matlab. In the present problem, neither of these quantities depends on the priority structure. We relate the mean waiting times given in the table to the mean queue lengths via Little's law.

## 7 Conclusions

Simple methods for highly accurate computation of the joint and marginal distributions for a non-preemptive two-level priority queue have been developed. Explicit closed-form representations for the joint and marginal PMFs have also been derived, something that has not been achieved previously. Future work could entail extension of the present methods to unequal service rates among the priority levels.

## Appendix. Convolutional Form

In this appendix, we derive a relationship between the joint PMF \(f(n,m)\) and the low-priority marginal \(f_{\rm lo}(n)\). One may observe the general structure \[g_{m}(p)=A^{(m)}(p)+B^{(m)}(p)\cdot g_{\rm lo}(p)\:, \tag{103}\] for polynomials \[A^{(m)}(p)=\sum_{n=0}^{m}A_{n}^{(m)}p^{n}\:,\quad B^{(m)}(p)=\sum_{n=0}^{m}B_{n}^{(m)}p^{n}\:. \tag{104}\] Thus, we obtain the convolutional form for the joint PMF: \[f(n,m)=A_{n}^{(m)}+\sum_{k=0}^{n}B_{k}^{(m)}\cdot f_{\rm lo}(n-k)\:, \tag{105}\] which generalizes the relationship between \(f_{\rm{xlo}}\) and \(f_{\rm{lo}}\) given in (95) to non-zero values of \(m\). One may also note the special case \[f_{\rm xhi}(m)\equiv f(0,m)=A_{0}^{(m)}+B_{0}^{(m)}f_{\rm lo}(0)\:. \tag{106}\] In what follows, we derive explicit expressions for \(A^{(m)}(p)\) and \(B^{(m)}(p)\). Since \(\lambda_{1}(p)\) satisfies a quadratic equation, we have that \(\lambda_{1}^{m}(p)=\alpha_{m}(p)\lambda_{1}(p)+\beta_{m}(p)\) for some polynomials \(\alpha_{m}(p)\), \(\beta_{m}(p)\). The fact that \(\alpha_{m},\beta_{m}\) are polynomials follows from examining \(\lambda_{1}^{2}\).
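Indeed, the quadratic (9) gives at once \[\lambda_{1}^{2}(p)=(1+r-r_{2}p)\,\lambda_{1}(p)-r_{1}\,,\] and multiplying \(\lambda_{1}^{m}=\alpha_{m}\lambda_{1}+\beta_{m}\) by \(\lambda_{1}\) then yields the polynomial recursion \[\alpha_{m+1}(p)=(1+r-r_{2}p)\,\alpha_{m}(p)+\beta_{m}(p)\,,\qquad\beta_{m+1}(p)=-r_{1}\,\alpha_{m}(p)\,,\] from which the polynomial property propagates inductively in \(m\).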
\begin{table} \begin{tabular}{|c|c||c|c|c|c|} \hline \(r\) & \(\nu\) & \(\Xi_{\rm{qr}}\) & \(n_{\rm{hi}}\) & \(n_{\rm{lo}}\) & \(p_{\rm{min}}\) \\ \hline \hline 0.99 & 0.95 & 9.3279 & 609 & 1000 & \(1.0000\times 10^{-20}\) \\ & 0.99 & 8.1611 & 1000 & 1000 & \(1.0000\times 10^{-20}\) \\ & 0.999 & 6.6633 & 1000 & 1000 & \(1.0000\times 10^{-20}\) \\ & 1.00 & 11.7428 & 1000 & 0 & \(4.3171\times 10^{-7}\) \\ \hline 0.999 & 0.95 & 9.4247 & 685 & 1000 & \(1.0000\times 10^{-20}\) \\ & 0.99 & 8.4169 & 1000 & 1000 & \(1.0017\times 10^{-20}\) \\ & 0.999 & 7.2251 & 1000 & 1000 & \(6.6926\times 10^{-18}\) \\ & 1.00 & 9.6972 & 1000 & 0 & \(3.6770\times 10^{-4}\) \\ \hline 0.9999 & 0.95 & 9.4344 & 657 & 1000 & \(1.0000\times 10^{-20}\) \\ & 0.99 & 8.4361 & 1000 & 1000 & \(1.0000\times 10^{-20}\) \\ & 0.999 & 7.2455 & 1000 & 1000 & \(1.0540\times 10^{-18}\) \\ & 1.00 & 7.8504 & 1000 & 0 & \(9.0483\times 10^{-5}\) \\ \hline \end{tabular} \end{table}
Table 1: Joint PMF Comparison

Let us now recall that the Chebyshev polynomials of the first and second kind, \(T_{n}(x)\) and \(U_{n}(x)\) respectively, may be expressed as \[\begin{array}{rcl}T_{n}(x)&=&\frac{1}{2}\left[(x+\sqrt{x^{2}-1})^{n}+(x-\sqrt{x^{2}-1})^{n}\right]\:,\\ \sqrt{x^{2}-1}U_{n-1}(x)&=&\frac{1}{2}\left[(x+\sqrt{x^{2}-1})^{n}-(x-\sqrt{x^{2}-1})^{n}\right]\:,\end{array} \tag{107}\] which implies the identity \[(x-\sqrt{x^{2}-1})^{n}=T_{n}(x)-xU_{n-1}(x)+(x-\sqrt{x^{2}-1})\cdot U_{n-1}(x)\,. \tag{108}\] Since we have \[\lambda_{1}(p)=b(p)-\sqrt{b^{2}(p)-r_{1}}\,,\quad b(p)\equiv(1+r-r_{2}p)/2\,, \tag{109}\] it follows that \[\frac{\lambda_{1}^{m}(p)}{r_{1}^{m/2}}=T_{m}(x(p))-x(p)\cdot U_{m-1}(x(p))+\frac{\lambda_{1}(p)}{\sqrt{r_{1}}}\cdot U_{m-1}(x(p))\,, \tag{110}\] with \(x(p)\equiv b(p)/\sqrt{r_{1}}\). Consequently, \[\begin{array}{l}\alpha_{m}(p)=r_{1}^{(m-1)/2}U_{m-1}(x(p))\,,\\ \beta_{m}(p)=r_{1}^{m/2}\left[T_{m}(x(p))-x(p)\cdot U_{m-1}(x(p))\right]\,,\end{array} \tag{111}\] for \(m=0,1,\ldots\), with \(U_{-1}(x)\equiv 0\). The first few \(\alpha\)-coefficients are given by \[\alpha_{0}(p)=0\,,\quad\alpha_{1}(p)=1\,,\quad\alpha_{2}(p)=1+r-r_{2}p\,. \tag{112}\] The first few \(\beta\)-coefficients are given by \[\beta_{0}(p)=1\,,\quad\beta_{1}(p)=0\,,\quad\beta_{2}(p)=-r_{1}\,. \tag{113}\] Using identities satisfied by the Chebyshev polynomials, (111) can be simplified as \[\alpha_{m}(p)=r_{1}^{(m-1)/2}U_{m-1}(x(p))\,,\quad\beta_{m}(p)=-r_{1}^{m/2}U_{m-2}(x(p))\,, \tag{114}\] which implies the relationship \(\beta_{m}(p)=-r_{1}\alpha_{m-1}(p)\) for \(m=1,2,\ldots\), and for \(m=0\) as well provided we formally set \(\alpha_{-1}(p)\equiv-1/r_{1}\).
Noting that \[\begin{array}{l}(1-\lambda_{1})\lambda_{1}^{m}=(\alpha_{m}-\alpha_{m+1})\lambda_{1}+(\beta_{m}-\beta_{m+1})\\ =-(\alpha_{m}-\alpha_{m+1})(\lambda_{2}-r)+(1-r_{2}p)(\alpha_{m}-\alpha_{m+1})+(\beta_{m}-\beta_{m+1})\,,\end{array} \tag{115}\] followed by substitution into the representation \[g_{m}(p)=\frac{1-r}{\lambda_{2}-r}\cdot(1-\lambda_{1})\lambda_{1}^{m}\,, \tag{116}\] yields \[g_{m}(p)=-(1-r)(\alpha_{m}-\alpha_{m+1})+g_{\rm lo}(p)\left[(1-r_{2}p)(\alpha_{m}-\alpha_{m+1})+(\beta_{m}-\beta_{m+1})\right]\,, \tag{117}\] from which we can read off \[\begin{array}{l}A^{(m)}(p)=(1-r)\left[\alpha_{m+1}(p)-\alpha_{m}(p)\right]\,,\\ B^{(m)}(p)=(1-r_{2}p)\left[\alpha_{m}(p)-\alpha_{m+1}(p)\right]+\left[\beta_{m}(p)-\beta_{m+1}(p)\right]\,.\end{array} \tag{118}\] In terms of Chebyshev polynomials, this becomes \[\begin{array}{l}A^{(m)}(p)=-(1-r)r_{1}^{(m-1)/2}\left[U_{m-1}(x(p))-\sqrt{r_{1}}U_{m}(x(p))\right]\,,\\ B^{(m)}(p)=-\frac{1-r_{2}p}{1-r}\cdot A^{(m)}(p)+\frac{r_{1}}{1-r}\cdot A^{(m-1)}(p)\,,\end{array} \tag{119}\] for \(m=0,1,\ldots\), where we set \(U_{-1}(x)\equiv 0\), \(U_{-2}(x)\equiv-1\). As a sanity check, it is straightforward to confirm that \(A^{(m)}(1)+B^{(m)}(1)=(1-r_{1})r_{1}^{m}\). Another check is given by \[\begin{array}{l}\sum_{m=0}^{\infty}A^{(m)}(p)=\left\{\begin{array}{cl}(1-r)/(1-r_{1})&\mbox{for }p=1\\ 0&\mbox{for }p\neq 1\end{array}\right.\,,\\ \sum_{m=0}^{\infty}B^{(m)}(p)=\left\{\begin{array}{cl}(r-r_{1})/(1-r_{1})&\mbox{for }p=1\\ 1&\mbox{for }p\neq 1\end{array}\right.\,.\end{array} \tag{120}\]

Acknowledgments. The authors gratefully acknowledge useful discussions with Dr. Stephen Bocquet.
2308.00043
A Lagrangian filling for every cluster seed
We show that each cluster seed in the augmentation variety is inhabited by an embedded exact Lagrangian filling. This resolves the matter of surjectivity of the map from Lagrangian fillings to cluster seeds. The main new technique to produce these Lagrangian fillings is the construction and study of a quiver with potential associated to curve configurations. We prove that its deformation space is trivial and show how to use it to manipulate Lagrangian fillings with $\mathbb{L}$-compressing systems via Lagrangian disk surgeries.
Roger Casals, Honghao Gao
2023-07-31T18:01:05Z
http://arxiv.org/abs/2308.00043v3
# A Lagrangian filling for every cluster seed ###### Abstract. We show that each cluster seed in the augmentation variety is inhabited by an embedded exact Lagrangian filling. This resolves the matter of surjectivity of the map from Lagrangian fillings to cluster seeds. The main new technique to produce these Lagrangian fillings is the construction and study of a quiver with potential associated to curve configurations. We prove that its deformation space is trivial and show how to use it to manipulate Lagrangian fillings with \(\mathbb{L}\)-compressing systems via Lagrangian disk surgeries. 2010 Mathematics Subject Classification: Primary: 53D12. Secondary: 57K33, 13F60 ## 1. Introduction We show that each cluster seed is inhabited by an embedded exact Lagrangian filling. Heretofore, the surjectivity of the map from Lagrangian fillings to cluster seeds remained open for essentially all braids. The argument is based on applying Lagrangian disk surgeries to an initial Lagrangian filling with an \(\mathbb{L}\)-compressing system. We are able to avoid the appearance of immersed curves, a known technical issue of this problem, by introducing and studying a new object: a quiver with potential for each such \(\mathbb{L}\)-compressing system. The manuscript first establishes the key properties of these new quivers with potentials, including their rigidity and invariance. We then show how to use these properties to construct a Lagrangian filling for every cluster seed. **Scientific context**. Let \(\beta\) be a positive braid word. Consider the Legendrian link \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\) obtained as the Legendrian lift of the rainbow closure of \(\beta\), as defined in [10, Section 2.2] or cf. Section 4.1. Here \(\xi_{st}=\ker\{dz-ydx\}\) where \((x,y,z)\in\mathbb{R}^{3}\) are Cartesian coordinates. Consider its augmentation variety \(X(\Lambda_{\beta},T)\), where \(T\subset\Lambda_{\beta}\) is a set of marked points, one per component. See Section 4.7 or [10, Section 5.1] and references therein for more details on such augmentation varieties. This affine algebraic variety \(X(\Lambda_{\beta},T)\) is smooth and isomorphic to the open (double) Bott-Samelson variety with pair of braids \((\beta,\mathrm{e})\), where \(e\) is the identity braid, up to trivial \(\mathbb{C}^{\times}\)-factors. It is known that its ring of regular functions \(\mathbb{C}[X(\Lambda_{\beta},T)]\) is a cluster algebra. See [11, 12] for both these facts or cf. Section 4.7 below. An alternative, intrinsically symplectic construction of such cluster structures is provided in [11] via the microlocal theory of sheaves. A salient property of this cluster algebra structure constructed on \(\mathbb{C}[X(\Lambda_{\beta},T)]\) is that, in all known cases, an oriented embedded exact Lagrangian filling \(L\) of \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\), embedded in the symplectization \((\mathbb{R}^{4},\lambda_{st})\) of \((\mathbb{R}^{3},\xi_{st})\), gives a cluster seed in \(\mathbb{C}[X(\Lambda_{\beta},T)]\). Specifically, the filling \(L\) gives an open toric chart in \(X(\Lambda_{\beta},T)\) and there is a choice of \(\mathbb{L}\)-compressing system \(\Gamma\) for \(L\) that endows this toric chart with cluster \(\mathcal{A}\)-coordinates, thus defining a seed \(\mathfrak{c}(L,\Gamma)\) in \(\mathbb{C}[X(\Lambda_{\beta},T)]\). 
The description of the cluster seed is explicit for Lagrangian fillings \(L\) associated to pinching sequences of \(\beta\), see [10] and [11], and [12] provides a diagrammatic calculus to describe more general cluster seeds. Note that these embedded exact Lagrangians \(L\) for \(\Lambda_{\beta}\) must typically be surfaces of higher genus. In addition, if \(L\) and \(L^{\prime}\) are compactly supported Hamiltonian isotopic, then the toric charts in \(X(\Lambda_{\beta},T)\) associated to these seeds must be equal. Again, see [1, 12] or [11]. Such invariance property has been successfully used to distinguish Lagrangian fillings, cf. [1, 10, 12]. **The Question**. Let \(\mathrm{Lag}(\Lambda_{\beta})\) be the set of Hamiltonian isotopy classes of embedded exact Lagrangian fillings of \(\Lambda_{\beta}\) in the symplectization of \((\mathbb{R}^{3},\xi_{st})\). A Hamiltonian isotopy class is given by the equivalence relation \(L\sim L^{\prime}\) iff there exists a compactly supported Hamiltonian diffeomorphism \(\varphi\in\mathrm{Ham}^{c}(\mathbb{R}^{4},\lambda_{st})\) such that \(\varphi(L)=L^{\prime}\). By [1, Section 3.5], cf. also [10, Theorem 3.6], there exists a map \(\mathfrak{C}^{\circ}:\mathrm{Lag}(\Lambda_{\beta})\longrightarrow\mathrm{ Toric}(X(\Lambda_{\beta},T))\), where \(\mathrm{Toric}(X(\Lambda_{\beta},T))\) is the set of open unparametrized algebraic toric charts \((\mathbb{C}^{\times})^{d}\subset X(\Lambda_{\beta},T)\) in the affine variety \(X(\Lambda_{\beta},T)\), where \(d=\dim_{\mathbb{C}}X(\Lambda_{\beta},T)\). An \(\mathbb{L}\)-compressing system \(\Gamma\) for a Lagrangian filling \(L\in\operatorname{Lag}(\Lambda_{\beta})\) endows the toric chart \(\mathfrak{C}^{\circ}(L)\) with toric coordinates \(A(\Gamma)\), e.g. see [12, Section 4.6]. Let \(\operatorname{Lag}^{c}(\Lambda_{\beta})\) be the set of pairs \((L,\Gamma)\) consisting of a Lagrangian filling in \(L\in\operatorname{Lag}(\Lambda_{\beta})\) and an \(\mathbb{L}\)-compressing system \(\Gamma\) for \(L\), up to Hamiltonian isotopy, such that \(A(\Gamma)\) are cluster coordinates for a cluster seed of the cluster algebra structure in \(\mathbb{C}[X(\Lambda_{\beta},T)]\) above. Finally, let \(\operatorname{Seed}(X(\Lambda_{\beta},T))\) be the set of cluster seeds in \(\mathbb{C}[X(\Lambda_{\beta},T)]\). In summary, at its coarsest level, the constructions cited above, sending a Lagrangian filling with an \(\mathbb{L}\)-compressing system \((L,\Gamma)\) to the cluster seed \(\mathfrak{c}(L,\Gamma):=(\mathfrak{C}^{\circ}(L),A(\Gamma))\), yields a map of sets: \[\mathfrak{C}:\operatorname{Lag}^{c}(\Lambda_{\beta})\longrightarrow \operatorname{Seed}(X(\Lambda_{\beta},T)),\quad(L,\Gamma)\mapsto\mathfrak{C}( L,\Gamma):=\mathfrak{c}(L,\Gamma).\] In our view, the surjectivity and injectivity of this map \(\mathfrak{C}\) is a central open problem in low-dimensional contact and symplectic topology. It lies at the core of the study of Legendrian knots in \((\mathbb{R}^{3},\xi_{st})\) and, more generally, the symplectic topology of Weinstein \(4\)-manifolds. In fact, just understanding the exact cardinality of any fiber of \(\mathfrak{C}\) would be remarkable. Note that [10, Prop. 5.3] implies that the forgetful map \(\iota:\operatorname{Seed}(X(\Lambda_{\beta},T))\longrightarrow\operatorname{ Toric}(X(\Lambda_{\beta},T))\) sending a cluster seed to its underlying toric chart is injective. 
Thus, surjectivity of \(\mathfrak{C}\) would imply that \(\mathfrak{C}^{\circ}\) surjects onto \(\iota(\operatorname{Seed}(X(\Lambda_{\beta},T)))\). The state of affairs is as follows: * Injectivity of \(\mathfrak{C}\) is a generalization of the nearby Lagrangian conjecture for surfaces with boundary to a statement about embedded exact Lagrangians in Weinstein neighborhoods of Lagrangian skeleta. At its core, it states that any embedded exact Lagrangian in a neighborhood of the arboreal Lagrangian skeleton consisting of \(L\) and Lagrangian disks attached to it is either Hamiltonian isotopic to \(L\) or to those Lagrangians obtained from it by Lagrangian surgeries along the disks in the skeleton. By [1], \(\mathfrak{C}\) is injective if \(\Lambda_{\beta}\) is the max-tb Legendrian unknot. Injectivity of \(\mathfrak{C}\) remains open for any other Legendrian link \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\). * Surjectivity of \(\mathfrak{C}\) is a reconstruction statement, from an algebraic invariant back to actual \(4\)-dimensional symplectic topology. Indeed, it inputs algebraic data, provided by the ring of functions \(\mathbb{C}[X(\Lambda_{\beta},T)]\) and a seed for its cluster structure, and should output symplectic topological data, an embedded exact Lagrangian filling of \(\Lambda_{\beta}\) and an \(\mathbb{L}\)-compressing system. By the finite type classification of cluster algebras, established in [11], \(\operatorname{Seed}(X(\Lambda_{\beta},T))\) is known to be a finite set only for a few exceptional cases, i.e. the ADE cases. For those ADE cases, \(\mathfrak{C}\) can be verified to be surjective by direct computation: this has recently been established in [16, ABL21b] by using weaves [12]. See also [1] for the affine ADE cases where \(\operatorname{Seed}(X(\Lambda_{\beta},T))\) is still finite, up to the natural tame quotient. These finite type and affine type cases are the exception rather than the rule: essentially all braids \(\beta\) have \(\operatorname{Seed}(X(\Lambda_{\beta},T))\) be an infinite set, cf. [10, Section 4] or [15, Section 5]. Confer [11, 12, 13] and references therein for partial results on \(\mathfrak{C}\) and [15, Section 5] for further discussions. Surjectivity of \(\mathfrak{C}\) remains open for any other Legendrian link \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\). There has been significant activity in recent times in the study of Lagrangian fillings, see e.g. [1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. These results use techniques from and connecting to the microlocal theory of sheaves, Floer theory and cluster algebras among others. That said, all fall objectively short of showing anything close to the surjectivity of \(\mathfrak{C}\). This manuscript introduces a genuinely new technique: the definition, study and use of a quiver with potential \((Q,W)\) to construct Lagrangian fillings. ### The Main Result The goal of this manuscript is to establish the surjectivity of the map \(\mathfrak{C}\). In order to state the main result, which is a stronger version of surjectivity, we introduce the following concepts. Let \(L\subset(\mathbb{D}^{4},\lambda_{st})\) be an exact oriented Lagrangian filling, \(\gamma\subset L\) an embedded oriented curve and \(\Lambda_{\gamma}\) its Legendrian lift to the ideal contact boundary \(\partial\mathcal{O}p(L)\) of a convex open neighborhood \(\mathcal{O}p(L)\) of \(L\) in \(\mathbb{D}^{4}\).
Note that \(\mathcal{O}p(L)\) is symplectomorphic to a convex neighborhood of the zero section in \((T^{*}L,\lambda_{st})\). By definition, \(\gamma\) is said to be \(\mathbb{L}\)-compressible if there exists a properly embedded Lagrangian \(2\)-disk \(D\subset(T^{*}\mathbb{R}^{2}\setminus\mathcal{O}p(L))\) such that \(\partial\overline{D}\cap\partial\mathcal{O}p(L)=\Lambda_{\gamma}\subset\mathbb{R}^{4}\) and the union \(\overline{D}\cup\nu_{\gamma}\) is a smooth Lagrangian disk, where \(\nu_{\gamma}\subset\mathcal{O}p(L)\) is the Lagrangian conormal cone of \(\gamma\). A collection \(\Gamma=\{\gamma_{1},\dots,\gamma_{b}\}\) of oriented embedded curves in \(L\), with a choice of \(\mathbb{L}\)-compressing disks \(\mathscr{D}(\Gamma)=\{D_{1},\dots,D_{b}\}\), one for each curve, is said to be an \(\mathbb{L}\)-compressing system for \(L\) if \(D_{i}\cap D_{j}=\emptyset\) for all \(i\neq j\), \(i,j\in[b]\), and the homology classes of the curves in \(\Gamma\) form a basis of \(H_{1}(L;\mathbb{Z})\). Two \(\mathbb{L}\)-compressing systems \(\Gamma,\Gamma^{\prime}\) for \(L\) are said to be equivalent if there exists a sequence of triple point moves and bigon moves, i.e. Reidemeister IIIs and non-dangerous tangencies, that, applied to the curves in \(\Gamma\), leads to the curves in \(\Gamma^{\prime}\). See Section 2 for further discussions and cf. Figure 2 and Figure 8 for such moves. Given a Lagrangian filling \(L\) with an \(\mathbb{L}\)-compressing system \(\Gamma\) and a disk \(D\in\mathscr{D}(\Gamma)\), Lagrangian disk surgery produces an embedded exact Lagrangian filling \(\mu_{D}(L)\). For details on Lagrangian disk surgeries, see Section 4 below or cf. [11, 20]. It is not always the case that \(\mu_{D}(L)\) inherits an \(\mathbb{L}\)-compressing system from \((L,\Gamma)\): curves in \(\Gamma\) might become immersed under Lagrangian disk surgery. Therefore, the set of curves \(\mu_{D}(\Gamma)\) obtained from \(\Gamma\) after Lagrangian disk surgery on \(D\) might not be an \(\mathbb{L}\)-compressing system. This is a well-known problem in this approach, cf. Section 4. The main technical achievement of this manuscript is to show that this can be corrected, showing that certain \(\mathbb{L}\)-compressing systems persist under arbitrary sequences of Lagrangian disk surgeries. The quiver with potential that we introduce and study is a crucial ingredient for implementing this correction. The core result of the manuscript is the following theorem, which implies the surjectivity of \(\mathfrak{C}\) for all positive braids \(\beta\). The entire article is devoted to developing a new technique that proves this result.
**Theorem 1.1**.: _Let \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\) be the Legendrian link associated to a positive braid word \(\beta\), and \(T\subset\Lambda_{\beta}\) a set of marked points with one marked point per component._

_Then there exists an embedded exact Lagrangian filling \(L\subset(\mathbb{R}^{4},\lambda_{st})\) of \(\Lambda_{\beta}\) and an \(\mathbb{L}\)-compressing system \(\Gamma\) for \(L\) such that the following holds:_

* _If_ \(\mu_{v_{\ell}}\dots\mu_{v_{1}}\) _is any sequence of mutations, where_ \(v_{1},\dots,v_{\ell}\) _are mutable vertices of the quiver_ \(Q(\mathfrak{c}(L,\Gamma))\) _associated to the cluster seed_ \(\mathfrak{c}(L,\Gamma)\) _of_ \(L\) _in_ \(\mathbb{C}[X(\Lambda_{\beta},T)]\)_, then there exists a sequence of embedded exact Lagrangian fillings_ \(L_{k}\) _of_ \(\Lambda_{\beta}\)_, each equipped with an_ \(\mathbb{L}\)_-compressing system_ \(\Gamma_{k}\)_, with associated cluster seeds_ \[\mathfrak{c}(L_{k},\Gamma_{k})=\mu_{v_{k}}\dots\mu_{v_{1}}(\mathfrak{c}(L,\Gamma))\] _in_ \(\mathbb{C}[X(\Lambda_{\beta},T)]\)_, for all_ \(k\in[\ell]\)_._

* _Each_ \(\mathbb{L}\)_-compressing system_ \(\Gamma_{k}\) _for_ \(L_{k}\) _is such that Lagrangian disk surgery on_ \(L_{k}\) _along any Lagrangian disk in_ \(\mathscr{D}(\Gamma_{k})\) _yields an_ \(\mathbb{L}\)_-compressing system. In addition,_ \(\Gamma_{k+1}\) _is equivalent to such an_ \(\mathbb{L}\)_-compressing system via a sequence of triple point moves and local bigon moves._ \(\square\)

Theorem 1.1 is a reconstruction result that we prove by first introducing a new object: a quiver with potential \((Q(\mathcal{C}),W(\mathcal{C}))\) associated to a curve configuration \(\mathcal{C}\). Such a curve configuration can be extracted from the Legendrian link \(\Lambda_{\beta}\) and its initial Lagrangian filling \(L\). See [10] for general algebraic results on quivers with potential. Our results will show that the right-equivalence class of this particular quiver with potential \((Q(\mathcal{C}),W(\mathcal{C}))\) associated to \(\Lambda_{\beta}\) is rigid, is invariant under triple point moves and bigon moves, and changes according to QP-mutation under Lagrangian disk surgeries. A different choice of potential \(W\) for the quiver \(Q(\mathcal{C})\), not tailored to our problem and the specific geometry of embedded polygons for curve configurations \(\mathcal{C}\), would be of no use.

Footnote 1: The word potential has different meanings in the literature. Here a quiver with potential is meant in the sense of [10]. It is unrelated to the potentials from [PT], for instance, which count holomorphic disks bounded by Lagrangians and thus vanish for exact Lagrangians.

Footnote 2: Similarly, previously constructed potentials in the context of CY3 categories or triangulated surfaces, e.g. [11, Section 5.1], [12] or [13], cannot be used either, unless proven to faithfully count embedded polygons and be invariant under the necessary 4D moves.

This quiver with potential encodes an arboreal skeleton for the Weinstein relative pair \((\mathbb{C}^{2},\Lambda_{\beta})\), cf. [1, Section 2] or [1, Section 2]. Intuitively, proving its rigidity is showing that its right-equivalence class is invariant under infinitesimal deformations. Technically, we show that its deformation space, the trace space of the Jacobian algebra modulo the ground ring, is trivial.
Thanks to the constructions in Sections 2 and 3, we are able to use this rigidity in Section 4 to show that we can geometrically realize any arbitrary sequence of algebraic mutations in the cluster algebra \(\mathbb{C}[X(\Lambda_{\beta},T)]\) by a sequence of Lagrangian disk surgeries on that arboreal skeleton for \((\mathbb{C}^{2},\Lambda_{\beta})\). Nearly the entirety of the manuscript is devoted to the construction, study and use of this new quiver with potential. **Remark 1.2**.: Each embedded exact Lagrangian filling \(L\) in Theorem 1.1 gives a closed embedded exact Lagrangian surface \(\overline{L}\) in \(W(\Lambda_{\beta})\), the Weinstein \(4\)-fold obtained by attaching a Weinstein \(2\)-handle to each component of \(\Lambda_{\beta}\). If the cluster seeds of \(L,L^{\prime}\) are different, then \(\overline{L}\) and \(\overline{L^{\prime}}\) are not Hamiltonian isotopic in \(W(\Lambda_{\beta})\), only possibly Lagrangian isotopic, cf. [10, Section 7]. Theorem 1.1 shows that each of these (typically infinitely many) different closed exact Lagrangians in \(W(\Lambda_{\beta})\) is embedded in a different closed arboreal Lagrangian skeleton, cf. [11, Section 2.4]. Note also that for each \((L_{k},\Gamma_{k})\), the object \(\mathscr{L}_{k}:=C_{1}\oplus\ldots\oplus C_{\pi_{0}(\Lambda_{\beta})}\oplus T_{p_{1}}^{*}D_{1}\oplus\ldots\oplus T_{p_{b}}^{*}D_{b}\) is a compact generator of the wrapped Fukaya category of \(W(\Lambda_{\beta})\), where \(p_{i}\in D_{i}\) are interior points, \(T_{p_{i}}^{*}D_{i}\) the local cotangent fibers and \(C_{j}\) are the co-cores of the Weinstein \(2\)-handles. Therefore, Theorem 1.1 geometrically constructs a compact generator for each vertex of the cluster exchange graph of the cluster algebra \(\mathbb{C}[X(\Lambda_{\beta},T)]\). In particular, when the dg-algebras \(\operatorname{End}(\mathscr{L}_{k})\) are non-positively graded, Theorem 1.1 geometrically constructs bounded \(t\)-structures for the wrapped Fukaya category. ### The map is surjective The construction of the cluster algebra structure for \(\mathbb{C}[X(\Lambda_{\beta},T)]\) has the following property, relating Theorem 1.1 to the map \(\mathfrak{C}\); cf. [13] or Section 4.7 below. Given a Lagrangian filling \(L\) of \(\Lambda_{\beta}\) and an \(\mathbb{L}\)-compressing system \(\Gamma\), the cluster variables in \(\mathfrak{c}(L,\Gamma)\) are indexed by the curves in \(\Gamma\) and the arrows in the quiver of \(\mathfrak{c}(L,\Gamma)\) record geometric intersections of curves in \(\Gamma\). The cluster variables are described by a microlocal parallel transport, cf. [13, Section 4]. In addition, the cluster seed \(\mu_{v}(\mathfrak{c}(L,\Gamma))\) obtained by algebraically mutating at a vertex \(v=\gamma\) of the quiver, indexed by some \(\gamma\in\Gamma\), is precisely \(\mathfrak{c}(\mu_{D_{\gamma}}(L,\Gamma))\), where \(D_{\gamma}\) is the \(\mathbb{L}\)-compressing disk associated to the curve \(\gamma\in\Gamma\). Therefore, Theorem 1.1 implies the desired surjectivity: **Corollary 1.3**.: _Let \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\) be the Legendrian link associated to a positive braid word \(\beta\), \(T\subset\Lambda_{\beta}\) a set of marked points, one per component, and \(X(\Lambda_{\beta},T)\) its augmentation variety. Then_ \[\mathfrak{C}:\operatorname{Lag}^{c}(\Lambda_{\beta})\longrightarrow \operatorname{Seed}(X(\Lambda_{\beta},T))\] _is surjective, i.e.
each cluster seed is induced by an embedded exact Lagrangian filling endowed with an \(\mathbb{L}\)-compressing system._ We conjecture that the map \(\mathfrak{C}\) is injective. Corollary 1.3 and a proof that \(\mathfrak{C}\) is injective would settle the core symplectic aspects of the classification of Hamiltonian isotopy classes of Lagrangian fillings for \(\Lambda_{\beta}\), proving that they are equivalent to studying a class of cluster algebras, an algebraic matter. ### Acknowledgements We are grateful to Mikhail Gorsky for his thorough reading of the manuscript and his useful comments, and to Daping Weng and James Hughes for helpful remarks. We also thank Joel Hass for kindly explaining his algorithm with P. Scott to us. R. Casals is supported by the NSF CAREER DMS-1942363, a Sloan Research Fellowship of the Alfred P. Sloan Foundation and a UC Davis College of L&S Dean's Fellowship. H. Gao is supported by a Tsinghua start-up grant and a Tsinghua Du-Shi research grant. ### Structure of the manuscript The study and use of our quiver with potential (QP) is as follows: 1. Section 2 introduces and develops the new concept: the quiver with potential \((Q(\mathcal{C}),W(\mathcal{C}))\) associated to a curve configuration \(\mathcal{C}\). Proposition 2.5 and Lemma 2.7 show that the right equivalence class of \((Q(\mathcal{C}),W(\mathcal{C}))\) is invariant under triple point moves and local bigon moves. Then Lemma 2.21 and Proposition 2.22 show that the QP associated to a reduction of the \(\gamma\)-exchange on \(\mathcal{C}\) yields the QP-mutation of \((Q(\mathcal{C}),W(\mathcal{C}))\) at the corresponding vertex. 2. Section 3 introduces the curve configurations \(\mathcal{C}(\mathbb{G})\) associated to plabic fences \(\mathbb{G}\). Proposition 3.12 shows that the curve QP associated to \(\mathcal{C}(\mathbb{G})\) is rigid. 3. Section 4 constructs an initial Lagrangian filling with an \(\mathbb{L}\)-compressible system whose associated curve configuration is of the form \(\mathcal{C}(\mathbb{G})\) for a plabic fence \(\mathbb{G}\). It then develops a few necessary technical results until Section 4.8, where we show how to use the rigidity of the QP for \(\mathcal{C}(\mathbb{G})\) to prove Theorem 1.1. It also includes Theorem 4.10, a variant of Theorem 1.1. **Notation**. We denote by \([n]\) the set \(\{1,\ldots,n\}\). The group of compactly supported diffeomorphisms of a smooth manifold \(\Sigma\) is denoted by \(\operatorname{Diff}^{\operatorname{c}}(\Sigma)\). In this article, a quiver will refer to a multidigraph with no loops, but possibly with \(2\)-cycles. The set of vertices of \(Q\) is denoted by \(Q_{0}\) and its set of arrows by \(Q_{1}\). We often abbreviate _quiver with potential_ to QP, as in [1]. For specificity, we use the ground ring \(R=\mathbb{C}\) for the (complete) path algebra of the quiver. ## 2. The curve quiver with potential Let \(\Sigma\) be an oriented surface and \(\mathcal{C}=\{\gamma_{1},\ldots,\gamma_{b}\}\), \(b\in\mathbb{N}\), a collection of embedded oriented closed connected curves \(\gamma_{i}\subset\Sigma\), \(i\in[b]\), whose pairwise intersections are all transverse. Two such collections \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) will be considered equal if there exists a diffeomorphism \(\varphi\in\operatorname{Diff}^{\operatorname{c}}(\Sigma)\) such that \(\varphi(\mathcal{C})=\mathcal{C}^{\prime}\). In particular, two such collections \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) related by compactly supported isotopies are considered to be equal.
We assume that the only intersection points of curves in \(\mathcal{C}\) are double intersection points, i.e. exactly two curves \(\gamma_{i},\gamma_{j}\in\mathcal{C}\) intersect transversely at a given intersection point.

**Definition 2.1**.: Let \(\Sigma\) be an oriented surface and \(\mathcal{C}=\{\gamma_{1},\ldots,\gamma_{b}\}\), \(b\in\mathbb{N}\), a collection of embedded oriented closed connected curves \(\gamma_{i}\subset\Sigma\), \(i\in[b]\). By definition, \(\mathcal{C}\) is said to be a curve configuration, or simply a configuration, if the classes \([\gamma_{1}],\ldots,[\gamma_{b}]\) in \(H_{1}(\Sigma;\mathbb{Z})\) form a basis.

In particular, for a curve configuration \(\mathcal{C}=\{\gamma_{1},\ldots,\gamma_{b}\}\) in \(\Sigma\), we must have \(b=b_{1}(\Sigma)\). The collections of curves \(\mathcal{C}\) needed to prove Theorem 1.1 will be configurations, i.e. consist of \(b_{1}(\Sigma)\) curves whose homology classes span \(H_{1}(\Sigma;\mathbb{Z})\). Throughout this manuscript, from now onward, we assume that all collections of curves \(\mathcal{C}\) that we use satisfy this hypothesis, i.e. they are curve configurations.

### The QP \((Q(\mathcal{C}),W(\mathcal{C}))\) associated to \(\mathcal{C}\)

Let \(\mathcal{C}\) be a curve configuration. Consider the integers \[A_{ij}(\mathcal{C}):=|\{p\in\gamma_{i}\cap\gamma_{j}:\text{sign}(p)\text{ is positive}\}|\in\mathbb{N},\] where \(\text{sign}(p)=\pm\) is the sign of the intersection point \(p\).

**Definition 2.2**.: Let \(\mathcal{C}\) be a curve configuration. The quiver \(Q(\mathcal{C})\) is defined to have vertex set \(Q(\mathcal{C})_{0}=\{\gamma_{1},\ldots,\gamma_{b}\}\), and the arrow set \(Q(\mathcal{C})_{1}\) is given by the condition that the number of arrows from \(\gamma_{i}\) to \(\gamma_{j}\) is \(A_{ij}(\mathcal{C})\), \(i,j\in[b]\). The quiver \(Q(\mathcal{C})\) is referred to as the curve quiver of \(\mathcal{C}\).

Since the curves in \(\mathcal{C}\) are embedded, the quiver \(Q(\mathcal{C})\) has no loops; it might nevertheless have \(2\)-cycles. The arrows from the vertex associated to \(\gamma_{i}\) to that of \(\gamma_{j}\) are in natural bijection with the positive intersection points \(p\in\gamma_{i}\cap\gamma_{j}\) between \(\gamma_{i}\) and \(\gamma_{j}\): we indistinctly identify arrows in \(Q(\mathcal{C})\) and such intersection points \(p\in\Sigma\).

Let us now discuss the preliminaries to introduce the potential \(W(\mathcal{C})\). By definition, an \(\ell\)-gon \(P\) bounded by \(\mathcal{C}\), \(\ell\in\mathbb{N}\) and \(\ell\geq 2\), is a closed contractible subset \(P\subset\Sigma\) with a PL-smooth boundary \(\partial P\) and embedded interior such that:

1. There are \(\ell\) connected components in \(\partial P\setminus V\), where \(V\subset\partial P\) is the set of non-smooth points of \(\partial P\), i.e. \(V\) is the set of vertices of \(P\). That is, there are \(\ell\) sides to \(P\).
2. For each smooth connected component \(\partial P_{j}\subset(\partial P\setminus V)\), \(j\in[\ell]\), there exists a \(\gamma_{i_{j}}\) such that \(\partial P_{j}\subset\gamma_{i_{j}}\) and the orientations either coincide for all \(j\in[\ell]\) or they are opposite for all \(j\in[\ell]\). That is, each oriented side of \(P\) is an oriented subspace of a curve in \(\mathcal{C}\) or each oriented side of \(P\) is an oriented subspace of a curve in \(\overline{\mathcal{C}}\).
Here \(\overline{\mathcal{C}}=\{-\gamma_{1},\ldots,-\gamma_{b}\}\) denotes the same configuration of curves as in \(\mathcal{C}\) where we have switched the orientation of each curve \(\gamma\in\mathcal{C}\).

3. At a small enough neighborhood \(U\subset\Sigma\) of a vertex \(v\in V\), which locally is given by the intersection of two curves \(\gamma_{i},\gamma_{j}\in\mathcal{C}\), the intersection \(P\cap(U\setminus(U\cap(\gamma_{i}\cup\gamma_{j})))\) is a unique quadrant. That is, vertices of an \(\ell\)-gon only use one of the quadrants; in a combinatorial sense, \(\ell\)-gons bounded by \(\mathcal{C}\) are convex.

An \(\ell\)-gon \(P\) bounded by \(\mathcal{C}\) determines a cyclically ordered set of vertices. Conversely, it is uniquely determined by its cyclically ordered set of vertices and its orientation, clockwise or counter-clockwise. Since the vertices of \(P\) must be intersection points between the curves in \(\mathcal{C}\), which bijectively correspond to arrows in \(Q(\mathcal{C})\), such \(P\) is uniquely determined by a (cyclic) word of composable arrows starting and ending at the same vertex, along with its orientation. In other words, by a signed monomial in \(\operatorname{HH}_{0}(Q(\mathcal{C}))\), the trace space of the path algebra \(\mathbb{C}\langle Q(\mathcal{C})\rangle\) of \(Q(\mathcal{C})\). This correspondence is written \(P=v_{1}\dots v_{\ell}\) where \(v_{1},\dots,v_{\ell}\) are the vertices of \(P\) read according to the order induced by the orientation of \(\partial P\).

**Remark 2.3**.: Note that the connected components of \(\Sigma\setminus(\gamma_{1}\cup\dots\cup\gamma_{b})\) might or might not be \(\ell\)-gons bounded by \(\mathcal{C}\), due to the orientations of the curves in \(\mathcal{C}\). Also, typically polygons bounded by \(\mathcal{C}\) are not connected components of \(\Sigma\setminus(\gamma_{1}\cup\dots\cup\gamma_{b})\), as they might have curves in \(\mathcal{C}\) crossing through them. \(\Box\)

Let \(\Gamma_{\ell}^{+}\), resp. \(\Gamma_{\ell}^{-}\), be the set of \(\ell\)-gons \(P\) bounded by \(\mathcal{C}\) where the orientation of \(\partial P\) coincides with, resp. is opposite to, the orientation of \(\Sigma\). For each intersection point of two curves \(\gamma_{i},\gamma_{j}\in\mathcal{C}\) that represents an arrow from \(\gamma_{i}\) to \(\gamma_{j}\) in \(Q(\mathcal{C})\), or vice-versa, we decorate (shade) two consecutive quadrants as follows. If the tangent vectors of \(\gamma_{i}\) and \(\gamma_{j}\), in this order, are an oriented basis of the tangent space at the intersection point, then we shade the two quadrants in the side of \(\gamma_{i}\) where \(\gamma_{j}\) points outwards from the intersection point; see Figure 1, where the shading of the quadrants is depicted. If that basis gives the reverse orientation, we shade the two quadrants in the side of \(\gamma_{j}\) where \(\gamma_{i}\) points outwards from the intersection point. Given a polygon \(P=v_{1}\dots v_{\ell}\) bounded by \(\mathcal{C}\), each vertex \(v_{i}\) of \(P\) is assigned the sign \(\sigma(v_{i};v_{1}\dots v_{\ell})=1\) if \(v_{i}\) uses a non-shaded quadrant, and the sign \(\sigma(v_{i};v_{1}\dots v_{\ell})=-1\) if \(v_{i}\) uses a shaded quadrant.
By definition, the vertex sign \(\sigma(v_{1}\dots v_{\ell})\) of the polygon \(P\) is \[\sigma(v_{1}\dots v_{\ell})=\prod_{i=1}^{\ell}\sigma(v_{i};v_{1}\dots v_{\ell}).\]

**Definition 2.4**.: The potential \(W(\mathcal{C})\in\operatorname{HH}_{0}(Q(\mathcal{C}))\) of \(Q(\mathcal{C})\) is defined by \[W(\mathcal{C})=\sum_{v_{1}\dots v_{\ell}\in\Gamma_{\ell}^{+}}\sigma(v_{1}\dots v_{\ell})\cdot v_{\ell}\dots v_{1}\quad-\sum_{w_{1}\dots w_{\ell}\in\Gamma_{\ell}^{-}}\sigma(w_{1}\dots w_{\ell})\cdot w_{1}\dots w_{\ell},\] where the sums run over all possible \(\ell\in\mathbb{N}\), \(\ell\geq 2\), and all possible elements of \(\Gamma_{\ell}^{\pm}\). The pair \((Q(\mathcal{C}),W(\mathcal{C}))\) is referred to as the curve quiver with potential of \(\mathcal{C}\). We often abbreviate and refer to such a pair as a _curve QP_ or a _cQP_. \(\Box\)

In Definition 2.4 we always write the vertices on the boundary left to right as read counter-clockwise; in this manner the monomial is an actual cycle in the quiver \(Q(\mathcal{C})\). We often consider QPs up to right-equivalence, i.e. up to automorphisms of the path algebra; see [1, Definition 4.2] for details.

### Properties of curve QPs under planar moves

A configuration of curves \(\mathcal{C}\) can be modified by smooth isotopies of \(\Sigma\). The combinatorics of \(\mathcal{C}\), including the intersection pattern and polygons bounded by \(\mathcal{C}\), do not change under such isotopies. We can modify \(\mathcal{C}\) more significantly by choosing one curve \(\gamma\in\mathcal{C}\) and smoothly isotoping it to another curve \(\gamma^{\prime}\subset\Sigma\). The new configuration \(\mathcal{C}^{\prime}:=(\mathcal{C}\cup\{\gamma^{\prime}\})\setminus\{\gamma\}\) has different combinatorics than that of \(\mathcal{C}\). In this article, we will consider two configurations \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) equivalent if they can be connected by a sequence of triple moves and bigon moves; moves that we now introduce.

Figure 1. The shading of two quadrants at an intersection point. The two consecutive quadrants that are shaded are the first and the second quadrants, where we take the oriented basis as the two axes.

#### 2.2.1. Behavior under triple point moves

Let \(\gamma_{1},\gamma_{2},\gamma_{3}\in\mathcal{C}\). The two moves in Figure 2 will be referred to as triple point moves. In general, triple moves will refer to any local move that is smoothly isotopic to either of the two moves in Figure 2, possibly after switching orientations of arrows. The two moves in the figure capture all triple moves, up to rotational symmetries. A triple move applied to a configuration \(\mathcal{C}\) is a local operation: there exists a neighborhood \(U\subset\Sigma\) such that \(\mathcal{C}\cap U\) is as in Figure 2 (left), the new configuration \(\mathcal{C}^{\prime}\) coincides with \(\mathcal{C}\) outside of \(U\), and \(\mathcal{C}^{\prime}\cap U\) is as in Figure 2 (right). That is, this is a change in a configuration \(\mathcal{C}\) which is compactly supported, as the boundary conditions in the local model in Figure 2 coincide before and after the move. Let us study how the curve QP \((Q(\mathcal{C}),W(\mathcal{C}))\) behaves under such triple point moves applied to \(\mathcal{C}\).

**Proposition 2.5**.: _Let \((Q(\mathcal{C}),W(\mathcal{C}))\) be a curve QP associated to \(\mathcal{C}\).
Then \((Q(\mathcal{C}),W(\mathcal{C}))\) is invariant under triple point moves, up to right-equivalence._

Proof.: There are two cases to consider for a triple point move, depending on the orientations; see Figure 2 for the two cases.

Figure 2. The two triple point moves.

First, we consider the local model in Figure 3 (left) and denote by \(\gamma_{i}^{in}\), resp. \(\gamma_{i}^{out}\), the tail, resp. the head, of the segment of \(\gamma_{i}\) in the local model; the head is where the arrow is drawn. Let \(R=(r_{ij})\) be the matrix such that \(r_{ij}\) equals the sum of monomials on the \(p_{lk}\) in \(W(\mathcal{C})\) that are used by regions that intersect the local model entering at \(\gamma_{i}^{in}\) and exiting at \(\gamma_{j}^{out}\), from either side. Then the matrix \(R\) reads \[R=\begin{pmatrix}0&p_{21}+p_{23}p_{31}&p_{31}\\ p_{21}&0&p_{23}\\ p_{31}&p_{23}&0\end{pmatrix}.\] Indeed, the second entry in the first row corresponds to regions that enter in the upper right through \(\gamma_{1}^{in}\) and exit through \(\gamma_{2}^{out}\) on the upper left. There are two such regions: using \(p_{21}\) or using \(p_{23}p_{31}\). Since both regions are oriented clockwise, there is an overall positive sign, and since none of the quadrants being used are shaded, the sign remains positive. The third entry of the first row is similar. The first entry \(p_{21}\) and the third entry \(p_{23}\) on the second row are actually \(p_{21}=-(-p_{21})\) and \(p_{23}=-(-p_{23})\): both of the regions they record are oriented counter-clockwise, which gives an overall minus sign, and in addition both of them use a unique quadrant which happens to be shaded. This introduces another minus sign and therefore the entry has two minus signs, thus a positive coefficient in the end. The first entry on the third row is also \(p_{31}=-(-p_{31})\), in that sense, whereas the second entry on the third row has two positive signs directly. From now onwards we will compute with both types of signs in mind (orientation sign and shaded quadrants signs), without further specifying when a positive sign is actually an even number of negative signs.

Figure 3. (Left) Local model for Case I before triple move. (Right) Local quiver.

The local quiver, recording the intersections that occur only in the local model, is depicted in Figure 3 (right). After the triple point move we have the local model in Figure 4 (left) and its corresponding local quiver in Figure 4 (right). Note that the quivers in Figure 3 (right) and Figure 4 (right), before and after, are identical: thus it is clear that \(Q(\mathcal{C})\) is invariant under this particular triple point move. The matrix of regions \(R^{\prime}\) after the triple move is computed as with \(R\) above. It reads: \[R^{\prime}=\begin{pmatrix}0&q_{21}&q_{31}\\ q_{21}-q_{23}q_{31}&0&q_{23}\\ q_{31}&q_{23}&0\end{pmatrix}\] Note that the signs in the entry \(q_{21}-q_{23}q_{31}\) are indeed correct: the region associated to \(q_{21}\) is oriented counter-clockwise and it uses a shaded quadrant, thus it is positive, and the region associated to \(q_{23}q_{31}\) is oriented counter-clockwise and it uses two shaded quadrants, thus it is negative. Let us now compare \(R\) and \(R^{\prime}\), which keep track of the regions in the potential \(W(\mathcal{C})\) before and after a triple point move.
For that, consider the automorphism \(\phi\in\operatorname{Aut}(\mathbb{C}\langle Q(\mathcal{C})\rangle)\) of the path algebra which is the identity on all arrows \(p_{ij}\) except for \((i,j)=(2,1)\) and sends the arrow \(p_{21}\) to \[p_{21}\mapsto p_{21}-p_{23}p_{31}.\] After applying this automorphism, \(R\) becomes \[\phi(R)=\begin{pmatrix}0&(p_{21}-p_{23}p_{31})+p_{23}p_{31}&p_{31}\\ (p_{21}-p_{23}p_{31})&0&p_{23}\\ p_{31}&p_{23}&0\end{pmatrix}=\begin{pmatrix}0&p_{21}&p_{31}\\ p_{21}-p_{23}p_{31}&0&p_{23}\\ p_{31}&p_{23}&0\end{pmatrix}.\] The matrices \(R^{\prime}\) and \(\phi(R)\) are now related by the trivial relabeling \(p_{ij}\longmapsto q_{ij}\). This shows that the quiver \(Q(\mathcal{C})\) remains invariant under this first case of the triple point move, and that the right-equivalence class of \((Q(\mathcal{C}),W(\mathcal{C}))\) is also invariant.

Second, we now consider the other local model for a triple move, as depicted in Figure 5 (left). The local quiver is drawn in Figure 5 (right).

Figure 4. (Left) Local model for Case I after triple move. (Right) Local quiver.

Figure 5. (Left) Local model for Case II before triple move. (Right) Local quiver.

In the same notation as above, the matrix \(R\) before the triple point move is \[R=\begin{pmatrix}0&p_{21}&p_{13}\\ p_{21}&0&p_{32}\\ p_{13}&p_{32}&0\end{pmatrix},\] except that in this case we also have the closed triangle region associated to the monomial \(p_{13}p_{32}p_{21}\), entirely contained in this local model. Notice that this is a clockwise oriented triangle with none of its (interior) quadrants being shaded: the sign is therefore positive. After the triple point move we have the local model depicted in Figure 6 (left), whose quiver is illustrated in Figure 6 (right). Therefore, the quiver \(Q(\mathcal{C})\) also remains invariant under this second type of triple point move. In the notation as above, the matrix \(R^{\prime}\) after the triple point move is \[R^{\prime}=\begin{pmatrix}0&q_{21}&q_{13}\\ q_{21}&0&q_{32}\\ q_{13}&q_{32}&0\end{pmatrix},\] and we also have the triangle region associated to the monomial \(q_{13}q_{32}q_{21}\), again entirely enclosed in this local model. The sign of this triangle is indeed positive: it is oriented counter-clockwise and its three (interior) quadrants are shaded. This accounts for a total of four negative signs, and therefore a resulting positive sign. In this second type of triple move the comparison before and after is given by the relabeling \(p_{ij}\mapsto q_{ij}\), which indeed maps \(R\) to \(R^{\prime}\) and the triangle \(p_{13}p_{32}p_{21}\) to the triangle \(q_{13}q_{32}q_{21}\). Therefore the quiver \(Q(\mathcal{C})\) and the potential \(W(\mathcal{C})\) are both identical before and after this triple point move. This concludes the proof.

#### 2.2.2. A property of local bigons

An \(\ell\)-gon with \(\ell=2\) will be referred to as a bigon \(B\subset\Sigma\). Note that a bigon \(B\subset\Sigma\) must be oriented, either clockwise or counter-clockwise.3 By definition, a bigon \(B\subset\Sigma\) is said to be local if \(\operatorname{int}(B)\cap\gamma_{i}=\emptyset\) for all \(\gamma_{i}\in\mathcal{C}\).

Footnote 3: If \(\gamma_{i},\gamma_{j}\in\mathcal{C}\) bound an “unoriented bigon”, then \(Q(\mathcal{C})\) has two arrows from \(\gamma_{i}\) to \(\gamma_{j}\), or vice versa. We do not consider this a bigon. It does not yield a 2-cycle in \(Q(\mathcal{C})\) and (thus) it does not contribute a quadratic monomial to \(W(\mathcal{C})\).
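Before continuing, let us record a minimal illustration of Definitions 2.1–2.4; this toy example is ours and is only meant as a sanity check of the conventions above. Let \(\Sigma=T^{2}\) and let \(\gamma_{1},\gamma_{2}\subset T^{2}\) be a meridian and a longitude, oriented so that they intersect transversely at a single point \(p\in\gamma_{1}\cap\gamma_{2}\) with \(\text{sign}(p)\) positive. The classes \([\gamma_{1}],[\gamma_{2}]\) form a basis of \(H_{1}(T^{2};\mathbb{Z})\), so \(\mathcal{C}=\{\gamma_{1},\gamma_{2}\}\) is a curve configuration with \[A_{12}(\mathcal{C})=1,\qquad A_{21}(\mathcal{C})=0,\] and \(Q(\mathcal{C})\) is the quiver with two vertices and the single arrow \(p\colon\gamma_{1}\to\gamma_{2}\). Since an \(\ell\)-gon bounded by \(\mathcal{C}\) corresponds to a cyclic word of composable arrows and the arrow \(p\) cannot be composed with itself, there are no \(\ell\)-gons bounded by \(\mathcal{C}\), and thus \(W(\mathcal{C})=0\). Consistently with Remark 2.3, the unique connected component of \(T^{2}\setminus(\gamma_{1}\cup\gamma_{2})\) is a square whose four sides neither all follow \(\mathcal{C}\) nor all follow \(\overline{\mathcal{C}}\), so it is not an \(\ell\)-gon bounded by \(\mathcal{C}\).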
Given a local bigon \(B\subset\Sigma\) as in Figure 7, we refer to the region \(\rho(B)\) drawn in red, resp. the region \(\lambda(B)\) drawn in blue, as its right region, resp. as its left region.

**Assumption 1**.: _Every local bigon \(B\subset\Sigma\) is assumed to satisfy that \(\lambda(B)\) is a different region than \(\rho(B)\), i.e. \(\lambda(B)\neq\rho(B)\) for all local bigons \(B\)._

Figure 6. (Left) Local model for Case II after triple move. (Right) Local quiver.

Figure 7. The regions \(\rho(B)\) (red) and \(\lambda(B)\) (blue). The bigons are depicted in orange.

It will be proven in Section 3.4, specifically Proposition 3.15, that this assumption is satisfied for the configurations of curves that we shall use, i.e. for those configurations constructed in Section 3. For now we work under Assumption 1: we suppose it holds for all local bigons discussed subsequently.

#### 2.2.3. Behavior under bigon moves

By definition, the local bigon moves are the local moves depicted in Figure 8. This move applies to local bigons, i.e. bigons \(B\subset\Sigma\) such that \(\operatorname{int}(B)\cap\gamma_{i}=\emptyset\) for all \(\gamma_{i}\in\mathcal{C}\). This is the reason for referring to it as a local bigon move, instead of just a bigon move. In order to understand how \((Q(\mathcal{C}),W(\mathcal{C}))\) changes under local bigon moves, we introduce the following:

**Definition 2.6**.: Let \((Q,W)\) be a QP and \(a,b\) be two arrows in \(Q\) such that \(ab\) is a 2-cycle and \(ab\) appears as a quadratic monomial in \(W\). The \(ab\)-reduction of \((Q,W)\), or the local reduction of \((Q,W)\) at \(ab\), is the QP \((Q^{\prime},W^{\prime})\) obtained as follows:

1. The quiver \(Q^{\prime}\) coincides with \(Q\) except that both arrows \(a\) and \(b\) have been erased.
2. The potential \(W^{\prime}\) is constructed as follows. Suppose that there exist polynomials \(U,V\), neither of them containing \(a\) or \(b\), such that \[W=(a-U)(b-V)+W^{\prime},\] where \(W^{\prime}\) does not contain \(a\) or \(b\), and the equality is up to cyclic permutation of each monomial. Then \(W^{\prime}:=W-(a-U)(b-V)\). If such polynomials \(U,V\) do not exist, then the \(ab\)-reduction of \((Q,W)\) is said not to exist.

Definition 2.6 and the use of the word reduction for such an operation are directly influenced by [10, Section 4]. Note that an oriented bigon \(B\subset\Sigma\) uniquely determines its two intersection points, and it is uniquely determined by them if we know they bound a bigon. Equivalently, these are two arrows \(a,b\) in \(Q(\mathcal{C})\) such that \(ab\) is a 2-cycle and \(ab\) is a monomial appearing in \(W(\mathcal{C})\). In these cases, where \(ab\) is the 2-cycle corresponding to an oriented bigon \(B\), we also refer to an \(ab\)-reduction as a \(B\)-reduction.

**Lemma 2.7**.: _Let \((Q(\mathcal{C}),W(\mathcal{C}))\) be the curve QP associated to a curve configuration \(\mathcal{C}\), \(B\subset\Sigma\) be a local bigon and \(\mathcal{C}^{\prime}\) the configuration \(\mathcal{C}\) after a local bigon move at \(B\). Then the \(B\)-reduction of \((Q(\mathcal{C}),W(\mathcal{C}))\) exists and it equals \((Q(\mathcal{C}^{\prime}),W(\mathcal{C}^{\prime}))\)._

Proof.: Let us consider a bigon \(B\subset\Sigma\) bounded by two curves \(\gamma_{1}\) and \(\gamma_{2}\). There are two cases, depending on whether the bigon \(B\) is oriented clockwise or counter-clockwise. The two cases are almost identical and thus we focus on that of a clockwise oriented bigon, as depicted in Figure 9 (left).
The local quiver is drawn in Figure 9 (right) and contains the 2-cycle \(p_{12}p_{21}\). Since the bigon is oriented clockwise and neither of the two quadrants of the bigon is shaded, its contribution to the potential \(W(\mathcal{C})\) is the monomial \(p_{12}p_{21}\).4

Footnote 4: In the case of a counter-clockwise oriented bigon, the bigon would contribute to the potential with \(-p_{12}p_{21}\). There would be a minus sign because the bigon would be oriented counter-clockwise and use exactly two shaded quadrants.

Figure 8. The two local bigon moves.

First, let us prove that the \(B\)-reduction of \((Q(\mathcal{C}),W(\mathcal{C}))\) exists. Let \(S_{21}\) be the set of polygons that use the south-west corner of \(p_{21}\), i.e. the set of polygons that come in from \(\gamma_{2}^{in}\), turn left at \(p_{21}\) and exit via \(\gamma_{1}^{out}\).5 A polygon in \(S_{21}\) has the opposite orientation from the bigon \(B\) and contains \(p_{21}\) in its associated monomial in \(W(\mathcal{C})\). Therefore, the contribution from polygons in \(S_{21}\) to the potential \(W(\mathcal{C})\) has the form \(-Up_{21}\) for some polynomial \(U\) that does not contain \(p_{21}\). Similarly, let \(S_{12}\) be the set of polygons that use the south-east corner of \(p_{12}\), i.e. the set of polygons that come from \(\gamma_{1}^{in}\), turn down at \(p_{12}\) and exit via \(\gamma_{2}^{out}\). The contribution from the polygons in \(S_{12}\) to the potential \(W(\mathcal{C})\) has the form \(-Vp_{12}\) for some polynomial \(V\) that does not contain \(p_{12}\).

Footnote 5: There are no polygons using the north-west or south-east quadrants at \(p_{21}\) because of orientations. Similarly, there are no polygons using the north-east and south-west quadrants at \(p_{12}\).

By Assumption 1, the polynomial \(U\) does not contain \(p_{12}\) either. Indeed, any polygon in \(S_{21}\) uses \(\lambda(B)\). If its contribution to the potential contained \(p_{12}\), then \(\lambda(B)=\rho(B)\) as it would be using the region \(\rho(B)\) and polygons contributing to \(W(\mathcal{C})\) are embedded. Similarly \(V\) does not contain \(p_{21}\) either. Let us write the potential as \[W(\mathcal{C})=p_{12}p_{21}-Up_{21}-Vp_{12}+\tilde{W}\] for some \(\tilde{W}\). Now, monomials in \(W(\mathcal{C})\) are precisely given by boundaries of embedded regions, and thus their associated cycles in the quiver are irreducible, i.e. not the composition of two cycles. Therefore, the construction above is such that \(\tilde{W}\) does not contain \(p_{12}\) nor \(p_{21}\). Following the formulation in Definition 2.6, we rewrite \[W(\mathcal{C})=(p_{12}-U)(p_{21}-V)-UV+\tilde{W},\] where \(\tilde{W}\) does not contain \(p_{12}\) or \(p_{21}\). Therefore, we can select \(W^{\prime}:=-UV+\tilde{W}\) in Definition 2.6. Thus we conclude that \((Q(\mathcal{C}),W(\mathcal{C}))\) is indeed \(B\)-reducible.

Second, let us now show that the \(B\)-reduction of \((Q(\mathcal{C}),W(\mathcal{C}))\) equals \((Q(\mathcal{C}^{\prime}),W(\mathcal{C}^{\prime}))\). After the bigon move at \(B\) we have the local configuration in Figure 10 (left). The local quiver becomes two vertices with no arrows between them, as there are no intersections in this local piece; it is depicted in Figure 10 (right). For the same reason, the potential \(W(\mathcal{C}^{\prime})\) after the bigon move has no contributions coming from this local configuration.
By comparing the two quivers directly, \(Q(\mathcal{C}^{\prime})\) is obtained from \(Q(\mathcal{C})\) by exactly removing the arrows \(p_{12}\) and \(p_{21}\). We claim that the \(B\)-reduction of the curve potential \(W(\mathcal{C})\) equals the curve potential after the bigon removal, namely \(W^{\prime}=W(\mathcal{C}^{\prime})\), where \(W^{\prime}=-UV+\tilde{W}\) as above. Indeed, since the bigon region disappears, \(p_{12}p_{21}\) is not in \(W(\mathcal{C}^{\prime})\). Also, any region in \(S_{21}\) is no longer a polygon after the bigon move, and thus the polynomial term \(-Up_{21}\) no longer appears in \(W(\mathcal{C}^{\prime})\). The argument for \(-Vp_{12}\) is identical. This justifies that \(p_{12}p_{21}\), \(-Up_{21}\) and \(-Vp_{12}\) must be subtracted from \(W(\mathcal{C})\) to obtain the potential \(W(\mathcal{C}^{\prime})\). It suffices to prove that the term \(UV\) must be added (with signs) to \(W(\mathcal{C}^{\prime})\). Indeed, when we remove the local bigon, a polygon in \(S_{21}\) and a polygon in \(S_{12}\) will be connected along the strip between \(\gamma_{1}\) and \(\gamma_{2}\), creating new polygons that contribute \(-UV\). This leads to the addition of \(-UV\) in the potential \(W(\mathcal{C}^{\prime})\). In conclusion, the curve QP \((Q(\mathcal{C}^{\prime}),W(\mathcal{C}^{\prime}))\) is the \(B\)-reduction of \((Q(\mathcal{C}),W(\mathcal{C}))\), as required.

Figure 9. (Left) Local model for a bigon before bigon move. The intersection points are highlighted (red) and we have shaded the quadrants. (Right) Local quiver.

Figure 10. (Left) Local model after bigon move. (Right) Local quiver.

Before moving forward to the next subsection, we introduce the following definition:

**Definition 2.8**.: A configuration of curves \(\mathcal{C}\) such that there is no bigon bounded by \(\mathcal{C}\) is said to be reduced. A configuration which is not reduced is said to be non-reduced.

#### 2.2.4. Hass-Scott algorithm and the reduced part of \((Q(\mathcal{C}),W(\mathcal{C}))\)

Consider a non-reduced configuration \(\mathcal{C}_{0}\) with a collection of bigons \(\{B_{1},\ldots,B_{m}\}\). Note that these bigons \(B_{i}\) might not be local, i.e. the interior of each \(B_{i}\) might intersect curves in \(\mathcal{C}_{0}\) in a non-empty set. The first step is to modify \(\mathcal{C}_{0}\) into a reduced configuration. The Hass-Scott algorithm [10] allows us to do this:

**Theorem 2.9** ([10]).: _Let \(\mathcal{C}_{0}\) be a non-reduced configuration with a collection of bigons \(\{B_{1},\ldots,B_{m}\}\). Then, for any \(i\in[m]\), there exists a sequence of triple point moves and one local bigon move on \(\mathcal{C}_{0}\) that yields a new configuration \(\mathcal{C}_{1}\) such that the collection of bigons of \(\mathcal{C}_{1}\) is \(\{B_{1},\ldots,B_{m}\}\setminus\{B_{i}\}\)._

The proof of Theorem 2.9 is local on a given bigon \(B_{i}\), in that it only modifies the configuration \(\mathcal{C}_{0}\) in a neighborhood of \(B_{i}\). It is worth remarking that the Hass-Scott algorithm is able to remove a bigon without introducing further (unnecessary) intersections: just with triple point moves any bigon (possibly non-local) becomes a local bigon. In summary, given a non-reduced configuration \(\mathcal{C}\subset\Sigma\), the algorithm in [10] implies that there exists a sequence of moves, possibly containing and intertwining both triple point moves and local bigon moves, that when applied to \(\mathcal{C}\) yields a reduced configuration.
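As a minimal algebraic illustration of Definition 2.6 (a toy example of ours, not arising from a specific curve configuration), consider the quiver \(Q\) with vertices \(v_{1},v_{2}\), arrows \(a,c\colon v_{1}\to v_{2}\) and \(b\colon v_{2}\to v_{1}\), and potential \[W=ab+cb.\] The monomial \(ab\) is a quadratic term of \(W\) coming from the 2-cycle \(ab\). Writing \[W=(a-(-c))(b-0)+0,\] we may take \(U=-c\), \(V=0\) and \(W^{\prime}=0\) in Definition 2.6, and neither \(U\) nor \(V\) contains \(a\) or \(b\). The \(ab\)-reduction of \((Q,W)\) therefore exists: it is the quiver with the single arrow \(c\colon v_{1}\to v_{2}\) and zero potential. This mirrors the geometric mechanism in the proof of Lemma 2.7, where the monomials interacting with the bigon 2-cycle are absorbed into the factor \((a-U)(b-V)\).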
### Curve QPs under \(\gamma\)-exchange

We introduce the following operation on a configuration \(\mathcal{C}\).

#### 2.3.1. Definition of \(\gamma\)-exchange of \(\mathcal{C}\)

Let \(\gamma\in\mathcal{C}\) be a curve. By definition, the configuration of curves \(\mu_{\gamma}(\mathcal{C})\) is \[\mu_{\gamma}(\mathcal{C}):=\{\mu_{\gamma}(\gamma_{1}),\ldots,\mu_{\gamma}(\gamma_{b})\},\] where the curves \(\mu_{\gamma}(\gamma_{i})\) are obtained as follows, \(i\in[b]\). Consider a neighborhood \(U\subset\Sigma\) of \(\gamma\) such that any curve in \(\mathcal{C}\setminus\{\gamma\}\) intersects \(U\) as depicted in Figure 13. That is, up to planar isotopy, each curve \(\gamma_{i}\in\mathcal{C}\setminus\{\gamma\}\) intersects \(U\) at a collection of intervals \(\{I_{i}^{p}\}\), \(p\in[q_{i}]\) for some \(q_{i}\in\mathbb{N}\), where each such interval only intersects \(\gamma\) once and intervals do not intersect each other; note that this collection \(\{I_{i}^{p}\}\) might be empty for some curves \(\gamma_{i}\in\mathcal{C}\). The neighborhood \(U\) must be a cylinder, since it is the neighborhood of an embedded connected curve in an oriented surface \(\Sigma\).

The curves \(\mu_{\gamma}(\gamma_{i})\) in \(\mu_{\gamma}(\mathcal{C})\) are constructed as follows. We apply a positive Dehn twist of the cylinder \(U\) along the simple embedded curve \(\gamma\) to all the segments in \(U\) that intersect \(\gamma\) positively, depicted in blue in Figure 13, and the identity map to all the segments in \(U\) that intersect \(\gamma\) negatively, depicted in green in Figure 13. Note that a Dehn twist often refers to a mapping class, i.e. an element of \(\pi_{0}(\operatorname{Diff}^{\operatorname{c}}(U))\). In this construction we explicitly mean a representative of that mapping class: at this stage we make one such choice of representative and continue. Since both a Dehn twist and the identity are (represented by) compactly supported diffeomorphisms, each resulting segment \(f(I_{i}^{p})\subset U\), \(f\) a Dehn twist or the identity, can be glued to the corresponding curve \(\gamma_{i}\cap(\Sigma\setminus U)\). By definition, the result of applying such an operation to \(\gamma_{i}\) is the curve \(\mu_{\gamma}(\gamma_{i})\) if \(\gamma_{i}\neq\gamma\). We define \(\mu_{\gamma}(\gamma):=-\gamma\). See Figure 14 for the result of applying \(\mu_{\gamma}\) to Figure 13.

**Remark 2.10**.: The choices in the above construction will not affect any aspects of our results. For instance, the choice of neighborhood \(U\) with the required properties only modifies the resulting configuration \(\mu_{\gamma}(\mathcal{C})\) by a global isotopy of \(\Sigma\). Similarly, the choice of representative of the Dehn twist class in \(\pi_{0}(\operatorname{Diff}^{\operatorname{c}}(U))\) results in the same configuration \(\mu_{\gamma}(\mathcal{C})\) up to global isotopy.

By Remark 2.10 and the fact that we only consider configurations \(\mathcal{C}\) up to a global diffeomorphism of \(\Sigma\), the resulting configuration \(\mu_{\gamma}(\mathcal{C})\) is well-defined. We therefore define:

**Definition 2.11**.: The configuration of curves \(\mu_{\gamma}(\mathcal{C})\) is said to be the \(\gamma\)-exchange of \(\mathcal{C}\).

Note that applying a \(\gamma\)-exchange twice along the same curve, first to \(\gamma\) and then to \(-\gamma\), leads to the same configuration, up to an overall compactly supported diffeomorphism of \(\Sigma\).
Indeed, consecutively applying a \(\gamma\)-exchange at the same vertex yields the configuration \(\mu_{-\gamma}\mu_{\gamma}(\mathcal{C})=\tau_{\gamma}(\mathcal{C})\), given by applying a (representative of a) Dehn twist \(\tau_{\gamma}\in\operatorname{Diff}^{\operatorname{c}}(\Sigma)\) along the curve \(\gamma\) to the configuration \(\mathcal{C}\).

#### 2.3.2. QP-mutations

Let us recall the notion of mutation for quivers with potential from [10].

**Definition 2.14** ([10]).: Let \((Q,W)\) be a QP and \(v_{k}\in Q_{0}\) a vertex not contained in any 2-cycle of \(Q\). The mutated quiver \(\mu_{k}(Q)\) has the same vertex set as \(Q\), and its arrow set \(\mu_{k}(Q)_{1}\) is defined by the following three rules:

1. All arrows in \(Q_{1}\) not incident to \(v_{k}\) also belong to \(\mu_{k}(Q)_{1}\).
2. For each pair \(a,b\in Q_{1}\) of incoming arrow \(a\) and outgoing arrow \(b\) at \(v_{k}\), create the composite arrow \([ba]\in\mu_{k}(Q)_{1}\).
3. Replace each incoming arrow \(a\in Q_{1}\) (resp. each outgoing arrow \(b\in Q_{1}\)) at \(v_{k}\) by a corresponding arrow \(a^{*}\in\mu_{k}(Q)_{1}\) (resp. \(b^{*}\in\mu_{k}(Q)_{1}\)) now oriented in the opposite way.

The potential \(\mu_{k}(W)\) is defined as \[\mu_{k}(W):=[W]+\Delta_{k},\quad\Delta_{k}:=\sum_{a,b\in Q_{1},\,h(a)=t(b)=k}[ba]a^{*}b^{*},\] where \([W]\) is obtained by substituting the composite arrow \([ba]\) for each factor \(ba\), with \(h(a)=t(b)=k\), of any cyclic path occurring in the expansion of \(W\) that contains \(ba\). By definition, the QP \((\mu_{k}(Q),\mu_{k}(W))\) is said to be obtained by non-reduced QP-mutation of \((Q,W)\) at the vertex \(v_{k}\). It is shown in [10, Theorem 5.2] that the right-equivalence class of \((\mu_{k}(Q),\mu_{k}(W))\) depends only on the right-equivalence class of \((Q,W)\).

**Remark 2.15**.: The mutation \(\mu_{k}(Q)\) of a quiver \(Q\), without the potentials \(W\) and \(\mu_{k}(W)\), was previously defined in [11, Definition 4.2].

A QP-mutation of a QP \((Q,W)\) with no 2-cycles might result in a QP \((\mu_{k}(Q),\mu_{k}(W))\) with 2-cycles. In [10] the notions of _reduced_ and _trivial_ QPs are introduced, as follows. A QP \((Q,W)\) is said to be reduced if the degree-2 homogeneous part \(W^{(2)}\) of \(W\) is trivial, i.e. \(W^{(2)}=0\). That is, \((Q,W)\) is reduced if \(W\) contains no quadratic monomial terms, i.e. no terms of the form \(ab\), \(a,b\in Q_{1}\). A QP \((Q,W)\) is said to be trivial if \(W\) is entirely quadratic, i.e. \(W\in\mathbb{C}\langle Q\rangle^{(2)}\) belongs to the degree-2 homogeneous part of the path algebra \(\mathbb{C}\langle Q\rangle\), and the Jacobian algebra of \((Q,W)\) is isomorphic to \(\mathbb{C}\).

**Remark 2.16**.: By [10, Prop. 4.4],
there is a more pragmatic criterion to detect triviality: \((Q,W)\) is trivial if and only if the set of arrows \(Q_{1}\) consists of \(2N\) distinct arrows \(a_{1},b_{1},\ldots,a_{N},b_{N}\) such that each \(a_{k}b_{k}\) is a cyclic 2-path, and there is a change of arrows \(\varphi\) such that \(\varphi(W)\) is cyclically equivalent to \(a_{1}b_{1}+\ldots+a_{N}b_{N}\).

**Remark 2.17**.: Note that the quiver \(Q\) is not enough to determine the reduced and trivial parts of a QP \((Q,W)\). For instance, the quiver \(Q\) consisting of two vertices \(Q_{0}=\{v_{1},v_{2}\}\) and two arrows \(Q_{1}=\{a,b\}\) with \(h(a)=t(b)=v_{1}\), \(t(a)=h(b)=v_{2}\) is trivial if the potential is chosen to be \(W=ab\), and it is reduced if \(W=0\) is chosen to vanish.

The following structural result is established in [10, Theorem 4.6]:

**Theorem 2.18** ([10]).: _For every QP \((Q,W)\) with trivial arrow span \(Q_{triv}\) and reduced arrow span \(Q_{red}\), there exist a trivial QP \((Q_{triv},W_{triv})\) and a reduced QP \((Q_{red},W_{red})\) such that \((Q,W)\) is right-equivalent to the direct sum \((Q_{triv},W_{triv})\oplus(Q_{red},W_{red})\). Also, the right-equivalence class of each of the QPs \((Q_{triv},W_{triv})\) and \((Q_{red},W_{red})\) is determined by the right-equivalence class of \((Q,W)\)._

This allows us to finally define QP-mutation:

**Definition 2.19** ([10]).: The QP-mutation of \((Q,W)\) at \(v_{k}\) is the QP \((\mu_{k}(Q)_{red},\mu_{k}(W)_{red})\) given by the reduced part of the non-reduced mutation \((\mu_{k}(Q),\mu_{k}(W))\).

This is an involutive operation if performed at the same vertex \(v_{k}\) consecutively, i.e. performing QP-mutation of \((Q,W)\) at \(v_{k}\) twice consecutively leads to a QP right-equivalent to \((Q,W)\). Note also that Definition 2.6 of local reduction is one step towards extracting the reduced part of a QP.

#### 2.3.3. Reduced part for curve QPs

Let \(\mathcal{C}\) be a curve configuration and \((Q(\mathcal{C}),W(\mathcal{C}))\) its associated curve QP. The Hass-Scott algorithm, as stated in Theorem 2.9 above, geometrically explains Theorem 2.18 in the case of curve QPs. Indeed, consider a non-reduced configuration \(\mathcal{C}_{0}\) with a non-empty collection of bigons \(\{B_{1},\ldots,B_{m}\}\). Then its associated curve QP \((Q(\mathcal{C}_{0}),W(\mathcal{C}_{0}))\) is not reduced: by definition, it contains one 2-cycle in \(Q(\mathcal{C}_{0})\) for each bigon, and such 2-cycles each appear as a quadratic monomial in \(W(\mathcal{C}_{0})\).

**Definition 2.20**.: Let \(\mathcal{C}\) be a configuration with a collection of bigons \(\{B_{1},\ldots,B_{m}\}\). Any reduced curve configuration \(\mathcal{C}_{red}\) obtained by iteratively applying the Hass-Scott algorithm (Theorem 2.9) \(m\) times to \(\mathcal{C}\), where at each time exactly one bigon is eliminated, is said to be a reduction of \(\mathcal{C}\).

Namely, Theorem 2.9 allows us to remove one bigon at a time, by applying a sequence of triple point moves and then a local bigon move. By Proposition 2.5, the sequence of triple point moves does not change the right-equivalence class of \((Q(\mathcal{C}),W(\mathcal{C}))\). Each time that we apply Theorem 2.9 we need exactly one local bigon move. By Lemma 2.7, the QP \((Q(\mathcal{C}),W(\mathcal{C}))\) then undergoes a \(B\)-reduction at a bigon \(B\).
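To fix conventions, here is a minimal worked instance of Definition 2.14 and Definition 2.19 (our own toy computation): let \(Q\) be the linear \(A_{3}\)-quiver with arrows \(\alpha\colon v_{1}\to v_{2}\) and \(\beta\colon v_{2}\to v_{3}\), and \(W=0\). Mutating at \(v_{2}\), the incoming arrow \(\alpha\) and the outgoing arrow \(\beta\) produce the composite arrow \([\beta\alpha]\colon v_{1}\to v_{3}\), together with the reversed arrows \(\alpha^{*}\colon v_{2}\to v_{1}\) and \(\beta^{*}\colon v_{3}\to v_{2}\), so that \[\mu_{2}(W)=[0]+\Delta_{2}=[\beta\alpha]\alpha^{*}\beta^{*}.\] The resulting QP is a 3-cycle with a cubic potential; it contains no 2-cycles, hence it is already reduced and equals the QP-mutation of \((Q,W)\) at \(v_{2}\). This is precisely the local quiver behavior appearing in Figure 11 within the proof of Proposition 2.22 below.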
By iteratively applying Theorem 2.9, we obtain the following:

**Lemma 2.21**.: _Let \(\mathcal{C}\) be a configuration, \((Q(\mathcal{C}),W(\mathcal{C}))\) its associated curve QP, \((Q(\mathcal{C})_{red},W(\mathcal{C})_{red})\) its reduced QP part, and \(\mathcal{C}_{red}\) a reduction of the configuration \(\mathcal{C}\). Then_

1. \((Q(\mathcal{C}_{red}),W(\mathcal{C}_{red}))=(Q(\mathcal{C})_{red},W(\mathcal{C})_{red})\)_, up to right equivalence._
2. _If_ \((Q(\mathcal{C}),W(\mathcal{C}))\) _is non-degenerate, then_ \(Q(\mathcal{C}_{red})\) _has no 2-cycles._

The proof of Lemma 2.21 uses the notion of a non-degenerate QP, discussed in Section 3.3 below. This proof is thus postponed until Section 3.5. Note that \(Q(\mathcal{C}_{red})\) might a priori have 2-cycles, even if there are no bigons bounded by \(\mathcal{C}_{red}\), cf. Remark 2.17. The non-degeneracy of a quiver with potential precisely rules out this type of situation, where a 2-cycle in \(Q\) is not kept track of by the potential \(W\).

#### 2.3.4. Curve QPs under \(\gamma\)-exchanges change via QP-mutations

We conclude this section by relating the \(\gamma\)-exchanges in Definition 2.11 to the QP-mutation in Definition 2.19.

**Proposition 2.22**.: _Let \((Q(\mathcal{C}),W(\mathcal{C}))\) be a curve QP associated to \(\mathcal{C}\) and \(\gamma\in\mathcal{C}\). Then, possibly after applying a sequence of triple point moves and bigon moves to \(\mu_{\gamma}(\mathcal{C})\), the QP \((Q(\mu_{\gamma}(\mathcal{C})),W(\mu_{\gamma}(\mathcal{C})))\) is right-equivalent to the QP-mutation of the QP \((Q(\mathcal{C}),W(\mathcal{C}))\) at the vertex \(\gamma\)._

Proof.: Given Definition 2.19, there are three pieces to justify:

1. The change in the quiver \(Q(\mathcal{C})\) under \(\gamma\)-exchange.
2. The change in the potential \(W(\mathcal{C})\) under \(\gamma\)-exchange.
3. The reduced part of the non-reduced QP-mutation is indeed \((Q(\mu_{\gamma}(\mathcal{C})_{red}),W(\mu_{\gamma}(\mathcal{C})_{red}))\).

Parts (1) and (2) will be argued directly, as they indeed need a new computation. In fact, we will show that \((Q(\mu_{\gamma}(\mathcal{C})),W(\mu_{\gamma}(\mathcal{C})))\) is the non-reduced QP-mutation \((\mu_{\gamma}(Q(\mathcal{C})),\mu_{\gamma}(W(\mathcal{C})))\) of \((Q(\mathcal{C}),W(\mathcal{C}))\) at the vertex associated to \(\gamma\). Part (3) follows directly from Lemma 2.21 applied to the configuration \(\mu_{\gamma}(\mathcal{C})\), once \((Q(\mu_{\gamma}(\mathcal{C})),W(\mu_{\gamma}(\mathcal{C})))=(\mu_{\gamma}(Q(\mathcal{C})),\mu_{\gamma}(W(\mathcal{C})))\) is proven.

First, we focus on the case of \(\gamma\) and just two intersecting intervals \(\tau_{+}\) and \(\tau_{-}\), which intersect \(\gamma\) in two points \(p_{+}\) and \(p_{-}\) with opposite signs. The core computations in the general case essentially reduce to this situation. We have depicted this configuration in Figure 11 (left), where the corresponding local quiver is drawn beneath the configuration. The local quiver is the linear \(A_{3}\)-quiver, with a unique arrow \(p_{+}\) from (the vertex associated to) \(\tau_{+}\) to \(\gamma\), and a unique arrow \(p_{-}\) from \(\gamma\) to \(\tau_{-}\). In Figure 11 (and the upcoming figures in this proof) we always identify the right hand side of the figure with the left hand side via the identity map: these are all configurations drawn in the resulting cylinder; indeed, \(\gamma\) is a circle and it is depicted as a flat horizontal segment with its right endpoint being identified with its left endpoint.
After performing a \(\gamma\)-exchange, according to Definition 2.11, the resulting configuration is as depicted in Figure 11 (right). The local quiver, which is drawn right beneath the configuration, is the quiver obtained by quiver mutation at the vertex corresponding to \(\gamma\), according to Definition 2.19. Indeed, the previous two intersection points \(p_{\pm}\) persist in the configuration but, since the orientation of \(\gamma\) is opposite, their associated arrows go from \(\tau_{-}\) to \(\gamma\), for \(p_{-}\), and from \(\gamma\) to \(\tau_{+}\), for \(p_{+}\). As illustrated in Figure 11, a third intersection point \(q\) is created after the \(\gamma\)-exchange: it is an intersection between \(\tau_{-}\) and \(\tau_{+}\). This yields a new arrow \(q\) in the quiver which is precisely the composite arrow \(q=[p_{-}p_{+}]\). In conclusion, for these local configurations, we have verified that \(Q(\mu_{\gamma}(\mathcal{C}))\) equals \(\mu_{\gamma}(Q(\mathcal{C}))\).

Figure 11. The effect of \(\gamma\)-exchange in the case of exactly one positive and one negative intersection. The configuration before the \(\gamma\)-exchange is depicted on the left, and the configuration after the \(\gamma\)-exchange is depicted on the right. The local quivers recording the intersection patterns are drawn under the configurations.

Let us now study the change of the potential \(W(\mathcal{C})\) under \(\gamma\)-exchange, also in these particular configurations just with \(\gamma\), \(\tau_{+}\) and \(\tau_{-}\). For that we need to understand how polygons bounded by \(\mathcal{C}\) change under \(\gamma\)-exchange. The key image is Figure 12, which we now explain.

Figure 12. (Left) Potential pieces of polygons in \(\mathcal{C}\) that have \(p_{-}\) and \(p_{+}\) as vertices: the regions are highlighted in yellow. The diagram in the third row of this left column represents an empty region. (Right) Potential pieces of polygons in \(\mathcal{C}\) that have \(p_{-},p_{+}\) and \(q\) as vertices, now after a \(\gamma\)-exchange.

Before the \(\gamma\)-exchange, the potential \(W(\mathcal{C})\) records polygons in \(\mathcal{C}\). Consider the intersection of any such polygon recorded by \(W(\mathcal{C})\) with the region \(U\) and assume again that we obtain the configuration with just \(\gamma\), \(\tau_{+}\) and \(\tau_{-}\), intersecting as above. It suffices to focus on polygons that use \(p_{-}\) and \(p_{+}\). In this configuration, there are exactly two, drawn in the first and second rows of Figure 12 (left). The potential \(W(\mathcal{C})\) records such polygons with monomials (on the arrows of \(\mathcal{C}\)) that contain \(p_{-}p_{+}\). After the \(\gamma\)-exchange, as drawn in Figure 12 (right), those polygons still exist but now must use the new intersection point \(q\).6 Indeed, the boundary conditions for the polygons are given (entering in \(\tau_{\pm}\) and exiting in \(\tau_{\mp}\)) and there is a unique manner in which those regions can exist in the configurations of Figure 12 (right). This is precisely the term \([W(\mathcal{C})]\) in Definition 2.19 of the mutation of the potential: each time we see \(p_{-}p_{+}\) in a monomial in \(W(\mathcal{C})\) we must substitute it by the composite arrow \(q=[p_{-}p_{+}]\). There is an additional contribution to \(W(\mu_{\gamma}(\mathcal{C}))\): there is a triangle, as drawn in the third row of Figure 12.
Since its vertices are \(q,p_{+}\) and \(p_{-}\), it contributes the monomial \(qp_{+}p_{-}\) to \(W(\mu_{\gamma}(\mathcal{C}))\), which is precisely the \(\Delta_{\gamma}\) term in Definition 2.19. Therefore, we have \[W(\mu_{\gamma}(\mathcal{C}))=[W(\mathcal{C})]+qp_{+}p_{-}=[W(\mathcal{C})]+\Delta_{\gamma}=\mu_{\gamma}(W(\mathcal{C})).\]

Footnote 6: From the perspective of \(H_{1}(U;\mathbb{Z})\), this intersection point appears because of the Picard-Lefschetz formula in its most elementary setting: explaining how homology classes change under Dehn twists.

Thus far we have argued \((Q(\mu_{\gamma}(\mathcal{C})),W(\mu_{\gamma}(\mathcal{C})))=(\mu_{\gamma}(Q(\mathcal{C})),\mu_{\gamma}(W(\mathcal{C})))\) for the configuration with \(\gamma\), \(\tau_{+}\) and \(\tau_{-}\) as in Figure 11. The general case is concluded as follows. Consider a neighborhood \(U\) of \(\gamma\) such that7 the configuration \(\mathcal{C}\) intersected with \(U\) is as depicted in Figure 13. That is, only curves \(\gamma_{i}\in\mathcal{C}\) that intersect \(\gamma\) do intersect \(U\) and the intersections are (up to planar isotopy) straight vertical segments that either point upwards or downwards and only intersect \(\gamma\) once (and these segments do not intersect each other in \(U\)).

Footnote 7: Such a neighborhood always exists because \(\mathcal{C}\) has finitely many curves and they all intersect transversely.

Footnote 8: We drew the blue curves \(d_{l,k}\) as PL-curves for convenience: the curves are smooth, and we smooth this PL depiction by an arbitrary smoothing of the corners, so no further intersections are created.

Choose an arbitrary but fixed point \(g\in\gamma\). As we scan the arrows starting at \(g\) and move in the direction given by the orientation of \(\gamma\), there will be:

* A number of blocks \(U_{1},\ldots,U_{v}\), \(v\in\mathbb{N}\), of arrows pointing upward, each with \(u_{i}:=|U_{i}|\) arrows, \(i\in[v]\). The curves in \(U_{i}\) will be labeled by \(u_{i,j}\), \(i\in[v]\), \(j\in[u_{i}]\), where the index \(j\) is ordered left to right as we traverse \(\gamma\) according to its orientation.
* A number of blocks \(D_{1},\ldots,D_{e}\), \(e\in\mathbb{N}\), of arrows pointing downward, each with cardinality \(d_{i}:=|D_{i}|\), \(i\in[e]\). The curves in \(D_{i}\) will be labeled by \(d_{i,j}\), \(i\in[e]\), \(j\in[d_{i}]\), where the index \(j\) is ordered left to right as we traverse \(\gamma\) according to its orientation.

Figure 14 illustrates this notation in a specific example. Now, for every pair of curves \(u_{i,j}\) and \(d_{l,k}\), the configuration of three curves \(\gamma\), \(u_{i,j}\) and \(d_{l,k}\) is precisely the local configuration we studied above, as in Figure 11 (left). Since a \(\gamma\)-exchange fixes all the \(u_{i,j}\) and applies a Dehn twist to all the \(d_{l,k}\), the behavior of _any_ such pair under \(\gamma\)-exchange is identical and it coincides with the one studied above, see Figure 11 (right). As emphasized in Subsection 2.3.1, a choice of representative for the (mapping class of the) Dehn twist must be made: we specifically choose a compactly supported diffeomorphism \(f\in\operatorname{Diff}^{\operatorname{c}}(U)\) which acts on the set of curves \(d_{l,k}\) as drawn8 in Figure 14.
Namely, each curve \(f(d_{l,k})\) intersects each \(u_{i,j}\) and \(\gamma\) exactly once.9 Footnote 9: Such a choice exists: the standard model for the Dehn twist defined via the geodesic flow in the punctured disk bundle (extended by the antipodal map to the zero section) has this property. Figure 13. The general configuration \(\mathcal{C}\) in a neighborhood of \(\gamma\) before a \(\gamma\)-exchange. Since the only intersections before the \(\gamma\)-exchange are between \(u_{i,j}\) and \(\gamma\), or \(d_{l,k}\) and \(\gamma\), the new intersections afterwards will be precisely as recorded by the local configuration in Figure 11 (right). Note that this applies to every pair of \(u_{i,j}\), \(d_{l,k}\) and hence for every pair of arrows in-and-out of \(\gamma\) in \(Q(\mathcal{C})\) we will add the composite arrow (and flip that pair of arrows). Thus the quiver changes according to a mutation at \(\gamma\) also in this general case. Similarly, since the only regions in Figure 13 using intersection points with \(\gamma\) are precisely those entering a \(u_{i,j}\) and exiting a \(d_{l,k}\), or vice-versa, the change of polygons in \(\mathcal{C}\) will be recorded by tracking all the polygons that appear in the local configuration in Figure 11 (right) for _all_ pairs of \(u_{i,j}\) and \(d_{l,k}\). We have already argued that the potential in each local configuration precisely changes as a (non-reduced) QP-mutation, and therefore it is also the case for the potential in the general configuration. ## 3. Non-degeneracy of curve QPs for plabic fences In this section we construct curve configurations \(\mathcal{C}\) from a certain type of plabic graphs and show rigidity for their associated curve QP \((Q(\mathcal{C}),W(\mathcal{C}))\). Let us start with the plabic graphs that we need: **Definition 3.1**.: An embedded planar bicolored graph \(\mathbb{G}\subset\mathbb{R}^{2}\) is said to be a _plabic fence_ if it satisfies the following conditions. 1. The vertices of \(\mathbb{G}\subset\mathbb{R}^{2}\) belong to the standard integral lattice \(\mathbb{Z}^{2}\subset\mathbb{R}^{2}\), and they are colored in either black or white. 2. The edges of \(\mathbb{G}\subset\mathbb{R}^{2}\) belong to the standard integral grid \((\mathbb{Z}\times\mathbb{R})\cup(\mathbb{R}\times\mathbb{Z})\subset\mathbb{R }^{2}\). Edges that are contained in \(\mathbb{Z}\times\mathbb{R}\) are said to be _vertical_, and edges that are contained in \(\mathbb{R}\times\mathbb{Z}\) are said to be _horizontal_. 3. A maximal connected union of horizontal edges is called a _horizontal line_. All horizontal lines must start, on the left, at univalent white vertices with the same \(x\)-coordinate and must end, on the right, at univalent white vertices with the same \(x\)-coordinate. 4. Each vertical edge must end at trivalent vertices of opposite colors, with white on top and black on bottom, and the end points of a vertical edge must be contained in the interior of a horizontal line. In addition, no two vertical edges are contained in the same (vertical) line. These are a special type of plabic graphs, studied in [14, Section 12], [15, Section 2.5], [16, Section 5], and also [17, 18], the latter in the context of triangulations of flag configuration diagrams. For a visual instance, two plabic fences \(\mathbb{G}\) are drawn in Figure 15. Figure 14. The general configuration \(\mathcal{C}\) in a neighborhood of \(\gamma\) after a \(\gamma\)-exchange. Figure 15. Two plabic fences. 
The fence on the right encodes \(\beta=(\sigma_{1}\sigma_{2})^{4}\).

**Remark 3.2**.: The definition of plabic fences in [12, 13] is more general but, for our purposes, we can work with Definition 3.1 without loss of generality. This is due to the fact that cyclic rotation is a quasi-cluster transformation, as shown in [13, 14]; see also [15].

We use the following bijection between plabic fences with \(n\) horizontal lines and positive braid words on \(n\) strands. For a plabic fence \(\mathbb{G}\) we construct a braid word \(\beta(\mathbb{G})\) iteratively by scanning the plabic fence left-to-right: starting with the empty word \(\beta(\mathbb{G})\), when we encounter a vertical edge between the \(k\)th and \((k+1)\)st horizontal lines, counting from the bottom, we add the Artin generator \(\sigma_{k}\) to the right of \(\beta(\mathbb{G})\). The plabic fence that gives the braid word \(\beta\) will be denoted by \(\mathbb{G}(\beta)\).

### Curve configuration \(\mathcal{C}(\mathbb{G})\) of a plabic fence \(\mathbb{G}\)

A plabic fence gives rise to a curve configuration \(\mathcal{C}(\mathbb{G})\) as follows. Consider the conjugate surface \(\Sigma(\mathbb{G})\) of [10, Section 1.1.1]; see also [1, Section 2], [13, Section 4] or [11, Section 2.1] for definition and details on conjugate surfaces. For the purposes of this manuscript, this is a (ribbon) surface obtained from a plabic graph by using the three local models in Figure 16.10 It retracts to \(\mathbb{G}\) and thus \(H_{1}(\Sigma(\mathbb{G});\mathbb{Z})\cong H_{1}(\mathbb{G};\mathbb{Z})\), the latter being a free \(\mathbb{Z}\)-module of rank equal to the number of faces of \(\mathbb{G}\).11

Footnote 10: The boundary of the surface is given by the alternating strand diagram of \(\mathbb{G}\), see [14, 15].

Footnote 11: A face is a bounded connected component of the complement of the plabic fence in \(\mathbb{R}^{2}\).

We allow the equivalence in Figure 17 for conjugate surfaces. The fact that this move does not affect the combinatorics, nor the symplectic geometry in Section 4, is verified rather simply12, e.g. it is proven in [11, Section 2.1.3]. We always consider the diagram for the conjugate surface \(\Sigma(\mathbb{G})\) after we have applied this move (left to right in Figure 17) to every local double-crossing configuration as in Figure 17 (left). That is, we remove any instances of Figure 17 (left) in the diagram of \(\Sigma(\mathbb{G})\), substituting them by Figure 17 (right).

Footnote 12: The move removes/creates a trivial 2-cycle that would also feature in the potential, therefore the reduced part of the QP of the associated curve configuration will remain invariant.

Therefore, near a face \(F\subset\mathbb{G}\), the conjugate surface can be drawn as in Figure 18; the face \(F\) is the central face, immediately to the left of the green vertical edge. Note that this is the general pattern near a face \(F\): bounded between two vertical edges at the same level, with a series of vertical edges arriving with a black vertex from the level right above, and a series of vertical edges arriving with a white vertex from the level right below.

Figure 16. The three local models needed to draw a (projection of a) conjugate surface associated to a plabic fence \(\mathbb{G}\). The boundary of the surface is in dark blue and the surface itself in light blue.

Figure 17. An equivalence of conjugate surfaces. This corresponds to a non-dangerous tangency in the alternating strand diagram.
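To illustrate the correspondence \(\mathbb{G}\leftrightarrow\beta(\mathbb{G})\) above on a small instance of our own choosing: consider the plabic fence with three horizontal lines whose vertical edges, scanned left to right, lie between the first and second, the second and third, and again the first and second horizontal lines, counting from the bottom. Reading off the Artin generators left to right yields \[\beta(\mathbb{G})=\sigma_{1}\sigma_{2}\sigma_{1},\] and conversely this fence is \(\mathbb{G}(\sigma_{1}\sigma_{2}\sigma_{1})\). Likewise, applying this procedure to the fence on the right of Figure 15 recovers \(\beta=(\sigma_{1}\sigma_{2})^{4}\), as stated above.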
The curve configuration \(\mathcal{C}(\mathbb{G})\) from \(\mathbb{G}\) is built as follows:

**Definition 3.3**.: Let \(\mathbb{G}\) be a plabic fence and \(\Sigma(\mathbb{G})\) its conjugate surface. The curve configuration \(\mathcal{C}(\mathbb{G})\) is the configuration of embedded closed curves in \(\Sigma(\mathbb{G})\) constructed as follows:

1. There is a curve \(\gamma_{F}\in\mathcal{C}(\mathbb{G})\) for every face \(F\subset\mathbb{G}\).
2. Each curve \(\gamma_{F}\subset\Sigma(\mathbb{G})\) is obtained by applying the local models in Figure 19 near each vertex of \(\mathbb{G}\) and connecting the resulting segments by following the boundary of (the diagram of) \(\Sigma(\mathbb{G})\) in a planar parallel manner.

The configuration \(\mathcal{C}(\mathbb{G})\) of curves in \(\Sigma(\mathbb{G})\) is said to be the (curve) configuration associated to \(\mathbb{G}\).

Figure 20 (left) depicts a plabic fence with a face \(F\) and Figure 20 (right) depicts its associated closed embedded curve \(\gamma_{F}\) in \(\Sigma(\mathbb{G})\). Note again that this is the generic form of any plabic fence near a face \(F\), possibly adding a few more vertical edges at the levels right above and below, which would in any case not change \(\gamma_{F}\). Note that we can (and do) draw the curves in \(\mathcal{C}(\mathbb{G})\) such that two curves \(\gamma_{F_{1}}\) and \(\gamma_{F_{2}}\) in \(\mathcal{C}(\mathbb{G})\) only intersect at the precise twist point of the ribbon diagram for \(\Sigma(\mathbb{G})\). That is, the only intersections of curves in \(\mathcal{C}(\mathbb{G})\) that occur are of the form depicted in Figure 21.

Figure 19. The two local models needed to associate a closed curve \(\gamma_{F}\) in \(\Sigma(\mathbb{G})\) to every face of \(\mathbb{G}\).

Figure 21. The two local models near an intersection point of two curves in \(\mathcal{C}(\mathbb{G})\).

Figure 18. (Left) A plabic fence in black with the boundary of the conjugate surface drawn in blue. (Right) The conjugate surface associated to the plabic fence, with its interior drawn in turquoise.

Figure 20. (Left) A plabic fence \(\mathbb{G}\) in black with a face \(F\) highlighted. (Right) The curve \(\gamma_{F}\) in \(\Sigma(\mathbb{G})\) associated to the face \(F\).

In the next subsection we introduce a QP \((Q(\mathbb{G}),W(\mathbb{G}))\) that will be useful to understand the properties of the curve QP associated to \(\mathcal{C}(\mathbb{G})\).

**Remark 3.4**.: A weave \(\mathfrak{w}\) naturally produces a curve configuration \(\mathcal{C}(\mathfrak{w})\) as well. If one chooses the weave \(\mathfrak{w}=\mathfrak{w}(\mathbb{G})\) associated to a plabic fence, as constructed in [12, Section 3], then the curve configuration \(\mathcal{C}(\mathfrak{w})\) is equivalent to \(\mathcal{C}(\mathbb{G})\). This can be proven with the same techniques as in [10, Section 3] but will not be needed for the present manuscript.

### The QP associated to a plabic fence

There are several descriptions of quivers \(Q(\mathbb{G})\) associated to a plabic fence \(\mathbb{G}\), see [1, 13, 14, 15] for some; they are all equivalent. For specificity, we explicitly describe the quiver \(Q(\mathbb{G})\) in the upcoming Definition 3.5. A piece of notation: if \(e\subset\mathbb{G}\) is a vertical edge at level \(k\), we denote by \(F_{e}\) the face of \(\mathbb{G}\) that has \(e\) as its right vertical edge. This face \(F_{e}\) is unique or it does not exist. By definition, a black pente-row (resp.
By definition, a black pente-row (resp. white pente-row) is a consecutive collection of black (resp. white) vertices in the same horizontal edge of \(\mathbb{G}\) such that:

* There must be two white (resp. black) vertices bounding it: one at its left and one at its right.13
* Each connected component of \(\mathbb{R}^{2}\setminus\mathbb{G}\) whose closure contains any segment between two of the black (resp. white) vertices above, or between such a vertex and one of the two bounding vertices, must be a face.

Footnote 13: This is in the spirit of pente capture configurations, following the board game Pente.

The total number of black (resp. white) vertices in a black (resp. white) pente-row is said to be its length. See Figure 22 for a length four black pente-row and a length four white pente-row, where we have marked the connected components of \(\mathbb{R}^{2}\setminus\mathbb{G}\) that must be faces with a blue dot. The rightmost face in a black (resp. white) pente-row, which has a left corner at the rightmost black (resp. white) vertex, is said to be its right face.

**Definition 3.5**.: Let \(\mathbb{G}\) be a plabic fence with \(n\) horizontal edges; its associated QP \((Q(\mathbb{G}),W(\mathbb{G}))\) is defined as follows. The quiver \(Q(\mathbb{G})\) has vertex set the set of faces of \(\mathbb{G}\). The arrow set of \(Q(\mathbb{G})\) is inductively described as follows, scanning \(\mathbb{G}\) left to right:

* If \(\mathbb{G}\) is the empty plabic fence, then the arrow set of \(Q(\mathbb{G})\) is empty.
* Choose a vertical edge \(e\subset\mathbb{G}\) at level \(k\) and assume that the arrow set of \(Q(\mathbb{G}_{<e})\) is \(A_{<e}\), where \(\mathbb{G}_{<e}\) is the plabic subfence of \(\mathbb{G}\) consisting of those vertical edges to the left of \(e\). If \(F_{e}\) does not exist, the arrow set of \(Q(\mathbb{G}_{\leq e})\), where \(\mathbb{G}_{\leq e}=\mathbb{G}_{<e}\cup\{e\}\), is defined to be \(A_{<e}\). If \(F_{e}\) exists, the arrow set of \(Q(\mathbb{G}_{\leq e})\) is defined to be \(A_{<e}\) union the following possible arrows:
  * (a) Let \(d\) be the left vertical edge of \(F_{e}\). If \(F_{d}\) exists, then we add an arrow \([de]\) from \(F_{d}\) to \(F_{e}\).
  * (b) Let \(d^{\uparrow}\) be the first vertical edge in \(\mathbb{G}\) at level \((k+1)\) to the right of \(d\). If \(d^{\uparrow}\) and \(F_{d^{\uparrow}}\) exist, then we add an arrow \([ed^{\uparrow}]\) from \(F_{e}\) to \(F_{d^{\uparrow}}\). See Figure 23 (left) for such an arrow, marked with a pink \((Z)\) pattern.
  * (c) Let \(d^{\downarrow}\) be the first vertical edge in \(\mathbb{G}\) at level \((k-1)\) to the right of \(d\). If \(d^{\downarrow}\) and \(F_{d^{\downarrow}}\) exist, then we add an arrow \([ed^{\downarrow}]\) from \(F_{e}\) to \(F_{d^{\downarrow}}\). See Figure 23 (left) for such an arrow, marked with a red \((S)\) pattern.

If the hypotheses in these cases are not met, no arrows are added: e.g. if \(F_{e}\) or \(F_{d}\) do not exist in case (a), we do not add any arrows at that stage. See Figure 23 (right) for an instance of \(Q(\mathbb{G})\). Note that \(Q(\mathbb{G})\) around a black pente-row of \(\mathbb{G}\) has a (planar) clockwise cycle, whereas \(Q(\mathbb{G})\) around a white pente-row has a (planar) counter-clockwise cycle. The lengths of these cycles in the quiver are the lengths of the rows plus two.

Figure 22. (Left) A black pente-row. (Right) A white pente-row.

The potential \(W(\mathbb{G})\) is similarly described inductively. If \(\mathbb{G}\) is the empty plabic fence, then \(W(\mathbb{G})=0\).
For each vertical edge \(e\) added to the right of \(\mathbb{G}_{<e}\) as above, the potential is defined as

\[W(\mathbb{G}_{\leq e})=W(\mathbb{G}_{<e})+P_{black}(e)-P_{white}(e),\]

where the monomials in \(P_{black}(e)\) and \(P_{white}(e)\) are described as follows. By definition, \(P_{black}(e)\), resp. \(P_{white}(e)\), is the (cyclic) monomial in the arrows of \(Q(\mathbb{G})\) encoding the planar cycle in \(Q(\mathbb{G})\) associated to the unique black (resp. white) pente-row with right face equal to \(F_{e}\), if such a pente-row exists, and zero otherwise. Note that monomials in \(P_{white}(e)\) have an overall minus sign in front due to the counter-clockwise orientation of their cycles.

**Remark 3.6**.: Note that a pente-row of length one with right face \(F_{e}\) gives rise to a triangle cycle in \(Q(\mathbb{G})\) and thus to a cubic monomial in the potential. Thus this construction generalizes known quivers with potentials from bipartite graphs, cf. [1, Section 5.1.2]. For instance, the case of \(\beta=w_{0}\), the longest element in the symmetric group \(S_{n}\), recovers the potential associated to the \(n\)-triangulation of a triangle, see [1, Section 1.1] or [1, Section 3.1].

**Remark 3.7**.: For a non-inductive definition of \(Q(\mathbb{G})\), cf. [1, Section 1]. The inductive nature of Definition 3.5 is useful in our inductive proofs of Propositions 3.8 and 3.12. Also, there is no need to keep track of frozen vertices in Definition 3.5. There are natural generalizations to iced quivers, e.g. see [10], [1] or [12].

The main reason to introduce \((Q(\mathbb{G}),W(\mathbb{G}))\) is to be able to describe the QP associated to \(\mathcal{C}(\mathbb{G})\) combinatorially in terms of \(\mathbb{G}\). Indeed, we have:

**Proposition 3.8**.: _Let \(\mathbb{G}\) be a plabic fence and \(\mathcal{C}=\mathcal{C}(\mathbb{G})\) its associated curve configuration. Then the curve QP \((Q(\mathcal{C}),W(\mathcal{C}))\) equals the QP \((Q(\mathbb{G}),W(\mathbb{G}))\)._

Proof.: Let us scan the plabic fence \(\mathbb{G}\) left to right and compare \(Q(\mathbb{G})\) and \(Q(\mathcal{C})\) as we do so. Let \(e\subset\mathbb{G}\) be a vertical edge at level \(k\) and \(F=F_{e}\) its associated face.14 Since \(Q(\mathbb{G})\) and the curves in \(\mathcal{C}\) associated to faces at level \(k\) are local in the union of the levels \((k-1)\), \(k\) and \((k+1)\), it suffices to consider the piece of the plabic fence in the neighborhood of \(F\) consisting of all faces that share a vertical edge (there is at most one such face) or a piece of a horizontal edge with \(F\). An example of a neighborhood of \(F\) is drawn in Figure 24, with the relevant set of curves in \(\mathcal{C}\) on its left and the quiver on its right. Note that we are scanning left to right and assuming that the vertical edge \(e\) is the rightmost edge of \(\mathbb{G}\) at this stage. For the plabic fence in Figure 24 (left), this is the edge depicted in green in Figure 18.

Figure 23. (Left) The two local patterns for arrows in \(Q(\mathbb{G})\) that are not horizontal. (Right) An example of \(Q(\mathbb{G})\).

The curve \(\gamma_{F}\) associated to the new vertex of \(Q(\mathbb{G}_{\leq e})\) not in \(Q(\mathbb{G}_{<e})\) intersects the other curves \(\gamma_{F_{i}}\), for \(F_{i}\) faces in \(\mathbb{G}_{<e}\), when there is a ribbon twist. (Recall Figure 21.) The curve \(\gamma_{F}\) only traverses twists that respectively connect to:

* The face \(F_{d}\), where \(d\) is the left vertical edge of \(F_{e}\), if it exists.
* The face \(F_{d^{\uparrow}}\), as in Definition 3.5, if it exists.
* The face \(F_{d^{\downarrow}}\), as in Definition 3.5, if it exists.
* The region to the right of \(e\), which, by hypothesis at this stage (\(e\) being the rightmost edge we have scanned), is unbounded and thus not a face.

The three intersections \(\gamma_{d}\cap\gamma_{e}\), \(\gamma_{d^{\uparrow}}\cap\gamma_{e}\) and \(\gamma_{d^{\downarrow}}\cap\gamma_{e}\) are precisely recorded by the arrows \([de]\), \([ed^{\uparrow}]\) and \([ed^{\downarrow}]\), if they respectively exist. The direction of the arrow is given by the intersection sign, which is positive for \(\gamma_{d}\cap\gamma_{e}\) and negative for \(\gamma_{d^{\uparrow}}\cap\gamma_{e}\) and \(\gamma_{d^{\downarrow}}\cap\gamma_{e}\). Therefore \(Q(\mathbb{G})\) and \(Q(\mathcal{C})\) coincide.

In order to compare \(W(\mathbb{G})\) and \(W(\mathcal{C})\) it suffices to note that a polygon bounded by \(\mathcal{C}\) in the conjugate surface \(\Sigma(\mathbb{G})\) must have vertices as in Figure 25. That is, a polygon can use only one of the two regions at a given ribbon twist and, once a side is chosen, it must be the interior region bounded by the two curves.15 Therefore, polygons bounded by \(\mathcal{C}\) must correspond to pente-rows. Indeed, a pente-row gives a unique polygon bounded by \(\mathcal{C}\) by drawing \(\Sigma(\mathbb{G})\) near the pente-row and cutting at the ribbon twists around the pente-row; each ribbon twist corresponds to a vertex of the polygon. Such cuts give an embedded planar region and there is a unique polygon embedded in it with vertices given by the (locations where we cut the) ribbon twists. Note that we cut as many vertical ribbon twists as the length of the pente-row and we always cut two horizontal ribbon twists, i.e. the polygon has as many sides as the length of the pente-row plus two. The polygon is oriented clockwise, resp. counterclockwise, if the row is black, resp. white.

Footnote 15: The other regions at one side of a twist have boundary components of \(\Sigma(\mathbb{G})\) as part of their boundary.

Conversely, the conjugate surface \(\Sigma(\mathbb{G})\) has a ribbon twist in any segment (horizontal or vertical) between a black and a white vertex, and no twist if a segment is between two vertices of the same color. Embeddedness of the polygon in \(\Sigma(\mathbb{G})\) implies that it must lie within a region bounded by twists (with no twists in the interior of that region). Since the ends of a polygon must be as in Figure 25, the only possible polygons bounded by \(\mathcal{C}\) must have the form of those around a pente-row. In conclusion, polygons bounded by \(\mathcal{C}\) in \(\Sigma(\mathbb{G})\) must correspond to pente-rows and the potentials \(W(\mathbb{G})\) and \(W(\mathcal{C})\) coincide as we scan \(\mathbb{G}\) left to right.

Figure 24. (Left) A neighborhood of a face \(F\) with the conjugate surface depicted and all the curves \(\gamma_{F_{i}}\) for the faces \(F_{i}\) of \(\mathbb{G}\) adjacent to \(F\). (Right) The corresponding piece of the quiver.

Figure 25. In yellow, pieces of regions in the conjugate surface that have the potential to give a polygon bounded by \(\mathcal{C}\).

We conclude this subsection with a combinatorial property satisfied by the quivers \(Q(\mathbb{G})\). This property will be used in the proof of Proposition 3.12. The following proof was explained to us by D. Weng, to whom we are grateful:

**Lemma 3.9**.: _Let \(\mathbb{G}\) be a plabic fence and \(Q(\mathbb{G})\) its associated quiver.
Consider the rightmost vertical edge \(e\in\mathbb{G}\) and the vertex \(F_{e}\in Q(\mathbb{G})_{0}\) associated to the face \(F_{e}\) immediately to the left of \(e\), if it exists. Then there exists a sequence of vertices \((v_{1},\dots,v_{k})\), with \(v_{i}\in Q(\mathbb{G}_{<e})_{0}\), \(i\in[k]\), and \(k\in\mathbb{N}\) depending on \(e\), such that \(\mu_{v_{k}}\cdots\mu_{v_{1}}(Q(\mathbb{G}))\) has \(F_{e}\) as a source vertex._

Proof.: Let us consider the set of faces \(F_{1},\dots,F_{k},F_{k+1}\) at the same level as the edge \(e\). We order them left to right as we scan \(\mathbb{G}\); thus \(F_{1}\) is the leftmost face of \(\mathbb{G}\) at that level, \(F_{2}\) is the face adjacent to \(F_{1}\) exactly to its right, and so on until \(F_{k+1}=F_{e}\). The claim is:

_Assertion_. \((v_{1},\dots,v_{k})=(F_{1},\dots,F_{k})\) is a sequence that turns \(F_{k+1}=F_{e}\) into a source.

Before we prove the assertion, three comments. First, we recall that [15], or [15, Section 5], explains how to associate a quiver to a more general type of plabic fence than that in Definition 3.1.16 Namely, to a plabic fence that also allows vertical edges to have a black vertex at the top and a white vertex at the bottom. See also [15, Section 2] for such objects, their quivers and their properties. These are essentially all rephrasings of the theory of double wiring diagrams, cf. [15, Section 2.4], now in the plabic graph terminology and generalized to the non-reduced case. Just for this proof, we refer to these more general plabic fences also as plabic fences.

Footnote 16: These are instances of the general theory of quivers for plabic graphs, see [16].

Now, the reflection move in [15, Section 2.3], see also [15, Section 5.2] or [15, Section 4], is as follows. A reflection move \(r_{k}\) at level \(k\) of a plabic fence \(\mathbb{G}\) is the operation that inputs \(\mathbb{G}\) and outputs a plabic fence \(r_{k}(\mathbb{G})\) which coincides with \(\mathbb{G}\) everywhere except that the leftmost vertical edge at level \(k\) has been flipped.17 A reflection move does _not_ change the quiver \(Q(\mathbb{G})\), see [15].

Footnote 17: Here flipping means: if it had a black vertex at the top, now it has a black vertex at the bottom, and vice versa.

Second, for each face \(F\), let us denote by \(\partial_{-}F\), resp. \(\partial_{+}F\), its left vertical edge, resp. right vertical edge. Suppose that we have a square face \(F\) in \(\mathbb{G}\) whose \(\partial_{-}F\) has a black vertex at the top and whose \(\partial_{+}F\) has a black vertex at the bottom. Then the square move, cf. [15, Section 2.5], is the operation that exchanges \(\partial_{-}F\) and \(\partial_{+}F\), i.e. the new plabic fence \(\mu_{F}(\mathbb{G})\) coincides with \(\mathbb{G}\) everywhere except that \(\partial_{-}F\) now has a black vertex at the bottom and \(\partial_{+}F\) now has a black vertex at the top. A square move induces a mutation of \(Q(\mathbb{G})\) at the vertex \(F\), i.e. \(Q(\mu_{F}(\mathbb{G}))=\mu_{F}(Q(\mathbb{G}))\).

Third, the two moves in Figure 26 allow us to slide a vertical edge with black at the top rightwards through vertical edges with black at the bottom to its upper right or lower right.

Let us now prove the assertion above, which will conclude the proof of the lemma.

Proof of Assertion: Let us consider the initial plabic fence \(\mathbb{G}\) and apply a reflection \(r_{k}(\mathbb{G})\), where \(k\) is the level of \(e\in\mathbb{G}\). We still have \(Q(r_{k}(\mathbb{G}))=Q(\mathbb{G})\).
This reflection creates a vertical edge \(\delta\) with black on top at the leftmost part of level \(k\) of \(r_{k}(\mathbb{G})\); all the remaining vertical edges in \(r_{k}(\mathbb{G})\) have black at the bottom. Then we slide \(\delta\) to the right through level \(k\) by either applying the sliding moves in Figure 26, or using square moves. The former have no effect on the quiver, the latter induce quiver mutations. Let us slide \(\delta\) to the right until \(\delta\) is exactly to the left of \(e\), so that (the new) \(F_{e}\) has \(\partial_{-}F_{e}=\delta\) and \(\partial_{+}F_{e}=e\). Denote by \(\mathbb{G}_{\delta,e}\) this plabic fence. Then \(\delta\) will have slid through the faces \(F_{1},\dots,F_{k}\) when going from \(\mathbb{G}\) to \(\mathbb{G}_{\delta,e}\). Thus at this stage we have performed a sequence of mutations at those faces, in this same order, and \(Q(\mathbb{G}_{\delta,e})=\mu_{F_{k}}\cdots\mu_{F_{1}}(Q(\mathbb{G}))\). To conclude, it suffices to note that the quiver \(Q(\mathbb{G}_{\delta,e})\) associated to the plabic fence \(\mathbb{G}_{\delta,e}\) has a unique arrow out of the vertex associated to \(F_{e}\), and therefore \(F_{e}\) is a source, as required.

Figure 26. Two moves on plabic fences that do not change the quiver. They allow us to move a certain type of vertical edge to the right in the presence of certain vertical edges above and below it.

Lemma 3.9 states that we can find a sequence of mutations for \(Q(\mathbb{G})\) so that the vertex corresponding to the face immediately to the left of the rightmost vertical edge of \(\mathbb{G}\) becomes a source vertex, and such that this sequence of mutations never includes a mutation at that particular vertex that we want to turn into a source vertex.

**Remark 3.10**.: This is not needed for our applications, but note that the proof of Lemma 3.9 explicitly presents a mutation sequence that turns \(F_{e}\) into a source.

### Non-degeneracy of \((Q(\mathcal{C}(\mathbb{G})),W(\mathcal{C}(\mathbb{G})))\)

The notion of non-degeneracy of a QP was introduced in [10, Section 7]. It reads as follows:

**Definition 3.11**.: Let \((Q,W)\) be a QP and \(k_{1},\ldots,k_{l}\in Q_{0}\) be a finite sequence of vertices, no two consecutive ones being equal. By definition, \((Q,W)\) is \((k_{1},\ldots,k_{l})\)-non-degenerate if all the QPs \((Q,W)\), \(\mu_{k_{1}}(Q,W),\mu_{k_{2}}\mu_{k_{1}}(Q,W),\ldots,\mu_{k_{l}}\cdots\mu_{k_{2}}\mu_{k_{1}}(Q,W)\) are \(2\)-acyclic. By definition, \((Q,W)\) is non-degenerate if it is \((k_{1},\ldots,k_{l})\)-non-degenerate for every such sequence of vertices.

Thus, in a non-degenerate \((Q,W)\) we can choose an arbitrary sequence of vertices in \(Q\) and mutate the QP \((Q,W)\) along that sequence of vertices. That is, it is possible to perform QP-mutations to \((Q,W)\) just by naming the quiver mutations of \(Q\). There are some QPs that are non-degenerate and some that are degenerate, see [10], especially Sections 7 and 8 therein, and, for instance, [11, Section 2.2], where quivers associated to the top positroid variety of a Grassmannian are shown to be non-degenerate.

Before establishing non-degeneracy of \((Q(\mathcal{C}),W(\mathcal{C}))\), we recall that [10, Definition 6.10] defines a QP \((Q,W)\) to be rigid if the trace space of its Jacobian algebra is equal to the ground ring \(\mathbb{C}\). Rather than the definition itself, the three following properties of rigid QPs are most relevant to us:

* Rigidity is preserved under QP mutation.
That is, if \((Q,W)\) is rigid, then \(\mu_{v}(Q,W)\) is rigid for any vertex \(v\in Q_{0}\). This is established in [10, Corollary 6.11].

* If \((Q,W)\) is rigid and \(Q^{\prime}\) is obtained by adding one vertex \(v\) to \(Q\), so that \(Q^{\prime}_{0}=Q_{0}\cup\{v\}\), such that \(v\) is a source (or a sink) in \(Q^{\prime}\), then \((Q^{\prime},W^{\prime})\) with \(W^{\prime}=W\) is also rigid.18 This is proven in [1, Remark 4.4], cf. also [10, Section 8].

Footnote 18: In the quiver literature, this operation is part of what is known as a triangular extension of \(Q(\mathbb{G}_{<e})\) and a point, cf. [1, Definition 3.8].

* A rigid QP \((Q,W)\) is non-degenerate. This is proven in [10, Corollary 8.2].

Let us now show that the QPs \((Q(\mathcal{C}),W(\mathcal{C}))\) associated to curve configurations \(\mathcal{C}\) such that \(\mathcal{C}=\mathcal{C}(\mathbb{G})\) for a plabic fence \(\mathbb{G}\) are indeed non-degenerate.

**Proposition 3.12**.: _Let \(\mathbb{G}\) be a plabic fence and \(\mathcal{C}=\mathcal{C}(\mathbb{G})\) its associated curve configuration. Then the QP \((Q(\mathcal{C}),W(\mathcal{C}))\) is rigid, and thus non-degenerate._

Proof.: By Proposition 3.8, it suffices to argue that \((Q(\mathbb{G}),W(\mathbb{G}))\) as in Definition 3.5 is rigid. As usual, we prove this by scanning the plabic fence \(\mathbb{G}\) left to right. First, the empty QP \((Q(\mathbb{G}),W(\mathbb{G}))\) is rigid and non-degenerate. Second, we assume that we have scanned until the vertical edge \(e\) and we have rigidity for \(W(\mathbb{G}_{<e})\). Let us consider two cases:

* If \([de]\) does not exist, then \(F_{e}\) is a source vertex and \(Q(\mathbb{G}_{\leq e})\) is obtained from \(Q(\mathbb{G}_{<e})\) by adding a source; thus no new cycles appear and the potential is still rigid by the second property above. Similarly, if neither \([ed^{\uparrow}]\) nor \([ed^{\downarrow}]\) exists, then \(F_{e}\) is a sink vertex and the same argument applies.
* If either \([ed^{\uparrow}]\) or \([ed^{\downarrow}]\) exists and so does \([de]\), then \(F_{e}\) is neither a sink nor a source. Nevertheless, we can apply Lemma 3.9 to turn \(F_{e}\) into a source by mutating at vertices in \(Q(\mathbb{G}_{<e})\). Indeed, the sequence of mutations \(\mu_{v}\) from Lemma 3.9 is at an ordered collection of vertices \(v:=(v_{1},\ldots,v_{k})\) which are already vertices in \(Q(\mathbb{G}_{<e})\), i.e. it does not mutate at \(F_{e}\). By hypothesis, \(W(\mathbb{G}_{<e})\) is rigid and, as stated above, rigidity is preserved by QP mutation. Therefore \(\mu_{v}(W(\mathbb{G}_{<e}))\) is also rigid. Since restriction commutes with mutation by [12, Lemma 2.5] and the mutations occur at vertices of \(Q(\mathbb{G}_{<e})\), the mutated potential \(\mu_{v}(W(\mathbb{G}_{\leq e}))\) restricted to the subquiver \(Q(\mathbb{G}_{<e})\) is also rigid, as it equals \(\mu_{v}(W(\mathbb{G}_{<e}))\). Since the quiver \(\mu_{v}(Q(\mathbb{G}_{\leq e}))\) is given by adding a source to \(\mu_{v}(Q(\mathbb{G}_{<e}))\), the resulting potential \(\mu_{v}(W(\mathbb{G}_{\leq e}))\), now without restricting, is still rigid. The original potential \(W(\mathbb{G}_{\leq e})\) is QP mutation-equivalent to this resulting potential, and thus also rigid.

In summary, in either case, the QP \((Q(\mathbb{G}),W(\mathbb{G}))\) stays rigid as we scan \(\mathbb{G}\) left to right. Therefore \((Q(\mathcal{C}(\mathbb{G})),W(\mathcal{C}(\mathbb{G})))\) is non-degenerate.

Note that there are many QPs \((Q,W)\) that might not be rigid (or non-degenerate).
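As a computational aside, the inductive arrow rules (a)–(c) of Definition 3.5 are straightforward to implement. The following is a minimal Python sketch computing the arrow set of \(Q(\mathbb{G})\); the fence encoding (the left-to-right list of levels of its vertical edges) and all helper names are our own, chosen purely for illustration:

```python
def quiver_arrows(fence):
    """Arrow set of Q(G) per Definition 3.5. Faces are labeled by the index
    of their right vertical edge; left[e] is the left vertical edge of F_e."""
    n, last, left = len(fence), {}, [None] * len(fence)
    for e, k in enumerate(fence):
        left[e] = last.get(k)      # previous vertical edge at level k, if any
        last[k] = e

    def first_right_of(d, level):
        # first vertical edge at the given level strictly to the right of d
        return next((j for j in range(d + 1, n) if fence[j] == level), None)

    arrows = []
    for e, k in enumerate(fence):
        d = left[e]
        if d is None:                    # F_e does not exist: no new arrows
            continue
        if left[d] is not None:          # case (a): horizontal arrow F_d -> F_e
            arrows.append((d, e))
        for lvl in (k + 1, k - 1):       # cases (b), (c): arrows out of F_e
            dd = first_right_of(d, lvl)
            if dd is not None and left[dd] is not None:
                arrows.append((e, dd))
    return arrows

# For beta = (s1 s2)^4 this returns the 9 arrows of a 6-vertex quiver
# with no 2-cycles.
print(quiver_arrows([1, 2, 1, 2, 1, 2, 1, 2]))
```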
The class of quivers with potentials \((Q(\mathbb{G}),W(\mathbb{G}))\) that we introduced and analyzed above is particular enough that rigidity can be argued directly, as in Proposition 3.12. This is similar to the fact that the quivers in the Kontsevich-Soibelman class \(\mathcal{P}\)19 are rigid and admit a unique non-degenerate potential up to right-equivalence. In fact, though it will not be needed for our application, it can also be proven that \(W(\mathbb{G})\) is the unique non-degenerate potential for \(Q(\mathbb{G})\) up to right-equivalence.

Footnote 19: This is the class generated by the one vertex quiver by triangular extensions and mutations.

**Remark 3.13**.: It is likely that the more general class of curve configurations \(\mathcal{C}(\mathfrak{w})\) that we associated to the weaves \(\mathfrak{w}\) in [5] also has non-degenerate potentials \(W(\mathcal{C}(\mathfrak{w}))\); see [5, Section 7.3]. In that case, the arguments in Section 4 would also prove Theorem 1.1 for \((-1)\)-closures.

This concludes our construction and study of the QPs \((Q(\mathcal{C}),W(\mathcal{C}))\). We have established in Section 2 the invariance of their right-equivalence classes under triple point moves and local bigon moves, and proven that they undergo a QP-mutation when a \(\gamma\)-exchange is applied to \(\mathcal{C}\); we have now shown rigidity for those configurations associated to plabic fences. There are two technical pieces that still need justification: arguing that Assumption 1 holds and proving Lemma 2.21. We conclude this section by presenting such proofs.

### The assumption on bigons is satisfied

By definition, a cycle \(v_{1}\ldots v_{n}\) in a QP \((Q,W)\) is said to be empty if \(v_{1}\ldots v_{n}\) does not appear as a monomial in \(W\). The following lemma shows that we have no empty cycles for QPs associated to curve configurations:

**Lemma 3.14**.: _Let \((Q,W)\) be a QP and assume that it contains an empty cycle, i.e. a cycle of arrows in \(Q\) whose corresponding monomial does not appear in the potential \(W\). Then \((Q,W)\) is degenerate._

Proof.: Let us show this by induction on \(n\in\mathbb{N}\), where \(n\) is the number of arrows of an empty \(n\)-cycle \(v_{1}\ldots v_{n}\). The base case is \(n=2\): suppose there is a \(2\)-cycle \(v_{1}v_{2}\) in \(Q\) such that \(v_{1}v_{2}\) is not a monomial in \(W\). By Definition 3.11, without mutating at all, the quiver in the reduced part of \((Q,W)\) still has the \(2\)-cycle \(v_{1}v_{2}\) and thus \((Q,W)\) is degenerate.

The induction step is as follows. Consider an empty \(n\)-cycle \(v_{1}\ldots v_{n}\) for \((Q,W)\) and assume that the existence of an empty \((n-1)\)-cycle in a QP implies its degeneracy. Mutation of \((Q,W)\) at the vertex \(v=h(v_{1})=t(v_{2})\) creates the \((n-1)\)-cycle \([v_{1}v_{2}]v_{3}\ldots v_{n}\). By Definition 2.19, the potential \(\mu_{v}(W)\) for the mutated QP \(\mu_{v}(Q,W)\) is obtained by substituting \(v_{1}v_{2}\) by \([v_{1}v_{2}]\) in \(W\) and adding the cubic term \([v_{1}v_{2}]v_{1}^{*}v_{2}^{*}\). Therefore, \(\mu_{v}(W)\) contains the monomial \([v_{1}v_{2}]v_{3}\ldots v_{n}\) if and only if \(W\) contains the monomial \(v_{1}\ldots v_{n}\). By assumption, \(W\) does not contain \(v_{1}\ldots v_{n}\) and thus \([v_{1}v_{2}]v_{3}\ldots v_{n}\) is an empty \((n-1)\)-cycle in \(\mu_{v}(Q,W)\). By induction, \(\mu_{v}(Q,W)\) is degenerate and thus so is \((Q,W)\).

Let us now use Lemma 3.14 to prove that Assumption 1 is satisfied for our non-degenerate configurations.
The precise statement reads as follows:

**Proposition 3.15**.: _Let \(\mathcal{C}\) be a curve configuration and \(B\subset\Sigma\) a local bigon bounded by \(\mathcal{C}\). Suppose that \((Q(\mathcal{C}),W(\mathcal{C}))\) is a non-degenerate QP. Then \(\rho(B)\) and \(\lambda(B)\) are different regions._

Proof.: By contradiction, suppose that there exists a local bigon \(B\) such that \(\rho(B)=\lambda(B)\). We will now argue that either:

1. The QP \((Q,W)\) is degenerate, or
2. The homology classes in \(H_{1}(\Sigma;\mathbb{Z})\) of the curves \(\gamma_{1},\ldots,\gamma_{b}\in\mathcal{C}\) do not span, because there is a sub-collection of them that gives linearly dependent homology classes.

Let us consider a neighborhood of such a local bigon \(B\) with \(\rho(B)=\lambda(B)\). It is as depicted in Figure 27. Namely, the region \(\rho(B)=\lambda(B)\) is bounded by:

1. A collection of curves \(\gamma_{2},\ldots,\gamma_{s}\in\mathcal{C}\), \(s\in\mathbb{N}\), with intersection pattern exactly given by: \(\gamma_{i}\) intersects \(\gamma_{i+1}\) negatively at a point \(g_{i+1,i}\), and \(\gamma_{s}\) intersects \(\gamma_{1}\) negatively at \(g_{1,s}\).
2. A collection of curves \(\tau_{2},\ldots,\tau_{l}\in\mathcal{C}\), \(l\in\mathbb{N}\), with intersection pattern exactly given by: \(\tau_{i}\) intersects \(\tau_{i+1}\) positively at a point \(t_{i,i+1}\), and \(\tau_{l}\) intersects \(\tau_{1}\) negatively at \(t_{l,1}\).

Figure 27 depicts a case with \(s=4\) and \(l=3\). Note that some curve \(\gamma_{i}\), resp. some curve \(\tau_{i}\), might equal some of the other curves \(\gamma_{j}\), resp. the other curves \(\tau_{j}\). The argument that now follows works in that case as well.

Now consider the resolution of the bigon \(B\) with a local bigon move. This yields the local curve configuration as in Figure 28. The arrows \(g_{2,1}\ldots g_{i+1,i}\ldots g_{1,s}\) in the quiver form a cycle \(G\). Similarly, the arrows \(t_{1,2}\ldots t_{i,i+1}\ldots t_{l,1}\) in the quiver form a cycle \(T\). Consider the cycles \(G\) and \(T\) in the quiver. If either \(G\) or \(T\) is empty, then Lemma 3.14 implies that \((Q(\mathcal{C}),W(\mathcal{C}))\) is degenerate. This contradicts the assumption. Therefore, both \(G\) and \(T\) must bound a polygon. From the specific configuration we are studying, as in Figure 28, the polygon for \(G\) must be above \(G\) and the polygon for \(T\) must be below \(T\). Pieces of such a polygon bounded by \(G\), above \(G\), are depicted in Figure 28 in green, and pieces of such a polygon bounded by \(T\), below \(T\), are depicted in Figure 28 in blue.

Consider now the smooth oriented representative \(\overline{\gamma}\) of \([\gamma_{1}+\gamma_{2}+\ldots+\gamma_{s}]\) in \(H_{1}(\Sigma;\mathbb{Z})\) given by the oriented \(\infty\)-resolutions of the crossings \(g_{2,1},g_{3,2},\ldots,g_{s,s-1}\). Here the \(\infty\)-resolution is such that the strand coming from the north-west continues to the north-east strand.20 Similarly, let \(\overline{\tau}\) be the smooth oriented representative of \([\tau_{1}+\tau_{2}+\ldots+\tau_{l}]\) in \(H_{1}(\Sigma;\mathbb{Z})\) given by the oriented \(\infty\)-resolutions of the crossings \(t_{1,2},t_{2,3},\ldots,t_{l-1,l}\).

Figure 28. A curve configuration \(\mathcal{C}\) after performing a local bigon move to Figure 27. The resulting region \(\lambda(B)=\rho(B)\) is not simply-connected.

Figure 27. A configuration \(\mathcal{C}\) near the bigon \(B\), bounded by \(\gamma_{1},\tau_{1}\), with \(\lambda(B)=\rho(B)\).
Let \(c^{+}\subset\Sigma\), resp. \(c^{-}\subset\Sigma\), be a smooth embedded core curve horizontally traversing the yellow region in Figure 28 from left to right, resp. from right to left. Now, the curve \(\overline{\gamma}\) is homologous to \(c^{+}\). Indeed, the existence of a polygon above \(G\) shows that the connected components of \(\overline{\gamma}\) not containing \(\gamma_{1}\) are null-homologous. The component containing \(\gamma_{1}\) has an immersed point at \(g_{1,s}\). By construction, this immersed point bounds a region above it, and thus the component is homologous to \(c^{+}\). Similarly, the curve \(\overline{\tau}\) is homologous to \(c^{-}\). This shows that \([\overline{\gamma}]=[c^{+}]=-[c^{-}]=-[\overline{\tau}]\). Therefore \([\overline{\gamma}]+[\overline{\tau}]=0\) and thus the set of curves in \(\mathcal{C}\) does not span \(H_{1}(\Sigma;\mathbb{Z})\): there are exactly \(b_{1}(\Sigma)\) of them and there is a non-empty subset of linearly dependent classes.

Proposition 3.12 and Proposition 3.15 show that Assumption 1 is always satisfied for the curve configurations \(\mathcal{C}(\mathbb{G})\) associated to plabic fences \(\mathbb{G}\).

### Proof of Lemma 2.21

For Part (i), if \(\mathcal{C}\) bounds bigons, we obtain \(\mathcal{C}_{red}\) by iteratively applying Theorem 2.9 to remove them. Lemmas 2.5 and 2.7 imply that \((Q(\mathcal{C}),W(\mathcal{C}))\) undergoes a sequence of right-equivalences and local reductions. Note that Lemma 2.7 implies that these reductions exist. By Definition 2.6, each local reduction can be understood as splitting off a trivial direct summand from \((Q(\mathcal{C}),W(\mathcal{C}))\). Indeed, an \(ab\)-reduction of \((Q,W)\) yields a decomposition \((Q_{ab},W_{ab})\oplus(Q^{\prime},W^{\prime})\), where \(Q_{ab}\) has just two arrows \(a,b\) and \(W_{ab}=ab\), and \((Q^{\prime},W^{\prime})\) is the \(ab\)-reduction of \((Q,W)\). Thus \((Q_{ab},W_{ab})\) is trivial. Therefore, after Theorem 2.9 is iteratively applied until we obtain \(\mathcal{C}_{red}\), \((Q,W)\) undergoes a sequence of local reductions until it becomes right-equivalent to \((Q(\mathcal{C}_{red}),W(\mathcal{C}_{red}))\): in each such local reduction a trivial summand splits off and we are left with \((Q(\mathcal{C}_{red}),W(\mathcal{C}_{red}))\). Since \(\mathcal{C}_{red}\) has no bigons, the quadratic part of \(W(\mathcal{C}_{red})\) vanishes and thus \((Q(\mathcal{C}_{red}),W(\mathcal{C}_{red}))\) is reduced. Therefore, it is the reduced part of \((Q(\mathcal{C}),W(\mathcal{C}))\) up to right-equivalence, and Lemma 2.21.(i) follows.

For Part (ii), Lemma 3.14, or directly Definition 3.11, implies that \(Q(\mathcal{C})\) has no empty 2-cycles, since \((Q(\mathcal{C}),W(\mathcal{C}))\) is non-degenerate. Therefore, if \(\mathcal{C}\) bounds no bigons, then \(Q(\mathcal{C})\) has no 2-cycles and thus neither does its reduced part.

## 4. A Lagrangian filling for every cluster seed

The goal of this section is to prove Theorem 1.1. We use the results developed in Sections 2 and 3.

### Preliminaries

Let us consider a positive braid word \(\beta\) on \(n\)-strands and its associated Legendrian link \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\). By definition, \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\) is the Legendrian link given by the front rainbow closure of \(\beta\). Figure 29 depicts such a rainbow closure, i.e. a front for \(\Lambda_{\beta}\) in the \((x,z)\)-plane \(\mathbb{R}^{2}\).
The box with the label \(\beta\) contains exactly the crossings of \(\beta\). The ideal contact boundary \((T^{\infty}\mathbb{R}^{2},\lambda_{st})\) of the cotangent bundle \((T^{*}\mathbb{R}^{2},\lambda_{st})\) is contactomorphic to the 1-jet space \((J^{1}S^{1},\xi_{st})\), where the Legendrian zero section \(S^{1}\subset J^{1}S^{1}\) is the fiber of the projection \(T^{\infty}\mathbb{R}^{2}\longrightarrow\mathbb{R}^{2}\) onto the base \(\mathbb{R}^{2}\). Let us consider a Legendrian embedding \(\iota_{0}:S^{1}\longrightarrow(\mathbb{R}^{3},\xi_{st})\) of the (unique) max-tb Legendrian unknot \(\Lambda_{0}\subset(\mathbb{R}^{3},\xi_{st})\). By the Weinstein neighborhood theorem [10, Section 2.5], any Legendrian link \(\Lambda\subset(J^{1}S^{1},\xi_{st})\) can be satellited along the Legendrian embedding \(\iota_{0}\), as there is a neighborhood of the max-tb Legendrian unknot \(\Lambda_{0}\subset(\mathbb{R}^{3},\xi_{st})\) contactomorphic to \((J^{1}S^{1},\xi_{st})\), where the contactomorphism extends \(\iota_{0}\). We denote the resulting Legendrian link in \((\mathbb{R}^{3},\xi_{st})\) by \(\iota_{0}(\Lambda)\). If we consider the Legendrian \(\Lambda(\alpha)\subset(J^{1}S^{1},\xi_{st})\) given by the braid diagram of a positive braid word \(\alpha\) in \(n\)-strands, then \(\iota_{0}(\Lambda(\beta\Delta^{2}))\) and \(\Lambda_{\beta}\) are Legendrian isotopic in \((\mathbb{R}^{3},\xi_{st})\), where \(\Delta\) is the half-twist on \(n\) strands. See [13, Section 2.2] for further details.

Figure 29. The rainbow closure front projection of the positive braid word \(\beta\).

Note that the class of Legendrian links \(\Lambda_{\beta}\) is comprehensive, as it includes the max-tb representatives of all algebraic links and also Legendrian representatives of infinitely many satellite and hyperbolic links, cf. [14, Section 6].

Let \(L\subset(\mathbb{D}^{4},\lambda_{st})\) be an exact oriented Lagrangian filling.21 Consider an arbitrary but fixed convex neighborhood \(\mathcal{O}p(L)\) which is symplectomorphic to a convex neighborhood of the zero section in \((T^{*}L,\lambda_{st})\); the hypersurface \(\partial\mathcal{O}p(L)\subset(\mathbb{D}^{4},\lambda_{st})\) is then a contact hypersurface contactomorphic to the ideal contact boundary \((T^{\infty}L,\lambda_{st})\) of \((T^{*}L,\lambda_{st})\). An embedded (co)oriented connected curve \(\gamma\subset L\) lifts to a Legendrian knot \(\Lambda_{\gamma}\subset\partial\mathcal{O}p(L)\), as it defines a front under the Legendrian projection \((T^{\infty}L,\lambda_{st})\longrightarrow L\). Since there is a canonical correspondence between oriented and co-oriented curves in an oriented surface, we always discuss oriented curves rather than co-oriented curves. Following [13], we introduce the following:

Footnote 21: Of some Legendrian link \(\partial L\) in the contact boundary of \((\mathbb{D}^{4},\lambda_{st})\).

**Definition 4.1** (\(\mathbb{L}\)-compressing systems).: Let \(L\subset(\mathbb{D}^{4},\lambda_{st})\) be an exact oriented Lagrangian filling and \(\gamma\subset L\) an embedded oriented curve. By definition, \(\gamma\) is said to be \(\mathbb{L}\)-compressible if there exists a properly embedded Lagrangian \(2\)-disk \(D\subset(T^{*}\mathbb{R}^{2}\setminus\mathcal{O}p(L))\) such that \(\partial\overline{D}\cap\partial\mathcal{O}p(L)=\Lambda_{\gamma}\subset\mathbb{R}^{4}\) and the union \(\overline{D}\cup\nu_{\gamma}\) is a smooth Lagrangian disk, where \(\nu_{\gamma}\subset\mathcal{O}p(L)\) is the Lagrangian conormal cone of \(\gamma\).
This Lagrangian disk is said to be an \(\mathbb{L}\)-compressing disk for \(\gamma\). A collection \(\Gamma=\{\gamma_{1},\dots,\gamma_{b}\}\) of such curves in \(L\), with a choice of \(\mathbb{L}\)-compressing disks \(\mathscr{D}=\{D_{1},\dots,D_{b}\}\) for each curve, is said to be an \(\mathbb{L}\)-compressing system for \(L\) if \(D_{i}\cap D_{j}=\emptyset\) for all distinct \(i,j\in[b]\) and the (homology classes of the) curves in \(\Gamma\) form a basis of \(H_{1}(L;\mathbb{Z})\). We often abuse notation and refer to the collection \(\mathscr{D}\) as the \(\mathbb{L}\)-compressing system for \(L\).

### Curve configurations and \(\mathbb{L}\)-compressing systems

Let \(L\subset(\mathbb{D}^{4},\lambda_{st})\) be a Lagrangian filling and \(\Gamma\) an \(\mathbb{L}\)-compressing system for \(L\). By definition, the curve configuration \(\mathcal{C}(\Gamma)\) associated to \(\Gamma\) is the configuration of oriented closed embedded curves \(\Gamma\) in \(L\). If \(\mathscr{D}\) is the collection of \(\mathbb{L}\)-compressing disks associated to \(\Gamma\), we also write \(\mathcal{C}(\mathscr{D})\) for \(\mathcal{C}(\Gamma)\). Note that \(\mathcal{C}(\Gamma)\) is considered as a collection of smooth oriented curves in a smooth surface, with no need to record the Lagrangian condition on \(L\) and the disks in \(\mathscr{D}\). The notation \(\mathcal{C}(\Gamma)\), instead of just \(\Gamma\), is in order to emphasize the smooth embedded curves, rather than the symplectic topological aspects of \(\Gamma\).

**Remark 4.2**.: Suppose that a sequence of triple point moves and local bigon moves is applied to an \(\mathbb{L}\)-compressing system \(\mathcal{C}(\Gamma)\). This yields a configuration \(\mathcal{C}^{\prime}\) in \(L\). Front homotopies, which include triple point moves and local bigon moves, lift to Legendrian isotopies in the ideal contact boundary. The trace of a Legendrian isotopy yields an invertible Lagrangian concordance in the symplectization. By concatenating the disks associated to \(\Gamma\) with this Lagrangian concordance, we obtain an \(\mathbb{L}\)-compressing system \(\Gamma^{\prime}\) for \(L\) such that \(\mathcal{C}^{\prime}=\mathcal{C}(\Gamma^{\prime})\). We consider two such configurations \(\mathcal{C},\mathcal{C}^{\prime}\) equivalent and two such \(\mathbb{L}\)-compressing systems \(\Gamma,\Gamma^{\prime}\) equivalent.

### \(\mathbb{L}\)-compressing systems for \(\Lambda_{\beta}\)

Consider the plabic fence \(\mathbb{G}(\beta)\) associated to \(\beta\), as introduced in Section 3. Then we have the following facts:

1. The alternating strand diagram of \(\mathbb{G}(\beta)\) is a front for \(\Lambda(\beta\Delta^{2})\subset(J^{1}S^{1},\xi_{st})\). This is proven in [13, Section 2]; see also [15, Section 5.1]. Thus, after including \((J^{1}S^{1},\xi_{st})\) into \((\mathbb{R}^{3},\xi_{st})\) as a neighborhood of the max-tb Legendrian unknot, it is Legendrian isotopic to the Legendrian link \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\).
2. The conjugate surface \(\Sigma(\mathbb{G}(\beta))\) gives rise to a unique22 embedded exact Lagrangian filling \(L_{\beta}\) of \(\Lambda_{\beta}\). By construction, \(L_{\beta}\) is smoothly isotopic to the (smooth) surface \(\Sigma(\mathbb{G}(\beta))\). Uniqueness is proven in [12, Prop. 2.4], existence in [15, Section 4.2].
In particular, it gives an oriented embedded exact Lagrangian filling \(L_{\beta}\subset(\mathbb{R}^{4},\lambda_{st})\) in the symplectization of \((\mathbb{R}^{3},\xi_{st})\), after we have identified the standard cotangent bundle \((T^{*}\mathbb{R}^{2},\omega_{st})\) with the symplectic Darboux \((\mathbb{R}^{4},\omega_{st})\).

3. The conjugate surface \(\Sigma(\mathbb{G}(\beta))\) also gives an \(\mathbb{L}\)-compressing system \(\Gamma(\beta)\) for \(L_{\beta}\). The existence of such an \(\mathbb{L}\)-compressing system is proven in [CW, Section 3]; see also [Cas22, Section 2] and [CL22, Section 4.2]. By construction, the configuration of curves in \(\mathcal{C}(\Gamma(\beta))\) coincides with the configuration of curves in \(\mathcal{C}(\mathbb{G}(\beta))\). The Lagrangian \(\mathbb{L}\)-compressing disks, showing that this is indeed an \(\mathbb{L}\)-compressing system, are constructed in [CW, Section 3].23 Let \(\mathscr{D}_{\beta}\) denote its associated collection of \(\mathbb{L}\)-compressing Lagrangian disks.

Footnote 23: Intuitively, the corresponding \(\mathbb{L}\)-compressing Lagrangian disks are the faces of \(\mathbb{G}\), which are disjoint Lagrangian pieces (disks) of the Lagrangian zero section \(\mathbb{R}^{2}\) in \(T^{*}\mathbb{R}^{2}\). See Figure 24 (left).

The relevant fact about the \(\mathbb{L}\)-compressing system \(\Gamma(\beta)\) is that it can be used to produce new Lagrangian fillings from \(L_{\beta}\).

**Remark 4.3**.: For context, let \(\beta\) be a positive braid word and \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\) its associated Legendrian link. The union \(\mathbb{L}_{\beta}\subset(\mathbb{R}^{4},\lambda_{st})\) of the embedded exact Lagrangian filling \(L_{\beta}\) and all the closures of the Lagrangian disks of the collection \(\mathscr{D}_{\beta}\) (extended by the Lagrangian conormal cones of curves in \(\Gamma(\beta)\)) is an arboreal Lagrangian skeleton for the Weinstein pair given by \((\mathbb{D}^{4},\lambda_{st})\) and a Weinstein ribbon of \(\Lambda_{\beta}\), see [Eli18, Section 2].

### Lagrangian disk surgery

Lagrangian disk surgery was introduced in [Yau17], following closely the Lagrange surgery defined in [Pol91]. In our context, it is used as follows. Consider a Legendrian link \(\Lambda\subset(\mathbb{R}^{3},\xi_{st})\), seen as \((S^{3},\xi_{st})\) minus a point, and an embedded exact Lagrangian filling \(L\subset(\mathbb{D}^{4},\lambda_{st})\) in the standard Darboux 4-ball symplectic filling of \((S^{3},\xi_{st})\). Suppose that there exists a properly embedded Lagrangian disk \(D\subset\mathbb{D}^{4}\setminus L\) such that \(\partial\overline{D}\subset\operatorname{int}(L)\) is a smooth embedded connected curve, where \(\operatorname{int}(L)\) is the interior of \(L\).24 Then Lagrangian disk surgery is an operation that inputs the pair \((L,D)\) and outputs another pair \((L^{\prime},D^{\prime})\) with the same properties: \(L^{\prime}\) is an embedded exact Lagrangian filling of \(\Lambda\) and \(D^{\prime}\) is an embedded Lagrangian disk in the complement of \(L^{\prime}\) with embedded boundary on \(L^{\prime}\). In this process, it is crucial that \(D\) is an embedded Lagrangian disk, and not just immersed.

Footnote 24: In the discussion of Subsection 4.3, these Lagrangian disks are obtained by considering the union of the disks \(D_{\gamma}\in\mathscr{D}\) in an \(\mathbb{L}\)-compressing system \(\mathscr{D}\) and concatenating them with (a piece of) the Lagrangian conormal cone in \(T^{*}L\) of the corresponding (co)oriented curve \(\gamma\).
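At the combinatorial level, anticipating Lemma 4.5 below, a Lagrangian disk surgery acts on the reduced intersection quiver of an \(\mathbb{L}\)-compressing system by the standard Fomin-Zelevinsky mutation rule (cf. Proposition 2.22). The following is a minimal Python sketch of that rule on a skew-symmetric exchange matrix; note that this matrix encoding silently cancels 2-cycles, which is precisely the bigon information the curve potential \(W(\mathcal{C})\) is designed to retain:

```python
def mutate(B, k):
    """Fomin-Zelevinsky mutation of a skew-symmetric exchange matrix B at
    vertex k, where B[i][j] is the signed count of arrows from i to j."""
    n = range(len(B))
    return [[-B[i][j] if k in (i, j)
             else B[i][j] + (abs(B[i][k]) * B[k][j] + B[i][k] * abs(B[k][j])) // 2
             for j in n] for i in n]

# Example: an A2 quiver 0 -> 1; mutating at 0 reverses the arrow,
# and mutating twice at the same vertex is the identity.
B = [[0, 1], [-1, 0]]
assert mutate(B, 0) == [[0, -1], [1, 0]]
assert mutate(mutate(B, 0), 0) == B
```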
Two facts about Lagrangian disk surgery are:

* The Lagrangians \(L\) and \(L^{\prime}\) are smoothly isotopic, relative to their boundaries.
* The Lagrangian disk surgery of \(L^{\prime}\) along \(D^{\prime}\) yields \((L,D)\) back, up to compactly supported Hamiltonian isotopy.

The first item above precisely indicates that we can potentially produce a new Lagrangian filling by using a given Lagrangian filling and a Lagrangian disk as above. See [CW, Pol91, Yau17] and references therein for these facts and more details.

**Remark 4.4**.: Lagrangian disk surgery is _not_ known to exist if the boundary \(\partial\overline{D}\subset L\) is an immersed curve, rather than embedded. Similarly, the disk \(D\) must be embedded.25

Footnote 25: It is not just a lack of available constructions: [CW, Section 4.10] presents examples of immersed disks to which one cannot perform Lagrangian disk surgery, due to the existence of frozen vertices coming from absolute 1-cycles in \(L\).

### Effect of Lagrangian surgery on curve configurations

Let \(L\subset(\mathbb{D}^{4},\lambda_{st})\) be an exact Lagrangian filling and \(\Gamma\) an \(\mathbb{L}\)-compressing system for \(L\). Consider a disk \(D\in\mathscr{D}\). Lagrangian disk surgery on \(D\) leads to another exact Lagrangian filling \(\mu_{D}(L)\subset(\mathbb{D}^{4},\lambda_{st})\) endowed with a curve configuration \(\mu_{D}(\Gamma)\) and a collection of Lagrangian disks \(\mu_{D}(\mathscr{D})\) bounding the curves in \(\mu_{D}(\Gamma)\). There is a natural bijection between the disks in \(\mathscr{D}\) and those in \(\mu_{D}(\mathscr{D})\), and a diffeomorphism between \(L\) and \(\mu_{D}(L)\), as stated above. Now, the configuration \(\mu_{D}(\Gamma)\) might not be an \(\mathbb{L}\)-compressing system because it might contain immersed curves. In general, these new curve configurations \(\mu_{D}(\Gamma)\), obtained by Lagrangian disk surgery on a disk associated to \(\Gamma\), can be understood via the following:

**Lemma 4.5**.: _Let \(L\subset(\mathbb{D}^{4},\lambda_{st})\) be an exact Lagrangian filling, \(\Gamma\) an \(\mathbb{L}\)-compressing system for \(L\) with \(\mathbb{L}\)-compressing disks \(\mathscr{D}\), and \(\mathcal{C}(\Gamma)\) its associated configuration in \(L\). Consider a disk \(D\in\mathscr{D}\) with boundary the lift of \(\gamma\in\mathcal{C}(\Gamma)\). Then there is a natural identification between the configuration \(\mu_{D}(\Gamma)\) and the \(\gamma\)-exchange of \(\Gamma\)._

Proof.: Lagrangian disk surgery occurs in a neighborhood of \(D\) in \((\mathbb{D}^{4},\lambda_{st})\). It is shown in [14, Section 4.8] that it can be locally modeled by the weave mutation in Figure 30. For this proof, we assume familiarity with [14, Section 2] or [15, Section 3]. It suffices to understand how Lagrangian disk surgery along the disk \(D\) bounding \(\gamma\) affects the boundary of the other disks in \(\mathscr{D}\). There are two cases: positive and negative intersections, represented by the segments \(\tau_{+}\) and \(\tau_{-}\) in Figure 11 (left).
In the weave diagram, the segments \(\tau_{+}\) and \(\tau_{-}\) in Figure 11 (left) can be represented by the weave lines \(\tau_{+}\) and \(\tau_{-}\) in the second row of Figure 30.26 In this situation, before the mutation, \(\tau_{+}\cap\gamma=+1\), \(\tau_{-}\cap\gamma=-1\) and \(\tau_{+}\cap\tau_{-}=0\). Note that smooth curves are encoded by their relative homology classes; in the context of curves in surfaces, homology classes are entirely determined by their intersections, and thus we record those. The 2-weave on the left of Figure 30 (upper left) represents a Lagrangian cylinder, and the left parts of the other three panels indicate how to draw \(\tau_{\pm}\) and \(\gamma\) in that cylinder. It suffices to understand how \(\tau_{+},\tau_{-}\) and \(\gamma\) in the weave change under weave mutation. This is drawn in the right parts of the upper right and the second row of Figure 30. The resulting curves \(\mu_{\gamma}(\tau_{+})\) and \(\mu_{\gamma}(\tau_{-})\) are shown, also in yellow and green respectively; the curve \(\mu_{\gamma}(\gamma)\) is drawn in pink. By using the intersection numbers on weaves, cf. [14, Section 2] or [13, Section 4.4], we obtain that \(\mu_{\gamma}(\tau_{+})\cap\mu_{\gamma}(\gamma)=-1\), \(\mu_{\gamma}(\gamma)\cap\mu_{\gamma}(\tau_{-})=-1\) and \(\mu_{\gamma}(\tau_{+})\cap\mu_{\gamma}(\tau_{-})=1\). Therefore the curves change exactly according to a \(\gamma\)-exchange.

Footnote 26: The segment \(\tau_{-}\) remains green, while \(\tau_{+}\) is now drawn in yellow because the weave is typically drawn in blue.

**Remark 4.6**.: Lemma 4.5 could be proven using other models, such as plabic fences and Lagrangian conjugate surfaces, cf. [14, Section 3] or [16, Section 5.2]. We also refer the reader to [16, Section 2] for an explanation using a conical model and the discussion in [22, Section 4].

Lemma 4.5 clarifies the combinatorics of \(\Gamma\) that might lead to immersed curves in \(\mu_{D}(\Gamma)\). Namely, the existence of a 2-cycle in \(Q(\Gamma)\) is equivalent to the fact that \(\mu_{D}(\Gamma)\), obtained by a \(\gamma\)-exchange, has (at least) one immersed curve. Indeed, Proposition 2.22 implies that a \(\gamma\)-exchange leads to a quiver mutation, thus \(\mu_{\gamma}Q(\Gamma)=Q(\mu_{D_{\gamma}}(\Gamma))\), and \(\mu_{D}(\Gamma)\) has an immersed curve if and only if \(Q(\mu_{D}(\Gamma))\) contains a loop.27

Footnote 27: Technically, here \(\mu_{v}Q\) refers to mutation of a quiver with 2-cycles. This is defined exactly as mutation of quivers, in the sense that the two arrows \(a,b\in Q_{1}\), \(h(a)=t(b)=v\), in a 2-cycle lead to the composed arrow \([ab]\in(\mu_{v}Q)_{1}\), which is a loop.

### Iteration of Lagrangian disk surgeries and QP non-degeneracy

Let \(L\subset(\mathbb{D}^{4},\lambda_{st})\) be an embedded exact Lagrangian filling and \(\Gamma\) an \(\mathbb{L}\)-compressing system for \(L\). Suppose that the configuration \(\mathcal{C}(\Gamma)\) is reduced and \((Q(\mathcal{C}(\Gamma)),W(\mathcal{C}(\Gamma)))\) is non-degenerate. In particular, there are no 2-cycles in \(Q(\mathcal{C}(\Gamma))\).

Figure 30. (Upper left) Lagrangian disk surgery in the 2-weave model along a short I-cycle \(\gamma\). (Upper right) The short I-cycle \(\gamma\) depicted in pink, drawn before and after \(\mu_{\gamma}\). (Lower left) A weave line in yellow smoothly representing a relative homology class \(\tau_{+}\) with unique geometric intersection \(+1\) with \(\gamma\), drawn before and after \(\mu_{\gamma}\).
(Lower right) A weave line in green smoothly representing a relative homology class \(\tau_{-}\) with a unique geometric intersection \(-1\) with \(\gamma\), also drawn before and after \(\mu_{\gamma}\).

Consider an \(\mathbb{L}\)-compressing disk \(D\in\mathscr{D}(\Gamma)\) with boundary \(\gamma\in\mathcal{C}(\Gamma)\) and perform Lagrangian disk surgery on \((L,D)\). This produces a new Lagrangian filling \((L^{\prime},D^{\prime})\) with an \(\mathbb{L}\)-compressing disk \(D^{\prime}\). The \(\mathbb{L}\)-compressing system \(\Gamma\) for \(L\) yields an \(\mathbb{L}\)-compressing system \(\Gamma^{\prime}\) for \(L^{\prime}\), whose configuration of curves \(\mathcal{C}(\Gamma^{\prime})\) is as described by Lemma 4.5. We wish to be able to iterate this procedure arbitrarily: given any curve \(\gamma_{1}\in\mathcal{C}(\Gamma^{\prime})\) with associated \(\mathbb{L}\)-compressing disk \(D_{1}\), we want to be able to perform Lagrangian disk surgery on \(L^{\prime}\) along \(D_{1}\). The two problems at this stage, which are well-known, are:

(i) It might be that \(\mathcal{C}(\Gamma^{\prime})\) does not bound any bigons but there are curves \(\gamma_{i},\gamma_{j}\in\mathcal{C}(\Gamma^{\prime})\) with two points of geometric intersection, one positive and one negative, and these two intersection points do not bound a bigon.

(ii) It might be that \(\mathcal{C}(\Gamma^{\prime})\) is a non-reduced configuration, bounding bigons. That is, there are curves \(\gamma_{i},\gamma_{j}\in\mathcal{C}(\Gamma^{\prime})\) with two points of geometric intersection, one positive and one negative, and these two intersection points bound a bigon.

In either of these two cases, if the curve \(\gamma_{1}\subset L^{\prime}\) at which we must mutate were precisely one such \(\gamma_{i}\), then Lagrangian disk surgery along \(D_{1}\) would make \(\gamma_{j}\) immersed. It would produce a Lagrangian filling \(\mu_{D_{1}}(L^{\prime})\), but the resulting collection of curves would have at least one immersed curve. In that case, we can no longer iterate our sequence of Lagrangian disk surgeries. This is an obstruction that we cannot bypass merely with the information of \(Q(\mathcal{C}(\Gamma))\). In fact, \(Q(\mathcal{C}(\Gamma))\) is not able to distinguish the two cases (i) and (ii) above, as both are 2-cycles in \(Q(\mathcal{C}(\Gamma))\).

The core contribution of this manuscript is that we can solve this problem by using the curve potential \(W(\mathcal{C}(\Gamma))\). This potential \(W(\mathcal{C}(\Gamma))\) must be precisely the one we constructed and studied in Sections 2 and 3. Another choice of potential \(W\) for \(Q(\mathcal{C}(\Gamma))\), unrelated to the specific geometry of polygons for curve configurations \(\mathcal{C}(\Gamma)\), would be of no use. Indeed, by construction, \(W(\mathcal{C}(\beta))\) keeps track of polygons and, in particular, bigons. This is used when solving problem (ii) above. By Proposition 3.12, \((Q(\mathcal{C}(\Gamma)),W(\mathcal{C}(\Gamma)))\) is non-degenerate if \(\Gamma\) is an \(\mathbb{L}\)-compressing system of the form \(\Gamma=\Gamma(\beta)\). This is used for solving problem (i) above.

Consider the curve quiver with potential \((Q(\mathcal{C}(\beta)),W(\mathcal{C}(\beta)))\) constructed in Section 3. We have the following two properties:

(P1) The QP \((Q(\mathcal{C}(\beta)),W(\mathcal{C}(\beta)))\) is reduced, as defined in Section 2.3.2.
Indeed, the construction of the QP \((Q(\mathbb{G}),W(\mathbb{G}))\) in Section 3.2 implies that it is reduced for any plabic fence \(\mathbb{G}\). By Proposition 3.8, \((Q(\mathcal{C}(\beta)),W(\mathcal{C}(\beta)))\) coincides with \((Q(\mathbb{G}(\beta)),W(\mathbb{G}(\beta)))\), where \(\mathbb{G}(\beta)\) is the plabic fence associated to \(\beta\). Thus \((Q(\mathcal{C}(\beta)),W(\mathcal{C}(\beta)))\) is reduced as well.

(P2) The QP \((Q(\mathcal{C}(\beta)),W(\mathcal{C}(\beta)))\) is a curve QP, as in Definition 2.4. By Proposition 3.12, it is a non-degenerate QP. In particular, there are never 2-cycles in any QP \((Q,W)\) which is related to \((Q(\mathcal{C}(\beta)),W(\mathcal{C}(\beta)))\) by a sequence of QP mutations.

Let us use these properties, along with the results in Sections 2 and 3, to prove the following result.

**Proposition 4.7**.: _Let \(L\subset(\mathbb{D}^{4},\lambda_{st})\) be an embedded exact Lagrangian filling and \(\Gamma\) an \(\mathbb{L}\)-compressing system for \(L\). Suppose that the configuration \(\mathcal{C}(\Gamma)\) is reduced and \((Q(\mathcal{C}(\Gamma)),W(\mathcal{C}(\Gamma)))\) is non-degenerate. If \(D\in\mathscr{D}(\Gamma)\) is an \(\mathbb{L}\)-compressing disk with boundary \(\gamma\in\mathcal{C}(\Gamma)\), then:_

(i) _The Lagrangian filling_ \(\mu_{D}(L)\) _obtained from_ \(L\) _by Lagrangian disk surgery on_ \(D\) _admits an_ \(\mathbb{L}\)_-compressing system_ \(\mu_{D}(\Gamma)\) _such that its associated QP_ \((Q(\mu_{D}(\Gamma)),W(\mu_{D}(\Gamma)))\) _is reduced._

(ii) _The quiver with potential associated to_ \(\mu_{D}(\Gamma)\) _satisfies_

\[(Q(\mathcal{C}(\mu_{D}(\Gamma))),W(\mathcal{C}(\mu_{D}(\Gamma))))=(\mu_{\gamma}Q(\mathcal{C}(\Gamma)),\mu_{\gamma}W(\mathcal{C}(\Gamma)))\]

_and the quiver_ \(Q(\mathcal{C}(\mu_{D}(\Gamma)))\) _contains no 2-cycles._

_In addition, the curve configuration associated to \(\mu_{D}(\Gamma)\) is a reduction of the \(\gamma\)-exchange of \(\mathcal{C}(\Gamma)\)._

Proof.: Let us prove \((i)\). First, the quiver \(Q(\mathcal{C}(\Gamma))\) has no 2-cycles. Indeed, \(\mathcal{C}(\Gamma)\) is reduced by hypothesis, and thus there are no 2-cycles coming from the trivial part of \((Q(\mathcal{C}(\Gamma)),W(\mathcal{C}(\Gamma)))\). By hypothesis as well, \((Q(\mathcal{C}(\Gamma)),W(\mathcal{C}(\Gamma)))\) is non-degenerate, and thus Definition 3.11 implies that there are no empty 2-cycles either. By Lemma 4.5, the configuration of curves \(\mathcal{C}(\Gamma)\) undergoes a \(\gamma\)-exchange under Lagrangian disk surgery. Since \(Q(\mathcal{C}(\Gamma))\) has no \(2\)-cycles, all curves in the resulting configuration are embedded. None of them is immersed. Therefore, Lagrangian disk surgery along \(D\) produces a new \(\mathbb{L}\)-compressing system \(\Gamma^{\prime}\), such that \(\mathcal{C}(\Gamma^{\prime})\) is the \(\gamma\)-exchange of \(\mathcal{C}(\Gamma)\).

Second, at this stage, \(\mathcal{C}(\Gamma^{\prime})\) might a priori not be reduced. By Theorem 2.9, there exists a reduction \(\mu_{D}(\Gamma)\) of \(\mathcal{C}(\Gamma^{\prime})\). Since \((Q(\mathcal{C}(\Gamma)),W(\mathcal{C}(\Gamma)))\) is non-degenerate, Lemma 2.21 applies and shows that

\[(Q(\mu_{D}(\Gamma)),W(\mu_{D}(\Gamma)))=(Q(\Gamma^{\prime})_{red},W(\Gamma^{\prime})_{red}).\]

This implies that \((Q(\mu_{D}(\Gamma)),W(\mu_{D}(\Gamma)))\) is reduced, which proves \((i)\). By construction, \(\mu_{D}(\Gamma)\) is a reduction of \(\mathcal{C}(\Gamma^{\prime})\), which itself is the \(\gamma\)-exchange of \(\mathcal{C}(\Gamma)\).
The last sentence of the statement is thus proven as well. For item \((ii)\), Proposition 2.22 implies the equality between the QPs. By item \((i)\), the quiver with potential \((Q(\mathcal{C}(\mu_{D}(\Gamma))),W(\mathcal{C}(\mu_{D}(\Gamma))))\) is reduced, and thus the non-degeneracy of \((Q(\mathcal{C}(\Gamma)),W(\mathcal{C}(\Gamma)))\) implies that \(Q(\mathcal{C}(\mu_{D}(\Gamma)))\) has no \(2\)-cycles.

### The ring of regular functions \(\mathbb{C}[X(\Lambda_{\beta},T)]\)

Let \(\beta\) be a positive braid word on \(n\)-strands. Consider the Legendrian link \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\) and a set of marked points \(T\subset\Lambda_{\beta}\), one per component. Let \(A_{\Lambda_{\beta}}\) be the Legendrian contact dg-algebra of \((\Lambda_{\beta},T)\) with \(\mathbb{Z}\)-coefficients. We refer to [12, Section 5.1] or the survey [11] for details on \(A_{\Lambda_{\beta}}\). Consider its augmentation variety \(X(\Lambda_{\beta},T)\), which is the space of all dg-algebra maps from \(A_{\Lambda_{\beta}}\) to \(\mathbb{C}\), the latter considered as a dg-algebra in grading \(0\) and with zero differential.

Following [12, Section 5], the space \(X(\Lambda_{\beta},T)\), which is naturally an affine variety, can be explicitly described as follows. For \(k\in[n-1]\), define the \(n\times n\) matrix \(P_{k}(a)\), as a function of an input \(a\), as:

\[(P_{k}(a))_{ij}=\begin{cases}1&i=j\text{ and }i\neq k,k+1\\ 1&(i,j)=(k,k+1)\text{ or }(k+1,k)\\ a&i=j=k+1\\ 0&\text{otherwise.}\end{cases}\]

Namely, \(P_{k}(a)\) is the identity matrix except for the \(2\times 2\) submatrix given by rows and columns \(k\) and \(k+1\), which is \(\left(\begin{smallmatrix}0&1\\ 1&a\end{smallmatrix}\right)\). Consider the full twist \(\Delta^{2}\) in \(n\)-strands and set \(\ell=\ell(\beta\Delta^{2})\) for the length of \(\beta\Delta^{2}\). Given a braid word \(\beta\Delta^{2}=\sigma_{k_{1}}\cdots\sigma_{k_{\ell}}\), where \(\sigma_{i}\in\mathrm{Br}_{n}\) are Artin generators in \(n\)-strands, let \(c=|\pi_{0}(\Lambda_{\beta\Delta^{2}})|\) and choose \(i_{1},\ldots,i_{c}\in\mathbb{N}\) such that the \(i_{j}\)-th strand in \(\beta\Delta^{2}\) is in its \(j\)-th connected component, \(j\in[c]\). Define \(D(\mathbf{t}_{\beta\Delta^{2}})\) to be the diagonal \(n\times n\) matrix with \((k,k)\)-entry equal to \(t_{j}\) if \(k=i_{j}\) and \(1\) otherwise, let \(\mathbf{1}\) be the \(n\times n\) identity matrix and set

\[P_{\beta\Delta^{2}}(z_{1},\ldots,z_{\ell};t_{1},\ldots,t_{c}):=P_{k_{1}}(z_{1})P_{k_{2}}(z_{2})\cdots P_{k_{\ell}}(z_{\ell})D(\mathbf{t}_{\beta\Delta^{2}}).\]

Then, by [12, Proposition 5.2], \(X(\Lambda_{\beta},T)\) is isomorphic to the affine variety:

\[X(\Lambda_{\beta},T)\cong\{(z_{1},\ldots,z_{\ell};t_{1},\ldots,t_{c}):\mathbf{1}+P_{\beta\Delta^{2}}(z_{1},\ldots,z_{\ell};t_{1},\ldots,t_{c})=0\}\subset\mathbb{C}^{\ell}\times(\mathbb{C}^{\times})^{c}.\]

The next result is proven in [13, Theorem 1.1]. It also follows from [14, Theorem 1.1], after noticing that \(X(\Lambda_{\beta},T)\) is isomorphic to a decorated moduli space of sheaves in \(\mathbb{R}^{2}\) singularly supported in a front for \(\Lambda_{\beta}\).

**Theorem 4.8** ([14]).: _Let \(\beta\) be a positive braid word on \(n\)-strands, \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\) its associated Legendrian link and \(T\subset\Lambda_{\beta}\) a set of marked points, one per component. Then the coordinate ring of regular functions \(\mathbb{C}[X(\Lambda_{\beta},T)]\) is a cluster algebra._
In addition, it has the following properties:_ 1. _Let_ \((L_{\beta},\Gamma_{\beta})\) _be the pair of an embedded exact Lagrangian filling_ \(L_{\beta}\) _of_ \(\Lambda_{\beta}\) _and an_ \(\mathbb{L}\)_-compressing system_ \(\Gamma_{\beta}=\Gamma(\mathbb{G}(\beta))\) _associated to the plabic fence_ \(\mathbb{G}(\beta)\)_. Then there exists a canonical cluster seed_ \(\mathfrak{c}(L_{\beta},\Gamma_{\beta})\) _associated to_ \(L_{\beta}\)_, and it has quiver_ \(Q(\mathbb{G}(\beta))\)_._ 2. _Let_ \((L,\Gamma)\) _be an exact Lagrangian filling with an_ \(\mathbb{L}\)_-compressing system obtained from the Lagrangian filling_ \((L_{\beta},\Gamma_{\beta})\) _by a sequence of Lagrangian disk surgeries along an ordered collection of curves_ \(\gamma_{1},\ldots,\gamma_{k}\in\Gamma_{\beta}\)_._28 _Then:_ 1. _There is a canonical cluster seed_ \(\mathfrak{c}(L,\Gamma)\) _associated to_ \((L,\Gamma)\)_, and the cluster variables in the cluster seed_ \(\mathfrak{c}(L,\Gamma)\) _are computed by microlocal monodromies along the Poincaré dual relative cycles of the curves in the_ \(\mathbb{L}\)_-compressing system_ \(\Gamma\)_. In particular, cluster variables are indexed by the curves in_ \(\Gamma\)_._ 2. _The cluster seed_ \(\mathfrak{c}(L,\Gamma)\) _is_ \(\mu_{v_{k}}\cdots\mu_{v_{1}}(\mathfrak{c}(L_{\beta},\Gamma_{\beta}))\)_, the cluster seed obtained by the sequence of cluster mutations along the vertices_ \(v_{i}\in Q(\mathbb{G}(\beta))_{0}\) _associated to_ \(\gamma_{i}\in\Gamma(\mathbb{G}(\beta))\)_,_ \(i\in[k]\)_._

Technically, [CW] shows that \(\mathbb{C}[X(\Lambda_{\beta},T)]\) is an upper cluster algebra with the above properties. That said, it is locally acyclic, see e.g. [5, Theorem 7.13], and thus the cluster algebra coincides with the upper cluster algebra, as proven in [10]. For Theorem 1.1, it suffices to focus on the mutable part of the cluster algebra \(\mathbb{C}[X(\Lambda_{\beta},T)]\): it is well-established that there are \(b_{1}(L)\) mutable vertices in each seed of \(\mathbb{C}[X(\Lambda_{\beta},T)]\), where \(b_{1}(L)\) coincides for all embedded exact orientable Lagrangian fillings \(L\), as the frozen variables depend only on the set of marked points \(T\), cf. [11].

**Remark 4.9**.: Note that the class of Legendrian links \(\Lambda_{\beta}\) is such that \(\mathbb{C}[X(\Lambda_{\beta},T)]\) has been proven to be a cluster algebra. This is unknown for a general Legendrian link \(\Lambda\subset(\mathbb{R}^{3},\xi_{st})\). Among other reasons, this is due to the lack of a definition of a cluster structure on a general derived stack, which is a matter of homotopical algebraic geometry. That said, the technique developed to prove Theorem 1.1 and Corollary 1.3 should likely apply to general Legendrian links \(\Lambda\subset(\mathbb{R}^{3},\xi_{st})\) once it is understood how to make sense of cluster structures on \(X(\Lambda,T)\) and they are proven to exist.

### Proof of Theorem 1.1

First, we choose the Lagrangian filling \(L\) and the \(\mathbb{L}\)-compressing system \(\Gamma\) in the statement to be \(L:=L_{\beta}\) and \(\Gamma:=\Gamma(\beta)\). This filling and \(\mathbb{L}\)-compressing system were both introduced in Subsection 4.3. As stated there, the curve configurations \(\mathcal{C}(\Gamma(\beta))=\mathcal{C}(\mathbb{G}(\beta))\) coincide. Here \(\mathbb{G}(\beta)\) is the plabic fence associated to \(\beta\), as described in Section 3, and \(\mathcal{C}(\mathbb{G}(\beta))\) is the associated configuration, as constructed in Section 3.1.
We shorten notation to \(\mathcal{C}(\beta):=\mathcal{C}(\mathbb{G}(\beta))\). Consider the cluster seed \(\mathfrak{c}(L_{\beta},\Gamma_{\beta})\) in \(\mathbb{C}[X(\Lambda_{\beta},T)]\) associated to \((L_{\beta},\Gamma_{\beta})\). By construction, the vertices of the quiver in \(\mathfrak{c}(L_{\beta},\Gamma_{\beta})\) are given by curves in \(\mathcal{C}(\beta)\) and the arrows record signed intersections, cf. [10] or [11, Section 4]. Thus the quiver in the seed \(\mathfrak{c}(L_{\beta},\Gamma_{\beta})\) coincides with the quiver \(Q(\mathcal{C}(\beta))\) associated to the curve configuration \(\mathcal{C}(\beta)\) in \(L_{\beta}=\Sigma(\mathbb{G}(\beta))\). Note that the curves in \(\mathcal{C}(\beta)\) are smooth, oriented, and embedded, and they form a basis of \(H_{1}(L_{\beta},\mathbb{Z})\). Therefore, \(\mathcal{C}(\beta)\) is indeed a curve configuration. It follows from Properties (P1) and (P2) in Subsection 4.6, or direct inspection, that \(\mathcal{C}(\beta)\) is reduced and \(Q(\mathcal{C}(\beta))\) contains no \(2\)-cycles. Equally important, Proposition 3.12 implies that the QP \((Q(\mathcal{C}(\beta)),W(\mathcal{C}(\beta)))\) is non-degenerate. Let \((v_{1},\ldots,v_{\ell})\) be the sequence of mutable vertices in \(Q(\mathcal{C}(\beta))\) given to us in item \((i)\) of the statement. Let us construct an embedded exact Lagrangian filling \(L_{k}\) for the seed \(\mu_{v_{k}}\ldots\mu_{v_{1}}(\mathfrak{c}(L,\Gamma))\) along with an \(\mathbb{L}\)-compressing system for \(L_{k}\). Let us denote by \(\mathscr{D}_{0}=\mathscr{D}_{\beta}\) the collection of \(\mathbb{L}\)-compressing disks associated to \(\Gamma(\beta)\). Now, the vertex \(v_{1}\) corresponds to an \(\mathbb{L}\)-compressible curve \(\gamma_{1}\in\mathcal{C}(\Gamma(\beta))\). In order to obtain a Lagrangian filling in the seed \(\mu_{1}:=\mu_{v_{1}}(\mathfrak{c}(L_{\beta},\Gamma_{\beta}))\), we apply Proposition 4.7 with \(L=L_{\beta}\) and \(D=D_{v_{1}}\in\mathscr{D}_{0}\) the \(\mathbb{L}\)-compressing disk associated to \(\gamma_{1}\). The hypotheses of reducedness and non-degeneracy of the initial QP are indeed satisfied, by the previous paragraph. Proposition 4.7 now produces an \(\mathbb{L}\)-compressing system \(\mathscr{D}_{1}\) for the Lagrangian filling \(L_{1}:=\mu_{D_{v_{1}}}(L_{\beta})\) whose associated QP is reduced and non-degenerate. In addition, the Lagrangian disks in \(\mathscr{D}_{1}\) are in a specified bijection with the disks in \(\mathscr{D}_{0}\). This bijection is established as follows. The \(\mathbb{L}\)-compressing disks are indexed by the curves in the respective configurations. Lemma 4.5 implies that the curves undergo a \(\gamma\)-exchange under Lagrangian disk surgery. By construction, curves in a configuration before and after a \(\gamma\)-exchange are in a specified bijection, as the vertices of the corresponding quiver are identified (via the identity), cf. Section 2.3.1. A reduction of a configuration also determines a unique bijection, as Lemmas 2.5 and 2.7 show that the vertices of the quiver remain (identically) the same under triple point moves and local bigon moves. Therefore, the Lagrangian disks in \(\mathscr{D}_{1}\) are indeed in a specified bijection with the disks in \(\mathscr{D}_{0}\). Consider the next vertex \(v_{2}\) of \(Q(\mathcal{C}(\beta))\) at which we must mutate. By the bijection above, this specifies a unique \(\mathbb{L}\)-compressing disk \(D_{v_{2}}\in\mathscr{D}_{1}\) in the \(\mathbb{L}\)-compressing system of \(\mu_{D_{v_{1}}}(L_{\beta})\).
Since the QP associated to \(\mathscr{D}_{1}\) is reduced and non-degenerate, we can apply Proposition 4.7 to \(\mu_{D_{v_{1}}}(L_{\beta})\) and \(\mathscr{D}_{1}\). This produces a Lagrangian filling \(L_{2}=\mu_{D_{v_{2}}}\mu_{D_{v_{1}}}(L_{\beta})\) in the cluster seed \(\mu_{v_{2}}\mu_{v_{1}}(\mathfrak{c}(L,\Gamma))\) with an \(\mathbb{L}\)-compressing system \(\mathscr{D}_{2}\). Since the QP associated to \(\mathscr{D}_{2}\) is again reduced and non-degenerate, we can iteratively apply Proposition 4.7 without any constraints. By Theorem 4.8, this procedure indeed constructs the required Lagrangian fillings \(L_{k}=\mu_{D_{v_{k}}}\ldots\mu_{D_{v_{2}}}\mu_{D_{v_{1}}}(L_{\beta})\) with \(\mathbb{L}\)-compressing systems \(\Gamma_{k}=\mathscr{D}_{k}=\mu_{D_{v_{k}}}\ldots\mu_{D_{v_{2}}}\mu_{D_{v_{1}}}(\mathscr{D}_{\beta})\) in the required cluster seeds. This proves item \((i)\) of the statement. The construction we used, applying Proposition 4.7, implies item \((ii)\).

### A related statement

Let \(\Lambda\subset(\mathbb{R}^{3},\xi_{st})\) be a Legendrian link and \(L\subset(\mathbb{R}^{4},\lambda_{st})\) an embedded exact Lagrangian filling. Suppose that \(\Gamma\) is a collection of \(\ell\) oriented simple \(\mathbb{L}\)-compressible curves in \(L\) which are linearly independent in \(H_{1}(L;\mathbb{Z})\), but not necessarily spanning. Let us refer to such a collection as a partial \(\mathbb{L}\)-compressing system of rank \(\ell\). A curve QP \((Q(\Gamma),W(\Gamma))\) is still defined.29 If we want to emphasize the dependence on \(L\), we write \(Q(L,\Gamma)=Q(\Gamma)\). If such a QP \((Q(\Gamma),W(\Gamma))\) is non-degenerate, one may proceed as above and conclude a statement in line with Theorem 1.1 for more general Legendrian links \(\Lambda\subset(\mathbb{R}^{3},\xi_{st})\). Following the same steps as in the proof of Theorem 1.1, we obtain the following:

Footnote 29: In fact, \((Q(\Gamma),W(\Gamma))\) is defined for any collection of oriented simple curves, even if they are not linearly independent in \(H_{1}(L;\mathbb{Z})\). Linear independence is only used in proving Proposition 3.15. Thus one may proceed by just assuming that the assumptions in this proposition, non-degeneracy and the bigon condition, hold instead.

**Theorem 4.10**.: _Let \(\Lambda\subset(\mathbb{R}^{3},\xi_{st})\) be a Legendrian link, \(T\subset\Lambda\) a set of marked points with one marked point per component, and \(X(\Lambda,T)\) the affine scheme given by the spectrum of \(H^{0}\) of the Legendrian contact dg-algebra of \(\Lambda\subset(\mathbb{R}^{3},\xi_{st})\). Suppose that there exists an orientable embedded exact Lagrangian filling \(L\subset(\mathbb{R}^{4},\lambda_{st})\) of \(\Lambda\) and a partial \(\mathbb{L}\)-compressing system \(\Gamma\) for \(L\) of rank \(\ell\) such that \((Q(\Gamma),W(\Gamma))\) is non-degenerate. Then:_ 1. _If_ \(\mu_{v_{\ell}}\ldots\mu_{v_{1}}\) _is any sequence of mutations, where_ \(v_{1},\ldots,v_{\ell}\) _are mutable vertices of the quiver_ \(Q(L,\Gamma)\)_, then there exists a sequence of embedded exact Lagrangian fillings_ \(L_{k}\) _of_ \(\Lambda\) _with associated quivers_ \(Q(L_{k},\Gamma_{k})=\mu_{v_{k}}\ldots\mu_{v_{1}}(Q(L,\Gamma))\)_, for all_ \(k\in[\ell]\)_._ 2.
_Each embedded exact Lagrangian filling_ \(L_{k}\) _is equipped with a partial_ \(\mathbb{L}\)_-compressing system_ \(\Gamma_{k}\) _of rank_ \(\ell\) _such that Lagrangian disk surgery on_ \(L_{k}\) _along any Lagrangian disk in_ \(\mathscr{D}(\Gamma_{k})\) _yields an_ \(\mathbb{L}\)_-compressing system. Furthermore,_ \(\Gamma_{k+1}\) _is equivalent to such a partial_ \(\mathbb{L}\)_-compressing system via a sequence of triple point moves and local bigon moves._ _In addition, if there exists a sub-algebra \(A\subset\mathbb{C}[X(\Lambda,T)]\) which is a cluster algebra and \((L,\Gamma)\) defines a cluster seed \(\mathfrak{c}(L,\Gamma)\) for \(A\), then \(\mathfrak{c}(L_{k},\Gamma_{k})=\mu_{v_{k}}\ldots\mu_{v_{1}}(\mathfrak{c}(L,\Gamma))\) in \(A\subset\mathbb{C}[X(\Lambda,T)]\), for all \(k\in[\ell]\). In particular, the map \(\mathfrak{C}:\text{Lag}^{c}(\Lambda)\longrightarrow\text{Seed}(A)\) is surjective, i.e., there exists an embedded exact Lagrangian filling endowed with an \(\mathbb{L}\)-compressing system realizing every cluster seed in \(A\)._

The advantage of Theorem 4.10 is that it applies to general \(\Lambda\subset(\mathbb{R}^{3},\xi_{st})\). The disadvantage is that, in a given instance being studied, one must construct a partial \(\mathbb{L}\)-compressing system \(\Gamma\) and verify the hypothesis that \((Q(\Gamma),W(\Gamma))\) is non-degenerate. The existence of a sub-algebra \(A\) that is a cluster algebra can sometimes be established using the Starfish lemma [11, Prop. 6.4.1]. In the case of Theorem 1.1, we have built above a specific \(\mathbb{L}\)-compressing system \(\Gamma=\Gamma_{\beta}\) for any \(\Lambda_{\beta}\subset(\mathbb{R}^{3},\xi_{st})\) and proven, with Proposition 3.12, the non-degeneracy of its associated QP. For such links, it follows from [12] that such a sub-algebra \(A\) exists. In fact, we can take \(A=\mathbb{C}[X(\Lambda_{\beta},T)]\), cf. [11, CGG\({}^{+}\)22].
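Although not needed for the arguments above, the quiver-level combinatorics of the mutation sequences \(\mu_{v_{k}}\cdots\mu_{v_{1}}\) is completely explicit. The following is a minimal sketch, written in Python purely as an illustration: assuming the standard identification of a quiver without loops or 2-cycles with its skew-symmetric exchange matrix \(B\), where \(B_{ij}\) is the signed count of arrows from \(i\) to \(j\), it implements the usual Fomin-Zelevinsky mutation rule. It tracks quivers only, not potentials, curve configurations, or Lagrangian fillings.

```python
import numpy as np

def mutate(B, k):
    """Fomin-Zelevinsky mutation of a skew-symmetric exchange matrix B at
    vertex k; B[i, j] = (#arrows i -> j) - (#arrows j -> i)."""
    B = np.asarray(B, dtype=int)
    Bp = B.copy()
    n = B.shape[0]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i, j] = -B[i, j]
            else:
                # b_ij + sgn(b_ik) * max(0, b_ik * b_kj), written integrally;
                # the numerator is always even, so // 2 is exact.
                Bp[i, j] = B[i, j] + (abs(B[i, k]) * B[k, j] + B[i, k] * abs(B[k, j])) // 2
    return Bp

# Example: the A_2 quiver 0 -> 1, mutated along a sequence of vertices,
# playing the role of the quivers underlying mu_{v_3} mu_{v_2} mu_{v_1}.
B = np.array([[0, 1], [-1, 0]])
for v in [0, 1, 0]:
    B = mutate(B, v)
assert np.array_equal(mutate(mutate(B, 0), 0), B)  # mutation is an involution
```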
2309.17170
A Vision-Guided Robotic System for Grasping Harvested Tomato Trusses in Cluttered Environments
Currently, truss tomato weighing and packaging require significant manual work. The main obstacle to automation lies in the difficulty of developing a reliable robotic grasping system for already harvested trusses. We propose a method to grasp trusses that are stacked in a crate with considerable clutter, which is how they are commonly stored and transported after harvest. The method consists of a deep learning-based vision system to first identify the individual trusses in the crate and then determine a suitable grasping location on the stem. To this end, we have introduced a grasp pose ranking algorithm with online learning capabilities. After selecting the most promising grasp pose, the robot executes a pinch grasp without needing touch sensors or geometric models. Lab experiments with a robotic manipulator equipped with an eye-in-hand RGB-D camera showed a 100% clearance rate when tasked to pick all trusses from a pile. 93% of the trusses were successfully grasped on the first try, while the remaining 7% required more attempts.
Luuk van den Bent, Tomás Coleman, Robert Babuska
2023-09-29T12:07:08Z
http://arxiv.org/abs/2309.17170v1
# A Vision-Guided Robotic System for Grasping Harvested Tomato Trusses in Cluttered Environments

###### Abstract

Currently, truss tomato weighing and packaging require significant manual work. The main obstacle to automation lies in the difficulty of developing a reliable robotic grasping system for already harvested trusses. We propose a method to grasp trusses that are stacked in a crate with considerable clutter, which is how they are commonly stored and transported after harvest. The method consists of a deep learning-based vision system to first identify the individual trusses in the crate and then determine a suitable grasping location on the stem. To this end, we have introduced a grasp pose ranking algorithm with online learning capabilities. After selecting the most promising grasp pose, the robot executes a pinch grasp without needing touch sensors or geometric models. Lab experiments with a robotic manipulator equipped with an eye-in-hand RGB-D camera showed a 100% clearance rate when tasked to pick all trusses from a pile. 93% of the trusses were successfully grasped on the first try, while the remaining 7% required more attempts.

## I Introduction

During the last decades, crop production has significantly increased in volume and efficiency thanks to mechanization and automation [1]. However, a substantial amount of manual work is still required in difficult-to-automate processes such as crop harvesting, manipulation, or packaging. This presents a serious problem, given the rising demand for food and the decreasing number of people willing to work in agriculture [2]. This paper's focus is on the automated handling of truss tomatoes, also known as vine tomatoes. A tomato truss refers to the bundle of tomatoes that are still attached to the fruiting stem after harvesting. We focus on grasping trusses from a crate in which they are transported from the harvesting location; see Figure 1. The purpose is to inspect the tomatoes for damage, weigh them, and finally place them on a transportation belt for automatic packaging. The main challenge is to identify a suitable grasping pose, given the trusses' diverse and unpredictable shapes and the cluttered conditions in the crate. The grasping pose must guarantee safe handling of the tomato truss without damaging it. This work's main contribution is the development, implementation and validation of a learning-based perception method to identify suitable grasp poses so that the trusses can be reliably grasped. We introduce a grasp pose ranking algorithm to select the most suitable grasp pose out of several candidate poses and to adapt the selection model based on the success or failure of the executed grasp. Extensive lab experiments have been carried out to validate the approach using a Franka Emika Panda manipulator equipped with the Intel Realsense D405 RGB-D camera. More than 1300 grasp attempts have been carried out within these experiments on real tomato trusses. The data acquired have been used to develop and train the deep-learning models and to validate the approach. To the best of our knowledge, such experiments have never been documented in the literature. The remaining sections of this paper are structured as follows: Section II provides an overview of the related research on grasping tomato trusses. Section III describes the proposed computer vision method for finding suitable grasp poses. Section IV describes the validation experiments performed in a lab environment to test the proposed method.
The results are analyzed and discussed in Section V, and Section VI concludes the paper.

Fig. 1: Harvested tomato trusses are stacked in a crate before they enter the packaging process.

## II Related Work

Although numerous studies are devoted to the detection and grasping of tomatoes [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18], the majority focuses on the harvesting or grasping of single tomatoes instead of trusses. Grasping the entire truss by just one tomato results in a high chance of the stem detaching from the tomato, which makes all these methods infeasible. Instead, the truss should be grasped by the peduncle. The first step to finding suitable grasp poses commonly relies on identifying the stem, which is considered a segmentation problem. Common methods are based on color, where thresholds are set by hand [10] or with the use of adaptive thresholding methods, like Otsu [11] or k-means thresholding [17]. Color-based methods usually achieve poor results in varying lighting conditions and cannot deal with a cluttered environment. More recently, deep learning methods have been proposed. Rong et al. [16] use a YOLO network to first identify tomato trusses in an image, and then a second YOLO network is used to produce masks for the part of the stem above the tomatoes where grasping and cutting are feasible. Although good results have been reported, the downside of this approach is that generating segmentation masks for training is time- and labor-intensive. Other recent methods include deep learning-based 6D-pose estimation. Kim et al. [9] find the 6D-pose of tomato and stem to harvest individual tomatoes by using a single known 3D model. A single model would likely not work for truss tomatoes since they have more variance in their appearance. Also, this method requires training within a simulator to determine ground truth 6D poses. Zhang et al. [14, 15] focus on trusses where the 6D-pose is found using keypoints. The tomato trusses are modeled with 11 keypoints: six for the tomatoes and five for the peduncle and stem. However, having a fixed number of keypoints is not suitable when dealing with different types of trusses, which can have varying quantities of tomatoes. Furthermore, many of the methods used in harvesting are not directly applicable when it comes to grasping out of a crate. This is because they rely on the assumption that the trusses in a harvesting environment are hanging vertically. Then, the grasping position can be chosen on the stem above the highest tomato and at an angle perpendicular to the direction of gravity. This assumption does not hold for grasping trusses from a crate. To grasp trusses in a horizontal position, De Haan et al. [17] proposed a graph-based method that finds a grasp pose closest to the calculated center of mass along the peduncle with sufficient space from junctions, i.e., the locations where pedicels are attached to the peduncle. The grasp angle is chosen to be perpendicular to the peduncle. After segmenting the stem, the peduncle is found under the assumption that it is the longest path on the graph with a limited curvature. This method is a claimed improvement over the method from Gray and Pekkeriet [18], which uses a random sample consensus (RANSAC) regressor to identify the peduncle by assuming that it makes up the longest continuous area present in the stem segment. Although this method could suffice, this approach ignores other parts of the stem and tomatoes, which results in grasp failures, as reported by the authors.
Also, this method of finding the peduncle is sensitive to properly set hyperparameters and might fail for oddly shaped trusses. Lastly, this method requires segmentation of the stem which, for good performance in clutter and varying lighting conditions, should be deep learning based, which again requires extensive labeling to train.

## III Perception

We assume a setting where tomato trusses are placed and stacked on top of each other with the peduncle facing upwards, which is how they are commonly stored in a crate after harvesting. The manipulator's end-effector with the Intel Realsense D405 RGB-D camera is initially positioned approximately \(0.75\pm 0.1\) m above the tomatoes in order to fit the entire crate in the camera's field of view. The problem is a top-down grasping problem, where the grasp pose is defined in 4D (3D position and angle). A major challenge in grasping tomato trusses is the requirement of a more accurate grasping pose than with many other simpler and less delicate objects. With parallel grippers, damage typically occurs when grasping by an individual tomato or the weaker parts of the stem, such as the pedicels or calyxes. A suitable grasp pose should be located on the peduncle, with as much space as possible from the other parts of the stem and tomatoes, as shown in Figure 2. Also, when identifying a suitable grasping pose, it is crucial to consider the possibility of other trusses obstructing the end-effector. A truss should not be grasped if it is overlapped or obstructed by other trusses. To find suitable grasp poses, we propose a three-stage perception method consisting of i) tomato truss detection, ii) grasp pose identification, and iii) grasp pose ranking. The individual stages are discussed below and visualized in Figure 3.

Fig. 2: Suitable grasp poses on the peduncle for grasping tomato trusses. The yellow dots represent the positions, and the purple rectangles indicate the orientations of the grasps.

### _Tomato Truss Detection_

This step aims to find an unobstructed tomato truss. This is achieved by training the model on data in which only unobstructed trusses were labeled. When multiple such trusses are detected, the algorithm selects the one with the lowest average depth from the camera's perspective.

_Architecture:_ We utilize a variant of the YOLOv5 [19] architecture which outputs an oriented bounding box defined by the coordinates of all its corners \((x_{1},y_{1},x_{2},y_{2},x_{3},y_{3},x_{4},y_{4})\). This format provides a precise fit, which is beneficial in the cluttered environment considered.

_Dataset:_ We collected and labeled 225 images to train the model: 200 were used for training and 25 for validation. The images contained different numbers and types of tomato trusses, and we varied the height and the angle from which the image was captured, as well as the background and lighting conditions. All images were resized to 640x640, and black borders were added if needed to preserve the aspect ratio.

_Training:_ The network was trained for 300 epochs with a learning rate of 0.001 and a batch size of 32. The Adam optimizer was used with a momentum of 0.937 and weight decay of 0.0005. To reduce overfitting and improve generalization, the model weights were pre-trained on COCO 2017 [20], and data augmentations such as variations in the HSV channels, random rotation/translation/scale, and flipping upside-down/left-right were utilized.
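To make the selection rule above concrete, the following sketch (our own illustration, not the released code of this work) picks the detected truss with the lowest average depth; it assumes detections are given as oriented-bounding-box corners in pixel coordinates together with an aligned depth image, and the function name is hypothetical.

```python
import numpy as np
import cv2

def select_closest_truss(detections, depth):
    """Pick the detected truss with the lowest average depth (closest to camera).

    detections: list of (4, 2) float arrays, the oriented-bounding-box corners
                (x1, y1), ..., (x4, y4) in pixel coordinates.
    depth:      (H, W) float array, aligned depth image in meters (0 = invalid).
    """
    best, best_depth = None, np.inf
    for corners in detections:
        mask = np.zeros(depth.shape, dtype=np.uint8)
        cv2.fillPoly(mask, [np.round(corners).astype(np.int32)], 1)  # rasterize the OBB
        values = depth[(mask == 1) & (depth > 0)]                    # valid pixels inside the box
        if values.size and values.mean() < best_depth:
            best, best_depth = corners, values.mean()
    return best, best_depth
```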
_Performance Evaluation:_ The model performance evaluated on the validation set resulted in a mean Average Precision (mAP)@0.5 of 0.952 and [email protected]:0.95:0.05 of 0.693. During inference, the Non-Maximum Suppression (NMS) confidence and NMS Intersection over Union (IoU) thresholds were kept at the standard values of 0.25 and 0.45, respectively. This resulted in a precision of 0.935, a recall of 0.967, and an F1 score of 0.95.

### _Grasp Pose Identification_

The next step is to identify candidate grasp poses on the detected tomato truss. We use a learning-based pose-estimation model, which takes an RGB image as input and directly outputs the candidate grasp poses without the need for segmentation. To get more precise depth information, we first control the arm to approach the truss to get a close-up view. The position of the arm is chosen so that the camera is 0.1 m above the center of the bounding box. The orientation \(\alpha\) of the camera is calculated so that it aligns horizontally with the longest side of the bounding box: \[\alpha=\operatorname{arctan2}\left(y_{2}-y_{1},\,x_{2}-x_{1}\right) \tag{1}\] where \(\operatorname{arctan2}\) is the four-quadrant inverse tangent.

_Preprocessing:_ The close-up view mostly contains a single truss. However, this truss is usually still surrounded by parts of other trusses, and parts of the underlying trusses can also be seen in the background. Therefore, two preprocessing steps are applied to the point cloud generated from the RGB-D image: 1) filter out the surrounding trusses by reusing the previously found bounding box of the truss of interest to remove points that lie outside the bounding box; 2) remove the background trusses by fitting a plane to the points remaining after step 1 using the RANSAC method and removing points that have a distance larger than \(d_{p}\) to the plane. A suitable value for \(d_{p}\) depends on the size of the tomatoes and is generally in the same range as their diameter. During experiments, \(d_{p}=0.05\,\mathrm{m}\) was used.

_Architecture:_ To identify possible grasp poses on the preprocessed RGB image, we use the Yolov7-Pose [21] architecture, where each bounding box contains a single keypoint for the grasp position. We have extended the network to provide not just the keypoint positions but also their respective orientations. To get a 3D position for the grasp poses, the pixel locations of the keypoints are deprojected, and the grasp angles are taken directly as the keypoint orientations.

_Dataset and Training:_ A dataset of 50 preprocessed images was gathered and hand-annotated to train the model. This dataset was split into 40 training and 10 validation images. The same data augmentation and hyperparameters were used as for the tomato truss detection model described in Section III-A.

_Performance Evaluation:_ The performance of the model is evaluated by comparing the keypoint predictions with the manually annotated ones (ground truth). A keypoint is considered correctly predicted if the distance to the ground truth is less than 0.003 m, which is a little less than half the typical distance between junctions. Figure 4 shows an example image with ground truth and predictions. A precision and recall score of 0.89 and 0.98, respectively, were obtained for the validation set. Figure 5 shows a box plot of the location and angle errors of the correctly predicted keypoints.
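As a minimal illustration of the preprocessing and deprojection steps just described, assuming a standard pinhole camera model; the function names, the simplified RANSAC loop, and the parameter defaults below are our own sketch, not the authors' implementation:

```python
import numpy as np

def remove_background(points, d_p=0.05, iters=200, seed=0):
    """RANSAC-style step 2: fit a dominant plane to the (N, 3) point cloud
    and keep only points within distance d_p (meters) of that plane."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:               # skip degenerate (collinear) samples
            continue
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < d_p  # point-to-plane distances
        if inliers.sum() > best.sum():
            best = inliers
    return points[best]

def deproject(u, v, z, fx, fy, cx, cy):
    """Pinhole deprojection of a keypoint pixel (u, v) at depth z (meters)
    to a 3D point in the camera frame; intrinsics fx, fy, cx, cy."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```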
Fig. 3: Overview of the method. Steps A, B, and C represent the tomato truss detection, grasp pose identification, and grasp pose ranking.

### _Grasp Pose Ranking_

The last step in the perception method is ranking the identified grasp poses in terms of the expected grasp success. De Haan et al. [17] chose the grasp pose to be as close as possible to the truss's estimated center of mass, so that the truss retains its horizontal position after being lifted. This metric fails to account for possible collisions of the gripper with the pedicels or tomatoes. To overcome this limitation, we propose a learning-based metric that measures the suitability of a grasping pose by a number between 0 and 1, estimated on the basis of the success or failure of previous grasp attempts with similar grasp poses. To get the input for the ranking method, for every possible grasping pose, the preprocessed point cloud is rotated by the grasping angle, and points whose \(L_{\infty}\) distance to the grasping position is more than \(d_{r}\) are removed. This distance \(d_{r}\) should be chosen so that all necessary local information remains. During experiments, it was set to \(0.02\,\mathrm{m}\). Finally, the resulting point clouds are projected back into depth images, which are normalized and have a resolution of 128x128 pixels. To prevent the repetition of unsuccessful grasp attempts, the model is continuously updated online. In this way, the failure of recent grasp attempts can lead to trying alternative grasp poses. To be able to quickly adapt the model to new data, we use a K-Nearest Neighbors (KNN) classifier applied to features extracted by an auto-encoder, as depicted in Figure 6. While the auto-encoder is not updated online, the KNN classifier can easily be refitted from scratch in real time, even on a CPU. Utilizing an auto-encoder introduces an additional advantage over a direct neural network for the classification: although only one of the detected grasp poses can be executed and labeled, all the grasp proposals can be used for training the auto-encoder, since this training does not require any labels.

_Dataset:_ To train the auto-encoder and KNN, an initial offline training phase is performed in which one of the sampled grasping poses is executed at random. Labeling is automated by using the load force estimate provided by the manipulator; the change in force before and just after releasing the potentially grasped tomato truss is compared. The grasp is considered successful if this change exceeds a predefined threshold. A total of 962 grasps on roughly 50 different trusses were recorded over multiple experiments. This resulted in 4807 unlabeled grasp poses used to train the auto-encoder. This dataset is split into 70% for training and 30% for validation.

_Training:_ To train the auto-encoder, we used the Adam optimizer with a learning rate of 0.0001, weight decay of 0.0001, and \(\beta\) of 0.9. The model was trained for 40 epochs with a batch size of 512 using a mean squared error loss. For the KNN, the number of neighbors was set to 10, and the weight of each neighbor was set inversely proportional to its distance. For both, the dataset was augmented using a combination of flipping upside-down/left-right and rotating by 180 degrees.

_Performance Evaluation:_ Of the 290 validation images, a total of 172 true positives (TP), 72 true negatives (TN), 14 false negatives (FN), and 32 false positives (FP) were obtained, as shown in Table I. This results in an F1 score of 0.88.
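A minimal sketch of this ranking step, assuming a KNN classifier on an encoder's latent space with online refitting: the small encoder below is an untrained stand-in for the convolutional auto-encoder (whose exact architecture and training loop are not reproduced here), and all names are our own.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

# Stand-in encoder: 128x128 normalized depth crop -> 32-dim latent vector.
# In the actual method this is the encoder half of an auto-encoder trained
# with an MSE reconstruction loss; it is left untrained in this sketch.
encoder = nn.Sequential(
    nn.Conv2d(1, 8, 5, stride=4), nn.ReLU(),
    nn.Conv2d(8, 16, 5, stride=4), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 7 * 7, 32),
)

def encode(crops):
    """crops: (N, 128, 128) normalized depth images -> (N, 32) features."""
    with torch.no_grad():
        return encoder(torch.as_tensor(crops, dtype=torch.float32).unsqueeze(1)).numpy()

knn = KNeighborsClassifier(n_neighbors=10, weights="distance")

def refit(crops, labels):
    """Refit from scratch on all (crop, success) pairs seen so far; cheap
    enough to run online, on a CPU, after every executed grasp."""
    knn.fit(encode(crops), labels)

def rank(candidate_crops):
    """Score each candidate grasp pose by its estimated success probability."""
    proba = knn.predict_proba(encode(candidate_crops))
    return proba[:, list(knn.classes_).index(1)]
```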
## IV Experiments

To evaluate the proposed learning-based method, lab experiments were performed. A video1 and the codebase2 are available online.

Footnote 1: [https://youtu.be/uFkPiPVTB6VQ](https://youtu.be/uFkPiPVTB6VQ)

Footnote 2: [https://github.com/LnukvandenBent/learning_approach_to_robotic_grasping_of_vine_tomato](https://github.com/LnukvandenBent/learning_approach_to_robotic_grasping_of_vine_tomato)

### _Experimental Setup_

We used the Franka Emika Panda3 manipulator equipped with the Intel Realsense D405 RGB-D camera, mounted close to the end-effector in the "eye-in-hand" configuration. Custom, 3D-printed slim gripper fingers accommodate the limited space available for grasping and lower the chance of the end-effector getting stuck on parts of the stem. The fingers were covered with grippy, deformable foam to minimize potential damage to the stem and tomatoes. A Cartesian impedance controller [22] is used to move the arm. The physical setup, along with a close-up of the end-effector and the dimensions of the fingers, can be seen in Figure 7.

Footnote 3: [https://www.franka.de/research](https://www.franka.de/research)

The setup is mounted on a table where one or more tomato trusses are placed and potentially stacked next to and on top of each other, depending on the experiment. The trusses were not placed inside a crate since we focus on perception and do not consider collision with the crate walls. We assume that i) the peduncle is facing upward (this is normally the case in practice), ii) the perception system performance is not influenced by the presence of the crate (this can be ensured by proper lighting using special lamps, which is a common industrial practice), and iii) the controller perfectly executes the commanded movement of the arm (which again is a reasonable assumption with current industrial robot arms).

Fig. 4: Example image of how the grasp pose identification network is evaluated; ground truth grasp poses are shown as an orange dot with a green line for the orientation, whilst the predictions are shown in blue. The green circle shows the distance threshold within which a prediction has to be located to be considered correct.

\begin{table} \begin{tabular}{c c|c c} & & \multicolumn{2}{c}{Predicted} \\ & & 1 & 0 \\ \hline \multirow{2}{*}{Actual} & 1 & 172 (59.3\%) & 14 (4.8\%) \\ & 0 & 32 (11.0\%) & 72 (24.8\%) \\ \hline \end{tabular} \end{table} TABLE I: Confusion matrix of the proposed grasp pose ranking model on the validation set containing 290 grasps.

Fig. 5: Boxplots displaying the distance and angle errors of the correctly predicted keypoints on the validation set of the grasp pose identification network.

### _Pick and Place Routine_

To evaluate the method, a pick-and-place routine is carried out. It consists of five consecutive steps: (i) localizing a single truss to be grasped, (ii) approaching the truss with the eye-in-hand camera to take a close-up image, (iii) identifying a suitable grasping pose, (iv) reaching towards the peduncle and grasping it, and (v) lifting the truss and placing it at a desired location.
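The routine can be summarized in a short control-loop sketch; all function and attribute names below are hypothetical stand-ins for the perception and control modules described in Section III:

```python
import numpy as np

def pick_and_place(robot, camera, detector, identifier, ranker, place_pose):
    """One cycle of the five-step pick-and-place routine (hypothetical API)."""
    box = detector.closest_truss(camera.rgbd())         # (i)  localize a single truss
    robot.move_above(box, height=0.1)                   # (ii) approach for a close-up view
    poses = identifier.grasp_poses(camera.rgbd(), box)  # (iii) candidate 4D grasp poses
    best = poses[int(np.argmax(ranker.rank(poses)))]    #      keep the top-ranked pose
    robot.pinch_grasp(best)                             # (iv) reach and grasp the peduncle
    robot.lift_and_place(place_pose)                    # (v)  lift and place the truss
```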
We have conducted three types of experiments to evaluate how well the proposed method is able to grasp previously unseen tomato trusses: 1. Grasping in a non-cluttered environment. Here, only one truss was placed on the table at a time. The goal of this experiment is to evaluate the impact of the proposed grasp pose ranking method, compared to randomly selecting one of the candidate poses or selecting the one closest to the center of the truss's bounding box. The center of the bounding box is an approximation of the center of mass, which should provide the most stable grasp. 2. Grasping in a cluttered environment. Trusses are arranged to create a single-layer background, with the target truss positioned on top. Here, the tomato truss detection and preprocessing parts of the perception process are tested to see how well they are capable of dealing with a cluttered environment. 3. Pile clearing. Lastly, multiple trusses are randomly stacked next to and on top of each other, which resembles a filled crate after harvesting. In this test, the purpose is to see if the system is able to fully remove all trusses that are present and not get stuck by repeatedly trying the same failing grasp pose. In the first two experiments, if a grasp is successful, the truss is placed back on the surface with a random pose near the center of the workspace by the manipulator. However, if a grasp fails, the truss is not moved by hand and is reattempted as is. The online learning capabilities of the proposed method are disabled in these two experiments to evaluate the performance of the model trained offline.

### _Failure Modes_

Two types of failures were observed during the experiments: 1. Perception: the perception system provided an inappropriate grasping pose. 2. Gripper: the truss slips out of the fingers during lifting or mid-air manipulation. This error indicates a weakness in the pinch grasping method used. The type of failure was automatically determined during the grasping attempt by checking the width between the fingers after closing but before lifting. A width of (near) zero indicates that there is nothing between the fingers, signifying a failed grasping pose. If something was initially held between the fingers before lifting but not when placing, the error is assumed to be caused by slipping.
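This automatic failure attribution amounts to a simple decision rule on the measured gripper width; the threshold and names below are illustrative assumptions, not values from the experiments:

```python
def classify_attempt(width_after_close, held_when_placing, eps=0.002):
    """Label a grasp attempt from gripper feedback (widths in meters).

    width_after_close: finger separation after closing, before lifting.
    held_when_placing: True if something was still held at the place pose.
    """
    if width_after_close < eps:        # (near) zero: nothing between the fingers
        return "perception failure"
    if not held_when_placing:          # held at first, lost during the motion
        return "gripper failure (slip)"
    return "success"
```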
Fig. 6: Architecture used for evaluating grasp poses; a KNN classifier on the latent space of an auto-encoder.

Fig. 7: Experimental setup, with a close-up of the end-effector. The dimensions of the gripper fingers are shown from the side and front. The red area represents the deformable material.

### _Results_

The previously described pick-and-place tasks were executed on 25 different tomato trusses. For the first, non-cluttered experiment, each truss was attempted with the three strategies: randomly picking a candidate grasp pose, picking the one closest to the center of the bounding box, and using the highest-scored pose ranked by the proposed method. Per truss, each strategy was repeated 20/10/10 times for the three methods, respectively, for a total of 1000 attempts. The outcomes showed respective failure rates of 47.6% (238/500), 20% (50/250), and 7.2% (18/250) and are summarized in Table II. In the remaining experiments, only the proposed grasp pose ranking strategy is used. In the second, cluttered-environment experiment, all 25 different tomato trusses were tested and repeated 10 times each. For each truss, a new single layer of background trusses was formed using a random selection of the remaining trusses. A failure rate of 4.4% (11/250) was observed here. The pile-clearing experiment was also performed 10 times. Each time, 10 out of the 25 tomato trusses were randomly selected and stacked by hand next to and on top of each other. In this experiment, the online learning of the proposed method was enabled, but we removed the samples from the last attempts after each trial to make sure the trusses were unseen by the system at the beginning of each trial. Unlimited grasping attempts were allowed until the system was able to fully clear the pile in all 10 attempts. 93% (93/100) of the trusses were successfully grasped at the first attempt, 6% took two attempts, and one truss took six attempts.

## V Discussion

The results of the experiments in the uncluttered setting show that the proposed ranking-based method for evaluating the candidate grasp poses achieves a lower failure rate compared to selecting the pose randomly or close to the truss center of mass. In the second experiment, a lower failure rate was obtained when grasping tomato trusses in the cluttered setting compared to isolated trusses. Intuitively, the latter should be an easier task, which likely means that the difference is not statistically significant. However, these results show that the tomato truss detection and preprocessing steps effectively simplify the cluttered problem to essentially grasping in isolation. Most perception failures were a result of inadequate predictions from the grasp pose identification network. Therefore, the overall performance could be further improved by increasing the amount of data used to train this network. After the experiments, no visual damage was observed on the tomatoes. However, after repeatedly gripping and releasing the same trusses, some abrasion damage was observed on the skin of the peduncles. This will clearly not be an issue in the industrial setting, where each truss will be handled only once. A limitation of our experiments is that they were performed with only one type of tomato truss. Although the system was tested and tried successfully on different varieties, no reliable conclusions can be drawn about the success rate of the proposed method for other types of trusses, for instance, cherry tomatoes. The perception steps (tomato truss detection, preprocessing, grasp pose identification, and grasp pose ranking) take approximately \(0.05\pm 0.01\), \(2.00\pm 0.24\), \(0.64\pm 0.21\), and \(5.15\pm 0.60\) seconds, respectively, on a computer with an Intel i7-8750H processor and NVIDIA GeForce GTX 1060 GPU. Note that the autoencoder of the grasp pose ranking step was run on the CPU due to GPU memory constraints. The whole 'pick and place' cycle, which includes the perception and returning to start, takes around 30 seconds.

## VI Conclusion and Future Work

This paper presents a three-stage vision method for grasping cluttered tomato trusses. First, we utilize an object detection model to identify an unobstructed truss. Next, we extend the Yolov7-pose algorithm to allow an angle to be directly encoded for the keypoints, which is used to identify candidate grasping poses. In the last step, an autoencoder network with a KNN classifier is trained offline and updated online to select the most promising grasp pose, based on the success of previous similar grasping attempts. Pile-clearing experiments conducted on a physical setup using real tomato trusses demonstrated a clearance rate of 100% when allowed to retry after a failed attempt. Of all the trusses, 93% were successfully grasped on the first try, while the remaining 7% required more attempts. Since many of the grasping failures were a result of the peduncle slipping out of the fingers, future work should focus on specialized grippers that can more effectively grip the trusses while avoiding damage to the stem. We further hypothesize that an enhanced gripper design can also improve grasp success when attempting sub-optimal grasp poses, reducing perception errors.
Another topic for future research is extending our work to be collision-aware, solving the original problem of grasping out of a crate. While the current models have been specifically trained for picking tomato trusses, the proposed method can be applied to a wide range of other objects. We verified this by testing it on bananas and silverware, which included knives, forks, and spoons. The preliminary results were promising and will serve as a basis for our future publications.

\begin{table} \begin{tabular}{l l l l l} \hline \hline grasp pose selection & trials (trusses \(\times\) attempts) & failures & grasping & perception \\ \hline random & \(25\times 20=500\) & 238 (47.6\%) & 117 (23.4\%) & 121 (24.2\%) \\ center & \(25\times 10=250\) & 50 (20.0\%) & 35 (14.0\%) & 15 (6.0\%) \\ ranking & \(25\times 10=250\) & 18 (7.2\%) & 13 (5.2\%) & 5 (2.0\%) \\ \hline \hline \end{tabular} \end{table} TABLE II: Failure rates of grasp attempts when using the proposed learning-based method for selecting the most promising grasp, compared to randomly choosing or selecting the one closest to the center of the bounding box.

\begin{table} \begin{tabular}{l l l l l} \hline \hline scenario & trials (trusses \(\times\) attempts) & failures & grasping & perception \\ \hline isolated & \(25\times 10=250\) & 18 (7.2\%) & 13 (5.2\%) & 5 (2.0\%) \\ clutter & \(25\times 10=250\) & 11 (4.4\%) & 9 (3.6\%) & 2 (0.8\%) \\ \hline \hline \end{tabular} \end{table} TABLE III: Failure rates for grasping in isolation or in clutter.